
RA-BLS: a Sequential BLSs Integrated with Residual Attention Mechanism

EasyChair Preprint 15340, version 1

8 pages
Date: October 31, 2024

Abstract

The Broad Learning System (BLS) has attracted the attention of many researchers because of its excellent performance. However, the number of random nodes becomes very large when the BLS copes with large, complex datasets. To address this problem, a novel residual attention mechanism is designed and introduced into the BLS to form a sequential BLS integrated with a residual attention mechanism (RA-BLS). The goal is to shrink the network structure by correcting the random enhancement nodes. RA-BLS feeds the residuals back to the enhancement nodes to obtain efficient and compact feature representations, further strengthening the approximation capability. RA-BLS drastically reduces the number of nodes required and further improves the performance of the BLS. Experimental results show that RA-BLS achieves excellent performance on the UCI, MNIST, FASHION-MNIST, and NORB datasets. Notably, an accuracy of 92.33% is achieved on the NORB dataset, an improvement of 2.99% with only 17% of the model size. The source code is available at https://github.com/arolme/RABLS-Residual-Attention-Broad-Learning-System.
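For illustration only, the sketch below shows one way a residual-driven correction of random enhancement nodes in a BLS could look, based solely on the idea summarized in the abstract. It is not the authors' RA-BLS implementation: the class name, the sigmoid gating rule, and all hyperparameters are hypothetical, and Y is assumed to be one-hot labels (a classifier would take the argmax of predict(X)).

import numpy as np

# Minimal, illustrative sketch of a BLS whose random enhancement nodes are
# re-weighted by a residual-driven gate. This is an assumption-based reading
# of the abstract, not the authors' RA-BLS implementation.

def relu(x):
    return np.maximum(x, 0.0)

def ridge_solve(A, Y, lam=1e-3):
    # Ridge-regularized least squares for the BLS output weights.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

class ResidualAttentionBLSSketch:
    def __init__(self, n_feature=64, n_enhance=128, lam=1e-3, seed=0):
        self.nf, self.ne, self.lam = n_feature, n_enhance, lam
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        d = X.shape[1]
        # Random weights for feature and enhancement nodes (standard BLS).
        self.We = 0.1 * self.rng.standard_normal((d, self.nf))
        self.be = 0.1 * self.rng.standard_normal(self.nf)
        self.Wh = 0.1 * self.rng.standard_normal((self.nf, self.ne))
        self.bh = 0.1 * self.rng.standard_normal(self.ne)

        Z = relu(X @ self.We + self.be)           # feature nodes
        H = np.tanh(Z @ self.Wh + self.bh)        # random enhancement nodes

        # First pass: plain BLS output weights solved on [Z | H].
        A = np.hstack([Z, H])
        self.Wout = ridge_solve(A, Y, self.lam)

        # Hypothetical residual-attention step: the training residual is fed
        # back to score each enhancement node, and a sigmoid gate corrects the
        # random nodes so that a smaller enhancement layer suffices.
        residual = Y - A @ self.Wout
        score = np.abs(residual @ self.Wout[self.nf:].T).mean(axis=0)
        self.gate = 1.0 / (1.0 + np.exp(-score))
        A = np.hstack([Z, H * self.gate])
        self.Wout = ridge_solve(A, Y, self.lam)   # re-solve with corrected nodes
        return self

    def predict(self, X):
        Z = relu(X @ self.We + self.be)
        H = np.tanh(Z @ self.Wh + self.bh) * self.gate
        return np.hstack([Z, H]) @ self.Wout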

Keyphrases: attention mechanism, broad learning system, classification, machine learning, residual

BibTeX entry
BibTeX does not have the right entry type for preprints, so the following workaround produces the correct reference:
@booklet{EasyChair:15340,
  author    = {Yanqiang Wu and Jing Wang and Wei Hu},
  title     = {RA-BLS: a Sequential BLSs Integrated with Residual Attention Mechanism},
  howpublished = {EasyChair Preprint 15340},
  year      = {EasyChair, 2024}}