
Improving Performance Through Novel Enhanced Hierarchical Attention Neural Network

EasyChair Preprint 3722

7 pages
Date: July 3, 2020

Abstract

Big data classification is a pressing challenge in a world where data evolves continuously and must be classified effectively. Deep learning and machine learning models have been developed for this task. The Hierarchical Attention Network (HAN) is one of the most dominant neural network architectures for classification, but its major drawbacks are high computation time and a large number of layers. This paper overcomes these drawbacks with a new idea, drawn from mining methods, that yields a mixed attention network for Android data classification; with this design, the network can handle more complex requests beyond the identified concept. The proposed Enhanced Hierarchical Attention Network (EHAN) consists of two sub-models: an attention model that distinguishes features and a self-attention model that identifies global facts. The demonstrated results show that this partitioning of the task is constructive, and EHAN accordingly achieves a significant improvement on the news dataset. Further subnetworks could subsequently be added to strengthen this capability.
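As a rough illustration of the two-branch design the abstract describes, the sketch below combines a HAN-style additive-attention encoder (the feature branch) with a multi-head self-attention branch (the global branch) whose pooled outputs are concatenated for classification. It assumes PyTorch; the class names (FeatureAttention, EHANSketch), layer sizes, and pooling choices are illustrative guesses, not the authors' implementation.

import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    """HAN-style additive attention: scores each token, pools a weighted sum."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h):                                 # h: (batch, seq, hidden)
        scores = self.context(torch.tanh(self.proj(h)))   # (batch, seq, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)                   # (batch, hidden)

class EHANSketch(nn.Module):
    """Two branches, as the abstract describes: an attention model to
    distinguish features and a self-attention model for global facts.
    Hyperparameters are assumptions, not values from the paper."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_heads=4, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Branch 1: bidirectional GRU encoder + additive attention (HAN part).
        self.gru = nn.GRU(embed_dim, hidden_dim // 2, batch_first=True,
                          bidirectional=True)
        self.feat_attn = FeatureAttention(hidden_dim)
        # Branch 2: multi-head self-attention over the raw embeddings.
        self.self_attn = nn.MultiheadAttention(embed_dim, num_heads,
                                               batch_first=True)
        self.classifier = nn.Linear(hidden_dim + embed_dim, num_classes)

    def forward(self, token_ids):                         # (batch, seq)
        x = self.embed(token_ids)
        h, _ = self.gru(x)                                # (batch, seq, hidden)
        local = self.feat_attn(h)                         # feature branch
        g, _ = self.self_attn(x, x, x)                    # global branch
        global_ctx = g.mean(dim=1)                        # pool attended tokens
        return self.classifier(torch.cat([local, global_ctx], dim=-1))

# Usage: classify a batch of two 16-token documents into 5 news categories.
model = EHANSketch(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 16)))
print(logits.shape)  # torch.Size([2, 5])

Partitioning the work this way mirrors the abstract's claim: the recurrent branch attends to local, discriminative features while the self-attention branch captures sequence-wide context, and further subnetworks could be concatenated into the classifier input in the same manner.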

Keyphrases: artificial intelligence, deep learning, Enhanced Hierarchical Attention Network, machine learning

BibTeX entry
BibTeX does not have the right entry type for preprints, so the following is a workaround for producing a correct reference:
@booklet{EasyChair:3722,
  author       = {Natarajan Parthiban and Natarajan Sudha},
  title        = {Improving Performance Through Novel Enhanced Hierarchical Attention Neural Network},
  howpublished = {EasyChair Preprint 3722},
  year         = {2020}}