
Applying Mahalanobis-Augmented Vector Reconstruction in Autoencoders and Choosing a Scaler to Improve Anomaly Detection Performance

EasyChair Preprint 15318

8 pages · Date: October 28, 2024

Abstract

This study aims to enhance the efficacy of anomaly detection through the application of autoencoders. Autoencoders, neural network models that compress and reconstruct input data by learning patterns from normal instances, typically struggle to reconstruct anomalous data. To address this limitation, we propose integrating the Mahalanobis distance, a covariance-aware measure of how far a data point lies from the center of a distribution, into the autoencoder's latent space. Our approach diverges from conventional methods by treating the reconstruction error as a vector rather than a scalar, which preserves more granular, per-feature outlier information. We evaluate the model's performance across multiple metrics, including accuracy, precision, recall, F1 score, and ROC-AUC, under five different scaling techniques. Experimental results indicate that RobustScaler offers superior performance owing to its resilience to outliers, yielding consistent results across varied data distributions. This research contributes to the advancement of anomaly detection methodologies and may broaden their applicability in real-world scenarios.
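The approach can be illustrated with a minimal sketch. The code below is a hypothetical reading of the abstract, not the authors' implementation: it uses scikit-learn's RobustScaler, an MLPRegressor as a stand-in autoencoder, synthetic data, and a percentile threshold, all of which are placeholder assumptions. It keeps the reconstruction error as a per-feature vector and scores each test point by the Mahalanobis distance of that vector from the residual distribution of normal data.

# Hypothetical sketch: Mahalanobis distance over per-feature reconstruction-error
# vectors, with RobustScaler preprocessing. Data, model, and threshold are
# illustrative placeholders, not the paper's actual experimental setup.
import numpy as np
from numpy.linalg import inv
from sklearn.preprocessing import RobustScaler
from sklearn.neural_network import MLPRegressor  # stand-in for the autoencoder

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(1000, 8))                     # "normal" training data
X_test = np.vstack([rng.normal(size=(50, 8)),             # normal test points
                    rng.normal(4.0, 1.0, size=(10, 8))])  # injected anomalies

# RobustScaler centers on the median and scales by the IQR, so a few
# extreme values barely shift the fitted scaling parameters.
scaler = RobustScaler()
Xn = scaler.fit_transform(X_normal)
Xt = scaler.transform(X_test)

# Train a small autoencoder-style regressor to reconstruct its own input.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
ae.fit(Xn, Xn)

# Keep the reconstruction error as a vector (one residual per feature)
# instead of collapsing it to a single scalar such as the MSE.
E_train = Xn - ae.predict(Xn)
E_test = Xt - ae.predict(Xt)

# Fit the residual distribution on normal data, then score test points by
# Mahalanobis distance, which accounts for correlations between residuals.
mu = E_train.mean(axis=0)
cov_inv = inv(np.cov(E_train, rowvar=False) + 1e-6 * np.eye(E_train.shape[1]))
d = E_test - mu
scores = np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

threshold = np.percentile(scores[:50], 99)  # placeholder: 99th percentile of normal scores
print("flagged anomalies:", int(np.sum(scores > threshold)))

In practice the paper's own autoencoder architecture and datasets would replace these stand-ins; the point of the sketch is the combination of a robust scaler, vector-valued residuals, and a covariance-aware distance.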

Keyphrases: autoencoders, Mahalanobis distance, anomaly detection, reconstruction error

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following @booklet entry serves as a workaround:
@booklet{EasyChair:15318,
  author       = {Seung Bum Ha and Joon-Goo Shin and Yong-Min Kim and Chang Gyoon Lim},
  title        = {Applying Mahalanobis-Augmented Vector Reconstruction in Autoencoders and Choosing a Scaler to Improve Anomaly Detection Performance},
  howpublished = {EasyChair Preprint 15318},
  year         = {2024}}