Loss Functions: Evaluating Their Performance and the Need For An Adaptive Approach

EasyChair Preprint 6964 · 5 pages · Date: October 31, 2021

Abstract

Mainstream machine learning is dominated by semi-supervised learning. Developments in this field have permitted scholars to harness large amounts of unlabeled data alongside typically smaller sets of labeled data. This study focuses on the need for an adaptive loss function that automatically adjusts itself as the model is trained on various datasets. Once semantic segmentation is embedded in the architecture of a model, deeper layers are needed to extract features from images, causing the gradient to become too small to train the network, particularly when the pixelwise cross-entropy loss function is used in high-dimensional settings. With a large number of classes, larger objects often overlap with smaller objects, causing inaccurate detection. The need is to overcome the impact of superimposed objects on classification accuracy, which stems from model confusion owing to the large number of classes. Our research endeavors to deal with the imbalanced-dataset problem in neural networks by experimenting with various loss functions. Experiments conducted on two different datasets show that different loss functions produce varying results. We present results on the Indian Driving Dataset (IDD) and Cityscapes.

Keyphrases: Focal Loss, combo loss, cross-entropy, image processing, loss function, neural networks, semantic segmentation
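The abstract contrasts pixelwise cross-entropy with alternatives such as focal loss for imbalanced classes. As a minimal sketch (not the paper's implementation), the per-pixel focal loss of Lin et al. multiplies cross-entropy by a modulating factor (1 − p)^γ, which down-weights easy, well-classified pixels so that hard, often minority-class, pixels dominate the gradient; the function names and the choice of γ = 2 below are illustrative assumptions:

```python
import numpy as np

def cross_entropy(p, eps=1e-7):
    # Per-pixel cross-entropy for the predicted probability p of the true class.
    p = np.clip(p, eps, 1.0)
    return -np.log(p)

def focal_loss(p, gamma=2.0, eps=1e-7):
    # Focal loss: the factor (1 - p)^gamma scales cross-entropy down
    # for well-classified pixels (p close to 1), leaving hard pixels
    # (p close to 0) almost unchanged. gamma = 0 recovers plain CE.
    p = np.clip(p, eps, 1.0)
    return -((1.0 - p) ** gamma) * np.log(p)

# Compare an easy pixel (p = 0.9) with a hard pixel (p = 0.1):
# the easy pixel's loss shrinks by ~100x, the hard pixel's by only ~20%,
# so training focuses on the hard (often minority-class) examples.
easy_ce, hard_ce = cross_entropy(0.9), cross_entropy(0.1)
easy_fl, hard_fl = focal_loss(0.9), focal_loss(0.1)
```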