
Self-Perceptual Generative Adversarial Network for Synthetic Aperture Sonar Image Generation

EasyChair Preprint 8342

12 pages · Date: June 21, 2022

Abstract

Due to the shortage of Synthetic Aperture Sonar (SAS) image datasets, progress on many underwater tasks is hindered. Coupling optical rendering with image-to-image translation is a novel and feasible way to tackle this problem. However, because of the large gap between simulated optical images and real SAS images, existing methods perform poorly and leave substantial room for improvement. In this letter, we introduce a Self-Perceptual Generative Adversarial Network (SPerGAN) that can controllably generate SAS images with high fidelity. It employs a self-perceptual loss to produce high-quality and diverse SAS images. Moreover, we introduce a novel evaluation method for SAS images that accords closely with human perception. To evaluate our method, we first compare it against recent state-of-the-art image-to-image translation methods both qualitatively and quantitatively. We then conduct ablation studies to explore the effects of different cycle-consistency losses and hyper-parameter settings. The results show that our method surpasses all existing methods and generates diverse and realistic SAS images.
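
The abstract does not spell out how the self-perceptual loss is computed, so the following is only a minimal PyTorch sketch of one common reading of such a loss: the network's own discriminator, rather than an external pretrained network such as VGG, supplies the feature maps being compared between real and generated images. All names here (Discriminator, self_perceptual_loss, lambda_sp) are illustrative assumptions, not taken from the paper.

# Hypothetical sketch: a perceptual loss driven by the model's own
# discriminator features ("self-perceptual"), not by a pretrained network.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """PatchGAN-style discriminator that also exposes intermediate features."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2)),
        ])
        self.head = nn.Conv2d(256, 1, 4, 1, 1)  # real/fake patch logits

    def forward(self, x, return_features=False):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        logits = self.head(x)
        return (logits, feats) if return_features else logits

def self_perceptual_loss(disc, real, fake):
    """Mean L1 distance between the discriminator's own feature maps of
    real and generated images, averaged over layers. Real-image features
    are detached so only the generator receives this gradient."""
    with torch.no_grad():
        _, real_feats = disc(real, return_features=True)
    _, fake_feats = disc(fake, return_features=True)
    return sum(nn.functional.l1_loss(f, r)
               for f, r in zip(fake_feats, real_feats)) / len(real_feats)

# Usage: add this term to the generator objective alongside the adversarial
# and cycle-consistency losses, weighted by a hyper-parameter lambda_sp.
if __name__ == "__main__":
    disc = Discriminator()
    real = torch.randn(2, 1, 128, 128)  # stand-in for real SAS images
    fake = torch.randn(2, 1, 128, 128)  # stand-in for generator output
    print(self_perceptual_loss(disc, real, fake).item())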

Keyphrases: generative adversarial network, SAS image generation, image-to-image translation

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following @booklet entry is a workaround for producing a correct reference:
@booklet{EasyChair:8342,
  author    = {Yuxiang Hu and Wu Zhang and Baoqi Li and Jiyuan Liu and Haining Huang},
  title     = {Self-Perceptual Generative Adversarial Network for Synthetic Aperture Sonar Image Generation},
  howpublished = {EasyChair Preprint 8342},
  year      = {2022}}