Thick Cloud Removal with Optical and SAR Imagery via Convolutional-Mapping-Deconvolutional Network

This publication appears in: IEEE Transactions on Geoscience and Remote Sensing
Authors: Y. Li and J. C.-W. Chan
Volume: 58
Issue: 4
Pages: 2865-2879
Publication Date: Apr. 2020
Abstract: In this article, we propose a thick cloud removal method for remote-sensing imagery based on multisource estimation. A convolutional-mapping-deconvolutional (CMD) network is proposed to estimate the cloud-free image directly from multisource reference images. A synthetic aperture radar (SAR) image and a low-resolution heterogeneous (LRH) image, i.e., an image from a different optical sensor with lower spatial resolution, are used as reference images to recover the missing information in the cloud-contaminated high-resolution (HR) image. The CMD net is composed of three functional components: the convolutional layers for encoding, the mapping layer for feature transfer, and the deconvolutional layers for decoding. In the training procedure, HR images from cloud-free regions and their corresponding LRH and SAR reference images are used to train the CMD net. Once fully trained, the CMD net can estimate an HR image from its corresponding LRH and SAR reference images. The LRH and SAR reference images are first encoded by the convolutional layers and then transferred to the HR feature space by the mapping layer. The transferred features are decoded into a cloud-free HR image by the deconvolutional layers. Cloud-free regions in the cloud-contaminated HR image are used to further improve the estimated image via intensity normalization. Finally, the cloudy pixels are replaced by their corresponding pixels from the estimated cloud-free HR image. Comparisons with several recently proposed multisource cloud removal methods, validated by quantitative indices and visual inspection, show that our method is superior.
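To make the encode-map-decode structure concrete, below is a minimal PyTorch sketch of a CMD-style network. It is not the configuration reported in the paper: the layer counts, channel widths, kernel sizes, band counts, and the upsampling factor (`scale`) are all illustrative assumptions, and the SAR reference is assumed to be co-registered and resampled to the LRH grid before being fed in.

```python
import torch
import torch.nn as nn

class CMDNet(nn.Module):
    """Hypothetical sketch of a convolutional-mapping-deconvolutional network.

    All hyperparameters here (channels, depths, scale) are assumptions for
    illustration, not the settings used by the authors.
    """

    def __init__(self, lrh_bands=4, sar_bands=1, hr_bands=4, scale=4):
        super().__init__()
        # Convolutional layers: encode the concatenated LRH and SAR
        # references into a low-resolution feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(lrh_bands + sar_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Mapping layer: transfer the encoded features toward the HR
        # feature space (a 1x1 convolution is one plausible realization).
        self.mapping = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Deconvolutional layers: decode the transferred features into a
        # cloud-free HR estimate, upsampling by the assumed scale factor.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, kernel_size=scale, stride=scale),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, hr_bands, kernel_size=3, padding=1),
        )

    def forward(self, lrh, sar):
        # Both references are assumed aligned on the same (LRH) grid.
        x = torch.cat([lrh, sar], dim=1)
        return self.decoder(self.mapping(self.encoder(x)))

# Usage: a 64x64 LRH/SAR pair yields a 256x256 HR estimate at scale=4.
net = CMDNet()
hr_est = net(torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64))
```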
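The abstract's final steps, intensity normalization against the cloud-free regions followed by replacement of the cloudy pixels, could look like the NumPy sketch below. The paper does not spell out the normalization here; per-band mean/std matching over the clear pixels is one plausible choice, and `normalize_and_composite` with its arguments is a hypothetical helper, not the authors' code.

```python
import numpy as np

def normalize_and_composite(hr_cloudy, hr_estimate, cloud_mask):
    """Hypothetical post-processing sketch (moment matching assumed).

    hr_cloudy   : (bands, H, W) observed HR image with clouds
    hr_estimate : (bands, H, W) CMD-net estimate of the cloud-free image
    cloud_mask  : (H, W) boolean array, True where pixels are cloudy
    """
    result = hr_cloudy.copy()
    clear = ~cloud_mask
    for b in range(hr_cloudy.shape[0]):
        obs = hr_cloudy[b][clear]    # reliable statistics from clear pixels
        est = hr_estimate[b][clear]  # estimate over the same clear region
        # Intensity normalization: shift/scale the estimate so its
        # clear-region statistics match those of the observed image.
        scaled = (hr_estimate[b] - est.mean()) / (est.std() + 1e-8)
        normalized = scaled * obs.std() + obs.mean()
        # Replace only the cloudy pixels with the normalized estimate.
        result[b][cloud_mask] = normalized[cloud_mask]
    return result
```

Keeping the observed pixels untouched and compositing only under the cloud mask preserves the radiometry of the uncontaminated regions, which matches the replacement step described in the abstract.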