[Research] Sang-ik Hyun, a Master's Program Student, Publishes a Paper at the ECCV 2020 International Conference
- College of Software and Engineering
- 2020-10-06
Sang-ik Hyun, a Master's Program Student in the Research Lab of Prof. Jae-pil Heo,
Publishes a Paper at the ECCV 2020 International Conference
Sang-ik Hyun (first-year Master's student in Artificial Intelligence, supervised by Prof. Jae-pil Heo) published a paper titled "VarSR: Variational Super-Resolution Network for Very Low Resolution Images" at the European Conference on Computer Vision (ECCV) 2020. ECCV is a top-tier academic conference in the field of computer vision, and this paper is the result of research Sang-ik Hyun conducted as an undergraduate researcher.
This study presents a deep learning model that performs super-resolution on extremely low-resolution images. Whereas existing super-resolution methods assume a one-to-one mapping between a low-resolution image and a high-resolution image, the VarSR network proposed in this paper models a one-to-many (1:N) relationship, in which a single low-resolution image can correspond to multiple high-resolution images, and can therefore produce diverse outputs. The proposed model can be applied to a variety of real-world tasks, including the identification of very-low-resolution face and license-plate images.
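The one-to-many idea can be sketched in a toy form: an encoder maps the low-resolution input to a latent distribution, and each sample drawn from that distribution decodes to a different plausible high-resolution output. The sketch below is a minimal NumPy illustration under assumed stand-in weights and shapes (the real VarSR model is a trained deep network; the functions and dimensions here are hypothetical, not the authors' architecture).

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 4
SCALE = 8  # 8x8 LR -> 64x64 SR, matching the very-low-resolution setting

# Fixed random weights stand in for learned decoder parameters.
W_detail = rng.standard_normal((64 * 64, LATENT_DIM)) * 0.05

def encode(lr):
    """Hypothetical encoder: map an 8x8 LR patch to latent mean/log-variance."""
    mu = np.full(LATENT_DIM, lr.mean())
    logvar = np.zeros(LATENT_DIM)
    return mu, logvar

def sample_latent(mu, logvar, rng):
    """Reparameterization-style draw: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(lr, z):
    """Hypothetical decoder: upsample the LR input, add latent-driven detail."""
    base = np.kron(lr, np.ones((SCALE, SCALE)))  # nearest-neighbor upscale
    detail = (W_detail @ z).reshape(base.shape)  # z controls the recovered detail
    return base + detail

lr = rng.random((8, 8))
mu, logvar = encode(lr)
# Drawing several latents yields several distinct SR candidates: the 1:N mapping.
sr_samples = [decode(lr, sample_latent(mu, logvar, rng)) for _ in range(3)]
```

Because the decoder is deterministic given `z`, all diversity in `sr_samples` comes from the latent sampling step, which is the mechanism the paper exploits to recover multiple plausible high-resolution images from one input.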
Sangeek Hyun and Jae-Pil Heo, “VarSR: Variational Super-Resolution Network for Very Low Resolution Images”, European Conference on Computer Vision (ECCV), 2020.
Abstract:
As is well known, single image super-resolution (SR) is an ill-posed problem where multiple high resolution (HR) images can be matched to one low resolution (LR) image due to the difference in their representation capabilities. Such many-to-one nature is particularly magnified when super-resolving with large upscaling factors from very low dimensional domains such as 8x8 resolution where detailed information of HR is hardly discovered. Most existing methods are optimized for deterministic generation of SR images under pre-defined objectives such as pixel-level reconstruction and thus limited to the one-to-one correspondence between LR and SR images against the nature. In this paper, we propose VarSR, Variational Super Resolution Network, that matches latent distributions of LR and HR images to recover the missing details. Specifically, we draw samples from the learned common latent distribution of LR and HR to generate diverse SR images as the many-to-one relationship. Experimental results validate that our method can produce more accurate and perceptually plausible SR images from very low resolutions compared to the deterministic techniques.