UIR: Implementing Deep Neural Networks in addition to Conventional Algorithms for Ultra-Image Recovery
Abstract
Many images accessible through web search engines or social media sites are rare and of low quality because they are at risk of degradation or disappearance. There must be a way to increase the quality of these images: to reduce noise, remove blur, and sharpen them until high-quality surfaces are recovered. Competing approaches aim to improve these low-resolution images and generate outputs with the same color (RGB) characteristics but higher quality. Deep learning algorithms, especially convolutional neural networks (CNNs), have achieved state-of-the-art results in this context. In this work, we propose UIR, a powerful base model for image recovery that combines convolutional neural networks (CNNs) with conventional algorithms for ultra-super-resolution from low-resolution images. The feature map is extracted from a low-resolution image ILR as overlapping super-resolution ISR patches, in which every patch represents a high-dimensional vector. The pixel features lost during training are subsequently compensated via the residual Swin Transformer block (RSTB). In quantitative evaluation experiments using the PSNR (dB)/SSIM metrics, our results were superior to those of state-of-the-art methods on the benchmark datasets Set5, Set14, and BSD100. At x2 magnification, the selected images yielded values of (36.86 dB/0.9739, 36.10 dB/0.9656, and 34.74 dB/0.9893), and at x4, values of (34.44 dB/0.9784, 27.71 dB/0.8894, and 26.87 dB/0.9915), respectively. Visual comparison also revealed that the surface textures are sharper, more expressive, and less noisy and blurred than those produced by the other methods.
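To make the patch-based formulation concrete, the sketch below extracts overlapping patches from a low-resolution image and flattens each into a high-dimensional vector, as the abstract describes. This is a minimal illustrative example, not the authors' implementation; the function name, patch size, and stride are assumptions.

```python
import numpy as np

def extract_overlapping_patches(image, patch_size=3, stride=1):
    """Extract overlapping patches from an H x W x C image.

    Each patch is flattened into a vector, mirroring the idea of
    representing an LR feature map as overlapping patches where every
    patch is a high-dimensional vector. Defaults are illustrative.
    """
    h, w, c = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patch = image[i:i + patch_size, j:j + patch_size, :]
            patches.append(patch.reshape(-1))  # flatten to a vector
    return np.stack(patches)

# A 5x5 RGB image with 3x3 patches and stride 1 yields
# (5-3+1)^2 = 9 patches, each a 3*3*3 = 27-dimensional vector.
lr = np.arange(5 * 5 * 3, dtype=np.float32).reshape(5, 5, 3)
vectors = extract_overlapping_patches(lr)
print(vectors.shape)  # (9, 27)
```

In a full super-resolution pipeline these patch vectors would feed a CNN backbone, with a module such as an RSTB compensating for features lost during training.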
This work is licensed under a Creative Commons Attribution 4.0 International License.