Search Results (972)

Search Parameters:
Keywords = digital pixel

23 pages, 5770 KiB  
Article
Assessment of Influencing Factors and Robustness of Computable Image Texture Features in Digital Images
by Diego Andrade, Howard C. Gifford and Mini Das
Tomography 2025, 11(8), 87; https://doi.org/10.3390/tomography11080087 - 31 Jul 2025
Abstract
Background/Objectives: There is significant interest in using texture features to extract hidden image-based information. In medical imaging applications using radiomics, AI, or personalized medicine, the quest is to extract patient- or disease-specific information while being insensitive to other system or processing variables. While we use digital breast tomosynthesis (DBT) to show these effects, our results are generally applicable to a wider range of imaging modalities and applications. Methods: We examine factors in texture estimation methods, such as quantization, pixel distance offset, and region of interest (ROI) size, that influence the magnitudes of these readily computable and widely used image texture features (specifically Haralick’s gray-level co-occurrence matrix (GLCM) textural features). Results: Our results indicate that quantization is the most influential of these parameters, as it controls the size of the GLCM and the range of its values. We propose a new multi-resolution normalization (fixing either ROI size or pixel offset) that can significantly reduce quantization-induced magnitude disparities. We show reductions in mean feature-value differences by orders of magnitude; for example, to 7.34% between quantizations of 8–128, while preserving trends. Conclusions: When combining images from multiple vendors in a common analysis, large variations in texture magnitudes can arise from differences in post-processing methods such as filters. We show that significant changes in GLCM magnitude may arise simply from the filter type or strength. These trends can also vary with estimation variables (such as offset distance or ROI), further complicating analysis and robustness. We show pathways to reduce sensitivity to such variations due to estimation methods while increasing the desired sensitivity to patient-specific information such as breast density. Finally, we show that our results obtained from simulated DBT images are consistent with those from clinical DBT images.
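The quantization dependence this abstract describes is easy to reproduce with a minimal GLCM computation. The sketch below is a generic illustration, not the authors' code: the horizontal-offset-only GLCM, the random test image, and the chosen levels are all assumptions.

```python
import numpy as np

def glcm_contrast(img, levels, d=1):
    """Quantize an 8-bit image to `levels` gray levels, build the
    normalized GLCM for a horizontal pixel offset d, and return
    Haralick's contrast feature: sum_{i,j} (i - j)^2 * p(i, j)."""
    q = (img.astype(np.int64) * levels) // 256        # quantize to 0..levels-1
    pairs = (q[:, :-d].ravel(), q[:, d:].ravel())     # co-occurring pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, pairs, 1.0)                       # accumulate pair counts
    glcm /= glcm.sum()                                # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
c8, c128 = glcm_contrast(img, 8), glcm_contrast(img, 128)
# The same image yields very different contrast magnitudes at 8 vs 128
# levels, which is the disparity the proposed normalization targets.
```

For independent uniform pixels the contrast grows roughly with the square of the number of levels, so the two values differ by orders of magnitude even though the image is unchanged.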

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt-point generation to produce cloud masks that guide cross-modal feature matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel accuracy at manual check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.

30 pages, 4379 KiB  
Article
Cross-Platform Comparison of Generative Design Based on a Multi-Dimensional Cultural Gene Model of the Phoenix Pattern
by Yali Wang, Xinxiong Liu, Yan Gan, Yixiao Gong, Yuchen Xi and Lin Li
Appl. Sci. 2025, 15(15), 8170; https://doi.org/10.3390/app15158170 - 23 Jul 2025
Abstract
The rapid development of generative artificial intelligence has paved the way for a new approach to reproducing and intelligently generating traditional patterns digitally. This paper focuses on the traditional Chinese phoenix pattern and constructs a “Phoenix Pattern Multidimensional Cultural Gene Model” based on grounded theory. It summarises seven semantic dimensions covering composition pattern, pixel configuration, colour system, media technology, semantic implication, theme context, and application scenario, and divides them into explicit and implicit cultural genes. The study further proposes a control mechanism of “semantic label–prompt–image generation”, constructs a cross-platform prompt structure system suitable for Midjourney and Dreamina AI, and completes 28 groups of prompt combinations and six rounds of iterative experiments. The analysis of results from 64 user questionnaires and 10 expert ratings reveals that Dreamina AI excels in cultural semantic restoration and context recognition, whereas Midjourney has an advantage in composition coordination and aesthetic consistency. Overall, the study verifies the effectiveness of the cultural gene model in controlling AIGC generation and proposes a framework for generating innovative traditional patterns, providing a theoretical basis and practical support for the intelligent expression of cultural heritage.

18 pages, 33092 KiB  
Article
Yarn Color Measurement Method Based on Digital Photography
by Jinxing Liang, Guanghao Wu, Ke Yang, Jiangxiaotian Ma, Jihao Wang, Hang Luo, Xinrong Hu and Yong Liu
J. Imaging 2025, 11(8), 248; https://doi.org/10.3390/jimaging11080248 - 22 Jul 2025
Abstract
To overcome the complexity of yarn color measurement using spectrophotometry with yarn winding techniques, and to enhance consistency with human visual perception, a yarn color measurement method based on digital photography is proposed. This study employs a photographic colorimetry system to capture digital images of single yarns. The yarn and background are segmented using the K-means clustering algorithm, and the centerline of the yarn is extracted using a skeletonization algorithm. Spectral reconstruction and colorimetric principles are then applied to calculate the color values of pixels along the centerline. Considering the nonlinear characteristics of human brightness perception, the final yarn color is obtained through a nonlinear texture-adaptive weighted computation. The method is validated through psychophysical experiments using six yarns of different colors and compared with spectrophotometry and five other photographic measurement methods. Results indicate that among the seven yarn color measurement methods, including spectrophotometry, the proposed method—based on centerline extraction and nonlinear texture-adaptive weighting—yields results that most closely align with actual visual perception. Furthermore, among the six photographic measurement methods, the proposed method produces results most similar to those obtained using spectrophotometry. This study demonstrates the inconsistency between spectrophotometric measurements and human visual perception of yarn color and provides methodological support for developing visually consistent color measurement methods for textured textiles.
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

18 pages, 5460 KiB  
Article
New Perspectives on Digital Representation: The Case of the ‘Santa Casa de Misericórdia’ in São Carlos (Brazil)
by Cristiana Bartolomei, Luca Budriesi, Alfonso Ippolito, Davide Mezzino and Caterina Morganti
Buildings 2025, 15(14), 2502; https://doi.org/10.3390/buildings15142502 - 16 Jul 2025
Abstract
This research investigates the Italian architectural heritage in Brazil through the analysis of the ‘Santa Casa de Misericórdia’ hospital in São Carlos, in the state of São Paulo. As part of the KNOW.IT national project, the work aims to recover and digitally enhance Italian heritage abroad from the 19th and 20th centuries. The buildings analysed were either designed or built by Italian architects who emigrated to South America, or constructed using materials and techniques typical of Italian architecture of those years. The hospital, designed by the Italian architect Samuele Malfatti in 1891, was chosen for its historical value and its role in the urban context of the city of São Carlos, where it still performs its original function today. The study aims to create a digital archive with 3D models and two-dimensional graphical drawings. The methodology includes historical analysis, photogrammetric survey, and digital modelling using Agisoft Metashape and 3DF Zephyr software. A total of 636 images were processed, with the maximum resolution achieved in the models being 3526 × 2097 pixels. The results highlight the influence of Italian architecture on late 19th-century São Carlos and promote its virtual accessibility and wide-ranging knowledge.

19 pages, 6293 KiB  
Article
Restoring Anomalous Water Surface in DOM Product of UAV Remote Sensing Using Local Image Replacement
by Chunjie Wang, Ti Zhang, Liang Tao and Jiayuan Lin
Sensors 2025, 25(13), 4225; https://doi.org/10.3390/s25134225 - 7 Jul 2025
Abstract
In the production of a digital orthophoto map (DOM) from unmanned aerial vehicle (UAV)-acquired overlapping images, anomalies such as texture stretching or data holes frequently occur in water areas due to the lack of distinctive textural features. These anomalies seriously affect the visual quality and data integrity of the resulting DOMs. In this study, we eliminated the water surface anomalies in an example DOM by replacing the entire water area with an intact one clipped from a single UAV image. The water surface extent and boundary in the image were first precisely extracted using the multisource seed filling algorithm and a contour-finding algorithm. Next, tie points were selected from the boundaries of the normal and anomalous water surfaces and used to spatially align them via an affine plane coordinate transformation. Finally, the normal water surface was overlaid onto the DOM to replace the corresponding anomalous water surface. The restored water area had a good visual effect in terms of spectral consistency, and the texture transition with the surrounding environment was sufficiently natural. According to the standard deviations and mean values of RGB pixels, the quality of the restored DOM was greatly improved compared with the original. These results demonstrate that the proposed method performs well in restoring abnormal water surfaces in a DOM, especially where the water surface area is relatively small and can be contained in a single UAV image.
(This article belongs to the Special Issue Remote Sensing and UAV Technologies for Environmental Monitoring)
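The tie-point alignment step described above can be sketched as a least-squares affine fit. This is a generic illustration under hypothetical coordinates, not the authors' implementation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2-D affine transform mapping src
    points onto dst points; returns a 2x3 matrix [A | t]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])    # n x 3 design matrix
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return coef.T                                   # 2 x 3: [A | t]

def apply_affine(M, pts):
    """Map points through the fitted affine transform."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

# Hypothetical tie points: boundaries related by scale 2 plus shift (5, 7)
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, 7), (25, 7), (5, 27), (25, 27)]
M = fit_affine(src, dst)
mapped = apply_affine(M, [(4, 3)])   # -> approximately (13, 13)
```

With three or more non-collinear tie points the fit is exact for a true affine relation; extra points average out boundary-extraction noise.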

26 pages, 8232 KiB  
Article
A CML-ECA Chaotic Image Encryption System Based on Multi-Source Perturbation Mechanism and Dynamic DNA Encoding
by Xin Xie, Kun Zhang, Bing Zheng, Hao Ning, Yu Zhou, Qi Peng and Zhengyu Li
Symmetry 2025, 17(7), 1042; https://doi.org/10.3390/sym17071042 - 2 Jul 2025
Abstract
To meet the growing demand for secure and reliable image protection in digital communication, this paper proposes a novel image encryption framework that addresses the challenges of high plaintext sensitivity, resistance to statistical attacks, and key security. The method combines a two-dimensional dynamically coupled map lattice (2D DCML) with elementary cellular automata (ECA) to construct a heterogeneous chaotic system with strong spatiotemporal complexity. To further enhance nonlinearity and diffusion, a multi-source perturbation mechanism and adaptive DNA encoding strategy are introduced. These components work together to obscure the image structure, pixel correlations, and histogram characteristics. By embedding spatial and temporal symmetry into the coupled lattice evolution and perturbation processes, the proposed method ensures a more uniform and balanced transformation of image data. Meanwhile, the method enhances the confusion and diffusion effects by utilizing the principle of symmetric perturbation, thereby improving the overall security of the system. Experimental evaluations on standard images demonstrate that the proposed scheme achieves high encryption quality in terms of histogram uniformity, information entropy, NPCR, UACI, and key sensitivity tests. It also shows strong resistance to chosen plaintext attacks, confirming its robustness for secure image transmission.
(This article belongs to the Section Computer)
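The NPCR and UACI scores cited in this evaluation are standard differential-attack metrics with simple definitions. A minimal sketch follows; the two random arrays stand in for a pair of cipher images and are not data from the paper:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR (percentage of differing pixels) and UACI (mean absolute
    intensity change relative to 255) between two 8-bit cipher images."""
    c1, c2 = c1.astype(np.int64), c2.astype(np.int64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (128, 128))   # stand-ins for two cipher images
b = rng.integers(0, 256, (128, 128))
npcr, uaci = npcr_uaci(a, b)
# An ideal cipher approaches NPCR ~ 99.6% and UACI ~ 33.5%
```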

16 pages, 2521 KiB  
Article
A Multimodal CMOS Readout IC for SWIR Image Sensors with Dual-Mode BDI/DI Pixels and Column-Parallel Two-Step Single-Slope ADC
by Yuyan Zhang, Zhifeng Chen, Yaguang Yang, Huangwei Chen, Jie Gao, Zhichao Zhang and Chengying Chen
Micromachines 2025, 16(7), 773; https://doi.org/10.3390/mi16070773 - 30 Jun 2025
Abstract
This paper proposes a dual-mode CMOS analog front-end (AFE) circuit for short-wave infrared (SWIR) image sensors, which integrates a hybrid readout circuit (ROIC) and a 12-bit two-step single-slope analog-to-digital converter (TS-SS ADC). The ROIC dynamically switches between buffered-direct-injection (BDI) and direct-injection (DI) modes, thus balancing injection efficiency against power consumption. While the DI structure offers simplicity and low power, it suffers from unstable biasing and reduced injection efficiency under high background currents. Conversely, the BDI structure enhances injection efficiency and bias stability via an input buffer but incurs higher power consumption. To address this trade-off, a dual-mode injection architecture with mode-switching transistors is implemented. Mode selection is executed in-pixel via a low-leakage transmission gate and coordinated by the column timing controller, enabling low-current pixels to operate in low-noise BDI mode, whereas high-current pixels revert to the low-power DI mode. The TS-SS ADC employs a four-terminal comparator and dynamic reference voltage compensation to mitigate charge leakage and offset, which improves signal-to-noise ratio (SNR) and linearity. The prototype occupies 2.1 mm × 2.88 mm in a 0.18 µm CMOS process and serves a 64 × 64 array. The AFE achieves a dynamic range of 75.58 dB, noise of 249.42 μV, and 81.04 mW power consumption.

27 pages, 988 KiB  
Article
A Comparative Study of Descriptors for Quadrant-Convexity
by Péter Balázs and Sara Brunetti
Mathematics 2025, 13(13), 2114; https://doi.org/10.3390/math13132114 - 27 Jun 2025
Abstract
Many different descriptors have been proposed to measure the convexity of digital shapes. Most of these are based on the definition of continuous convexity and exhibit both advantages and drawbacks when applied in the digital domain. In contrast, within the field of Discrete Tomography, a special type of convexity—called Quadrant-convexity—has been introduced. This form of convexity naturally arises from the pixel-based representation of digital shapes and demonstrates favorable properties for reconstruction from projections. In this paper, we present an overview of using Quadrant-convexity as the basis for designing shape descriptors. We explore two different approaches: the first is based on the geometric features of Quadrant-convex objects, while the second relies on the identification of Quadrant-concave pixels. For both approaches, we conduct extensive experiments to evaluate the strengths and limitations of the proposed descriptors. In particular, we show that all our descriptors achieve an average accuracy of approximately 95% to 97.5% on noisy retina images for a binary classification task. Furthermore, in a multiclass classification setting using a dataset of desmids, all our descriptors outperform traditional low-level shape descriptors, achieving an accuracy of 76.74%.

20 pages, 1616 KiB  
Article
Application of Fourier-Galois Spectra Analysers for Rotating Image Analysis
by Dina Shaltykova, Kaisarali Kadyrzhan, Ibragim Suleimenov, Gaini Seitenova and Eldar Kopishev
Polymers 2025, 17(13), 1791; https://doi.org/10.3390/polym17131791 - 27 Jun 2025
Abstract
It is shown that the analysis of rotating circular images containing n = 2^p − 1 pixels (where p is an integer and the pixel states are described by binary-logic variables) is best carried out using digital spectra obtained from the Fourier–Galois transformation, with the transformation basis formed by classical algebraic extensions of the base Galois field GF(2) corresponding to binary logic. The use of Fourier–Galois spectra makes it possible to reduce the analysis of a rotating image to that of a still image through the digital logarithm operation. The proposed approach is also of interest for improving equipment designed to study the rheological properties of liquids, in particular polymer solutions in which non-trivial branched structures form. Here it provides an opportunity to modernize the classical Stokes viscosity measurement method for the study of mechanochemical reactions. The design of a viscometer implementing the proposed approach has been developed: a digital image is formed by a set of optoelectronic pairs that track the circular motion of a ball in a cuvette driven by rotation, and the electronic circuits are based on a Fourier–Galois spectrum analyser and the digital logarithm operation. Possibilities of generalizing the proposed approach to other types of rotating images are also considered.

19 pages, 19052 KiB  
Article
An Image-Free Single-Pixel Detection System for Adaptive Multi-Target Tracking
by Yicheng Peng, Jianing Yang, Yuhao Feng, Shijie Yu, Fei Xing and Ting Sun
Sensors 2025, 25(13), 3879; https://doi.org/10.3390/s25133879 - 21 Jun 2025
Abstract
Conventional vision-based sensors face limitations such as low update rates, restricted applicability, and insufficient robustness in dynamic environments with complex object motions. Single-pixel tracking systems offer high efficiency and minimal data redundancy by directly acquiring target positions without full-image reconstruction. This paper proposes a single-pixel detection system for adaptive multi-target tracking based on the geometric moment and the exponentially weighted moving average (EWMA). The proposed system leverages geometric moments for high-speed target localization, requiring merely 3N measurements to resolve centroids for N targets. Furthermore, the output values of the system are used to continuously update the weight parameters, enabling adaptation to varying motion patterns and ensuring consistent tracking stability. Experimental validation using a digital micromirror device (DMD) operating at 17.857 kHz demonstrates a theoretical tracking update rate of 1984 Hz for three objects. Quantitative evaluations under 1920 × 1080 pixel resolution reveal a normalized root mean square error (NRMSE) of 0.00785, confirming the method’s capability for robust multi-target tracking in practical applications.
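The two ingredients named above, geometric-moment centroiding and the EWMA update, reduce to a few lines. This is a conceptual sketch on a full image array; the actual system recovers the moments from single-pixel measurements without reconstructing the image, and the blob, its position, and the smoothing factor here are all illustrative assumptions:

```python
import numpy as np

def centroid_from_moments(img):
    """Target centroid from raw geometric moments:
    cx = m10 / m00, cy = m01 / m00."""
    y, x = np.indices(img.shape)
    m00 = img.sum()
    return (x * img).sum() / m00, (y * img).sum() / m00

def ewma(prev, new, alpha):
    """Exponentially weighted moving average update of a tracked value."""
    return alpha * new + (1.0 - alpha) * prev

img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0                      # one bright 3x3 target
cx, cy = centroid_from_moments(img)      # -> (4.0, 3.0)
smoothed = ewma(prev=3.8, new=cx, alpha=0.25)
```

The EWMA weight alpha trades responsiveness against noise rejection, which is the parameter the paper adapts online.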

27 pages, 11296 KiB  
Article
Implementation of MS Circle Map in Digital Image Encryption
by Ichsani Mursidah, Suryadi MT, Sarifuddin Madenda and Suryadi Harmanto
Appl. Sci. 2025, 15(13), 6998; https://doi.org/10.3390/app15136998 - 21 Jun 2025
Abstract
Digital data protection is crucial to prevent unauthorized modifications and tampering. A secure, reliable, and efficient encryption technique is needed to safeguard digital images. This paper proposes a novel MS Circle Map-based image encryption algorithm, integrating chaotic dynamics for enhanced security. The encryption process begins by transforming the plain image matrix into a row vector. A secret key is then used as the initial condition for the MS Circle Map to generate a chaotic keystream. The encryption is performed through pixel diffusion using an XOR operation between the pixel intensity vector and the keystream, ensuring high randomness. The proposed method features a large key space, high key sensitivity, and strong resistance to brute force, statistical, and differential attacks. Performance evaluation through key space analysis, initial value sensitivity, entropy, correlation coefficient, NPCR, and UACI shows that the encrypted image using MS Circle Map has strong security properties. Meanwhile, the quality test results based on MSE and PSNR values confirm that the decrypted image is exactly the same as the original image.
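The keystream-XOR diffusion step can be illustrated with the classical circle map standing in for the authors' MS Circle Map variant (the map parameters, the discarded transient length, and the test image are assumptions, not values from the paper):

```python
import numpy as np

def circle_map_keystream(x0, n, omega=0.35, k=4.2):
    """Byte keystream from iterating the classical circle map
    x_{t+1} = (x_t + omega - (k / (2*pi)) * sin(2*pi*x_t)) mod 1,
    a stand-in for the paper's MS Circle Map, quantized to 8 bits."""
    xs = np.empty(n)
    x = x0
    for t in range(n):
        x = (x + omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * x)) % 1.0
        xs[t] = x
    return (xs * 256).astype(np.uint8)

def xor_cipher(pixels, key, n_discard=100):
    """Diffuse pixel intensities by XOR with the chaotic keystream;
    the same call decrypts, since XOR is its own inverse."""
    flat = pixels.ravel()
    ks = circle_map_keystream(key, flat.size + n_discard)[n_discard:]
    return (flat ^ ks).reshape(pixels.shape)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
enc = xor_cipher(img, key=0.123)
dec = xor_cipher(enc, key=0.123)
```

Discarding an initial transient is a common precaution so the keystream does not depend on start-up behavior of the map.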

16 pages, 10517 KiB  
Article
Beyond the Light Meter: A Case-Study on HDR-Derived Illuminance Calculations Using a Proxy-Lambertian Surface
by Jackson Hanus, Arpan Guha and Abdourahim Barry
Buildings 2025, 15(12), 2131; https://doi.org/10.3390/buildings15122131 - 19 Jun 2025
Abstract
Accurate illuminance measurements are critical in assessing lighting quality during post-occupancy evaluations, but traditional methods are labor-intensive and time-consuming. This pilot study demonstrates an alternative that combines high dynamic range (HDR) imaging with a low-cost proxy-Lambertian surface to transform image luminance into spatial illuminance. Seven readily available materials were screened for luminance uniformity; the specimen with minimal deviation from Lambertian behavior (≈2%) was adopted as the pseudo-Lambertian surface. Calibrated HDR images of a fluorescent-lit university classroom were acquired with a digital single-lens reflex (DSLR) camera and processed in Photosphere, after which pixel luminance was converted to illuminance via a Lambertian approximation. Predicted illuminance values were benchmarked against spectral illuminance meter readings at 42 locations on horizontal work planes, vertical presentation surfaces, and the circulation floor. The average errors were 5.20% for desks and 6.40% for the whiteboard—well below the 10% acceptance threshold for design validation—while the projector-screen and floor measurements exhibited slightly higher discrepancies of 9.90% and 14.40%, respectively. The proposed workflow significantly reduces the cost, complexity, and duration of lighting assessments, presenting a promising tool for streamlined, accurate post-occupancy evaluations. Future work may focus on refining this approach for diverse lighting conditions and complex material interactions.
(This article belongs to the Special Issue Lighting in Buildings—2nd Edition)
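The luminance-to-illuminance conversion for an ideal Lambertian surface rests on the relation E = πL/ρ, where L is the measured luminance (cd/m²) and ρ the surface reflectance. A minimal sketch, with hypothetical values rather than readings from the study:

```python
import math

def illuminance_from_luminance(L, rho):
    """Illuminance (lux) incident on a Lambertian surface of
    reflectance rho, from its measured luminance L (cd/m^2):
    E = pi * L / rho."""
    return math.pi * L / rho

# Hypothetical reading: a rho = 0.18 proxy surface at 28.6 cd/m^2
E = illuminance_from_luminance(28.6, 0.18)
```

The proxy surface's departure from true Lambertian behavior (≈2% here) propagates directly into E, which is why the material screening step matters.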

25 pages, 4277 KiB  
Article
Decolorization with Warmth–Coolness Adjustment in an Opponent and Complementary Color System
by Oscar Sanchez-Cesteros and Mariano Rincon
J. Imaging 2025, 11(6), 199; https://doi.org/10.3390/jimaging11060199 - 18 Jun 2025
Abstract
Creating grayscale images from a color reality has been an inherent human practice since ancient times, but it became a technological challenge with the advent of the first black-and-white televisions and digital image processing. Decolorization is a process that projects visual information from a three-dimensional feature space to a one-dimensional space, thus reducing the dimensionality of the image while minimizing the loss of information. To achieve this, various strategies have been developed, including the application of color channel weights and the analysis of local and global image contrast, but there is no universal solution. In this paper, we propose a bio-inspired approach that combines findings from neuroscience on the architecture of the visual system and color coding with evidence from studies in the psychology of art. The goal is to simplify the decolorization process and facilitate its control through color-related concepts that are easily understandable to humans. This new method organizes colors in a scale that links activity on the retina with a system of opponent and complementary channels, thus allowing the adjustment of the perception of warmth and coolness in the image. The results show an improvement in chromatic contrast, especially in the warmth and coolness categories, as well as an enhanced ability to preserve subtle contrasts, outperforming other approaches in the Ishihara test used in color blindness detection. In addition, the method offers a computational advantage by operating directly at the pixel level.
(This article belongs to the Special Issue Color in Image Processing and Computer Vision)
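A channel-weighted grayscale projection with a warmth control can be sketched as follows. This is a deliberately simplified stand-in, not the paper's opponent-and-complementary-channel method; the weighting scheme and its parameters are assumptions:

```python
import numpy as np

def decolorize(rgb, warmth=0.0):
    """Project RGB to one gray channel with an adjustable warm/cool
    bias: warmth > 0 raises the weight of the warm (R) channel at
    the expense of the cool (B) channel. Hypothetical weighting,
    not the paper's opponent-channel scale."""
    w = np.array([0.299 + warmth, 0.587, 0.114 - warmth])  # weights still sum to 1
    return np.asarray(rgb, dtype=float) @ w

red = [255.0, 0.0, 0.0]
neutral, warm = decolorize(red, 0.0), decolorize(red, 0.1)
# A warm pixel maps to a brighter gray as warmth increases
```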

35 pages, 8283 KiB  
Article
PIABC: Point Spread Function Interpolative Aberration Correction
by Chanhyeong Cho, Chanyoung Kim and Sanghoon Sull
Sensors 2025, 25(12), 3773; https://doi.org/10.3390/s25123773 - 17 Jun 2025
Abstract
Image quality in high-resolution digital single-lens reflex (DSLR) systems is degraded by Complementary Metal-Oxide-Semiconductor (CMOS) sensor noise and optical imperfections. Sensor noise becomes pronounced under high-ISO (International Organization for Standardization) settings, while optical aberrations such as blur and chromatic fringing distort the signal. Optical and sensor-level noise are distinct and hard to separate, but prior studies suggest that improving optical fidelity can suppress or mask sensor noise. Building on this understanding, we introduce a framework that utilizes densely interpolated Point Spread Functions (PSFs) to recover high-fidelity images. The process begins by simulating Gaussian-based PSFs as pixel-wise chromatic and spatial distortions derived from real degraded images. These PSFs are then encoded into a latent space to enhance their features and used to generate refined PSFs via similarity-weighted interpolation at each target position. The interpolated PSFs are applied through Wiener filtering, followed by residual correction, to restore images with improved structural fidelity and perceptual quality. We compare our method—based on pixel-wise physical correction with densely interpolated PSFs at pre-processing—with post-processing networks, including deformable convolutional neural networks (CNNs) that enhance image quality without modeling degradation. Evaluations on DIV2K and RealSR-V3 confirm that our strategy not only enhances structural restoration but also more effectively suppresses sensor-induced artifacts, demonstrating the benefit of explicit physical priors for perceptual fidelity.
(This article belongs to the Special Issue Sensors for Pattern Recognition and Computer Vision)
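The Wiener-filtering restoration step, given a PSF, can be sketched in the frequency domain. This is a textbook Wiener filter with an assumed noise-to-signal ratio and a synthetic test image, not the paper's interpolated-PSF pipeline:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener filter H* / (|H|^2 + NSR), with the
    PSF zero-padded to the image size before its transform."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                     # synthetic sharp target
psf = np.full((3, 3), 1.0 / 9.0)            # box blur as a stand-in PSF
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-9)  # near-noiseless case
```

The NSR term regularizes frequencies where the PSF response is weak; with real sensor noise it would be set much larger than the near-zero value used in this clean synthetic case.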
