Search Results (24)

Search Parameters:
Keywords = over-exposure correction

29 pages, 4764 KB  
Article
A Two-Level Illumination Correction Network for Digital Meter Reading Recognition in Non-Uniform Low-Light Conditions
by Haoning Fu, Zhiwei Xie, Wenzhu Jiang, Xingjiang Ma and Dongying Yang
J. Imaging 2026, 12(4), 146; https://doi.org/10.3390/jimaging12040146 - 25 Mar 2026
Viewed by 296
Abstract
The automatic reading recognition of digital instruments is crucial for achieving metering automation and intelligent inspection. However, in non-standardized industrial environments, the masking effect caused by the coupling of non-uniform low-light conditions and the reflective surfaces of instrument panels severely degrades the displayed information, significantly limiting the recognition performance. Conventional image processing methods, while aiming to restore the imaging quality of instrument panels through low-light enhancement, inevitably introduce overexposure and indiscriminately amplify background noise during this process. To address the two key challenges of illumination recovery and noise suppression in the process of restoring panel image quality under non-uniform low-light conditions, this paper proposes a coarse-to-fine cascaded perception framework (CFCP). First, a lightweight YOLOv10 detector is employed to coarsely localize the meter reading region under non-uniform illumination conditions. Second, an Adaptive Illumination Correction Module (AICM) is designed to decouple and correct the illumination component at the pixel level, effectively restoring details in dark areas. Then, an Illumination-invariant Feature Perception Module (IFPM) is embedded at the feature level to dynamically perceive illumination-invariant features and filter out noise interference. Finally, the refined detection results are fed into a lightweight sequence recognition network to obtain the final meter readings. Experiments on a self-built industrial digital instrument dataset show that the proposed method achieves 93.2% recognition accuracy, with 17.1 ms latency and only 7.9 M parameters. Full article
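The pixel-level illumination decoupling performed by AICM is not spelled out in the abstract; a minimal Retinex-style sketch of the general idea (blur-based illumination estimate, hypothetical parameters):

```python
import numpy as np

def box_blur(img, k=15):
    # Box filter via an integral image: a cheap illumination estimate.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def correct_illumination(img, gamma=0.6, eps=1e-3):
    # Retinex-style decoupling I = R * L: estimate the illumination map L
    # with a blur, divide it out, then re-light with a gamma-compressed L
    # so dark panel regions are lifted without blowing out bright ones.
    L = np.clip(box_blur(img), eps, 1.0)
    R = img / L                      # reflectance-like detail component
    return np.clip(R * L ** gamma, 0.0, 1.0)
```

On a uniformly dark frame this raises brightness monotonically while leaving the (trivial) reflectance untouched; the learned module in the paper replaces both the blur and the fixed gamma.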
(This article belongs to the Special Issue AI-Driven Image and Video Understanding)

33 pages, 2216 KB  
Article
Stabilizing Defect Visibility Under Overexposure in Fringe-Based Imaging via γ Nonlinearity Analysis
by Xiaolong Ma, Xiaofei Wang, Ruizhan Zhai, Zhongqing Jia, Wei Zhang, Bing Zhao and Chen Guan
Sensors 2026, 26(7), 2032; https://doi.org/10.3390/s26072032 - 25 Mar 2026
Viewed by 326
Abstract
Phase-shifting fringe projection (PSFP) is widely used in industrial inspection and three-dimensional measurement, where γ nonlinearity of the projector–camera system is traditionally treated as a phase-error source to be calibrated or compensated. In this work, γ nonlinearity is reinterpreted from an imaging perspective and shown to act as a statistical distortion mechanism that reshapes modulation stability, overexposure behavior, and defect saliency in fringe-based imaging. Building on the intrinsic DC–AC decomposition of phase-shifting demodulation, we analyze how γ nonlinearity interacts with fringe modulation and frequency-selective transfer. An analytical model reveals that γ nonlinearity simultaneously suppresses the fringe fundamental and introduces harmonic leakage, leading to systematic compression of mean modulation contrast in high-brightness regions. As a result, γ correction does not necessarily enhance mean-based defect contrast and may even reduce it, contrary to common intuition. We further demonstrate that the primary benefit of γ correction lies in statistical stabilization rather than contrast amplification. By introducing modulation-domain saliency formulations and a frequency-domain harmonic energy ratio, a physical link is established between γ nonlinearity, overexposure, and defect separability. Controlled experiments on highly reflective sheet-metal specimens confirm that while mean-contrast- and SNR-based saliency metrics often decrease after γ correction, separability-based metrics consistently improve due to reduced nonlinear- and saturation-induced variance. Cross-channel and cross-condition analyses further show that modulation and reflectance images respond differently to γ correction, yet metric-level separability exhibits consistent improvement across channels. 
These results clarify the true role of γ correction in fringe-based inspection and provide theoretical insight and practical guidance for robust defect imaging under nonlinear and near-overexposure conditions. Full article
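The DC-AC decomposition underlying phase-shifting demodulation, and the way saturation compresses the recovered modulation, can be illustrated with a minimal sketch (4-step demodulation with hard sensor clipping standing in for overexposure; the paper's γ model is omitted):

```python
import numpy as np

def demodulate(frames):
    # N-step phase-shifting demodulation: DC term A and AC modulation B
    # from frames I_k = A + B*cos(phi + 2*pi*k/N).
    N = len(frames)
    s = sum(f * np.sin(2 * np.pi * k / N) for k, f in enumerate(frames))
    c = sum(f * np.cos(2 * np.pi * k / N) for k, f in enumerate(frames))
    A = sum(frames) / N
    B = (2.0 / N) * np.hypot(s, c)
    return A, B

# A bright fringe pattern that saturates: A + B > 1 clips at the sensor.
phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)   # per-pixel phase
A_true, B_true, N = 0.8, 0.4, 4
ideal = [A_true + B_true * np.cos(phi + 2 * np.pi * k / N) for k in range(N)]
clipped = [np.clip(f, 0.0, 1.0) for f in ideal]

_, B_ideal = demodulate(ideal)     # exact: recovers B_true everywhere
_, B_clip = demodulate(clipped)    # saturation suppresses the fundamental
```

Clipping the fringe peaks removes energy from the first harmonic, so the recovered modulation B drops below its true value in overexposed regions, the effect the paper analyzes.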
(This article belongs to the Section Sensing and Imaging)

25 pages, 1948 KB  
Article
VDTAR-Net: A Cooperative Dual-Path Convolutional Neural Network–Transformer Network for Robust Highlight Reflection Segmentation
by Qianlong Zhang and Yue Zeng
Computers 2026, 15(3), 168; https://doi.org/10.3390/computers15030168 - 4 Mar 2026
Viewed by 351
Abstract
In medical endoscopic imaging, specular reflection (SR) frequently leads to local overexposure, obscuring essential tissue information and complicating computer-aided diagnosis (CAD). Traditional convolutional neural networks (CNNs) face difficulties in modeling global illumination phenomena due to their biased local receptive fields and the inherent “object assumption.” Conversely, pure transformer models often lose high-frequency boundary details and incur substantial computational costs. To tackle these challenges, this paper introduces VDTAR-Net, a specialized framework adapted to address the unique optical characteristics of specular reflections. Building upon hybrid architectures, our contribution focuses on two core mechanisms: (1) a Cross-architecture Fusion Module (CFM) that enables deep, bidirectional information flow, allowing the Transformer’s global illumination modeling to continuously correct the CNN’s local texture biases; and (2) a Reflective-Aware Module (RAM), which explicitly integrates the physical prior of high-intensity saturation into the attention mechanism. This task-specific design significantly enhances sensitivity to boundary details in overexposed regions. We also created the first large-scale, expert-labeled cervical white light segmentation dataset, Cervix-WL-900. High-quality ground truth labels were generated through rigorous double-blind annotation and arbitration by senior experts. Experimental results show that VDTAR-Net achieves a Dice score of 92.56% and a mean Intersection over Union (mIoU) score of 87.31% on Cervix-WL-900, demonstrating superior performance compared to methods like U-Net, DeepLabv3+, SegFormer, and PSPNet. Ablation studies further confirm the substantial contributions of dual-path collaboration, CFM deep fusion, and RAM task-specific priors. 
VDTAR-Net provides a robust baseline for precise highlight segmentation, laying a foundation for subsequent image quality assessment, restoration, and feature decoupling in diagnostic models. Full article
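How a saturation prior might be folded into an attention mechanism, as RAM does, can be sketched abstractly (a hypothetical toy version; the module's actual design is not given in the abstract):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def saturation_biased_attention(q, k, v, intensity, thresh=0.9, bias=2.0):
    # Hypothetical sketch: add a positive logit bias toward tokens whose
    # intensity exceeds a saturation threshold, so attention is drawn to
    # overexposed (specular) regions and their boundaries.
    logits = q @ k.T / np.sqrt(q.shape[-1])
    logits = logits + bias * (intensity > thresh)  # broadcast over queries
    return softmax(logits, axis=-1) @ v
```

With an identity value matrix the output rows are the attention weights themselves, which makes the effect of the prior easy to inspect.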
(This article belongs to the Special Issue AI in Bioinformatics)

15 pages, 121635 KB  
Article
Deep Guided Exposure Correction with Knowledge Distillation
by Songrong Liu and Tao Zhang
Sensors 2025, 25(24), 7606; https://doi.org/10.3390/s25247606 - 15 Dec 2025
Viewed by 537
Abstract
Images captured with unreasonable exposure greatly reduce visual quality. Exposure problems can be categorized as follows: (i) over-exposure, i.e., bright regions with lost detail caused by overly long exposure; (ii) under-exposure, i.e., dark regions drowned in noise caused by overly short exposure. Most prior works handle only over- or under-exposure in the sRGB domain and ignore prior knowledge of channel information. In this paper, we propose a Deep Guided network for exposure correction in the RAW domain with Knowledge Distillation (denoted as DGKD), which solves both problems jointly. Firstly, according to color sensitivity, we employ the blue/red channels and the green channel as guidance information for over- and under-exposure correction, respectively. Secondly, to handle the two distinct problems in a unified network, we first train the over- and under-exposure correction networks individually and then distill their knowledge into one deep guided network. Experimental results show that the proposed method outperforms state-of-the-art methods in both quantitative metrics and visual quality. Specifically, the proposed method attained a peak signal-to-noise ratio of 24.653 dB and a structural similarity index of 0.8182 on the collected RAW image exposure correction dataset. Full article

20 pages, 3397 KB  
Article
Image Enhancement Algorithm and FPGA Implementation for High-Sensitivity Low-Light Detection Based on Carbon-Based HGFET
by Yi Cao, Yuyan Zhang, Zhifeng Chen, Dongyi Lin, Chengying Chen, Liming Chen and Jianhua Jiang
Electron. Mater. 2025, 6(4), 23; https://doi.org/10.3390/electronicmat6040023 - 2 Dec 2025
Viewed by 1174
Abstract
To address the issues of insufficient responsivity and low imaging contrast of carbon-based HGFET high-sensitivity short-wave infrared (SWIR) detectors under low-light conditions, this paper proposes a high-sensitivity and high-contrast image enhancement algorithm for low-light detection, with FPGA-based hardware verification. The proposed algorithm establishes a multi-stage cooperative enhancement framework targeting key challenges such as low signal-to-noise ratio (SNR), high dark-state noise, and weak target extraction. Unlike traditional direct enhancement methods, the proposed approach first performs defective row-column correction and background noise separation based on dark-state data, which provides a clean foundation for signal reconstruction. Furthermore, an adaptive gamma correction mechanism based on image maximum value is introduced to avoid unnecessary nonlinear transformations in high-contrast regions. During the contrast enhancement stage, an exposure-constrained adaptive histogram equalization strategy is adopted to effectively suppress noise amplification and saturation in low-light scenes. Finally, an innovative dual-mode threshold selection method based on image variance is proposed, which can dynamically integrate the OTSU algorithm with statistical moment analysis to ensure robust background noise separation across both high- and low-contrast scenarios. Experimental results demonstrate that the proposed algorithm significantly improves target contrast in infrared images while preventing detail loss due to overexposure. Under microwatt-level laser power, background noise is effectively suppressed, and both imaging quality and weak target detection capability are substantially enhanced. Full article
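The adaptive gamma step keyed to the image maximum can be sketched as follows (a minimal illustration with assumed parameter values, not the paper's exact mapping):

```python
import numpy as np

def adaptive_gamma(img, gamma_lo=0.5):
    # Hypothetical sketch: derive gamma from the image maximum so that
    # already-bright, high-contrast frames receive little or no nonlinear
    # stretching, while dim frames are lifted toward gamma_lo.
    m = float(img.max())
    gamma = gamma_lo + (1.0 - gamma_lo) * m   # m -> 1 gives gamma -> 1
    return np.clip(img, 0.0, 1.0) ** gamma
```

This realizes the stated design goal of avoiding unnecessary nonlinear transformation in high-contrast regions: a frame whose maximum is 1.0 passes through unchanged.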

16 pages, 7343 KB  
Article
Accelerated Super-Resolution Reconstruction for Structured Illumination Microscopy Integrated with Low-Light Optimization
by Caihong Huang, Dingrong Yi and Lichun Zhou
Micromachines 2025, 16(9), 1020; https://doi.org/10.3390/mi16091020 - 3 Sep 2025
Viewed by 2456
Abstract
Structured illumination microscopy (SIM) with π/2 phase-shift modulation traditionally relies on frequency-domain computation, which greatly limits processing efficiency. In addition, the illumination regime inherent in structured illumination techniques often results in poor visual quality of reconstructed images. To address these dual challenges, this study introduces DM-SIM-LLIE (Differential Low-Light Image Enhancement SIM), a novel framework that integrates two synergistic innovations. First, the study pioneers a spatial-domain computational paradigm for π/2 phase-shift SIM reconstruction. Through system differentiation, mathematical derivation, and algorithm simplification, an optimized spatial-domain model is established. Second, an adaptive local overexposure correction strategy is developed, combined with a zero-shot learning deep learning algorithm, RUAS, to enhance the image quality of structured light reconstructed images. Experimental validation using specimens such as fluorescent microspheres and bovine pulmonary artery endothelial cells demonstrates the advantages of this approach: compared with traditional frequency-domain methods, the reconstruction speed is accelerated by five times while maintaining equivalent lateral resolution and excellent axial resolution. The image quality of the low-light enhancement algorithm after local overexposure correction is superior to existing methods. These advances significantly increase the application potential of SIM technology in time-sensitive biomedical imaging scenarios that require high spatiotemporal resolution. Full article
(This article belongs to the Special Issue Advanced Biomaterials, Biodevices, and Their Application)

22 pages, 8901 KB  
Article
D3Fusion: Decomposition–Disentanglement–Dynamic Compensation Framework for Infrared-Visible Image Fusion in Extreme Low-Light
by Wansi Yang, Yi Liu and Xiaotian Chen
Appl. Sci. 2025, 15(16), 8918; https://doi.org/10.3390/app15168918 - 13 Aug 2025
Cited by 2 | Viewed by 1403
Abstract
Infrared-visible image fusion quality is critical for nighttime perception in autonomous driving and surveillance but suffers severe degradation under extreme low-light conditions, including irreversible texture loss in visible images, thermal boundary diffusion artifacts, and overexposure under dynamic non-uniform illumination. To address these challenges, a Decomposition–Disentanglement–Dynamic Compensation framework, D3Fusion, is proposed. Firstly, a Retinex-inspired Decomposition Illumination Net (DIN) decomposes inputs into enhanced images and degradative illumination maps for joint low-light recovery. Secondly, an illumination-guided encoder and a multi-scale differential compensation decoder dynamically balance cross-modal features. Finally, a progressive three-stage training paradigm from illumination correction through feature disentanglement to adaptive fusion resolves optimization conflicts. Compared to State-of-the-Art methods, on the LLVIP, TNO, MSRS, and RoadScene datasets, D3Fusion achieves an average improvement of 1.59% in standard deviation (SD), 6.9% in spatial frequency (SF), 2.59% in edge intensity (EI), and 1.99% in visual information fidelity (VIF), demonstrating superior performance in extreme low-light scenarios. The framework effectively suppresses thermal diffusion artifacts while mitigating exposure imbalance, adaptively brightening scenes while preserving texture details in shadowed regions. This significantly improves fusion quality for nighttime images by enhancing salient information, establishing a robust solution for multimodal perception under illumination-critical conditions. Full article
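The spatial frequency (SF) metric reported above is a standard fusion-quality proxy and is straightforward to compute; a small sketch:

```python
import numpy as np

def spatial_frequency(img):
    # SF = sqrt(RF^2 + CF^2): RMS of horizontal (row) and vertical (column)
    # first differences; higher SF indicates more edge/texture detail.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.hypot(rf, cf))
```

A flat image scores 0, while a checkerboard (alternating 0/1 pixels) scores sqrt(2), the maximum for unit-range images.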

21 pages, 5260 KB  
Article
LapECNet: Laplacian Pyramid Networks for Image Exposure Correction
by Yongchang Li and Jing Jiang
Appl. Sci. 2025, 15(16), 8840; https://doi.org/10.3390/app15168840 - 11 Aug 2025
Viewed by 1758
Abstract
Images captured under complex lighting conditions often suffer from local under-/overexposure and detail loss. Existing methods typically process illumination and texture information in a mixed manner, making it difficult to simultaneously achieve precise exposure adjustment and preservation of detail. To address this challenge, we propose LapECNet, an enhanced Laplacian pyramid network architecture for image exposure correction and detail reconstruction. Specifically, it decomposes the input image into the frequency bands of a Laplacian pyramid, enabling separate handling of illumination adjustment and detail enhancement. The framework first decomposes the image into three feature levels. At each level, we introduce a feature enhancement module that adaptively processes image features across frequency bands using spatial and channel attention mechanisms. After enhancing the features at each level, we further propose a dynamic aggregation module that learns adaptive weights to hierarchically fuse multi-scale features, achieving context-aware recombination of the enhanced features. Extensive experiments on the public MSEC benchmark demonstrated that our method improved PSNR by 15.4% and SSIM by 7.2% over previous methods. On the LCDP dataset, our method improved PSNR by 7.2% and SSIM by 13.9% over previous methods. Full article
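The Laplacian pyramid decomposition at the core of LapECNet can be sketched with a simplified mean-pool/nearest-upsample pair (assumes even image dimensions; real implementations typically use Gaussian filtering):

```python
import numpy as np

def down(img):
    # 2x2 mean-pool as a cheap smooth-and-subsample.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def up(img, shape):
    # Nearest-neighbour upsample back to the given shape.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def build_pyramid(img, levels=3):
    # Laplacian pyramid: band-pass residuals plus the final low-pass base,
    # letting illumination (base) and detail (residuals) be treated separately.
    bands, cur = [], img
    for _ in range(levels - 1):
        low = down(cur)
        bands.append(cur - up(low, cur.shape))
        cur = low
    bands.append(cur)
    return bands

def reconstruct(bands):
    cur = bands[-1]
    for band in reversed(bands[:-1]):
        cur = band + up(cur, band.shape)
    return cur
```

Because each band stores the exact residual of its down/up round trip, reconstruction is lossless, so per-band enhancement modules can be slotted in without any baseline reconstruction error.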
(This article belongs to the Special Issue Recent Advances in Parallel Computing and Big Data)

23 pages, 14051 KB  
Article
A Novel Method for Water Surface Debris Detection Based on YOLOV8 with Polarization Interference Suppression
by Yi Chen, Honghui Lin, Lin Xiao, Maolin Zhang and Pingjun Zhang
Photonics 2025, 12(6), 620; https://doi.org/10.3390/photonics12060620 - 18 Jun 2025
Cited by 2 | Viewed by 1735
Abstract
Aquatic floating debris detection is a key technological foundation for ecological monitoring and integrated water environment management. It holds substantial scientific and practical value in applications such as pollution source tracing, floating debris control, and maritime navigation safety. However, this field faces ongoing challenges due to water surface polarization. Reflections of polarized light produce intense glare, resulting in localized overexposure, detail loss, and geometric distortion in captured images. These optical artifacts severely impair the performance of conventional detection algorithms, increasing both false positives and missed detections. To overcome these imaging challenges in complex aquatic environments, we propose a novel YOLOv8-based detection framework with integrated polarized light suppression mechanisms. The framework consists of four key components: a fisheye distortion correction module, a polarization feature processing layer, a customized residual network with Squeeze-and-Excitation (SE) attention, and a cascaded pipeline for super-resolution reconstruction and deblurring. Additionally, we developed the PSF-IMG dataset (Polarized Surface Floats), which includes common floating debris types such as plastic bottles, bags, and foam boards. Extensive experiments demonstrate the network’s robustness in suppressing polarization artifacts and enhancing feature stability under dynamic optical conditions. Full article
(This article belongs to the Special Issue Advancements in Optical Measurement Techniques and Applications)

20 pages, 17077 KB  
Article
Joint Luminance Adjustment and Color Correction for Low-Light Image Enhancement Network
by Nenghuan Zhang, Xiao Han, Chenming Liu, Ruipeng Gang, Sai Ma and Yizhen Cao
Appl. Sci. 2024, 14(14), 6320; https://doi.org/10.3390/app14146320 - 19 Jul 2024
Cited by 5 | Viewed by 3485
Abstract
Most of the existing low-light enhancement research focuses on global illumination enhancement while ignoring the issues of brightness unevenness and color distortion. To address this dilemma, we propose a low-light image enhancement method that can achieve good performance in luminance adjustment and color correction simultaneously. Specifically, the Luminance Adjustment Module is designed to model the global luminance adjustment parameters while taking into account the relationship between global and local illumination features, in order to prevent overexposure or underexposure. Furthermore, we design a Color Correction Module based on the attention mechanism, which utilizes the attention mechanism to capture global color features and correct the color deviation in the illumination-enhanced image. Additionally, we design a color loss function based on a 14-dimensional statistical feature vector related to color, enabling further restoration of the image’s true color. We conduct empirical studies on multiple public low-light datasets, demonstrating that the proposed method outperforms other representative state-of-the-art models regarding illumination enhancement and color correction. Full article
(This article belongs to the Special Issue Recent Advances in Image Processing)

16 pages, 5236 KB  
Article
Hash Encoding and Brightness Correction in 3D Industrial and Environmental Reconstruction of Tidal Flat Neural Radiation
by Huilin Ge, Biao Wang, Zhiyu Zhu, Jin Zhu and Nan Zhou
Sensors 2024, 24(5), 1451; https://doi.org/10.3390/s24051451 - 23 Feb 2024
Cited by 1 | Viewed by 2034
Abstract
We present an innovative approach to mitigating brightness variations in the unmanned aerial vehicle (UAV)-based 3D reconstruction of tidal flat environments, emphasizing industrial applications. Our work focuses on enhancing the accuracy and efficiency of neural radiance fields (NeRF) for 3D scene synthesis. We introduce a novel luminance correction technique to address challenging illumination conditions, employing a convolutional neural network (CNN) for image enhancement in cases of overexposure and underexposure. Additionally, we propose a hash encoding method to optimize the spatial position encoding efficiency of NeRF. The efficacy of our method is validated using diverse datasets, including a custom tidal flat dataset and the Mip-NeRF 360 dataset, demonstrating superior performance across various lighting scenarios. Full article
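Hash encoding of spatial positions, which the paper uses to speed up NeRF, can be sketched in 2D in the spirit of multiresolution hash grids (a simplified toy with hypothetical hashing constants, not the paper's exact scheme):

```python
import numpy as np

def hash_encode(xy, levels=4, table_size=2 ** 10, feat_dim=2, base_res=4, seed=0):
    # Toy 2D multiresolution hash encoding: per level, hash the 4 surrounding
    # grid corners into a small random feature table and bilinearly
    # interpolate. xy is an (N, 2) array of coordinates in [0, 1).
    rng = np.random.default_rng(seed)
    tables = rng.normal(scale=1e-2, size=(levels, table_size, feat_dim))
    primes = np.array([1, 2654435761], dtype=np.uint64)  # assumed constants
    feats = []
    for l in range(levels):
        res = base_res * 2 ** l
        g = xy * res
        g0 = np.floor(g).astype(np.uint64)
        t = g - g0                       # fractional position in the cell
        out = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                corner = g0 + np.array([dx, dy], dtype=np.uint64)
                h = (corner * primes).sum(-1) % table_size
                w = ((t[..., 0] if dx else 1 - t[..., 0]) *
                     (t[..., 1] if dy else 1 - t[..., 1]))
                out = out + w[..., None] * tables[l][h.astype(int)]
        feats.append(out)
    return np.concatenate(feats, axis=-1)
```

The lookup is O(levels) per point regardless of scene resolution, which is the source of the efficiency gain over dense frequency encodings; a trained model would learn the table entries rather than fix them randomly.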

19 pages, 14580 KB  
Article
Self-Supervised Non-Uniform Low-Light Image Enhancement Combining Image Inversion and Exposure Fusion
by Wei Huang, Kaili Li, Mengfan Xu and Rui Huang
Electronics 2023, 12(21), 4445; https://doi.org/10.3390/electronics12214445 - 29 Oct 2023
Cited by 3 | Viewed by 2675
Abstract
Low-light image enhancement is a challenging task in non-uniform low-light conditions, often resulting in local overexposure, noise amplification, and color distortion. To obtain satisfactory enhancement results, most models must resort to carefully selected paired or multi-exposure data sets. In this paper, we propose a self-supervised framework for non-uniform low-light image enhancement to address these issues, only requiring low-light images on their own for training. We first design a robust Retinex model-based image exposure enhancement network (EENet) to obtain global brightness enhancement and noise removal of images by carefully designing the loss function of each decomposition map. Then, to correct overexposed areas in the enhanced image, we incorporate the inverse image of the low-light image for enhancement using EENet. Furthermore, a three-branch asymmetric exposure fusion network (TAFNet) is designed. The two enhanced images and the original image are used as the TAFNet inputs to obtain a globally well-exposed and detail-rich image. Experimental results demonstrate that our framework outperforms some state-of-the-art methods in visual and quantitative comparisons. Full article
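The inversion trick, enhancing 1 − I so that overexposed regions become dark and correctable, followed by a simple exposure-weighted fusion, can be sketched as follows (a toy gamma enhancer stands in for EENet; the fusion weights are hypothetical):

```python
import numpy as np

def enhance(img, gamma=0.5):
    # Stand-in for a learned enhancer: a plain gamma lift of dark regions.
    return np.clip(img, 0.0, 1.0) ** gamma

def fuse_by_exposure(img, gamma=0.5):
    # Overexposed areas of `img` are dark in `1 - img`, so enhancing the
    # inverse and inverting back tames them; a well-exposedness weight
    # keeps mid-tones from the original image.
    bright_fix = 1.0 - enhance(1.0 - img, gamma)   # corrects overexposure
    dark_fix = enhance(img, gamma)                 # lifts underexposure
    w = np.exp(-((img - 0.5) ** 2) / 0.08)         # prefer original mid-tones
    wd = (img < 0.5) * (1 - w)                     # dark pixels -> dark_fix
    wb = (img >= 0.5) * (1 - w)                    # bright pixels -> bright_fix
    return (w * img + wd * dark_fix + wb * bright_fix) / (w + wd + wb)
```

The learned TAFNet in the paper replaces this hand-set weighting; the sketch only shows why feeding both enhanced images plus the original into a fusion stage is well posed.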

17 pages, 9527 KB  
Article
Two Residual Attention Convolution Models to Recover Underexposed and Overexposed Images
by Noorman Rinanto and Shun-Feng Su
Symmetry 2023, 15(10), 1850; https://doi.org/10.3390/sym15101850 - 1 Oct 2023
Cited by 2 | Viewed by 2434
Abstract
Inconsistent lighting phenomena in digital images, such as underexposure and overexposure, pose challenges in computer vision. Many methods have been developed to address these issues. However, most of these techniques cannot remedy both exposure problems simultaneously. Meanwhile, existing methods that claim to handle both cases have not yielded optimal results, especially for images with blur and noise distortions. Therefore, this study proposes a system for improving underexposed and overexposed photos, consisting of two different residual attention convolution networks with the CIELab color space as input. The first model, operating on the L channel (luminance), recovers degraded image illumination using residual memory block networks with self-attention layers. The second model, based on dense residual attention networks, restores degraded image colors using the ab channels (chromaticity). A properly exposed image is produced by fusing the outputs of these models and converting them to the RGB color space. Experiments on degraded synthetic images from two public datasets and one real-life exposure dataset demonstrate that the proposed system outperforms state-of-the-art algorithms in illumination and color correction for underexposed and overexposed images. Full article
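The CIELab split that feeds the two models (L for luminance, ab for chroma) can be reproduced directly; a self-contained sRGB-to-CIELAB conversion under a D65 white point:

```python
import numpy as np

def srgb_to_lab(rgb):
    # sRGB (0..1) -> CIELAB under D65, so luminance (L) and chroma (a, b)
    # can be routed to separate processing branches.
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])   # D65 reference white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

Neutral grays map to a = b = 0, which is exactly why exposure errors concentrate in the L channel while color casts show up in ab, the asymmetry the two-network design exploits.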
(This article belongs to the Special Issue Symmetry in Computational Intelligence and Applications)

24 pages, 13251 KB  
Article
Magnesium Ingot Stacking Segmentation Algorithm for Industrial Robot Based on the Correction of Image Overexposure Area
by Qiguang Li, Huazheng Zheng, Wensheng Wang and Chenggang Li
Sensors 2023, 23(15), 6809; https://doi.org/10.3390/s23156809 - 30 Jul 2023
Viewed by 1843
Abstract
This paper proposes an adaptive threshold segmentation algorithm for magnesium ingot stacks based on image overexposure area correction (ATSIOAC), which solves the problem of mirror reflection on the surface of magnesium alloy ingots caused by external ambient light and auxiliary light sources. Firstly, considering the brightness and chromaticity information of the mapped image, exposure probability thresholds are used to divide the overexposed area into weakly exposed and strongly exposed regions. Secondly, the saturation difference between the magnesium ingot region and the background region is used to obtain a mask for the magnesium ingot region, eliminating interference from the image background. Then, the RGB average of adjacent pixels in the overexposed area is used as a reference to correct the colors of the strongly and weakly exposed areas, respectively. Furthermore, to smoothly fuse the two corrected images, a pixel-wise weighted average (WA) is applied. Finally, a magnesium ingot sorting experimental device was constructed, and the corrected top-surface image of the ingot pile was segmented with ATSIOAC. The experimental results show that the proposed overexposed area detection and correction algorithm effectively corrects the color information in the overexposed area and, when segmenting ingot images, yields complete segmentation of the top surface of the ingot pile, effectively improving the accuracy of magnesium alloy ingot segmentation. The segmentation algorithm achieves a segmentation accuracy of 94.38%. Full article
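The detect-then-correct idea, masking washed-out pixels and replacing them with the RGB average of nearby unmasked pixels before a weighted blend, can be sketched as follows (thresholds and blend weights are hypothetical, not the paper's calibrated values):

```python
import numpy as np

def overexposure_mask(rgb, v_hi=0.9, s_lo=0.15):
    # Specular/overexposed pixels: high value, low saturation (washed out).
    v = rgb.max(axis=-1)
    s = np.where(v > 0, (v - rgb.min(axis=-1)) / np.maximum(v, 1e-6), 0.0)
    return (v > v_hi) & (s < s_lo)

def correct_overexposed(rgb, mask):
    # Replace each masked pixel with the mean RGB of unmasked pixels in a
    # growing window, then blend corrected and original values (weighted average).
    filled = rgb.copy()
    h, w = mask.shape
    for y, x in zip(*np.nonzero(mask)):
        for r in range(1, max(h, w)):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            good = ~mask[y0:y1, x0:x1]
            if good.any():
                filled[y, x] = rgb[y0:y1, x0:x1][good].mean(axis=0)
                break
    return np.where(mask[..., None], 0.3 * rgb + 0.7 * filled, rgb)
```

On a gray test image with a small specular white patch, the patch is detected (high value, near-zero saturation) and pulled toward the surrounding color, while unmasked pixels pass through unchanged.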
(This article belongs to the Special Issue Sensing and Control Technology in Multi-Agent Systems)

17 pages, 1829 KB  
Article
Estimation of Pediatric Dosage of Antimalarial Drugs, Using Pharmacokinetic and Physiological Approach
by Ellen K. G. Mhango, Bergthora S. Snorradottir, Baxter H. K. Kachingwe, Kondwani G. H. Katundu and Sveinbjorn Gizurarson
Pharmaceutics 2023, 15(4), 1076; https://doi.org/10.3390/pharmaceutics15041076 - 27 Mar 2023
Cited by 4 | Viewed by 3884
Abstract
Most of the individuals who die of malaria in sub-Saharan Africa are children. It is, therefore, important for this age group to have access to the right treatment and correct dose. Artemether-lumefantrine is one of the fixed-dose combination therapies approved by the World Health Organization to treat malaria. However, the current recommended dose has been reported to cause underexposure or overexposure in some children. The aim of this article was, therefore, to estimate doses that can mimic adult exposure. The availability of more reliable pharmacokinetic data is essential to accurately estimate appropriate dosage regimens. The doses in this study were estimated using physiological information from children and some pharmacokinetic data from adults, due to the lack of pediatric pharmacokinetic data in the literature. Depending on the approach used to calculate the dose, the results showed that some children were underexposed and others were overexposed. This can lead to treatment failure, toxicity, and even death. Therefore, when designing a dosage regimen, it is important to account for the distinctions in physiology at various phases of development that influence the pharmacokinetics of drugs in young children. The physiology at each time point during a child's growth may influence how a drug is absorbed, distributed, metabolized, and eliminated. From the results, there is a clear need for a clinical study to verify whether the suggested doses (i.e., 0.34 mg/kg for artemether and 6 mg/kg for lumefantrine) are clinically efficacious. Full article
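A common first approximation for this kind of exposure matching is allometric clearance scaling; a sketch under that assumption (this is not the paper's physiological model, and the constants are illustrative):

```python
def pediatric_dose_mg_per_kg(adult_dose_mg, adult_cl, weight_kg,
                             adult_weight_kg=70.0, exponent=0.75):
    # Allometric scaling: clearance grows with body weight to the 0.75
    # power, CL_child = CL_adult * (W / 70)^0.75. Matching adult exposure
    # (AUC = dose / CL) then fixes the child's dose.
    child_cl = adult_cl * (weight_kg / adult_weight_kg) ** exponent
    target_auc = adult_dose_mg / adult_cl      # adult exposure to match
    return child_cl * target_auc / weight_kg   # mg per kg of body weight
```

Because the exponent is below 1, smaller children need a higher mg/kg dose than adults to reach the same exposure, consistent with the abstract's observation that flat weight-based dosing under- or overexposes some children.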
(This article belongs to the Section Pharmacokinetics and Pharmacodynamics)
