Search Results (37)

Search Parameters:
Keywords = backlight illumination

25 pages, 8224 KB  
Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Viewed by 262
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation. Full article
(This article belongs to the Section Artificial Intelligence)
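As a point of reference for the Retinex decomposition the abstract describes, the sketch below shows the classic single-scale formulation I = L · R with a Gaussian-blurred illumination estimate. It is a hand-written baseline, not the learned QWR-Dec-Net decomposition, and the smoothing scale is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(gray, sigma=15.0):
    """Classic single-scale Retinex baseline on a grayscale image in [0, 1]:
    estimate illumination L by heavy Gaussian smoothing, then recover
    reflectance as R = I / L. QWR-Dec-Net replaces this hand-crafted split
    with a learned quaternion/wavelet decomposition and a denoising module."""
    img = gray.astype(np.float64) + 1e-6              # avoid division by zero
    illumination = gaussian_filter(img, sigma=sigma)  # smooth, slowly varying L
    reflectance = np.clip(img / illumination, 0.0, None)
    return illumination, reflectance
```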

29 pages, 2829 KB  
Article
Real-Time Deterministic Lane Detection on CPU-Only Embedded Systems via Binary Line Segment Filtering
by Shang-En Tsai, Shih-Ming Yang and Chia-Han Hsieh
Electronics 2026, 15(2), 351; https://doi.org/10.3390/electronics15020351 - 13 Jan 2026
Viewed by 310
Abstract
The deployment of Advanced Driver-Assistance Systems (ADAS) in economically constrained markets frequently relies on hardware architectures that lack dedicated graphics processing units. Within such environments, the integration of deep neural networks faces significant hurdles, primarily stemming from strict limitations on energy consumption, the absolute necessity for deterministic real-time response, and the rigorous demands of safety certification protocols. Meanwhile, traditional geometry-based lane detection pipelines continue to exhibit limited robustness under adverse illumination conditions, including intense backlighting, low-contrast nighttime scenes, and heavy rainfall. Motivated by these constraints, this work re-examines geometry-based lane perception from a sensor-level viewpoint and introduces a Binary Line Segment Filter (BLSF) that leverages the inherent structural regularity of lane markings in bird’s-eye-view (BEV) imagery within a computationally lightweight framework. The proposed BLSF is integrated into a complete pipeline consisting of inverse perspective mapping, median local thresholding, line-segment detection, and a simplified Hough-style sliding-window fitting scheme combined with RANSAC. Experiments on a self-collected dataset of 297 challenging frames show that the inclusion of BLSF significantly improves robustness over an ablated baseline while sustaining real-time performance on a 2 GHz ARM CPU-only platform. Additional evaluations on the Dazzling Light and Night subsets of the CULane and LLAMAS benchmarks further confirm consistent gains of approximately 6–7% in F1-score, together with corresponding improvements in IoU. These results demonstrate that interpretable, geometry-driven lane feature extraction remains a practical and complementary alternative to lightweight learning-based approaches for cost- and safety-critical ADAS applications. Full article
(This article belongs to the Special Issue Feature Papers in Electrical and Autonomous Vehicles, Volume 2)
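For readers unfamiliar with the geometry-based front end the abstract outlines (inverse perspective mapping, local thresholding, line-segment extraction), a minimal OpenCV skeleton is sketched below. The source-point coordinates are placeholders, HoughLinesP stands in for the paper's line-segment detector and BLSF, and none of the paper's actual parameters are reproduced.

```python
import cv2
import numpy as np

def bev_lane_candidates(frame, src_pts, dst_pts, out_size=(256, 512)):
    """Skeleton of a geometry-based lane front end: warp to a bird's-eye view,
    binarize with a local (mean) threshold, and extract candidate line segments.
    The paper's BLSF filtering and sliding-window/RANSAC fitting are omitted."""
    H = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                    dst_pts.astype(np.float32))
    bev = cv2.warpPerspective(frame, H, out_size)        # inverse perspective mapping
    gray = cv2.cvtColor(bev, cv2.COLOR_BGR2GRAY)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 25, -10)
    segments = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=10)
    return bev, binary, segments
```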

30 pages, 34352 KB  
Review
Infrared and Visible Image Fusion Techniques for UAVs: A Comprehensive Review
by Junjie Li, Cunzheng Fan, Congyang Ou and Haokui Zhang
Drones 2025, 9(12), 811; https://doi.org/10.3390/drones9120811 - 21 Nov 2025
Cited by 3 | Viewed by 2028
Abstract
Infrared–visible (IR–VIS) image fusion is becoming central to unmanned aerial vehicle (UAV) perception, enabling robust operation across day–night cycles, backlighting, haze or smoke, and large viewpoint or scale changes. However, for practical applications some challenges still remain: visible images are illumination-sensitive; infrared imagery suffers thermal crossover and weak texture; motion and parallax cause cross-modal misalignment; UAV scenes contain many small or fast targets; and onboard platforms face strict latency, power, and bandwidth budgets. Given these UAV-specific challenges and constraints, we provide a UAV-centric synthesis of IR–VIS fusion. We: (i) propose a taxonomy linking data compatibility, fusion mechanisms, and task adaptivity; (ii) critically review learning-based methods—including autoencoders, CNNs, GANs, Transformers, and emerging paradigms; (iii) compare explicit/implicit registration strategies and general-purpose fusion frameworks; and (iv) consolidate datasets and evaluation metrics to reveal UAV-specific gaps. We further identify open challenges in benchmarking, metrics, lightweight design, and integration with downstream detection, segmentation, and tracking, offering guidance for real-world deployment. A continuously updated bibliography and resources are provided and discussed in the main text. Full article
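As context for the fusion mechanisms the review categorizes, the snippet below shows the simplest pixel-level baseline: a fixed weighted average of co-registered infrared and visible frames. Learned methods in the review replace the fixed weight with spatially varying, content-adaptive weighting; the 0.5 weight here is purely illustrative.

```python
import numpy as np

def weighted_fusion(ir, vis, w_ir=0.5):
    """Naive IR-VIS fusion: a fixed convex combination of two co-registered,
    8-bit single-channel images. Serves only to make the fusion problem concrete;
    it has none of the registration or adaptivity the reviewed methods provide."""
    ir_f = ir.astype(np.float32) / 255.0
    vis_f = vis.astype(np.float32) / 255.0
    fused = w_ir * ir_f + (1.0 - w_ir) * vis_f
    return (np.clip(fused, 0.0, 1.0) * 255.0).astype(np.uint8)
```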

21 pages, 14169 KB  
Article
High-Precision Complex Orchard Passion Fruit Detection Using the PHD-YOLO Model Improved from YOLOv11n
by Rongxiang Luo, Rongrui Zhao, Xue Ding, Shuangyun Peng and Fapeng Cai
Horticulturae 2025, 11(7), 785; https://doi.org/10.3390/horticulturae11070785 - 3 Jul 2025
Cited by 4 | Viewed by 1299
Abstract
This study proposes the PHD-YOLO model to enhance the precision of passion fruit detection in intricate orchard settings. The model is engineered to address salient challenges, including branch and leaf occlusion, variations in illumination, and fruit overlap. The study introduces a partial convolution module (ParConv), which employs a channel grouping and independent processing strategy to mitigate computational complexity; this module enhances local feature extraction in dense fruit regions by integrating sub-group feature-independent convolution and channel concatenation mechanisms. Secondly, depthwise separable convolution (DWConv) is adopted to replace standard convolution, decoupling spatial convolution from channel convolution so that multi-scale feature expression is retained while model computation is substantially reduced. The integration of the HSV Attentional Fusion (HSVAF) module within the backbone network fuses HSV color space characteristics with an adaptive attention mechanism, thereby enhancing feature discriminability under dynamic lighting conditions. The experiment was conducted on a dataset of 1212 original images collected from a planting base in Yunnan, China, covering multiple periods and angles; after augmentation strategies including rotation and noise injection, the dataset contains 2910 samples. The experimental results demonstrate that the improved model achieves a detection accuracy of 95.4%, a recall rate of 85.0%, an mAP@0.5 of 91.5%, and an F1 score of 90.0% on the test set, which are 0.7%, 3.5%, 1.3%, and 2.4% higher, respectively, than those of the baseline model YOLOv11n, with a single-frame inference time of 0.6 milliseconds. The model exhibited significant robustness in scenarios with dense fruits, leaf occlusion, and backlighting, validating the synergistic enhancement of staged convolution optimization and hybrid attention mechanisms. This solution offers a means to automate orchard monitoring while balancing accuracy and real-time performance. Full article
(This article belongs to the Section Fruit Production Systems)
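The depthwise separable convolution the abstract adopts (spatial and channel convolutions decoupled) is a standard building block; a minimal PyTorch version is sketched below with illustrative channel counts, normalization, and activation, not the PHD-YOLO configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel spatial) convolution followed by a 1x1 pointwise
    convolution, the decomposition described in the abstract. Norm and activation
    choices here are generic placeholders."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# e.g. DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 80, 80)) -> shape (1, 128, 80, 80)
```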

16 pages, 13161 KB  
Article
Experimental Assessment of the Effects of Gas Composition on Volatile Flames of Coal and Biomass Particles in Oxyfuel Combustion Using Multi-Parameter Optical Diagnostics
by Tao Li, Haowen Chen and Benjamin Böhm
Processes 2025, 13(6), 1817; https://doi.org/10.3390/pr13061817 - 8 Jun 2025
Viewed by 974
Abstract
This experimental study examines the particle-level combustion behavior of high-volatile bituminous coal and walnut shell particles in oxyfuel environments, with a particular focus on the gas-phase ignition characteristics and the structural development of volatile flames. Particles with similar size and shape distributions (a median diameter of about 126 µm and an aspect ratio of around 1.5) are combusted in hot flows generated using lean, flat flames, where the oxygen mole fraction is systematically varied in both CO2/O2 and N2/O2 atmospheres while maintaining comparable gas temperatures and particle heating rates. The investigation employs a high-speed multi-camera diagnostic system combining laser-induced fluorescence of OH, diffuse backlight-illumination, and Mie scattering to simultaneously measure the particle size, shape, and velocity; the ignition delay time; and the volatile flame dynamics during early-stage volatile combustion. Advanced detection algorithms enable the extraction of these multiple parameters from spatiotemporally synchronized measurements. The results reveal that the ignition delay time decreases with an increasing oxygen mole fraction up to 30 vol%, beyond which point further oxygen enrichment no longer accelerates the ignition, as the process becomes limited by the volatile release rate. In contrast, the reactivity of volatile flames shows continuous enhancement with an increasing oxygen mole fraction, indicating non-premixed flame behavior governed by the diffusion of oxygen toward the particles. The analysis of the flame stand-off distance demonstrates that volatile flames burn closer to the particles at higher oxygen mole fractions, consistent with the expected scaling of O2 diffusion with its partial pressure. Notably, walnut shell and coal particles exhibit remarkably similar ignition delay times, volatile flame sizes, and OH-LIF intensities. The substitution of N2 with CO2 produces minimal differences, suggesting that for 126 µm particles under high-heating-rate conditions, the relatively small variations in the heat capacity and O2 diffusivity between these diluents have negligible effects on the homogeneous combustion phenomena observed. Full article
(This article belongs to the Special Issue Experiments and Diagnostics in Reacting Flows)

30 pages, 40714 KB  
Article
Zero-TCE: Zero Reference Tri-Curve Enhancement for Low-Light Images
by Chengkang Yu, Guangliang Han, Mengyang Pan, Xiaotian Wu and Anping Deng
Appl. Sci. 2025, 15(2), 701; https://doi.org/10.3390/app15020701 - 12 Jan 2025
Cited by 4 | Viewed by 2956
Abstract
Addressing the common issues of low brightness, poor contrast, and blurred details in images captured under conditions such as night, backlight, and adverse weather, we propose a zero-reference dual-path network based on multi-scale depth curve estimation for low-light image enhancement. Utilizing a no-reference loss function, the enhancement of low-light images is converted into depth curve estimation, with three curves fitted to enhance the dark details of the image: a brightness adjustment curve (LE-curve), a contrast enhancement curve (CE-curve), and a multi-scale feature fusion curve (MF-curve). Initially, we introduce the TCE-L and TCE-C modules to improve image brightness and enhance image contrast, respectively. Subsequently, we design a multi-scale feature fusion (MFF) module that integrates the original and enhanced images at multiple scales in the HSV color space based on the brightness distribution characteristics of low-light images, yielding an optimally enhanced image that avoids overexposure and color distortion. We compare our proposed method against ten other advanced algorithms based on multiple datasets, including LOL, DICM, MEF, NPE, and ExDark, that encompass complex illumination variations. Experimental results demonstrate that the proposed algorithm adapts better to the characteristics of images captured in low-light environments, producing enhanced images with sharp contrast, rich details, and preserved color authenticity, while effectively mitigating the issue of overexposure. Full article
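The brightness-adjustment curve family the abstract builds on follows the zero-reference curve-estimation idea popularized by Zero-DCE; the toy version below applies the quadratic curve LE(x) = x + α·x·(1−x) iteratively with a single scalar α, whereas the paper's LE/CE/MF curves are predicted per pixel by the network.

```python
import numpy as np

def iterative_enhancement_curve(x, alpha=0.6, iterations=4):
    """Apply the quadratic enhancement curve LE(x) = x + alpha * x * (1 - x)
    several times to an image normalized to [0, 1]. A scalar alpha is used here
    purely for illustration; zero-reference networks learn a per-pixel alpha map."""
    x = np.clip(np.asarray(x, dtype=np.float32), 0.0, 1.0)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)   # brightens dark values more than bright ones
    return np.clip(x, 0.0, 1.0)
```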

21 pages, 6281 KB  
Article
Adltformer Team-Training with Detr: Enhancing Cattle Detection in Non-Ideal Lighting Conditions Through Adaptive Image Enhancement
by Zhiqiang Zheng, Mengbo Wang, Xiaoyu Zhao and Zhi Weng
Animals 2024, 14(24), 3635; https://doi.org/10.3390/ani14243635 - 17 Dec 2024
Cited by 3 | Viewed by 1134
Abstract
This study proposes an image enhancement detection technique based on Adltformer (Adaptive dynamic learning transformer) team-training with Detr (Detection transformer) to improve model accuracy in suboptimal conditions, addressing the challenge of detecting cattle in real pastures under complex lighting conditions—including backlighting, non-uniform lighting, and low light. This often results in the loss of image details and structural information, color distortion, and noise artifacts, thereby compromising the visual quality of captured images and reducing model accuracy. To train the Adltformer enhancement model, the day-to-night image synthesis (DTN-Synthesis) algorithm generates low-light image pairs that are precisely aligned with normal light images and include controlled noise levels. The Adltformer and Detr team-training (AT-Detr) method is employed to preprocess the low-light cattle dataset for image enhancement, ensuring that the enhanced images are more compatible with the requirements of machine vision systems. The experimental results demonstrate that the AT-Detr algorithm achieves superior detection accuracy, with comparable runtime and model complexity, reaching 97.5% accuracy under challenging illumination conditions, outperforming both Detr alone and sequential image enhancement followed by Detr. This approach provides both theoretical justification and practical applicability for detecting cattle under challenging conditions in real-world farming environments. Full article
(This article belongs to the Section Cattle)
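To make the day-to-night synthesis idea concrete, the sketch below darkens a normal-light image with a gamma curve and injects Gaussian noise to form an aligned low-light pair. This is only a toy stand-in for the DTN-Synthesis algorithm in the paper; the gamma and noise values are invented for illustration.

```python
import numpy as np

def synthesize_low_light(day_img, gamma=3.0, noise_sigma=0.02, seed=0):
    """Toy day-to-night pair generation: non-linear darkening plus controlled
    additive noise, keeping the result pixel-aligned with the daytime input.
    The actual DTN-Synthesis procedure is more elaborate than this."""
    rng = np.random.default_rng(seed)
    x = (day_img.astype(np.float32) / 255.0) ** gamma       # suppress brightness
    x = x + rng.normal(0.0, noise_sigma, size=x.shape)       # simulated sensor noise
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)
```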

14 pages, 2550 KB  
Article
Backlight Imaging Based on Laser-Gated Technology
by Jinzhou Bai, Hengkang Zhang, Huiqin Gao, Shaogang Guo, Siyuan Wang and An Pan
Photonics 2024, 11(12), 1141; https://doi.org/10.3390/photonics11121141 - 4 Dec 2024
Viewed by 1819
Abstract
Backlight imaging refers to capturing images when a light source shines directly into the imaging lens or when the scene is set against a high-brightness background, which usually degrades imaging quality through direct or reflected strong light. Traditional backlight imaging methods involve reducing light flux, expanding dynamic range, and utilizing avoidance angles. However, these methods only partially address the issue of backlighting and are unable to effectively extract information from the areas overwhelmed by the backlight. To overcome these limitations, this paper reports a backlight imaging technique based on active illumination laser gated imaging technology (AILGIT), originally applied in underwater scattering imaging. Given that backlight imaging is essentially a form of scattering imaging, this technique is likely applicable to backlight scenarios. The AILGIT employs nanosecond-gated imaging components synchronized with nanosecond pulse laser illumination to spatially slice the target. This method allows the camera to capture target signals within specific slices only, which effectively suppresses ambient light and scattering interference from the medium and achieves high-contrast imaging with strong backlight suppression. Experiments obtained dynamic backlight imaging results for a vehicle with its headlights on at night from a distance of 500 m, at 60 frames per second over a 4.2 m × 2.8 m field of view, in which wheel contours and the license plate can be clearly distinguished. The result not only demonstrates the potential of AILGIT in suppressing strong backlight, but also lays the foundation for further research on laser 3D imaging and subsequent processing techniques for backlight targets. Full article
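The range-gating principle the abstract describes comes down to round-trip light travel time: the camera gate opens 2R/c after the laser pulse leaves and stays open long enough to span the desired depth slice. The arithmetic below uses the 500 m stand-off quoted in the abstract; the 10 m slice depth is an assumed value for illustration.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_timing(target_range_m, slice_depth_m):
    """Return (gate delay, gate width) in seconds for range-gated imaging:
    delay = 2R/c to the near edge of the slice, width = 2*depth/c to span it."""
    delay_s = 2.0 * target_range_m / C
    width_s = 2.0 * slice_depth_m / C
    return delay_s, width_s

print(gate_timing(500.0, 10.0))   # ~(3.34e-06 s, 6.67e-08 s): ~3.3 us delay, ~67 ns gate
```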

19 pages, 10886 KB  
Article
Advancing Nighttime Object Detection through Image Enhancement and Domain Adaptation
by Chenyuan Zhang and Deokwoo Lee
Appl. Sci. 2024, 14(18), 8109; https://doi.org/10.3390/app14188109 - 10 Sep 2024
Cited by 4 | Viewed by 4740
Abstract
Due to the lack of annotations for nighttime low-light images, object detection in low-light images has always been a challenging problem. Achieving high-precision results at night is also an issue. Additionally, we aim to use a single nighttime dataset to complete the knowledge distillation task while improving the detection accuracy of object detection models under nighttime low-light conditions and reducing the computational cost of the model, especially for small targets and objects contaminated by special nighttime lighting. This paper proposes a Nighttime Unsupervised Domain Adaptation Network (NUDN) based on knowledge distillation to address these issues. To improve the detection accuracy of nighttime images, high-confidence bounding box predictions from the teacher and region proposals from the student are first fused, allowing the teacher to perform better in subsequent training, thus generating a combination of high-confidence and low-confidence pseudo-labels. This combination of feature information is used to guide model training, enabling the model to extract feature information similar to that of source images in nighttime low-light images. Nighttime images and pseudo-labels undergo random size transformations before being used as input for the student, enhancing the model’s generalization across different scales. To address the scarcity of nighttime datasets, we propose a nighttime-specific augmentation pipeline called LightImg. This pipeline enhances nighttime features, transforming them into daytime features and reducing issues such as backlighting, uneven illumination, and dim nighttime light, enabling cross-domain research using existing nighttime datasets. Our experimental results show that NUDN can significantly improve nighttime low-light object detection accuracy on the SHIFT and ExDark datasets. We conduct extensive experiments and ablation studies to demonstrate the effectiveness and efficiency of our work. Full article

15 pages, 9788 KB  
Article
Directionally Illuminated Autostereoscopy with Seamless Viewpoints for Multi-Viewers
by Aiqin Zhang, Xuehao Chen, Jiahui Wang, Yong He and Jianying Zhou
Micromachines 2024, 15(3), 403; https://doi.org/10.3390/mi15030403 - 16 Mar 2024
Cited by 2 | Viewed by 2177
Abstract
Autostereoscopy is usually perceived at finite viewpoints that result from the separated pixel array of a display system. With directionally illuminated autostereoscopy, the separation of the illumination channel from the image channel provides extra flexibility in optimizing the performance of autostereoscopy. This work demonstrates that by taking advantage of illumination freedom, seamless viewpoints in the sweet viewing region, where the ghosting does not cause significant discomfort, are realized. This realization is based on illuminating the screen with a polyline array of light emitting diodes (LEDs), and continuous viewpoints are generated through independent variation in the radiance of each individual LED column. This new method is implemented in the directionally illuminated display for both single and multiple viewers, proving its effectiveness as a valuable technique for achieving a high-quality and high-resolution autostereoscopic display with seamless viewpoints. Full article
(This article belongs to the Special Issue Novel 3D Display Technology towards Metaverse)

11 pages, 3643 KB  
Article
Wide Field of View Under-Panel Optical Lens Design for Fingerprint Recognition of Smartphone
by Cheng-Mu Tsai, Sung-Jr Wu, Yi-Chin Fang and Pin Han
Micromachines 2024, 15(3), 386; https://doi.org/10.3390/mi15030386 - 13 Mar 2024
Viewed by 2285
Abstract
Fingerprint recognition is a widely used biometric authentication method in LED-backlight smartphones. Due to the increasing demand for full-screen smartphones, under-display fingerprint recognition has become a popular trend. In this paper, we propose a design of an optical fingerprint recognition lens for under-display smartphones. The lens is composed of three plastic aspheric lenses, with an effective focal length (EFL) of 0.61 mm, a field of view (FOV) of 126°, and a total track length (TTL) of 2.54 mm. The image quality of the lens meets the target specifications, with MTF over 80% in the center FOV and over 70% in the 0.7 FOV, distortion less than 8% at an image height of 1.0 mm, and relative illumination (RI) greater than 25% at an image height of 1.0 mm. The lens also meets the current industry standards in terms of tolerance sensitivity and Monte Carlo analysis. Full article

21 pages, 9455 KB  
Article
Experimental Study of Spray and Combustion Characteristics in Gas-Centered Swirl Coaxial Injectors: Influence of Recess Ratio and Gas Swirl
by Jungho Lee, Ingyu Lee, Seongphil Woo, Yeoungmin Han and Youngbin Yoon
Aerospace 2024, 11(3), 209; https://doi.org/10.3390/aerospace11030209 - 8 Mar 2024
Cited by 5 | Viewed by 3702
Abstract
The spray and combustion characteristics of a gas-centered swirl coaxial (GCSC) injector used in oxidizer-rich staged combustion cycle engines were analyzed. The study focused on varying the recess ratio, presence of gas swirl, and swirl direction to improve injector performance. The impact of the recess ratio was assessed by increasing it for gas jet-type injectors with varying momentum ratios. Gas-swirl effects were studied by comparing injectors with and without swirl against a baseline of a low recess ratio gas injection. In atmospheric pressure-spray experiments, injector performance was assessed using backlight photography, cross-sectional imaging with a structured laser illumination planar imaging technique (SLIPI), and droplet analysis using ParticleMaster. Increasing the recess ratio led to reduced spray angle and droplet size, and trends of gas swirl-type injectors were similar to those of high recess ratio gas jet-type injectors. Combustion tests involved fabricating combustion chamber heads equipped with identical injectors, varying only the injector type. Oxidizer-rich combustion gas, produced by a pre-burner, and kerosene served as propellants. Combustion characteristics, including characteristic velocity, combustion efficiency, and heat flux, were evaluated. Elevated recess ratios correlated with increased characteristic velocity and reduced differences in the momentum–flux ratios of injectors. However, increasing the recess ratio yielded diminishing returns on combustion efficiency enhancement beyond a certain threshold. Gas swirling did not augment characteristic velocity but notably influenced heat flux distribution. The trends observed in spray tests were related to combustion characteristics regarding heat flux and combustion efficiency. Additionally, it was possible to estimate changes in the location and shape of the flame according to the characteristics of the injector. Full article

18 pages, 4879 KB  
Article
U2-Net and ResNet50-Based Automatic Pipeline for Bacterial Colony Counting
by Libo Cao, Liping Zeng, Yaoxuan Wang, Jiayi Cao, Ziyu Han, Yang Chen, Yuxi Wang, Guowei Zhong and Shanlei Qiao
Microorganisms 2024, 12(1), 201; https://doi.org/10.3390/microorganisms12010201 - 18 Jan 2024
Cited by 7 | Viewed by 6858
Abstract
In this paper, an automatic colony counting system based on an improved image preprocessing algorithm and convolutional neural network (CNN)-assisted automatic counting method was developed. Firstly, we assembled an LED backlighting illumination platform as an image capturing system to obtain photographs of laboratory cultures. Consequently, a dataset was introduced consisting of 390 photos of agar plate cultures, which included 8 microorganisms. Secondly, we implemented a new algorithm for image preprocessing based on light intensity correction, which facilitated clearer differentiation between colony and media areas. Thirdly, a U2-Net was used to predict the probability distribution of the edge of the Petri dish in images to locate region of interest (ROI), and then threshold segmentation was applied to separate it. This U2-Net achieved an F1 score of 99.5% and a mean absolute error (MAE) of 0.0033 on the validation set. Then, another U2-Net was used to separate the colony region within the ROI. This U2-Net achieved an F1 score of 96.5% and an MAE of 0.005 on the validation set. After that, the colony area was segmented into multiple components containing single or adhesive colonies. Finally, the colony components (CC) were innovatively rotated and the image crops were resized as the input (with 14,921 image crops in the training set and 4281 image crops in the validation set) for the ResNet50 network to automatically count the number of colonies. Our method achieved an overall recovery of 97.82% for colony counting and exhibited excellent performance in adhesion classification. To the best of our knowledge, the proposed “light intensity correction-based image preprocessing→U2-Net segmentation for Petri dish edge→U2-Net segmentation for colony region→ResNet50-based counting” scheme represents a new attempt and demonstrates a high degree of automation and accuracy in recognizing and counting single-colony and multi-colony targets. Full article
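One step in the pipeline above, splitting the segmented colony region into single- or multi-colony components before per-component counting, can be illustrated with OpenCV's connected-component analysis. The binary mask is assumed to come from the U2-Net stage; the component rotation/resizing and the ResNet50 counting are not shown.

```python
import cv2

def split_colony_components(colony_mask):
    """Split an 8-bit binary colony mask into connected components and return
    their bounding boxes, i.e. the crops a downstream counter would classify.
    Label 0 is the background and is skipped."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(colony_mask, connectivity=8)
    boxes = [tuple(int(v) for v in stats[i, :4]) for i in range(1, n)]   # (x, y, w, h)
    return labels, boxes
```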

13 pages, 6529 KB  
Article
The Development of a Remote Edge-Lit Backlight Structure with Blue Laser Diodes
by Bing-Mau Chen, Shang-Ping Ying, Truong An Pham, Shiuan-Yu Tseng and Yu-Kang Chang
Photonics 2024, 11(1), 78; https://doi.org/10.3390/photonics11010078 - 15 Jan 2024
Cited by 2 | Viewed by 3078
Abstract
In this study, we introduce a novel design of a remote edge-lit backlight structure featuring blue laser diodes (LDs). These LDs were integrated into a remote yellow phosphor layer on a light guide plate (LGP). Blue light emitted by the LDs passes through the LGP and spreads to the remote phosphor layer, generating white light output. Owing to the incorporation of a scattering layer between sequential LGPs, the remote edge-lit backlight structure facilitates the expansion of the output surface of the LGP by combining multiple individual LGPs. Two- and three-LGP remote edge-lit backlight structures demonstrated acceptable white illuminance uniformity. The proposed architecture serves as a viable solution for achieving uniform illumination in planar lighting systems using blue LDs; thus, this structure is particularly suitable for linear lighting or slender backlighting instead of display stand applications. Full article
(This article belongs to the Special Issue Advanced Lasers and Their Applications)

15 pages, 19203 KB  
Article
Improved Faster Region-Based Convolutional Neural Networks (R-CNN) Model Based on Split Attention for the Detection of Safflower Filaments in Natural Environments
by Zhenguo Zhang, Ruimeng Shi, Zhenyu Xing, Quanfeng Guo and Chao Zeng
Agronomy 2023, 13(10), 2596; https://doi.org/10.3390/agronomy13102596 - 11 Oct 2023
Cited by 18 | Viewed by 3077
Abstract
The accurate acquisition of safflower filament information is a prerequisite for robotic picking operations. To detect safflower filaments accurately under different illumination, branch and leaf occlusion, and weather conditions, an improved Faster R-CNN model for filaments was proposed. Because safflower filaments appear dense and small in the images, ResNeSt-101, a residual network structure, was selected as the backbone feature extraction network to enhance the expressive power of the extracted features. Then, Region of Interest (ROI) Align was used in place of ROI Pooling to reduce the feature errors caused by double quantization. In addition, partitioning around medoids (PAM) clustering was employed to optimize the scale and number of the network's initial anchors, improving the detection accuracy of small-sized safflower filaments. The test results showed that the mean Average Precision (mAP) of the improved Faster R-CNN reached 91.49%. Compared with Faster R-CNN, YOLOv3, YOLOv4, YOLOv5, and YOLOv6, the improved Faster R-CNN increased the mAP by 9.52%, 2.49%, 5.95%, 3.56%, and 1.47%, respectively. The mAP of safflower filament detection was higher than 91% on sunny, cloudy, and overcast days, and in sunlight, backlight, branch and leaf occlusion, and dense occlusion. The improved Faster R-CNN can accurately detect safflower filaments in natural environments and can provide technical support for the recognition of small-sized crops. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
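The PAM-based anchor optimization the abstract mentions amounts to clustering ground-truth box sizes with k-medoids and using the medoids as anchor scales. A sketch is given below, assuming the scikit-learn-extra package provides KMedoids; the anchor count and metric are chosen arbitrarily rather than taken from the paper.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids   # assumption: scikit-learn-extra is installed

def pam_anchor_sizes(box_wh, n_anchors=9):
    """Cluster (width, height) pairs of ground-truth boxes with PAM (k-medoids)
    and return the medoid sizes sorted by area, to serve as initial anchors.
    Small, dense filaments should yield correspondingly small anchor scales."""
    data = np.asarray(box_wh, dtype=float)
    km = KMedoids(n_clusters=n_anchors, method="pam", random_state=0).fit(data)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]
```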
