Search Results (1,851)

Search Parameters:
Keywords = low-light image

17 pages, 11610 KiB  
Article
Exploring the Impact of Species Participation Levels on the Performance of Dominant Plant Identification Models in the Sericite–Artemisia Desert Grassland by Using Deep Learning
by Wenhao Liu, Guili Jin, Wanqiang Han, Mengtian Chen, Wenxiong Li, Chao Li and Wenlin Du
Agriculture 2025, 15(14), 1547; https://doi.org/10.3390/agriculture15141547 - 18 Jul 2025
Abstract
Accurate plant species identification in desert grasslands using hyperspectral data is a critical prerequisite for large-scale, high-precision grassland monitoring and management. However, due to prolonged overgrazing and the inherent ecological vulnerability of the environment, sericite–Artemisia desert grassland has experienced significant ecological degradation. Therefore, in this study, we obtained spectral images of the grassland in April 2022 using a Soc710 VP imaging spectrometer (Surface Optics Corporation, San Diego, CA, USA), which were classified into three levels (low, medium, and high) based on the participation of Seriphidium transiliense (Poljakov) Poljakov and Ceratocarpus arenarius L. in the community. The optimal index factor (OIF) was employed to synthesize feature band images, which were subsequently used as input for the DeepLabv3p, PSPNet, and UNet deep learning models to assess the influence of species participation on classification accuracy. The results indicated that species participation significantly impacted spectral information extraction and model classification performance. Higher participation enhanced the scattering of reflectivity in the canopy structure of S. transiliense, while the light saturation effect of C. arenarius was induced by its short stature. Band combinations—such as Blue, Red Edge, and NIR (BREN) and Red, Red Edge, and NIR (RREN)—exhibited strong capabilities in capturing structural vegetation information. Identification performance was best at a high level of S. transiliense participation, with DeepLabv3p, PSPNet, and UNet achieving overall accuracies (OA) of 97.86%, 96.51%, and 98.20%, respectively. Among the tested models, UNet exhibited the highest classification accuracy and robustness with small sample datasets, effectively differentiating between S. transiliense, C. arenarius, and bare ground. However, when C. arenarius was the primary target species, the model’s performance declined as its participation increased, with significant omission errors for S. transiliense, whose producer’s accuracy (PA) decreased by 45.91%. The findings of this study provide effective technical means and theoretical support for plant species identification and ecological monitoring in sericite–Artemisia desert grasslands.
(This article belongs to the Section Digital Agriculture)
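The optimal index factor used for band selection above has a standard closed form: the sum of the candidate bands' standard deviations divided by the sum of the absolute pairwise correlations between them. A minimal sketch of how such band-triplet ranking could be done (generic NumPy code under that standard definition, not the authors' implementation; the cube layout and function names are assumptions):

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimal Index Factor for a set of bands.

    OIF = sum of band standard deviations / sum of absolute pairwise
    correlation coefficients; higher values indicate combinations with
    more information and less redundancy."""
    flat = [b.ravel() for b in bands]
    std_sum = sum(b.std() for b in flat)
    corr_sum = sum(abs(np.corrcoef(a, b)[0, 1])
                   for a, b in combinations(flat, 2))
    return std_sum / corr_sum

def rank_band_triplets(cube, top_k=5):
    """Rank all three-band combinations of a hyperspectral cube (H, W, C)."""
    n_bands = cube.shape[-1]
    scores = {
        trio: oif([cube[..., i] for i in trio])
        for trio in combinations(range(n_bands), 3)
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```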

24 pages, 9664 KiB  
Article
Frequency-Domain Collaborative Lightweight Super-Resolution for Fine Texture Enhancement in Rice Imagery
by Zexiao Zhang, Jie Zhang, Jinyang Du, Xiangdong Chen, Wenjing Zhang and Changmeng Peng
Agronomy 2025, 15(7), 1729; https://doi.org/10.3390/agronomy15071729 - 18 Jul 2025
Abstract
In rice detection tasks, accurate identification of leaf streaks, pest and disease distribution, and spikelet hierarchies relies on high-quality images to distinguish between texture and hierarchy. However, existing images often suffer from texture blurring and contour shifting due to equipment and environment limitations, which degrades detection performance. Since pest and disease symptoms tend to affect the image globally while fine details are mostly localized, we propose a rice image reconstruction method based on an adaptive two-branch heterogeneous structure. The method consists of a low-frequency branch (LFB) that uses orientation-aware extended receptive fields to recover global, streak-like features such as pest and disease patterns, and a high-frequency branch (HFB) that enhances detail edges through an adaptive enhancement mechanism to boost the clarity of local detail regions. By introducing a dynamic weight fusion mechanism (CSDW) and a lightweight gating network (LFFN), the method resolves the unbalanced fusion of frequency information that affects traditional approaches to rice images. Experiments on the 4× downsampled rice test set demonstrate that the proposed method achieves a 62% reduction in parameters compared to EDSR, 41% lower computational cost (30 G) than MambaIR-light, and an average PSNR improvement of 0.68% over the other methods in the study, while balancing memory usage (227 M) and inference speed. In downstream task validation, rice panicle maturity detection achieves a 61.5% increase in mAP50 (0.480 → 0.775) compared to interpolation methods, and leaf pest detection shows a 2.7% improvement in average mAP50 (0.949 → 0.975). This research provides an effective solution for lightweight rice image enhancement, with its dual-branch collaborative mechanism and dynamic fusion strategy establishing a new paradigm for agricultural rice image processing.
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
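As a rough illustration of the dual-branch idea described above (a structure-oriented low-frequency path with enlarged receptive fields, an edge-oriented high-frequency path, and a learned fusion weight), here is a hedged PyTorch sketch; the module names and layer choices are assumptions, not the paper's CSDW/LFFN design:

```python
import torch
import torch.nn as nn

class DualBranchSR(nn.Module):
    """Minimal sketch of a two-branch super-resolution block: a low-frequency
    branch with dilated convolutions for global structure, a high-frequency
    branch with small convolutions for edges, and a learned per-pixel fusion
    weight. Illustrative only."""
    def __init__(self, ch=32, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.low = nn.Sequential(                      # global / structural cues
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4))
        self.high = nn.Sequential(                     # local edges / textures
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.gate = nn.Sequential(                     # dynamic fusion weight in [0, 1]
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid())
        self.tail = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, x):
        f = self.head(x)
        lo, hi = self.low(f), self.high(f)
        w = self.gate(torch.cat([lo, hi], dim=1))
        return self.tail(w * lo + (1 - w) * hi)

# sr = DualBranchSR()(torch.rand(1, 3, 64, 64))  # -> (1, 3, 256, 256)
```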

21 pages, 33417 KiB  
Article
Enhancing UAV Object Detection in Low-Light Conditions with ELS-YOLO: A Lightweight Model Based on Improved YOLOv11
by Tianhang Weng and Xiaopeng Niu
Sensors 2025, 25(14), 4463; https://doi.org/10.3390/s25144463 - 17 Jul 2025
Abstract
Drone-view object detection models operating under low-light conditions face several challenges, such as object scale variations, high image noise, and limited computational resources. Existing models often struggle to balance accuracy and lightweight architecture. This paper introduces ELS-YOLO, a lightweight object detection model tailored for low-light environments, built upon the YOLOv11s framework. ELS-YOLO features a re-parameterized backbone (ER-HGNetV2) with integrated Re-parameterized Convolution and Efficient Channel Attention mechanisms, a Lightweight Feature Selection Pyramid Network (LFSPN) for multi-scale object detection, and a Shared Convolution Separate Batch Normalization Head (SCSHead) to reduce computational complexity. Layer-Adaptive Magnitude-Based Pruning (LAMP) is employed to compress the model size. Experiments on the ExDark and DroneVehicle datasets demonstrate that ELS-YOLO achieves high detection accuracy with a compact model. Here, we show that ELS-YOLO attains a mAP@0.5 of 74.3% and 68.7% on the ExDark and DroneVehicle datasets, respectively, while maintaining real-time inference capability.
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)
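LAMP, the pruning step named in the abstract, assigns each weight a layer-normalized score (its squared magnitude divided by the sum of squared magnitudes of all weights in the same layer that are at least as large), so one global threshold can prune all layers. A small sketch under those standard definitions (illustrative code, not the authors' pruning pipeline):

```python
import torch

def lamp_scores(weight: torch.Tensor) -> torch.Tensor:
    """Layer-Adaptive Magnitude-based Pruning (LAMP) scores for one layer.

    Each squared weight is divided by the sum of squared weights in the same
    layer that are at least as large, making scores comparable across layers."""
    w2 = weight.detach().flatten().pow(2)
    sorted_w2, order = torch.sort(w2)                        # ascending
    # suffix sums: for each sorted position, sum of this and all larger weights
    suffix = torch.flip(torch.cumsum(torch.flip(sorted_w2, [0]), 0), [0])
    scores_sorted = sorted_w2 / suffix
    scores = torch.empty_like(scores_sorted)
    scores[order] = scores_sorted                            # undo the sort
    return scores.view_as(weight)

def global_prune_mask(weights, sparsity=0.5):
    """Keep the (1 - sparsity) fraction of weights with the highest LAMP scores."""
    all_scores = torch.cat([lamp_scores(w).flatten() for w in weights])
    k = max(1, int(sparsity * all_scores.numel()))
    threshold = torch.kthvalue(all_scores, k).values
    return [lamp_scores(w) > threshold for w in weights]
```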

26 pages, 3771 KiB  
Article
BGIR: A Low-Illumination Remote Sensing Image Restoration Algorithm with ZYNQ-Based Implementation
by Zhihao Guo, Liangliang Zheng and Wei Xu
Sensors 2025, 25(14), 4433; https://doi.org/10.3390/s25144433 - 16 Jul 2025
Abstract
When a CMOS (Complementary Metal–Oxide–Semiconductor) imaging system operates at a high frame rate or a high line rate, the exposure time of the imaging system is limited, and the acquired image data will be dark, with a low signal-to-noise ratio and unsatisfactory sharpness. Therefore, in order to improve the visibility and signal-to-noise ratio of remote sensing images based on CMOS imaging systems, this paper proposes a low-light remote sensing image enhancement method, the BGIR (Bilateral-Guided Image Restoration) algorithm, together with a corresponding ZYNQ (Zynq-7000 All Programmable SoC) design scheme. The method uses an improved multi-scale Retinex algorithm in the HSV (hue–saturation–value) color space. First, the original RGB image is converted to HSV to separate its H, S, and V components. Then, the V component is processed using the improved algorithm based on bilateral filtering, gamma correction is applied to make a preliminary adjustment to the brightness and contrast of the whole image, and the S component is processed using segmented linear enhancement to obtain the base layer. The algorithm is deployed on the ZYNQ using ARM + FPGA co-design, allocating the algorithm modules appropriately and accelerating them with lookup tables and pipelining. The experimental results show that the proposed method improves processing speed by nearly 30 times while maintaining restoration quality, and offers the advantages of miniaturization, embeddability, and portability. Following the end-to-end deployment, the processing speeds for resolutions of 640 × 480 and 1280 × 720 reach 80 fps and 30 fps, respectively, satisfying the performance requirements of the imaging system.
(This article belongs to the Section Remote Sensors)
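The processing chain described above (HSV split, a bilateral-filter illumination estimate on V, a Retinex-style correction, gamma adjustment, and saturation stretching) can be approximated with off-the-shelf OpenCV operations. The following single-scale sketch is only an assumption about that chain; filter and gamma parameters are illustrative, and it omits the multi-scale and FPGA-specific parts:

```python
import cv2
import numpy as np

def bgir_like_enhance(bgr, gamma=0.6, sat_gain=1.3):
    """Rough, single-scale illustration of the described pipeline: split HSV,
    estimate illumination on V with a bilateral filter, apply a Retinex-style
    log-ratio correction, gamma-correct the result, and stretch saturation.
    Parameters are illustrative, not the paper's tuned values."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)

    v_norm = v / 255.0 + 1e-6
    illum = cv2.bilateralFilter(v_norm, d=9, sigmaColor=0.1, sigmaSpace=15)
    retinex = np.log(v_norm) - np.log(illum + 1e-6)          # reflectance estimate
    retinex = cv2.normalize(retinex, None, 0, 1, cv2.NORM_MINMAX)
    v_out = np.power(retinex, gamma) * 255.0                  # gamma correction

    s_out = np.clip(s * sat_gain, 0, 255)                     # simple saturation stretch

    out = cv2.merge([h, s_out.astype(np.float32), v_out.astype(np.float32)])
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_HSV2BGR)
```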

12 pages, 5633 KiB  
Article
Study on Joint Intensity in Real-Space and k-Space of SFS Super-Resolution Imaging via Multiplex Illumination Modulation
by Xiaoyu Yang, Haonan Zhang, Feihong Lin, Xu Liu and Qing Yang
Photonics 2025, 12(7), 717; https://doi.org/10.3390/photonics12070717 - 16 Jul 2025
Abstract
This paper studied the general mechanism of spatial-frequency-shift (SFS) super-resolution imaging based on multiplex illumination modulation. The theory of SFS joint intensity was first proposed. Experiments on parallel slots with a discrete spatial frequency (SF) distribution and V-shaped slots with a continuous SF distribution were carried out, and their real-space and k-space images were obtained. The influence on SFS super-resolution imaging of single illumination with different SFS and of mixed illumination with various combinations was analyzed. The phenomena of sample SF coverage were discussed. The SFS super-resolution imaging characteristics based on low-coherence illumination and highly localized light fields were identified. The phenomenon of image magnification during the SFS super-resolution imaging process was discussed. The differences and connections between the SF spectrum of objects and the k-space images obtained in the SFS super-resolution imaging process were explained. These results provide support for the optimization of high-throughput SFS super-resolution imaging.
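For readers unfamiliar with the spatial-frequency-shift principle, the standard relation (written here in generic notation that is not taken from the paper) is that illumination at spatial frequency k_i shifts object frequencies into the objective's passband, extending the detectable band:

```latex
% Standard spatial-frequency-shift relation (notation assumed, not the paper's):
% illumination at spatial frequency k_i maps object frequency k_o to k_o - k_i,
% so frequencies beyond the native NA-limited band become detectable.
\[
  \lvert \mathbf{k}_o - \mathbf{k}_i \rvert \le \frac{2\pi\,\mathrm{NA}}{\lambda}
  \quad\Longrightarrow\quad
  \lvert \mathbf{k}_o \rvert \le \frac{2\pi\,\mathrm{NA}}{\lambda} + \lvert \mathbf{k}_i \rvert .
\]
```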

19 pages, 3619 KiB  
Article
An Adaptive Underwater Image Enhancement Framework Combining Structural Detail Enhancement and Unsupervised Deep Fusion
by Semih Kahveci and Erdinç Avaroğlu
Appl. Sci. 2025, 15(14), 7883; https://doi.org/10.3390/app15147883 - 15 Jul 2025
Abstract
The underwater environment severely degrades image quality by absorbing and scattering light. This causes significant challenges, including non-uniform illumination, low contrast, color distortion, and blurring. These degradations compromise the performance of critical underwater applications, including water quality monitoring, object detection, and identification. To address these issues, this study proposes a detail-oriented hybrid framework for underwater image enhancement that synergizes the strengths of traditional image processing with the powerful feature extraction capabilities of unsupervised deep learning. Our framework introduces a novel multi-scale detail enhancement unit to accentuate structural information, followed by a Latent Low-Rank Representation (LatLRR)-based simplification step. This unique combination effectively suppresses common artifacts like oversharpening, spurious edges, and noise by decomposing the image into meaningful subspaces. The principal structural features are then optimally combined with a gamma-corrected luminance channel using an unsupervised MU-Fusion network, achieving a balanced optimization of both global contrast and local details. The experimental results on the challenging Test-C60 and OceanDark datasets demonstrate that our method consistently outperforms state-of-the-art fusion-based approaches, achieving average improvements of 7.5% in UIQM, 6% in IL-NIQE, and 3% in AG. Wilcoxon signed-rank tests confirm that these performance gains are statistically significant (p < 0.01). Consequently, the proposed method significantly mitigates prevalent issues such as color aberration, detail loss, and artificial haze, which are frequently encountered in existing techniques.
(This article belongs to the Section Computing and Artificial Intelligence)

18 pages, 12097 KiB  
Article
Adaptive Outdoor Cleaning Robot with Real-Time Terrain Perception and Fuzzy Control
by Raul Fernando Garcia Azcarate, Akhil Jayadeep, Aung Kyaw Zin, James Wei Shung Lee, M. A. Viraj J. Muthugala and Mohan Rajesh Elara
Mathematics 2025, 13(14), 2245; https://doi.org/10.3390/math13142245 - 10 Jul 2025
Abstract
Outdoor cleaning robots must operate reliably across diverse and unstructured surfaces, yet many existing systems lack the adaptability to handle terrain variability. This paper proposes a terrain-aware cleaning framework that dynamically adjusts robot behavior based on real-time surface classification and slope estimation. A 128-channel LiDAR sensor captures signal intensity images, which are processed by a ResNet-18 convolutional neural network to classify floor types as wood, smooth, or rough. Simultaneously, pitch angles from an onboard IMU detect terrain inclination. These inputs are transformed into fuzzy sets and evaluated using a Mamdani-type fuzzy inference system. The controller adjusts brush height, brush speed, and robot velocity through 81 rules derived from 48 structured cleaning experiments across varying terrain and slopes. Validation was conducted in low-light (night-time) conditions, leveraging LiDAR's lighting-invariant capabilities. Field trials confirm that the robot responds effectively to environmental conditions, such as reducing speed on slopes or increasing brush pressure on rough surfaces. The integration of deep learning and fuzzy control enables safe, energy-efficient, and adaptive cleaning in complex outdoor environments. This work demonstrates the feasibility and real-world applicability of combining perception and inference-based control in terrain-adaptive robotic systems.
(This article belongs to the Special Issue Research and Applications of Neural Networks and Fuzzy Logic)
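As a compact illustration of Mamdani-type inference of the kind described above (not the paper's 81-rule controller), the sketch below uses triangular memberships, min implication, max aggregation, and centroid defuzzification for a toy roughness/slope-to-brush-speed mapping; all membership ranges and rules are assumptions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular/shoulder membership function."""
    x = np.asarray(x, dtype=float)
    left = np.where(b > a, (x - a) / (b - a + 1e-12), 1.0)
    right = np.where(c > b, (c - x) / (c - b + 1e-12), 1.0)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def mamdani_brush_speed(roughness, slope_deg):
    """Toy Mamdani inference: inputs are surface roughness in [0, 1] and slope
    in degrees; output is brush speed in RPM. Min implication, max aggregation,
    centroid defuzzification."""
    y = np.linspace(0.0, 3000.0, 301)                          # output universe: brush RPM
    out_low = tri(y, 0, 0, 1500)
    out_mid = tri(y, 500, 1500, 2500)
    out_high = tri(y, 1500, 3000, 3000)

    # membership degrees of the crisp inputs
    rough_low, rough_high = tri(roughness, 0, 0, 0.6), tri(roughness, 0.4, 1, 1)
    slope_flat, slope_steep = tri(slope_deg, 0, 0, 10), tri(slope_deg, 5, 20, 20)

    # toy rules: smooth & flat -> low speed; rough & flat -> high speed; steep -> medium
    agg = np.maximum.reduce([
        np.minimum(np.minimum(rough_low, slope_flat), out_low),
        np.minimum(np.minimum(rough_high, slope_flat), out_high),
        np.minimum(slope_steep, out_mid),
    ])
    return float((y * agg).sum() / (agg.sum() + 1e-9))         # centroid

# mamdani_brush_speed(0.8, 3.0) -> high brush speed on rough, nearly flat ground
```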

23 pages, 10392 KiB  
Article
Dual-Branch Luminance–Chrominance Attention Network for Hydraulic Concrete Image Enhancement
by Zhangjun Peng, Li Li, Chuanhao Chang, Rong Tang, Guoqiang Zheng, Mingfei Wan, Juanping Jiang, Shuai Zhou, Zhenggang Tian and Zhigui Liu
Appl. Sci. 2025, 15(14), 7762; https://doi.org/10.3390/app15147762 - 10 Jul 2025
Abstract
Hydraulic concrete is a critical infrastructure material, with its surface condition playing a vital role in quality assessments for water conservancy and hydropower projects. However, images taken in complex hydraulic environments often suffer from degraded quality due to low lighting, shadows, and noise, making it difficult to distinguish defects from the background and thereby hindering accurate defect detection and damage evaluation. In this study, following systematic analyses of hydraulic concrete color space characteristics, we propose a Dual-Branch Luminance–Chrominance Attention Network (DBLCANet-HCIE) specifically designed for low-light hydraulic concrete image enhancement. Inspired by human visual perception, the network simultaneously improves global contrast and preserves fine-grained defect textures, which are essential for structural analysis. The proposed architecture consists of a Luminance Adjustment Branch (LAB) and a Chroma Restoration Branch (CRB). The LAB incorporates a Luminance-Aware Hybrid Attention Block (LAHAB) to capture both the global luminance distribution and local texture details, enabling adaptive illumination correction through comprehensive scene understanding. The CRB integrates a Channel Denoiser Block (CDB) for channel-specific noise suppression and a Frequency-Domain Detail Enhancement Block (FDDEB) to refine chrominance information and enhance subtle defect textures. A feature fusion block is designed to fuse and learn the features of the outputs from the two branches, resulting in images with enhanced luminance, reduced noise, and preserved surface anomalies. To validate the proposed approach, we construct a dedicated low-light hydraulic concrete image dataset (LLHCID). Extensive experiments conducted on both LOLv1 and LLHCID benchmarks demonstrate that the proposed method significantly enhances the visual interpretability of hydraulic concrete surfaces while effectively addressing low-light degradation challenges.

21 pages, 2471 KiB  
Article
Attention-Based Mask R-CNN Enhancement for Infrared Image Target Segmentation
by Liang Wang and Kan Ren
Symmetry 2025, 17(7), 1099; https://doi.org/10.3390/sym17071099 - 9 Jul 2025
Abstract
Image segmentation is an important task in image processing, and infrared (IR) image segmentation remains challenging due to the unique characteristics of IR data. Infrared imaging utilizes the infrared radiation emitted by objects to produce images, which can supplement the performance of visible-light images under adverse lighting conditions to some extent. However, the low spatial resolution and limited texture details in IR images hinder the achievement of high-precision segmentation. To address these issues, an attention mechanism based on symmetrical cross-channel interaction—motivated by symmetry principles in computer vision—was integrated into a Mask Region-Based Convolutional Neural Network (Mask R-CNN) framework. A Bottleneck-enhanced Squeeze-and-Attention (BNSA) module was incorporated into the backbone network, and novel loss functions were designed for both the bounding box (Bbox) regression and mask prediction branches to enhance segmentation performance. Furthermore, a dedicated infrared image dataset was constructed to validate the proposed method. The experimental results show that the optimized model achieves higher segmentation accuracy and better overall performance than the original network and other mainstream segmentation models on our dataset, demonstrating how symmetrical design principles can effectively improve complex vision tasks.
(This article belongs to the Special Issue Symmetry and Its Applications in Computer Vision)
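The BNSA module is described as a bottleneck block with symmetrical cross-channel attention. The sketch below is a generic stand-in built from a standard bottleneck plus ECA-style 1D cross-channel attention; it shows the general shape of such a block, not the paper's implementation:

```python
import torch
import torch.nn as nn

class ChannelAttentionBottleneck(nn.Module):
    """Illustrative bottleneck block with local cross-channel attention
    (ECA-style 1D convolution over pooled channel descriptors). A generic
    stand-in for the paper's BNSA module."""
    def __init__(self, channels, reduction=4, k=3):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels))
        self.attn = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):
        y = self.body(x)
        # squeeze: global average pool -> (B, 1, C); excite: 1D conv across channels
        w = y.mean(dim=(2, 3)).unsqueeze(1)                              # (B, 1, C)
        w = torch.sigmoid(self.attn(w)).transpose(1, 2).unsqueeze(-1)    # (B, C, 1, 1)
        return torch.relu(x + y * w)                                     # residual connection

# feats = ChannelAttentionBottleneck(256)(torch.rand(2, 256, 32, 32))
```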

22 pages, 1661 KiB  
Article
UniText: A Unified Framework for Chinese Text Detection, Recognition, and Restoration in Ancient Document and Inscription Images
by Lu Shen, Zewei Wu, Xiaoyuan Huang, Boliang Zhang, Su-Kit Tang, Jorge Henriques and Silvia Mirri
Appl. Sci. 2025, 15(14), 7662; https://doi.org/10.3390/app15147662 - 8 Jul 2025
Abstract
Processing ancient text images presents significant challenges due to severe visual degradation, missing glyph structures, and various types of noise caused by aging. These issues are particularly prominent in Chinese historical documents and stone inscriptions, where diverse writing styles, multi-angle capturing, uneven lighting, and low contrast further hinder the performance of traditional OCR techniques. In this paper, we propose a unified neural framework, UniText, for the detection, recognition, and glyph restoration of Chinese characters in images of historical documents and inscriptions. UniText operates at the character level and processes full-page inputs, making it robust to multi-scale, multi-oriented, and noise-corrupted text. The model adopts a multi-task architecture that integrates spatial localization, semantic recognition, and visual restoration through stroke-aware supervision and multi-scale feature aggregation. Experimental results on our curated dataset of ancient Chinese texts demonstrate that UniText achieves competitive performance in detection and recognition while producing visually faithful restorations under challenging conditions. This work provides a technically scalable and generalizable framework for image-based document analysis, with potential applications in historical document processing, digital archiving, and broader tasks in text image understanding.

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes.
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
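A quick sanity check of the "near-diffraction-limited" claim for the f/6 telephoto channel, assuming a mid-visible wavelength of roughly 550 nm (the abstract does not state the wavelength):

```latex
% Diffraction-limited cutoff and MTF for an incoherent, circular-aperture system
% (wavelength assumed to be 550 nm; not given in the abstract):
\[
  \nu_c = \frac{1}{\lambda N} = \frac{1}{0.55\,\mu\mathrm{m} \times 6} \approx 303\ \mathrm{lp/mm},
  \qquad
  \mathrm{MTF}(\nu) = \frac{2}{\pi}\!\left[\cos^{-1}\!\frac{\nu}{\nu_c}
    - \frac{\nu}{\nu_c}\sqrt{1-\left(\frac{\nu}{\nu_c}\right)^{2}}\right].
\]
\[
  \mathrm{MTF}(90.9\ \mathrm{lp/mm}) \approx \frac{2}{\pi}\left[1.266 - 0.3 \times 0.954\right] \approx 0.62,
\]
% so a measured MTF above 0.5 at 90.9 lp/mm is indeed close to the diffraction limit for an f/6 aperture.
```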

25 pages, 3175 KiB  
Article
Turbulence-Resilient Object Classification in Remote Sensing Using a Single-Pixel Image-Free Approach
by Yin Cheng, Yusen Liao and Jun Ke
Sensors 2025, 25(13), 4137; https://doi.org/10.3390/s25134137 - 2 Jul 2025
Abstract
In remote sensing, object classification often suffers from severe degradation caused by atmospheric turbulence and low-signal conditions. Traditional image reconstruction approaches are computationally expensive and fragile under such conditions. In this work, we propose a novel image-free classification framework using single-pixel imaging (SPI), which directly classifies targets from 1D measurements without reconstructing the image. A learnable sampling matrix is introduced for structured light modulation, and a hybrid CNN-Transformer network (Hybrid-CTNet) is employed for robust feature extraction. To enhance resilience against turbulence and enable efficient deployment, we design an (N+1)×L hybrid strategy that integrates convolutional and Transformer blocks in every stage. Extensive simulations and optical experiments validate the effectiveness of our approach under various turbulence intensities and sampling rates as low as 1%. Compared with existing image-based and image-free methods, our model achieves superior performance in classification accuracy, computational efficiency, and robustness, which is important for potential low-resource, real-time remote sensing applications.
(This article belongs to the Section Optical Sensors)
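The image-free pipeline described above can be summarized as a learnable measurement matrix followed by a classifier that never reconstructs the scene. A minimal PyTorch sketch of that idea (a plain MLP head stands in for the paper's Hybrid-CTNet; sizes and names are assumptions):

```python
import torch
import torch.nn as nn

class ImageFreeSPIClassifier(nn.Module):
    """Sketch of image-free single-pixel classification: a learnable sampling
    matrix plays the role of the structured-light patterns, producing M 1D
    measurements per image, and a small classifier maps measurements to labels."""
    def __init__(self, img_size=64, sampling_rate=0.01, n_classes=10):
        super().__init__()
        n_pixels = img_size * img_size
        m = max(1, int(sampling_rate * n_pixels))        # e.g. 1% sampling -> 40 patterns
        self.patterns = nn.Parameter(torch.randn(m, n_pixels) / n_pixels ** 0.5)
        self.classifier = nn.Sequential(
            nn.Linear(m, 256), nn.ReLU(inplace=True),
            nn.Linear(256, n_classes))

    def forward(self, x):                                # x: (B, 1, H, W)
        flat = x.flatten(1)                              # (B, H*W)
        y = flat @ self.patterns.t()                     # simulated bucket-detector readings (B, M)
        return self.classifier(y)

# logits = ImageFreeSPIClassifier()(torch.rand(8, 1, 64, 64))
```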

18 pages, 3665 KiB  
Article
Analytical Device and Prediction Method for Urine Component Concentrations
by Zhe Wang, Jianbang Huang, Qimeng Chen, Yuanhua Yu, Xuan Yu, Yue Zhao, Yan Wang, Chunxiang Shi, Zizhao Zhao and Dachun Tang
Micromachines 2025, 16(7), 789; https://doi.org/10.3390/mi16070789 - 2 Jul 2025
Abstract
To tackle the low-accuracy problem of analyzing urine component concentrations in real time, a fully automated urine dry-chemistry dipstick analysis device was designed, and a prediction method combining an image acquisition system with a whale optimization algorithm (WOA) for BP neural network optimization was proposed. The image acquisition system, which comprised an ESP32S3 chip and a GC2145 camera, was used to collect urine test strip images, and the color data were then calibrated by image processing and color correction on the host computer. The correlations between reflected light and concentration were established following the Kubelka–Munk theory and the Beer–Lambert law. A mathematical model relating urine colorimetric values to concentration was constructed using the least squares method. The WOA was applied to optimize the weights and thresholds of the BP neural network, and substantial data were used to train the network and perform a comparative analysis. The experimental results show that the MAE, RMSE, and R2 of predicted versus actual urine protein values were 3.1415, 4.328, and approximately 1, respectively. The WOA-BP neural network model exhibited high precision and accuracy in predicting urine component concentrations.
(This article belongs to the Section B: Biology and Biomedicine)
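The two optical relations named in the abstract are standard and easy to state in code: the Kubelka–Munk function links pad reflectance to an absorption/scattering ratio, and the Beer–Lambert law links absorbance to concentration. A hedged NumPy sketch with made-up calibration numbers (not the paper's data):

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function: relates diffuse reflectance R of a test pad to
    the absorption/scattering ratio K/S, which grows with analyte concentration."""
    r = np.clip(np.asarray(reflectance, dtype=float), 1e-6, 1.0)
    return (1.0 - r) ** 2 / (2.0 * r)

def beer_lambert_concentration(absorbance, epsilon, path_length):
    """Beer-Lambert law A = epsilon * l * c, solved for concentration c."""
    return np.asarray(absorbance, dtype=float) / (epsilon * path_length)

# Illustrative least-squares calibration of K/S against known concentrations
# (hypothetical numbers, not the paper's data):
conc = np.array([0.0, 0.3, 1.0, 3.0])                  # g/L protein standards
ks = kubelka_munk(np.array([0.92, 0.74, 0.55, 0.31]))  # measured pad reflectance
slope, intercept = np.polyfit(conc, ks, 1)             # linear model K/S = a*c + b
```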

20 pages, 2735 KiB  
Article
Leaf Area Estimation in High-Wire Tomato Cultivation Using Plant Body Scanning
by Hiroki Naito, Tokihiro Fukatsu, Kota Shimomoto, Fumiki Hosoi and Tomohiko Ota
AgriEngineering 2025, 7(7), 206; https://doi.org/10.3390/agriengineering7070206 - 1 Jul 2025
Abstract
Accurate estimation of the leaf area index (LAI), a key indicator of canopy development and light interception, is essential for improving productivity in greenhouse tomato cultivation. This study presents a non-destructive LAI estimation method using side-view images captured by a vertical scanning system. The system recorded the full vertical profile of tomato plants grown under two deleafing strategies: modifying leaf height (LH) and altering leaf density (LD). Vegetative and leaf areas were extracted using color-based masking and semantic segmentation with the Segment Anything Model (SAM), a general-purpose deep learning tool. Regression models based on leaf or all vegetative pixel counts showed strong correlations with destructively measured LAI, particularly under LH conditions (R2 > 0.85; mean absolute percentage error ≈ 16%). Under LD conditions, accuracy was slightly lower due to occlusion and leaf orientation. Compared with prior 3D-based methods, the proposed 2D approach achieved comparable accuracy while maintaining low cost and a labor-efficient design. However, the system has not been tested in real production, and its generalizability across cultivars, environments, and growth stages remains unverified. This proof-of-concept study highlights the potential of side-view imaging for LAI monitoring and calls for further validation and integration of leaf count estimation.
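The regression step itself is straightforward: leaf-pixel counts from the side-view scans are regressed against destructively measured LAI and evaluated with R2 and MAPE. A small illustrative sketch with hypothetical numbers (the paper reports R2 > 0.85 and MAPE of roughly 16% under LH conditions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical leaf-pixel counts and destructively measured LAI values
# (illustrative data, not the paper's measurements).
leaf_pixels = np.array([[120_000], [180_000], [240_000], [310_000], [402_000]])
measured_lai = np.array([1.1, 1.7, 2.3, 2.9, 3.8])

model = LinearRegression().fit(leaf_pixels, measured_lai)
pred = model.predict(leaf_pixels)

r2 = model.score(leaf_pixels, measured_lai)
mape = np.mean(np.abs((measured_lai - pred) / measured_lai)) * 100  # reported metric
print(f"R^2 = {r2:.3f}, MAPE = {mape:.1f}%")
```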

18 pages, 4391 KiB  
Article
UWMambaNet: Dual-Branch Underwater Image Reconstruction Based on W-Shaped Mamba
by Yuhan Zhang, Xinyang Yu and Zhanchuan Cai
Mathematics 2025, 13(13), 2153; https://doi.org/10.3390/math13132153 - 30 Jun 2025
Abstract
Underwater image enhancement is a challenging task due to the unique optical properties of water, which often lead to color distortion, low contrast, and detail loss. At present, CNN-based methods struggle to capture global context, while Transformer-based methods generally suffer from quadratic complexity. To address this challenge, we propose a dual-branch network architecture based on a W-shaped Mamba: UWMambaNet. Our method integrates a color contrast enhancement branch and a detail enhancement branch, each dedicated to improving specific aspects of underwater images. The color contrast enhancement branch utilizes the RGB and Lab color spaces and uses a Mamba block for advanced feature fusion to enhance color fidelity and contrast. The detail enhancement branch adopts a multi-scale feature extraction strategy to capture fine and contextual details through parallel convolutional paths. The Mamba module is added to both branches, and state-space modeling is used to capture the long-range dependencies and spatial relationships in the image data. This enables effective modeling of the complex interactions and light propagation effects inherent in the underwater environment. Experimental results show that our method significantly improves the visual quality of underwater images and outperforms existing methods on both quantitative metrics and visual comparisons; compared to the best candidate models on the UIEB and EUVP datasets, UWMambaNet improves UCIQE by 3.7% and 2.4%, respectively.
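One simple way to realize the dual color-space input that the color contrast branch relies on is to stack the RGB image with its Lab conversion into a six-channel tensor. The sketch below is an assumption about that preparation step, not the UWMambaNet code:

```python
import numpy as np
import torch
from skimage import color

def rgb_lab_tensor(rgb_uint8):
    """Sketch of a dual color-space input: stack the RGB image with its Lab
    conversion into a 6-channel tensor, each channel roughly normalized to [0, 1].
    Illustrative preprocessing, not the paper's pipeline."""
    rgb = rgb_uint8.astype(np.float32) / 255.0
    lab = color.rgb2lab(rgb)                              # L in [0, 100], a/b roughly [-128, 127]
    lab = (lab + np.array([0.0, 128.0, 128.0])) / np.array([100.0, 255.0, 255.0])
    stacked = np.concatenate([rgb, lab], axis=-1)         # (H, W, 6)
    return torch.from_numpy(stacked).permute(2, 0, 1).unsqueeze(0).float()

# x = rgb_lab_tensor(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))  # (1, 6, 256, 256)
```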
