Search Results (284)

Search Parameters:
Keywords = edge-illumination

25 pages, 9119 KiB  
Article
An Improved YOLOv8n-Based Method for Detecting Rice Shelling Rate and Brown Rice Breakage Rate
by Zhaoyun Wu, Yehao Zhang, Zhongwei Zhang, Fasheng Shen, Li Li, Xuewu He, Hongyu Zhong and Yufei Zhou
Agriculture 2025, 15(15), 1595; https://doi.org/10.3390/agriculture15151595 - 24 Jul 2025
Abstract
Accurate and real-time detection of rice shelling rate (SR) and brown rice breakage rate (BR) is crucial for intelligent hulling sorting but remains challenging: small grain size, dense adhesion, and uneven illumination cause missed detections and blurred boundaries in the standard YOLOv8n. This paper proposes a high-precision, lightweight solution based on an enhanced YOLOv8n with improvements in network architecture, feature fusion, and attention mechanism. The backbone’s C2f module is replaced with C2f-Faster-CGLU, integrating partial convolution (PConv) and convolutional gated linear unit (CGLU) gating to reduce computational redundancy via sparse interaction and enhance small-target feature extraction. A bidirectional feature pyramid network (BiFPN) weights multiscale feature fusion to improve the edge-positioning accuracy of dense grains. An attention mechanism for fine-grained classification (AFGC) is embedded to focus on texture and damage details, enhancing adaptability to light fluctuations. The Detect_Rice lightweight head compresses parameters via group normalization and dynamic convolution sharing, optimizing small-target response. The improved model achieved 96.8% precision and 96.2% mAP. Combined with a quantity–mass model, SR and BR detection errors were reduced to 1.11% and 1.24%, respectively, meeting national standard (GB/T 29898-2013) requirements and providing an effective real-time solution for intelligent hulling sorting. Full article
(This article belongs to the Section Digital Agriculture)
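
As a rough sketch of the partial-convolution idea that the C2f-Faster-CGLU module builds on, the PyTorch snippet below convolves only a slice of the channels and passes the rest through, which is where the computational savings come from; the 1/4 channel ratio and 3×3 kernel are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """FasterNet-style partial convolution: convolve only the first
    dim/ratio channels; the remaining channels pass through unchanged."""
    def __init__(self, dim: int, ratio: int = 4, kernel_size: int = 3):
        super().__init__()
        self.dim_conv = dim // ratio
        self.dim_keep = dim - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_keep], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)
```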

16 pages, 1420 KiB  
Article
Light-Driven Quantum Dot Dialogues: Oscillatory Photoluminescence in Langmuir–Blodgett Films
by Tefera Entele Tesema
Nanomaterials 2025, 15(14), 1113; https://doi.org/10.3390/nano15141113 - 18 Jul 2025
Viewed by 207
Abstract
This study explores the optical properties of a close-packed monolayer composed of core/shell-alloyed CdSeS/ZnS quantum dots (QDs) of two different sizes and compositions. The monolayers were self-assembled in a stacked configuration at the water/air interface using Langmuir–Blodgett (LB) techniques. Under continuous 532 nm laser illumination on the red absorption edge of the blue-emitting smaller QDs (QD450), the red-emitting larger QDs (QD645) exhibited oscillatory temporal dynamics in their photoluminescence (PL), characterized by a pronounced blueshift in the emission peak wavelength and an abrupt decrease in peak intensity. Conversely, excitation by a 405 nm laser on the blue absorption edge induced a drastic redshift in the emission wavelength over time. These significant shifts in emission spectra are attributed to photon- and anisotropic-strain-assisted interlayer atom transfer. The findings provide new insights into strain-driven atomic rearrangements and their impact on the photophysical behavior of QD systems. Full article

19 pages, 3619 KiB  
Article
An Adaptive Underwater Image Enhancement Framework Combining Structural Detail Enhancement and Unsupervised Deep Fusion
by Semih Kahveci and Erdinç Avaroğlu
Appl. Sci. 2025, 15(14), 7883; https://doi.org/10.3390/app15147883 - 15 Jul 2025
Viewed by 143
Abstract
The underwater environment severely degrades image quality by absorbing and scattering light. This causes significant challenges, including non-uniform illumination, low contrast, color distortion, and blurring. These degradations compromise the performance of critical underwater applications, including water quality monitoring, object detection, and identification. To address these issues, this study proposes a detail-oriented hybrid framework for underwater image enhancement that synergizes the strengths of traditional image processing with the powerful feature extraction capabilities of unsupervised deep learning. Our framework introduces a novel multi-scale detail enhancement unit to accentuate structural information, followed by a Latent Low-Rank Representation (LatLRR)-based simplification step. This unique combination effectively suppresses common artifacts like oversharpening, spurious edges, and noise by decomposing the image into meaningful subspaces. The principal structural features are then optimally combined with a gamma-corrected luminance channel using an unsupervised MU-Fusion network, achieving a balanced optimization of both global contrast and local details. The experimental results on the challenging Test-C60 and OceanDark datasets demonstrate that our method consistently outperforms state-of-the-art fusion-based approaches, achieving average improvements of 7.5% in UIQM, 6% in IL-NIQE, and 3% in AG. Wilcoxon signed-rank tests confirm that these performance gains are statistically significant (p < 0.01). Consequently, the proposed method significantly mitigates prevalent issues such as color aberration, detail loss, and artificial haze, which are frequently encountered in existing techniques. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
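
The fusion stage feeds a gamma-corrected luminance channel into the MU-Fusion network. As a minimal sketch of that preprocessing step (OpenCV; the YCrCb color space and the gamma value 0.7 are assumptions, not the paper's choices), one could write:

```python
import cv2
import numpy as np

def gamma_luminance(img_bgr: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Gamma-correct only the luminance channel, leaving chroma untouched."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32) / 255.0
    ycrcb[..., 0] = ycrcb[..., 0] ** gamma   # adjust luminance only
    bgr = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return np.clip(bgr * 255.0, 0, 255).astype(np.uint8)
```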

26 pages, 7701 KiB  
Article
YOLO-StarLS: A Ship Detection Algorithm Based on Wavelet Transform and Multi-Scale Feature Extraction for Complex Environments
by Yihan Wang, Shuang Zhang, Jianhao Xu, Zhenwen Cheng and Gang Du
Symmetry 2025, 17(7), 1116; https://doi.org/10.3390/sym17071116 - 11 Jul 2025
Viewed by 236
Abstract
Ship detection in complex environments presents challenges such as sea surface reflections, wave interference, variations in illumination, and a range of target scales. The interaction between symmetric ship structures and wave patterns challenges conventional algorithms, particularly in maritime wireless networks. This study presents YOLO-StarLS (You Only Look Once with Star-topology Lightweight Ship detection), a detection framework leveraging wavelet transforms and multi-scale feature extraction through three core modules. We developed a Wavelet Multi-scale Feature Extraction Network (WMFEN) utilizing adaptive Haar wavelet decomposition with star-topology extraction to preserve multi-frequency information while minimizing detail loss. We introduced a Cross-axis Spatial Attention Refinement module (CSAR), which integrates star structures with cross-axis attention mechanisms to enhance spatial perception. We constructed an Efficient Detail-Preserving Detection head (EDPD) combining differential and shared convolutions to enhance edge detection while reducing computational complexity. Evaluation on the SeaShips dataset demonstrated that YOLO-StarLS achieves superior performance on both the mAP50 and mAP50–95 metrics, improving by 2.21% and 2.42% over the baseline YOLO11. The approach is also efficient, with a 36% reduction in the number of parameters to 1.67 M, a 34% decrease in complexity to 4.3 GFLOPs, and an inference speed of 162.0 FPS. Comparative analysis against eight algorithms confirmed its superiority in symmetric target detection. This work enhances real-time ship detection and provides a foundation for maritime wireless surveillance networks. Full article
(This article belongs to the Section Computer)
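
For readers unfamiliar with the multi-frequency split that WMFEN's adaptive decomposition builds on, one level of the classical (fixed, non-adaptive) 2D Haar transform looks like this — a sketch using PyWavelets, not the paper's learned variant:

```python
import numpy as np
import pywt

def haar_subbands(image: np.ndarray):
    """One level of 2D Haar DWT: an approximation sub-band plus
    horizontal, vertical, and diagonal detail sub-bands."""
    ca, (ch, cv, cd) = pywt.dwt2(image.astype(np.float32), 'haar')
    return ca, ch, cv, cd
```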

26 pages, 6653 KiB  
Article
Development of a Calibration Procedure of the Additive Masked Stereolithography Method for Improving the Accuracy of Model Manufacturing
by Paweł Turek, Anna Bazan, Paweł Kubik and Michał Chlost
Appl. Sci. 2025, 15(13), 7412; https://doi.org/10.3390/app15137412 - 1 Jul 2025
Viewed by 374
Abstract
The article presents a three-stage methodology for calibrating 3D printing using mSLA technology, aimed at improving dimensional accuracy and print repeatability. The proposed approach is based on procedures that enable the collection and analysis of numerical data, thereby minimizing the influence of the operator’s subjective judgment, which is commonly relied upon in traditional calibration methods. In the first stage, compensation for the uneven illumination of the LCD matrix was performed by establishing a regression model that describes the relationship between UV radiation intensity and pixel brightness. Based on this model, a grayscale correction mask was developed. The second stage focused on determining the optimal exposure time, based on its effect on dimensional accuracy, detail reproduction, and model strength. The optimal exposure time is defined as the duration that provides the highest possible mechanical strength without significant loss of detail due to the light bleed phenomenon (i.e., diffusion of UV radiation beyond the mask edge). In the third stage, scale correction was applied to compensate for shrinkage and geometric distortions, further reducing the impact of light bleed on the dimensional fidelity of printed components. The proposed methodology was validated using an Anycubic Photon M3 Premium printer with Anycubic ABS-Like Resin Pro 2.0. Compensating for light intensity variation reduced the original standard deviation from 0.26 to 0.17 mW/cm², corresponding to a decrease of more than one third. The methodology reduced surface displacement due to shrinkage from 0.044% to 0.003%, and the residual internal dimensional error from 0.159 mm to 0.017 mm (a 72% reduction). Full article
(This article belongs to the Section Additive Manufacturing Technologies)
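
The first calibration stage fits a regression between UV intensity and pixel brightness and then builds a grayscale correction mask. A minimal sketch of the mask construction, assuming a purely linear intensity–gray relationship (the paper fits the actual response with a regression model):

```python
import numpy as np

def correction_mask(intensity_map: np.ndarray) -> np.ndarray:
    """Per-pixel gray levels that dim brighter LCD regions down to the
    dimmest measured spot, equalizing UV output across the build area."""
    target = intensity_map.min()               # mW/cm^2 at the weakest point
    mask = np.clip(target / intensity_map * 255.0, 0.0, 255.0)
    return mask.astype(np.uint8)
```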

31 pages, 5644 KiB  
Article
SWMD-YOLO: A Lightweight Model for Tomato Detection in Greenhouse Environments
by Quan Wang, Ye Hua, Qiongdan Lou and Xi Kan
Agronomy 2025, 15(7), 1593; https://doi.org/10.3390/agronomy15071593 - 29 Jun 2025
Viewed by 346
Abstract
The accurate detection of occluded tomatoes in complex greenhouse environments remains challenging due to the limited feature representation ability and high computational costs of existing models. This study proposes SWMD-YOLO, a lightweight multi-scale detection network optimized for greenhouse scenarios. The model integrates switchable atrous convolution (SAConv), which dynamically adjusts receptive fields for occlusion-adaptive feature extraction, and wavelet transform convolution (WTConv), which decomposes features into multi-frequency sub-bands, preserving the critical edge details of obscured targets. Traditional down-sampling is replaced with a dynamic sample (DySample) operator to minimize information loss during resolution transitions, while a multi-scale convolutional attention (MSCA) mechanism prioritizes discriminative regions under varying illumination. Additionally, we introduce Focaler-IoU, a novel loss function that addresses sample imbalance by dynamically re-weighting gradients for partially occluded and multi-scale targets. Experiments on greenhouse tomato datasets demonstrate that SWMD-YOLO achieves 93.47% mAP50 at a detection speed of 75.68 FPS, outperforming baseline models in accuracy while reducing parameters by 18.9%. Cross-dataset validation confirms the model’s robustness to complex backgrounds and lighting variations. Overall, the proposed model provides a computationally efficient solution for real-time crop monitoring in resource-constrained precision agriculture systems. Full article
(This article belongs to the Section Precision and Digital Agriculture)
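
Focaler-IoU re-weights the IoU through a linear interval mapping so that gradients concentrate on a chosen difficulty band. A sketch of that mapping (the interval bounds d and u below are illustrative, not the values used in the paper):

```python
import torch

def focaler_iou(iou: torch.Tensor, d: float = 0.0, u: float = 0.95) -> torch.Tensor:
    """Linear interval mapping of IoU: values at or below d map to 0,
    at or above u to 1; the regression loss is then 1 - focaler_iou(iou)."""
    return ((iou - d) / (u - d)).clamp(0.0, 1.0)
```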

20 pages, 3340 KiB  
Article
Infrared Monocular Depth Estimation Based on Radiation Field Gradient Guidance and Semantic Priors in HSV Space
by Rihua Hao, Chao Xu and Chonghao Zhong
Sensors 2025, 25(13), 4022; https://doi.org/10.3390/s25134022 - 27 Jun 2025
Viewed by 344
Abstract
Monocular depth estimation (MDE) has emerged as a powerful technique for extracting scene depth from a single image, particularly in the context of computational imaging. Conventional MDE methods based on RGB images often degrade under varying illuminations. To overcome this, an end-to-end framework is developed that leverages the illumination-invariant properties of infrared images for accurate depth estimation. Specifically, a multi-task UNet architecture was designed to perform gradient extraction, semantic segmentation, and texture reconstruction from infrared RAW images. To strengthen structural learning, a Radiation Field Gradient Guidance (RGG) module was incorporated, enabling edge-aware attention mechanisms. The gradients, semantics, and textures were mapped to the Saturation (S), Hue (H), and Value (V) channels in the HSV color space, subsequently converted into an RGB format for input into the depth estimation network. Additionally, a sky mask loss was introduced during training to mitigate the influence of ambiguous sky regions. Experimental validation on a custom infrared dataset demonstrated high accuracy, achieving a δ1 of 0.976. These results confirm that integrating radiation field gradient guidance and semantic priors in HSV space significantly enhances depth estimation performance for infrared imagery. Full article
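
The channel assignment described above (semantics → H, gradients → S, texture → V) can be sketched with OpenCV as below; the uint8 input maps and the hue rescaling to OpenCV's [0, 179] range are assumptions about the data format:

```python
import cv2
import numpy as np

def hsv_encode(semantics: np.ndarray, gradients: np.ndarray,
               texture: np.ndarray) -> np.ndarray:
    """Pack three uint8 maps into the H, S, and V channels and convert
    to RGB for input into the depth estimation network."""
    h = (semantics.astype(np.float32) * (179.0 / 255.0)).astype(np.uint8)
    hsv = cv2.merge([h, gradients, texture])   # OpenCV order: H, S, V
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```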

32 pages, 7048 KiB  
Article
DCMC-UNet: A Novel Segmentation Model for Carbon Traces in Oil-Immersed Transformers Improved with Dynamic Feature Fusion and Adaptive Illumination Enhancement
by Hongxin Ji, Jiaqi Li, Zhennan Shi, Zijian Tang, Xinghua Liu and Peilin Han
Sensors 2025, 25(13), 3904; https://doi.org/10.3390/s25133904 - 23 Jun 2025
Viewed by 278
Abstract
For large oil-immersed transformers, the metal-enclosed structure poses significant challenges for direct visual inspection of internal defects. To enable effective detection of internal insulation defects, this study employs a self-developed micro-robot for internal visual inspection. Given the substantial morphological and dimensional variations of target defects (e.g., carbon traces produced by surface discharge inside the transformer), intelligent and efficient extraction of carbon trace features from complex backgrounds becomes critical for robotic inspection. To address these challenges, we propose DCMC-UNet, a semantic segmentation model for carbon traces that combines adaptive illumination enhancement with dynamic feature fusion. For blurred carbon trace images caused by unstable light reflection and illumination in transformer oil, an improved CLAHE algorithm is developed, incorporating learnable parameters to balance luminance and contrast while enhancing the edge features of carbon traces. To handle the morphological diversity and edge complexity of carbon traces, a dynamic deformable encoder (DDE) is integrated into the encoder, leveraging deformable convolutional kernels to improve carbon trace feature extraction. An edge-aware decoder (EAD) extracts edge details from predicted segmentation maps and fuses them with encoded features to enrich edge representations. To mitigate the semantic gap between the encoder and the decoder, we replace the standard skip connection with a cross-level attention connection fusion layer (CLFC), enhancing the multi-scale fusion of morphological and edge features. Furthermore, a multi-scale atrous feature aggregation module (MAFA) is designed in the neck to enhance the integration of deep semantic and shallow visual features, improving multi-dimensional feature fusion. Experimental results demonstrate that DCMC-UNet outperforms U-Net, U-Net++, and other benchmarks in carbon trace segmentation. On the transformer carbon trace dataset, it improves on the baseline U-Net by 14.04% in mIoU, 10.87% in Dice, 10.97% in pixel accuracy (P), and 5.77% in overall accuracy (Acc). The proposed model provides reliable technical support for surface discharge intensity assessment and insulation condition evaluation in oil-immersed transformers. Full article
(This article belongs to the Section Industrial Sensors)
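
For context, the fixed-parameter CLAHE that DCMC-UNet's illumination module improves upon is a one-liner in OpenCV; the clip limit and tile size below are common defaults, whereas the paper makes these quantities learnable:

```python
import cv2

def clahe_baseline(gray, clip_limit: float = 2.0, tile: int = 8):
    """Standard CLAHE on a grayscale image: per-tile histogram
    equalization with a clipped (contrast-limited) histogram."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(tile, tile))
    return clahe.apply(gray)
```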

29 pages, 7409 KiB  
Article
Quality Assessment of High-Speed Motion Blur Images for Mobile Automated Tunnel Inspection
by Chulhee Lee, Donggyou Kim and Dongku Kim
Sensors 2025, 25(12), 3804; https://doi.org/10.3390/s25123804 - 18 Jun 2025
Viewed by 534
Abstract
This study quantitatively evaluates the impact of motion blur—caused by high-speed movement—on image quality in a mobile tunnel scanning system (MTSS). To simulate movement at speeds of up to 70 km/h, a high-speed translational motion panel was developed. Images were captured under conditions compliant with the ISO 12233 international standard, and image quality was assessed using two metrics: blurred edge width (BEW) and the spatial frequency response at 50% contrast (MTF50). Experiments were conducted under varying shutter speeds, lighting conditions (15,000 lx and 40,000 lx), and motion speeds. The results demonstrated that increased motion speed increased BEW and decreased MTF50, indicating greater blur intensity and reduced image sharpness. Two-way analysis of variance and t-tests confirmed that shutter speed and motion speed significantly affected image quality. Although higher illumination levels partially improved image quality, they also occasionally reduced sharpness. Field validation using MTSS in actual tunnel environments demonstrated that BEW and MTF50 effectively captured blur variations by scanning direction. This study proposes BEW and MTF50 as reliable indicators for quantitatively evaluating motion blur in tunnel inspection imagery and suggests their potential to optimize MTSS operation and improve the accuracy of automated defect detection. Full article
(This article belongs to the Section Intelligent Sensors)
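
BEW measures how wide the transition across a test-chart edge becomes under motion. One common edge-width definition, the 10–90% rise distance of a normalized edge-spread profile, can be sketched as follows (the paper's exact BEW computation may differ):

```python
import numpy as np

def blurred_edge_width(profile: np.ndarray) -> float:
    """Pixel distance between the 10% and 90% points of a normalized,
    monotonically rising dark-to-bright edge profile."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    lo = int(np.argmax(p >= 0.1))    # first index at or above 10%
    hi = int(np.argmax(p >= 0.9))    # first index at or above 90%
    return float(abs(hi - lo))
```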

35 pages, 1553 KiB  
Article
Efficient Learning-Based Robotic Navigation Using Feature-Based RGB-D Pose Estimation and Topological Maps
by Eder A. Rodríguez-Martínez, Jesús Elías Miranda-Vega, Farouk Achakir, Oleg Sergiyenko, Julio C. Rodríguez-Quiñonez, Daniel Hernández Balbuena and Wendy Flores-Fuentes
Entropy 2025, 27(6), 641; https://doi.org/10.3390/e27060641 - 15 Jun 2025
Viewed by 633
Abstract
Robust indoor robot navigation typically demands either costly sensors or extensive training data. We propose a cost-effective RGB-D navigation pipeline that couples feature-based relative pose estimation with a lightweight multi-layer-perceptron (MLP) policy. RGB-D keyframes extracted from human-driven traversals form nodes of a topological map; edges are added when visual similarity and geometric–kinematic constraints are jointly satisfied. During autonomy, LightGlue features and SVD give six-DoF relative pose to the active keyframe, and the MLP predicts one of four discrete actions. Low visual similarity or detected obstacles trigger graph editing and Dijkstra replanning in real time. Across eight tasks in four Habitat-Sim environments, the agent covered 190.44 m, replanning when required, and consistently stopped within 0.1 m of the goal while running on commodity hardware. An information-theoretic analysis over the Multi-Illumination dataset shows that LightGlue maximizes per-second information gain under lighting changes, motivating its selection. The modular design attains reliable navigation without metric SLAM or large-scale learning, and seamlessly accommodates future perception or policy upgrades. Full article
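
The six-DoF relative pose from matched RGB-D features via SVD is the classic Kabsch/Procrustes alignment of two 3D point sets. A self-contained sketch (NumPy; src and dst are assumed to be N×3 arrays of matched points back-projected from the keyframe and the current view):

```python
import numpy as np

def relative_pose(src: np.ndarray, dst: np.ndarray):
    """Rigid transform (R, t) minimizing ||(R @ src.T).T + t - dst||
    over matched 3D points, via SVD of the cross-covariance matrix."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```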

27 pages, 75388 KiB  
Article
High-Fidelity 3D Gaussian Splatting for Exposure-Bracketing Space Target Reconstruction: OBB-Guided Regional Densification with Sobel Edge Regularization
by Yijin Jiang, Xiaoyuan Ren, Huanyu Yin, Libing Jiang, Canyu Wang and Zhuang Wang
Remote Sens. 2025, 17(12), 2020; https://doi.org/10.3390/rs17122020 - 11 Jun 2025
Viewed by 1552
Abstract
In this paper, a novel optimization framework based on 3D Gaussian splatting (3DGS) for high-fidelity 3D reconstruction of space targets under exposure-bracketing conditions is studied. In the considered scenario, multi-view optical imagery captures space targets under complex and dynamic illumination, where severe inter-frame brightness variations degrade reconstruction quality by introducing photometric inconsistencies and blurring fine geometric details. Unlike existing methods, we explicitly address these challenges by integrating exposure-aware adaptive refinement and edge-preserving regularization into the 3DGS pipeline. Specifically, we propose an oriented bounding box (OBB)-guided regional densification strategy, tailored to exposure bracketing, that dynamically identifies and refines under-reconstructed regions. In addition, we introduce a Sobel edge regularization mechanism to guide the learning of sharp geometric features and improve texture fidelity. To validate the framework, experiments are conducted on both a custom OBR-ST dataset and the public SHIRT dataset, demonstrating that our method significantly outperforms state-of-the-art techniques in geometric accuracy and visual quality under exposure-bracketing scenarios. The results highlight the effectiveness of our approach in enabling robust in-orbit perception for space applications. Full article
(This article belongs to the Special Issue Advances in 3D Reconstruction with High-Resolution Satellite Data)
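
A Sobel edge regularizer of the kind described can be written as an L1 penalty between the gradient magnitudes of the rendered and target images. A PyTorch sketch (the L1 comparison and the small epsilon stabilizer are assumptions about the exact formulation):

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude for a (B, C, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

def sobel_edge_loss(render: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Penalize edge-map disagreement between render and ground truth."""
    return F.l1_loss(sobel_edges(render), sobel_edges(target))
```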

20 pages, 48645 KiB  
Article
VCAFusion: A Framework for Infrared and Low Light Visible Image Fusion Based on Visual Characteristics Adjustment
by Jiawen Li, Zhengzhong Huang, Jiapin Peng, Xiaochuan Zhang and Rongzhu Zhang
Appl. Sci. 2025, 15(11), 6295; https://doi.org/10.3390/app15116295 - 3 Jun 2025
Viewed by 518
Abstract
Infrared (IR) and visible (VIS) image fusion enhances vision tasks by combining complementary data. However, most existing methods assume normal lighting conditions and thus perform poorly in low-light environments, where VIS images often lose critical texture details. To address this limitation, we propose VCAFusion, a novel approach for robust infrared and visible image fusion in low-light scenarios. Our framework incorporates an adaptive brightness adjustment model based on light reflection theory to mitigate illumination-induced degradation in nocturnal images. Additionally, we design an adaptive enhancement function inspired by human visual perception to recover weak texture details. To further improve fusion quality, we develop an edge-preserving multi-scale decomposition model and a saliency-preserving strategy, ensuring seamless integration of perceptual features. By effectively balancing low-light enhancement and fusion, our framework preserves both the intensity distribution and the fine texture details of salient objects. Extensive experiments on public datasets demonstrate that VCAFusion achieves superior fusion quality, closely aligning with human visual perception and outperforming state-of-the-art methods in both qualitative and quantitative evaluations. Full article
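
The edge-preserving multi-scale decomposition can be illustrated with a simple two-scale base/detail split; the bilateral filter below is one stand-in for the paper's decomposition model, with illustrative parameter values:

```python
import cv2
import numpy as np

def base_detail(img_u8: np.ndarray):
    """Edge-preserving two-scale split: a smoothed base layer plus the
    residual detail layer that enhancement then operates on."""
    base = cv2.bilateralFilter(img_u8, d=9, sigmaColor=75, sigmaSpace=75)
    detail = img_u8.astype(np.float32) - base.astype(np.float32)
    return base, detail
```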

30 pages, 18356 KiB  
Article
Measurement and Simulation Optimization of the Light Environment of Traditional Residential Houses in the Patio Style: A Case Study of the Architectural Culture of Shanggantang Village, Xiangnan, China
by Jinlin Jiang, Chengjun Tang, Yinghao Wang and Lishuang Liang
Buildings 2025, 15(11), 1786; https://doi.org/10.3390/buildings15111786 - 23 May 2025
Viewed by 347
Abstract
Traditional patio-style residences in southern Hunan province are a vital element of China’s architectural cultural legacy, and the quality of their indoor lighting environment influences both physical performance and the transmission of spatial culture. The region faces environmental disparities and diminishing liveability attributed to evolving construction practices and cultural standards. Three varieties of traditional residences in Shanggantang Village are used to assess the daylight factor (DF), illumination uniformity (U0), daylight autonomy (DA), and useful daylight illuminance (UDI). We integrate field measurements with static and dynamic numerical simulations to create a multi-dimensional analytical framework termed “measured-static-dynamic”. This method enables examination of the influence of floor plan layout on light, as well as the relationship between window size, building configuration, and natural illumination. The daylight factor of the core area of the central patio-type residence reaches 27.7% with an illumination uniformity of 0.62, but the DF of the transition area plummets to 1.6%; the composite patio type raises the DF of its transition area to 1.2% through alleyway-assisted lighting, a 24-fold improvement over the offset patio type. Parameter optimization showed that daylight autonomy in the edge zone of the central patio type increased from 21.4% to 58.3% when the window height was adjusted to 90%. The results provide a quantitative basis for light-environment optimization and low-carbon renewal of traditional residential buildings. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
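
Two of the metrics above have simple standard definitions: the daylight factor is the indoor-to-outdoor illuminance ratio under an overcast sky, and illumination uniformity is the minimum-to-mean illuminance ratio over the measurement grid. A sketch computing both from a grid of lux readings:

```python
import numpy as np

def daylight_metrics(indoor_lux: np.ndarray, outdoor_lux: float):
    """DF (%) = E_indoor / E_outdoor * 100; U0 = E_min / E_mean."""
    df = indoor_lux / outdoor_lux * 100.0
    u0 = indoor_lux.min() / indoor_lux.mean()
    return float(df.mean()), float(u0)
```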

23 pages, 13758 KiB  
Article
Edge–Region Collaborative Segmentation of Potato Leaf Disease Images Using Beluga Whale Optimization Algorithm with Danger Sensing Mechanism
by Jin-Ling Bei and Ji-Quan Wang
Agriculture 2025, 15(11), 1123; https://doi.org/10.3390/agriculture15111123 - 23 May 2025
Viewed by 332
Abstract
Precise detection of potato diseases is critical for food security, yet traditional image segmentation methods struggle with challenges including uneven illumination, background noise, and the gradual color transitions of lesions under complex field conditions. Therefore, a collaborative segmentation framework of Otsu and Sobel edge detection based on the beluga whale optimization algorithm with a danger sensing mechanism (DSBWO) is proposed. The method introduces an S-shaped control parameter, a danger sensing mechanism, a dynamic foraging strategy, and an improved whale fall model to enhance global search ability, prevent premature convergence, and improve solution quality. DSBWO demonstrates superior optimization performance on the CEC2017 benchmark, with faster convergence and higher accuracy than other algorithms. Experiments on the Berkeley Segmentation Dataset and potato early/late blight images show that DSBWO achieves excellent segmentation performance across multiple evaluation metrics. Specifically, it reaches a maximum IoU of 0.8797, outperforming JSBWO (0.8482) and PSOSHO (0.8503), while maintaining competitive PSNR and SSIM values. Even under different Gaussian noise levels, DSBWO maintains stable segmentation accuracy and low CPU time, confirming its robustness. These findings suggest that DSBWO provides a reliable and efficient solution for automatic crop disease monitoring and can be extended to other smart agriculture applications. Full article
(This article belongs to the Section Digital Agriculture)
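
The Otsu half of the collaborative framework reduces to maximizing between-class variance over candidate thresholds, which is the kind of objective a metaheuristic like DSBWO searches. A single-threshold sketch of that objective (the paper optimizes it jointly with Sobel edge information):

```python
import numpy as np

def otsu_objective(hist: np.ndarray, t: int) -> float:
    """Between-class variance of a 256-bin grayscale histogram split at
    threshold t; larger is better."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    mu0 = (np.arange(t) * p[:t]).sum() / w0
    mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
    return float(w0 * w1 * (mu0 - mu1) ** 2)
```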

27 pages, 13146 KiB  
Article
Underwater-Image Enhancement Based on Maximum Information-Channel Correction and Edge-Preserving Filtering
by Wei Liu, Jingxuan Xu, Siying He, Yongzhen Chen, Xinyi Zhang, Hong Shu and Ping Qi
Symmetry 2025, 17(5), 725; https://doi.org/10.3390/sym17050725 - 9 May 2025
Viewed by 737
Abstract
The properties of light propagation underwater typically cause color distortion and reduced contrast in underwater images. In addition, complex underwater lighting conditions can result in issues such as non-uniform illumination, spotting, and noise. To address these challenges, we propose an innovative underwater-image enhancement (UIE) approach based on maximum information-channel compensation and edge-preserving filtering techniques. Specifically, we first develop a channel information transmission strategy grounded in maximum information preservation principles, utilizing the maximum information channel to improve the color fidelity of the input image. Next, we locally enhance the color-corrected image using guided filtering and generate a series of globally contrast-enhanced images by applying gamma transformations with varying parameter values. In the final stage, the enhanced image sequence is decomposed into low-frequency (LF) and high-frequency (HF) components via side-window filtering. For the HF component, a weight map is constructed by calculating the difference between the current exposedness and the optimum exposure. For the LF component, we derive a comprehensive feature map by integrating the brightness map, saturation map, and saliency map, thereby accurately assessing the quality of degraded regions in a manner that aligns with the symmetry principle inherent in human vision. Ultimately, we combine the LF and HF components through a weighted summation process, resulting in a high-quality underwater image. Experimental results demonstrate that our method effectively achieves both color restoration and contrast enhancement, outperforming several State-of-the-Art UIE techniques across multiple datasets. Full article
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
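
One plausible reading of the "maximum information channel" is the color channel carrying the most Shannon entropy; the sketch below selects a channel on that basis and is an assumption about the selection criterion, not the paper's exact rule:

```python
import numpy as np

def max_information_channel(img_rgb: np.ndarray) -> int:
    """Index of the RGB channel with the highest Shannon entropy."""
    def entropy(ch: np.ndarray) -> float:
        hist, _ = np.histogram(ch, bins=256, range=(0, 255), density=True)
        hist = hist[hist > 0]
        return float(-(hist * np.log2(hist)).sum())
    return int(np.argmax([entropy(img_rgb[..., c]) for c in range(3)]))
```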
