Search Results (28)

Search Parameters:
Keywords = bright–dark lighting environment

15 pages, 5199 KB  
Article
YOLO-DER: A Dynamic Enhancement Routing Framework for Adverse Weather Vehicle Detection
by Ruilai Gao, Mohd Hasbullah Omar and Massudi Mahmuddin
Electronics 2025, 14(24), 4851; https://doi.org/10.3390/electronics14244851 - 10 Dec 2025
Viewed by 317
Abstract
Deep learning-based vehicle detection methods have achieved impressive performance in favorable conditions. However, their effectiveness declines significantly in adverse weather scenarios, such as fog, rain, and low-illumination environments, due to severe image degradation. Existing approaches often fail to achieve efficient integration between image enhancement and object detection, and typically lack adaptive strategies to cope with diverse degradation patterns. To address these challenges, this paper proposes a novel end-to-end detection framework, You Only Look Once-Dynamic Enhancement Routing (YOLO-DER), which introduces a lightweight Dynamic Enhancement Routing module. This module adaptively selects the optimal enhancement strategy (such as dehazing or brightness correction) based on the degradation characteristics of the input image. It is jointly optimized with the YOLOv12 detector to achieve tight integration of enhancement and detection. Extensive experiments on BDD100K, Foggy Cityscapes, and ExDark demonstrate the superior performance of YOLO-DER, yielding mAP50 scores of 80.8%, 57.9%, and 85.6%, which translate into absolute gains of +3.8%, +2.3%, and +2.9% over YOLOv12 on the respective datasets. The results confirm its robustness and generalization across foggy, rainy, and low-light conditions, providing an efficient and scalable solution for all-weather visual perception in autonomous driving.
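
The routing idea can be made concrete with a short sketch: estimate cheap global degradation cues, then dispatch the frame to a matching enhancement branch before detection. Everything below (cue choices, thresholds, enhancement stand-ins) is a hypothetical illustration, not the YOLO-DER module itself.

```python
import cv2
import numpy as np

def degradation_cues(bgr):
    """Cheap global cues: mean brightness plus a dark-channel haze proxy."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    brightness = gray.mean() / 255.0
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    # High dark-channel values are a rough indicator of haze.
    dark = cv2.erode(bgr.min(axis=2), np.ones((15, 15), np.uint8))
    return brightness, dark.mean() / 255.0

def route_enhancement(bgr, dark_thresh=0.25, haze_thresh=0.45):
    """Pick one enhancement branch from the estimated cues (invented thresholds)."""
    brightness, haze = degradation_cues(bgr)
    if brightness < dark_thresh:
        # Low-light branch: simple gamma lift as a stand-in for brightness correction.
        return np.uint8(255 * (bgr / 255.0) ** 0.5)
    if haze > haze_thresh:
        # Haze branch: CLAHE on the luminance channel as a stand-in for dehazing.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        lab[..., 0] = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(lab[..., 0])
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    return bgr  # clean input: pass through to the detector unchanged
```

In the paper the routing module and detector are trained jointly end to end; the hard if/else above only conveys the inference-time intuition.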

21 pages, 7741 KB  
Article
Polarization-Guided Deep Fusion for Real-Time Enhancement of Day–Night Tunnel Traffic Scenes: Dataset, Algorithm, and Network
by Renhao Rao, Changcai Cui, Liang Chen, Zhizhao Ouyang and Shuang Chen
Photonics 2025, 12(12), 1206; https://doi.org/10.3390/photonics12121206 - 8 Dec 2025
Viewed by 323
Abstract
The abrupt light-to-dark or dark-to-light transitions at tunnel entrances and exits cause short-term, large-scale illumination changes, leading traditional RGB perception to suffer from exposure mutations, glare, and noise accumulation at critical moments, thereby triggering perception failures and blind zones. Addressing this typical failure scenario, this paper proposes a closed-loop enhancement solution centered on polarization imaging as a core physical prior, comprising a real-world polarimetric road dataset, a polarimetric physics-enhanced algorithm, and a beyond-fusion network, while satisfying both perception enhancement and real-time constraints. First, we construct the POLAR-GLV dataset, captured with a four-angle polarization camera under real highway tunnel conditions, covering the entire process of entering, driving inside, and exiting tunnels and systematically collecting data on adverse illumination and failure distributions in day–night traffic scenes. Second, we propose the Polarimetric Physical Enhancement with Adaptive Modulation (PPEAM) method, which uses the Stokes parameters, DoLP, and AoLP as constraints. Leveraging the glare sensitivity of DoLP and its richer texture information, it adaptively performs dark-region enhancement and glare suppression according to scene brightness and the dark-region ratio, providing real-time polarization-based image enhancement. Finally, we design the Polar-PENet beyond-fusion network, which introduces Polarization-Aware Gates (PAG) and CBAM on top of the physical priors, coupled with a detection-driven, perception-oriented loss and a beyond mechanism that explicitly fuses physics and deep semantics to surpass physical limitations. Experimental results show that, compared to the original images, Polar-PENet achieves PSNR and SSIM scores of 19.37 and 0.5487, respectively, surpassing PPEAM, which scores 18.89 and 0.5257. In downstream object detection, Polar-PENet performs exceptionally well in areas with drastic illumination changes such as tunnel entrances and exits, achieving a mAP of 63.7%, a 99.7% improvement over the original images and a 12.1% performance boost over PPEAM's 56.8%. In terms of processing speed, Polar-PENet is 2.85 times faster than the physics-enhanced algorithm PPEAM, with an inference speed of 183.45 frames per second, meeting the real-time requirements of autonomous driving and laying a solid foundation for practical deployment in edge computing environments. The research validates the effective paradigm of using polarimetric physics as a prior and surpassing physics through learning methods.
(This article belongs to the Special Issue Computational Optical Imaging: Theories, Algorithms, and Applications)
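
The polarimetric quantities the abstract builds on have standard definitions for a four-angle (0°, 45°, 90°, 135°) polarization camera; the NumPy sketch below computes the Stokes parameters, DoLP, and AoLP. It illustrates only the physical prior, not PPEAM or Polar-PENet themselves.

```python
import numpy as np

def stokes_from_four_angles(i0, i45, i90, i135, eps=1e-6):
    """Stokes parameters from four linear-polarizer intensity images (float arrays)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                                # horizontal vs. vertical component
    s2 = i45 - i135                              # diagonal components
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)              # angle of linear polarization (radians)
    return s0, s1, s2, dolp, aolp

# Glare tends to show high DoLP, so a crude glare mask could threshold it:
# s0, s1, s2, dolp, aolp = stokes_from_four_angles(*[im.astype(np.float64) for im in imgs])
# glare_mask = dolp > 0.4   # invented threshold
```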

17 pages, 3222 KB  
Article
The Influences of Bright–Dark Lighting Environments on Driving Safety in the Diverging Zone of Interchange in Highway Tunnels
by Zechao Zhang, Jiangbi Hu, Ronghua Wang and Changqiu Jiang
Appl. Sci. 2025, 15(18), 10067; https://doi.org/10.3390/app151810067 - 15 Sep 2025
Viewed by 618
Abstract
Increasing the lighting luminance in the diverging zone of interchange in highway tunnels can generally enhance driving safety. However, it creates a bright–dark luminance contrast with the adjacent road, and a pronounced contrast can induce new driving risks; the underlying mechanism remains unclear. This paper identifies three key factors that affect drivers' visual recognition ability: the luminance of the dark environment, the bright–dark luminance ratio, and the position of the small target. Based on fundamental tunnel lighting design rules, a series of naturalistic driving tests on the visual recognition distance for small targets was designed with 132 conditions, combining three dark-environment luminance levels (1.5–3.5 cd/m²), four bright–dark luminance ratios (2–5), and eleven small-target positions (−50 to +50 m). Twenty-four randomly selected drivers drove vehicles under the different scenarios, and their visual recognition distances for small targets were recorded and analyzed. The results show that visual recognition distances under different bright–dark lighting environments vary significantly, with the shortest distances occurring exactly at the luminance boundary. Both decreasing the bright–dark luminance ratio and proportionally increasing the luminance levels of the bright and dark environments can markedly improve the visual recognition distance. A multi-parameter regression model was developed to correlate the visual recognition distance at the bright–dark luminance boundary with the luminance of the dark environment and the bright–dark luminance ratio. Based on drivers' required safe sight distance, a method for setting lighting luminance in the diverging zone of interchange was proposed. The methodology and findings offer technical support for lighting design and safety management in the diverging zone of interchange in highway tunnels.
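
The abstract does not give the regression's functional form, so as a hedged illustration, a power-law fit of recognition distance D against dark-environment luminance L and bright–dark ratio k can be set up as follows; the model form and the sample data are both invented.

```python
import numpy as np

# Invented observations: (dark luminance in cd/m^2, bright-dark ratio, distance in m).
L = np.array([1.5, 1.5, 2.5, 2.5, 3.5, 3.5])
k = np.array([2.0, 5.0, 2.0, 5.0, 2.0, 5.0])
D = np.array([95.0, 70.0, 110.0, 82.0, 120.0, 90.0])

# Assume D = a * L^b * k^c, i.e. ln D = ln a + b ln L + c ln k (a guess, not the paper's model).
X = np.column_stack([np.ones_like(L), np.log(L), np.log(k)])
coef, *_ = np.linalg.lstsq(X, np.log(D), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"D = {a:.1f} * L^{b:.2f} * k^{c:.2f}")  # b > 0, c < 0 would match the reported trends
```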

19 pages, 7242 KB  
Article
RICNET: Retinex-Inspired Illumination Curve Estimation for Low-Light Enhancement in Industrial Welding Scenes
by Chenbo Shi, Xiangyu Zhang, Delin Wang, Changsheng Zhu, Aiping Liu, Chun Zhang and Xiaobing Feng
Sensors 2025, 25(16), 5192; https://doi.org/10.3390/s25165192 - 21 Aug 2025
Cited by 1 | Viewed by 956
Abstract
Feature tracking is essential for welding crawler robots' trajectory planning. As welding often occurs in dark environments like pipelines or ship hulls, the system requires low-light image capture for laser tracking. However, such images typically have poor brightness and contrast, degrading both weld seam feature extraction and trajectory anomaly detection accuracy. To address this, we propose a Retinex-based low-light enhancement network tailored for cladding scenarios. The network features an illumination curve estimation module and requires no paired or unpaired reference images during training, alleviating the need for cladding-specific datasets. It adaptively adjusts brightness, restores image details, and effectively suppresses noise. Extensive experiments on public (LOLv1 and LOLv2) and self-collected weld datasets show that our method outperformed existing approaches in PSNR, SSIM, and LPIPS. Additionally, weld seam segmentation under low-light conditions achieved 95.1% IoU and 98.9% accuracy, confirming the method's effectiveness for downstream tasks in robotic welding.
(This article belongs to the Section Optical Sensors)
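
Zero-reference illumination-curve estimation of this kind typically applies an iterative quadratic curve LE(x) = x + α·x·(1−x) per pixel, as in the Zero-DCE family; in RICNET the curve parameters would be predicted by the network. The sketch below shows the curve mechanism with a fixed scalar α, purely as an illustration.

```python
import numpy as np

def illumination_curve(x, alpha, iters=8):
    """Apply LE(x) = x + alpha * x * (1 - x) repeatedly to an image in [0, 1].

    For alpha in [-1, 1] the output stays in [0, 1], with x = 0 and x = 1 fixed,
    so dark pixels are lifted without clipping highlights. A network would predict
    a per-pixel alpha map instead of this single scalar.
    """
    for _ in range(iters):
        x = x + alpha * x * (1.0 - x)
    return x

# img = cv2.imread("weld_seam.png").astype(np.float32) / 255.0
# bright = illumination_curve(img, alpha=0.6)
```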

25 pages, 8272 KB  
Article
Dark-YOLO: A Low-Light Object Detection Algorithm Integrating Multiple Attention Mechanisms
by Ye Liu, Shixin Li, Liming Zhou, Haichen Liu and Zhiyu Li
Appl. Sci. 2025, 15(9), 5170; https://doi.org/10.3390/app15095170 - 6 May 2025
Cited by 5 | Viewed by 5385
Abstract
Object detection in low-light environments is often hampered by unfavorable factors such as low brightness, low contrast, and noise, which lead to issues like missed detections and false positives. To address these challenges, this paper proposes a low-light object detection algorithm named Dark-YOLO, which dynamically extracts features. First, an adaptive image enhancement module is introduced to restore image information and enrich feature details. Second, the spatial feature pyramid module is improved by incorporating cross-overlapping average pooling and max pooling to extract salient features while retaining global and local information. Then, a dynamic feature extraction module is designed, which combines partial convolution with a parameter-free attention mechanism, allowing the model to flexibly capture critical and effective information from the image. Finally, a dimension reciprocal attention module is introduced to ensure the model can comprehensively consider various features within the image. Experimental results show that the proposed model achieves an mAP@50 of 71.3% and an mAP@50-95 of 44.2% on the real-world low-light dataset ExDark, demonstrating that Dark-YOLO effectively detects objects under low-light conditions. Furthermore, facial recognition in dark environments is a particularly challenging task. Dark-YOLO demonstrates outstanding performance on the DarkFace dataset, achieving an mAP@50 of 49.1% and an mAP@50-95 of 21.9%, further validating its effectiveness for face detection under complex low-light conditions.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
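
"Parameter-free attention" usually refers to an energy-based reweighting such as SimAM, which scores each activation by how far it sits from its channel mean, with no learnable weights; whether Dark-YOLO uses exactly this formulation is an assumption. A NumPy rendering of the idea:

```python
import numpy as np

def simam_attention(x, lam=1e-4):
    """SimAM-style parameter-free attention on a (C, H, W) feature map."""
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)
    d = (x - mu) ** 2                                  # squared deviation per activation
    v = d.sum(axis=(1, 2), keepdims=True) / n          # per-channel variance
    e_inv = d / (4 * (v + lam)) + 0.5                  # closed-form inverse energy
    return x * (1.0 / (1.0 + np.exp(-e_inv)))          # sigmoid gating

feat = np.random.randn(16, 32, 32).astype(np.float32)
out = simam_attention(feat)   # same shape, salient activations amplified
```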

31 pages, 12588 KB  
Article
Evaluating Spatial Attributes of Surface Colors Under Daylight and Electrical Lighting in Sustainable Architecture
by Carolina Espinoza-Sanhueza, Marc Hébert, Jean-François Lalonde and Claude Demers
Sustainability 2025, 17(4), 1653; https://doi.org/10.3390/su17041653 - 17 Feb 2025
Cited by 2 | Viewed by 2178
Abstract
This paper investigates the spatial attributes of the color properties and brightness characteristics of sustainable architectural strategies, including daylight, electrical lighting, and surface color in architecture, which could potentially impact users' spatial experiences. Images of 48 spaces varying in surface color configuration, type of light source, and position of the lighting strategy were evaluated. The analyses included assessments of color palettes, descriptors based on saturation and brightness properties, and brightness distribution maps. The results indicate that lighting design and type of light source influence the saturation and brightness properties of the perceived hues evaluated in the same environment, leading to variations in color descriptors or adjectives. Furthermore, this study demonstrates that variations in brightness between bright and dark zones, the creation of focal points, and perceived spatial fragmentation depend on the reflectance of the colors applied to the surfaces, the position of the lighting, and the type of light source. This study does not aim to establish best practices for enhancing users' emotions through architecture. Instead, it explores how variations in color and light influence perceptual descriptions that have previously been associated with emotional responses. This research recognizes the impact of sustainable strategies, including surface colors under daylight and electrical lighting, on users' spatial experiences.
(This article belongs to the Section Health, Well-Being and Sustainability)
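
Simple versions of the saturation/brightness descriptors and brightness distribution statistics described here can be computed from an image's HSV channels; the sketch below is a rough stand-in for the paper's analysis pipeline, with invented thresholds.

```python
import cv2
import numpy as np

def brightness_descriptors(bgr, bright_thresh=0.7, dark_thresh=0.3):
    """Mean saturation/brightness plus bright- and dark-zone fractions of a space."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    s, v = hsv[..., 1] / 255.0, hsv[..., 2] / 255.0
    return {
        "mean_saturation": float(s.mean()),
        "mean_brightness": float(v.mean()),
        "bright_fraction": float((v > bright_thresh).mean()),  # candidate focal zones
        "dark_fraction": float((v < dark_thresh).mean()),
    }
```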

30 pages, 40714 KB  
Article
Zero-TCE: Zero Reference Tri-Curve Enhancement for Low-Light Images
by Chengkang Yu, Guangliang Han, Mengyang Pan, Xiaotian Wu and Anping Deng
Appl. Sci. 2025, 15(2), 701; https://doi.org/10.3390/app15020701 - 12 Jan 2025
Cited by 4 | Viewed by 2686
Abstract
Addressing the common issues of low brightness, poor contrast, and blurred details in images captured under conditions such as night, backlight, and adverse weather, we propose a zero-reference dual-path network based on multi-scale depth curve estimation for low-light image enhancement. Using a no-reference loss function, the enhancement of low-light images is converted into depth curve estimation, with three curves fitted to enhance the dark details of the image: a brightness adjustment curve (LE-curve), a contrast enhancement curve (CE-curve), and a multi-scale feature fusion curve (MF-curve). Initially, we introduce the TCE-L and TCE-C modules to improve image brightness and enhance image contrast, respectively. Subsequently, we design a multi-scale feature fusion (MFF) module that integrates the original and enhanced images at multiple scales in the HSV color space, based on the brightness distribution characteristics of low-light images, yielding an optimally enhanced image that avoids overexposure and color distortion. We compare our proposed method against ten other advanced algorithms on multiple datasets (LOL, DICM, MEF, NPE, and ExDark) that encompass complex illumination variations. Experimental results demonstrate that the proposed algorithm adapts better to the characteristics of images captured in low-light environments, producing enhanced images with sharp contrast, rich details, and preserved color authenticity, while effectively mitigating overexposure.
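
The brightness-guided HSV fusion step can be sketched as a per-pixel weighting on the V channel: dark pixels take more of the enhanced image, bright pixels keep the original. This is an illustration of the stated idea, not the paper's MFF module; hue and saturation are left untouched here to avoid color shifts.

```python
import cv2
import numpy as np

def brightness_weighted_fusion(orig_bgr, enh_bgr):
    """Fuse only the V channel, keeping hue/saturation from the original image."""
    orig = cv2.cvtColor(orig_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    enh = cv2.cvtColor(enh_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    w = orig[..., 2] / 255.0                                   # weight from original brightness
    orig[..., 2] = w * orig[..., 2] + (1.0 - w) * enh[..., 2]  # dark pixels take enhanced V
    return cv2.cvtColor(orig.astype(np.uint8), cv2.COLOR_HSV2BGR)
```

Keeping H and S from the original sidesteps hue averaging, which is circular and easy to get wrong, and already-bright pixels stay close to the original, one simple way to avoid the overexposure the paper targets.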

18 pages, 41079 KB  
Article
Research on Target Image Classification in Low-Light Night Vision
by Yanfeng Li, Yongbiao Luo, Yingjian Zheng, Guiqian Liu and Jiekai Gong
Entropy 2024, 26(10), 882; https://doi.org/10.3390/e26100882 - 21 Oct 2024
Cited by 6 | Viewed by 3099
Abstract
In extremely dark conditions, low-light imaging can offer viewers a rich visual experience, which is important for both military and civilian applications. However, images taken in ultra-low-light environments usually have inherent defects such as extremely low brightness and contrast, a high noise level, and serious loss of scene details and colors, which poses great challenges for low-light image object detection and classification. The low-light night vision images studied in this work are excessively dim overall and contain very little discernible feature information. Three algorithms, histogram equalization (HE), adaptive histogram equalization (AHE), and contrast-limited adaptive histogram equalization (CLAHE), were used to enhance the images; their effectiveness was evaluated using metrics such as the peak signal-to-noise ratio and mean square error, and CLAHE was selected after comparison. The target images include vehicles, people, license plates, and other objects. The gray-level co-occurrence matrix (GLCM) was used to extract texture features from the enhanced images, and the extracted texture features were used as input to construct a backpropagation (BP) neural network classification model. Low-light image classification models were then developed based on the VGG16 and ResNet50 convolutional neural networks combined with the low-light image enhancement algorithms. The experimental results show that the overall classification accuracy of the VGG16 model is 92.1%, an increase of 4.5% and 2.3% over the BP and ResNet50 models, respectively, demonstrating its effectiveness in classifying low-light night vision targets.
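
The enhancement-plus-texture stage maps naturally onto OpenCV's CLAHE and scikit-image's gray-level co-occurrence matrix; a minimal sketch of that feature extraction follows. Parameter values are illustrative, and the resulting vector would feed the BP classifier.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def clahe_glcm_features(gray_u8):
    """CLAHE enhancement followed by four common GLCM texture descriptors."""
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray_u8)
    glcm = graycomatrix(enhanced, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])  # 16-dim vector

# img = cv2.imread("night_scene.png", cv2.IMREAD_GRAYSCALE)
# x = clahe_glcm_features(img)   # input vector for the BP classifier
```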

17 pages, 3459 KB  
Article
Performance Analysis of a Color-Code-Based Optical Camera Communication System
by Hasan Ziya Dinc and Yavuz Erol
Appl. Sci. 2024, 14(19), 9102; https://doi.org/10.3390/app14199102 - 8 Oct 2024
Cited by 1 | Viewed by 1561
Abstract
In this study, we analyze the performance of an optical camera communication (OCC) based visible light communication (VLC) system, utilizing a mobile phone camera as the receiver and a computer monitor as the transmitter. By creating color channels in the form of a 4 × 4 matrix within a frame, we determine the parameters that affect the successful transmission of data packets. Factors such as the brightness or darkness of the test room, the light color of the lamp in the illuminated environment, the effects of daylight when the monitor is positioned in front of a window, and issues related to dead pixels and light bleed originating from the monitor's production process have been considered to ensure accurate data transmission. In this context, we utilized the PyCharm, Pydroid, Python, Tkinter, and OpenCV platforms for programming the transmitter and receiver units. Through the application of image processing techniques, we mitigated the effects of daylight on communication performance, thereby proposing a superior system compared to standard VLC systems that incorporate photodiodes. Additionally, considering objectives such as the maximum number of channels and the maximum distance, we regulated the sizes of the channels, the distances between the channels, and the number of channels. The NumPy library, compatible with Python–Tkinter, was employed to determine the color levels and dimensions of the channels. We investigate the effects of RGB and HSV color spaces on the data transmission rate and communication distance. Furthermore, the impact of the distance between color channels on color detection performance is discussed in detail.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
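
Decoding a 4 × 4 color-channel matrix amounts to sampling each cell's center and mapping its dominant color back to a symbol. The sketch below assumes a known grid position and a three-color alphabet, both simplifications of the system described.

```python
import cv2
import numpy as np

# Invented three-symbol alphabet: pure red, green, and blue cells (BGR order).
PALETTE = {0: (0, 0, 255), 1: (0, 255, 0), 2: (255, 0, 0)}

def decode_color_grid(frame_bgr, grid_rect, rows=4, cols=4):
    """Sample each cell center of a rows x cols color grid and classify its color."""
    x, y, w, h = grid_rect   # grid position in the frame, assumed already known
    symbols = []
    for r in range(rows):
        for c in range(cols):
            cx = x + int((c + 0.5) * w / cols)
            cy = y + int((r + 0.5) * h / rows)
            patch = frame_bgr[cy - 2:cy + 3, cx - 2:cx + 3].reshape(-1, 3).mean(axis=0)
            # Nearest palette color by Euclidean distance in BGR space.
            dists = {s: np.linalg.norm(patch - np.array(col)) for s, col in PALETTE.items()}
            symbols.append(min(dists, key=dists.get))
    return symbols
```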

10 pages, 2120 KB  
Article
Development of a Scanning Protocol for Anthropological Remains: A Preliminary Study
by Matteo Orsi, Roberta Fusco, Alessandra Mazzucchi, Roberto Taglioretti, Maurizio Marinato and Marta Licata
Heritage 2024, 7(9), 4997-5006; https://doi.org/10.3390/heritage7090236 - 10 Sep 2024
Cited by 1 | Viewed by 1511
Abstract
Structured-light scanning is a fast and efficient technique for the acquisition of 3D point clouds. However, the extensive and daily application of this class of scanners can be challenging because of the technical know-how necessary to validate the low-cost instrumentation. This challenge is worth accepting because of the large amount of data that can be collected accurately with the aid of specific technical protocols. This work is a preliminary study for the development of an acquisition protocol for anthropological remains, performing tests in two opposite and extreme contexts: one characterised by a dark environment and one located in an open area and characterised by a very bright environment. The second context showed the influence of sunlight on the acquisition process, resulting in a colourless point cloud. This is a first step towards the development of a technical protocol for the acquisition of anthropological remains, based on investigating the limits and problems associated with the instrument.
(This article belongs to the Section Archaeological Heritage)

18 pages, 1497 KB  
Review
A Review of the Characteristics of Light Pollution: Assessment Technique, Policy, and Legislation
by Ying Hao, Peiyao Wang, Zhongyao Zhang, Zhiming Xu and Dagong Jia
Energies 2024, 17(11), 2750; https://doi.org/10.3390/en17112750 - 4 Jun 2024
Cited by 9 | Viewed by 5882
Abstract
Light pollution from artificial lighting poses significant impacts on human health, traffic safety, the ecological environment, astronomy, and energy use. Advances in light pollution assessment technology have played a significant role in shaping prevention and control policies, enabling measures such as environmental standards, legislation, and product procurement guidelines; however, considerable variation in definitions, control strategies, and regulatory frameworks remains. There is therefore a need to review the characteristics of light pollution, including assessment techniques, policy, and legislation. The literature review shows that technical standards are required to prevent light pollution; for example, legislation on artificial light reduced light pollution in France by 6%. Key approaches are suggested to control global light pollution, including implementing ambient brightness zoning, regulating lighting product usage, and establishing dark sky reserves. Technology and policy should be integrated: precise data from satellite imagery, drones, and balloons can guide policymaking.
(This article belongs to the Topic Thermal Energy Transfer and Storage)

20 pages, 11394 KB  
Article
Bio-Inspired Dark Adaptive Nighttime Object Detection
by Kuo-Feng Hung and Kang-Ping Lin
Biomimetics 2024, 9(3), 158; https://doi.org/10.3390/biomimetics9030158 - 3 Mar 2024
Cited by 6 | Viewed by 3212
Abstract
Nighttime object detection is challenging due to dim, uneven lighting. IIHS research conducted in 2022 shows that pedestrian anti-collision systems are less effective at night. Common solutions utilize costly sensors, such as thermal imaging and LiDAR, aiming for highly accurate detection. Conversely, this study employs a low-cost 2D image approach that draws inspiration from biological dark adaptation mechanisms, simulating functions of the pupil and photoreceptor cells. Instead of relying on extensive machine learning with day-to-night image conversions, it focuses on image fusion and gamma correction to train deep neural networks for dark adaptation. The research also involves creating a simulated environment ranging from 0 lux to high brightness, testing the limits of object detection, and offering a high-dynamic-range testing method. Results indicate that the dark adaptation model developed in this study improves the mean average precision (mAP) by 1.5–6% compared to traditional models. The model is capable of functioning in both twilight and nighttime conditions, showcasing its novelty. Future developments could include using virtual light in specific image areas or integrating with smart car lighting to enhance detection accuracy, thereby improving safety for pedestrians and drivers.
(This article belongs to the Special Issue Biomimetic and Bioinspired Computer Vision and Image Processing)
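
Of the mechanisms named, gamma correction is the easiest to make concrete: choosing gamma from the frame's mean brightness mimics pupil-like adaptation, lifting dark frames more aggressively. The adaptation rule below is a hypothetical stand-in for the paper's bio-inspired pipeline.

```python
import cv2
import numpy as np

def adaptive_gamma(bgr, target_mean=0.45):
    """Pick gamma so mean brightness lands near target_mean (pupil-like adaptation).

    Since mean**gamma == target_mean when gamma = log(target_mean) / log(mean),
    darker frames automatically receive a stronger lift.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY) / 255.0
    mean = float(np.clip(gray.mean(), 1e-3, 0.999))   # avoid log(0) and log(1)
    gamma = np.log(target_mean) / np.log(mean)
    lut = np.uint8(255 * (np.arange(256) / 255.0) ** gamma)
    return cv2.LUT(bgr, lut)
```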

18 pages, 4837 KB  
Article
Rethinking Underwater Crab Detection via Defogging and Channel Compensation
by Yueping Sun, Bikang Yuan, Ziqiang Li, Yong Liu and Dean Zhao
Fishes 2024, 9(2), 60; https://doi.org/10.3390/fishes9020060 - 30 Jan 2024
Cited by 3 | Viewed by 2785
Abstract
Crab aquaculture is an important component of the freshwater aquaculture industry in China, encompassing an expansive farming area of over 6000 km² nationwide. Currently, crab farmers rely on manually monitored feeding platforms to count the number and assess the distribution of crabs in the pond. However, this method is inefficient and lacks automation. To address the problem of efficient and rapid machine-vision crab detection in low-brightness underwater environments, a two-step color correction and improved dark channel prior underwater image processing approach for crab detection is proposed in this paper. Firstly, the parameters of the dark channel prior are optimized with guided filtering and quadtrees to address blurred underwater images and artificial lighting. Then, the gray world assumption, the perfect reflection assumption, and strong-channel compensation of the weak channels are applied to boost the red and blue channels, correct the color of the defogged image, optimize the visual quality of the image, and enrich the image information. Finally, ShuffleNetV2 is applied to optimize the target detection model, improving detection speed and real-time performance. The experimental results show that the proposed method achieves a detection rate of 90.78% and an average confidence level of 0.75. Compared with improved YOLOv5s detection on the original images, the detection rate of the proposed method is increased by 21.41% and the average confidence level by 47.06%. This approach can effectively build an underwater crab distribution map and provide scientific guidance for crab farming.
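
Two of the named building blocks, the dark channel prior and gray-world color correction, have compact standard forms, sketched below without the guided-filter refinement and quadtree optimization used in the full method.

```python
import cv2
import numpy as np

def dark_channel(bgr, patch=15):
    """Per-pixel channel minimum followed by a local minimum filter."""
    return cv2.erode(bgr.min(axis=2), np.ones((patch, patch), np.uint8))

def gray_world(bgr):
    """Scale each channel so all channel means match, removing the color cast."""
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means               # attenuated channels get gain > 1
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# underwater = cv2.imread("pond_platform.png")
# haze_map = dark_channel(underwater)   # input to transmission/airlight estimation
# balanced = gray_world(underwater)     # first color-correction step
```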

22 pages, 56589 KB  
Article
Target Search for Joint Local and High-Level Semantic Information Based on Image Preprocessing Enhancement in Indoor Low-Light Environments
by Huapeng Tang, Danyang Qin, Jiaqiang Yang, Haoze Bie, Yue Li, Yong Zhu and Lin Ma
ISPRS Int. J. Geo-Inf. 2023, 12(10), 400; https://doi.org/10.3390/ijgi12100400 - 30 Sep 2023
Cited by 4 | Viewed by 2016
Abstract
In indoor low-light environments, the lack of light means that captured images often suffer from quality degradation, including missing features in dark areas, noise interference, low brightness, and low contrast. Feature extraction algorithms are therefore unable to accurately extract the feature information contained in the images, hindering subsequent target search tasks in this environment and making it difficult to determine the location of the target. Aiming at this problem, a joint local and high-level semantic information (JLHS) target search method is proposed, based on joint bilateral filtering and camera response model (JBCRM) image preprocessing enhancement. The JBCRM method improves image quality by highlighting dark-region features and removing noise interference, addressing the difficulty of extracting feature points from low-light images and thus providing better visual data for subsequent target search tasks. The JLHS method increases the feature matching accuracy between the target image and the offline database images by combining local and high-level semantic information to characterize the image content, thereby boosting the accuracy of the target search. Experiments show that, compared with existing image-enhancement methods, the PSNR of the JBCRM method is increased by up to 34.24% (and at least 2.61%), the SSIM by up to 63.64% (and at least 12.50%), and the Laplacian operator score by up to 54.47% (and at least 3.49%). When the mainstream feature extraction techniques SIFT, ORB, AKAZE, and BRISK are utilized, the number of feature points in the JBCRM-enhanced images is increased by between 20.51% and 303.44% over the original low-light images. Compared with other target search methods, the average search error of the JLHS method is only 9.8 cm, 91.90% lower than that of the histogram-based search method and 18.33% lower than that of the VGG16-based target search method. As a result, the proposed method significantly improves the accuracy of target search in low-light environments, broadening the application scenarios of target search in indoor environments and providing an effective solution for accurately determining the location of a target in geospatial space.
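
A common parametric camera response model in low-light enhancement is that of Ying et al., g(P, k) = βP^γ with β = e^(b(1−k^a)) and γ = k^a, which maps a pixel P to its appearance at exposure ratio k; whether JBCRM uses this exact form is an assumption. Paired with a joint bilateral filter (available in opencv-contrib), the preprocessing stage can be sketched as:

```python
import cv2
import numpy as np

def crm_brighten(bgr, k=3.0, a=-0.3293, b=1.1258):
    """Simulate a k-times longer exposure with the Ying et al. camera response model."""
    p = bgr.astype(np.float32) / 255.0
    beta, gamma = np.exp(b * (1.0 - k**a)), k**a
    return np.clip(beta * p**gamma * 255.0, 0, 255).astype(np.uint8)

def joint_bilateral_denoise(src_bgr, guide_bgr):
    """Edge-preserving smoothing of src using edges from guide.
    Requires opencv-contrib-python (cv2.ximgproc)."""
    return cv2.ximgproc.jointBilateralFilter(guide_bgr, src_bgr,
                                             d=9, sigmaColor=25, sigmaSpace=9)

# low = cv2.imread("indoor_dark.png")
# bright = crm_brighten(low, k=4.0)
# clean = joint_bilateral_denoise(bright, bright)  # with src as its own guide this
#                                                  # reduces to a plain bilateral filter
```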

22 pages, 7686 KB  
Article
Object Detection Performance Evaluation for Autonomous Vehicles in Sandy Weather Environments
by Nasser Aloufi, Abdulaziz Alnori, Vijey Thayananthan and Abdullah Basuhail
Appl. Sci. 2023, 13(18), 10249; https://doi.org/10.3390/app131810249 - 13 Sep 2023
Cited by 17 | Viewed by 4192
Abstract
In order to reach the highest level of automation, autonomous vehicles (AVs) are required to be aware of surrounding objects and detect them even in adverse weather. Detecting objects is very challenging in sandy weather due to characteristics of the environment, such as low visibility, occlusion, and changes in lighting. In this paper, we considered the You Only Look Once (YOLO) version 5 and version 7 architectures to evaluate the performance of different activation functions in sandy weather. In our experiments, we targeted three activation functions: Sigmoid Linear Unit (SiLU), Rectified Linear Unit (ReLU), and Leaky Rectified Linear Unit (LeakyReLU). The metrics used to evaluate their performance were precision, recall, and mean average precision (mAP). We used the Detection in Adverse Weather Nature (DAWN) dataset, which contains various weather conditions, though we selected sandy images only. Moreover, we extended the DAWN dataset and created an augmented version using several augmentation techniques, such as blur, saturation, brightness, darkness, noise, exposure, hue, and grayscale. Our results show that on the original DAWN dataset, YOLOv5 with the LeakyReLU activation function surpassed the other architectures as well as previously reported results in sandy weather, achieving 88% mAP. For the augmented DAWN dataset that we developed, YOLOv7 with SiLU achieved 94% mAP.
(This article belongs to the Section Computing and Artificial Intelligence)
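
The three activation functions compared have simple closed forms, shown below for reference; in practice, swapping them is a small change to the model configuration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)                 # hard zero for negative inputs

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)      # small negative slope instead of zero

def silu(x):
    return x / (1.0 + np.exp(-x))             # x * sigmoid(x); smooth near zero

x = np.linspace(-3.0, 3.0, 7)
print(relu(x), leaky_relu(x), silu(x), sep="\n")
```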
