Search Results (390)

Search Parameters:
Keywords = visible spectrum images

18 pages, 2702 KiB  
Article
Real-Time Depth Monitoring of Air-Film Cooling Holes in Turbine Blades via Coherent Imaging During Femtosecond Laser Machining
by Yi Yu, Ruijia Liu, Chenyu Xiao and Ping Xu
Photonics 2025, 12(7), 668; https://doi.org/10.3390/photonics12070668 - 2 Jul 2025
Viewed by 336
Abstract
Given the exceptional capabilities of femtosecond laser processing in achieving high-precision ablation for air-film cooling hole fabrication on turbine blades, it is imperative to develop an advanced monitoring methodology that enables real-time feedback control to automatically terminate the laser upon complete penetration detection, thereby effectively preventing backside damage. To tackle this issue, a spectrum-domain coherent imaging technique has been developed. This approach adapts the fundamental principle of fiber-based Michelson interferometry by integrating the air-film hole into a sample arm configuration. A broadband super-luminescent diode with an 830 nm central wavelength and a 26 nm spectral bandwidth serves as the coherence-optimized illumination source. An optimal normalized reflectivity of 0.2 is established to maintain stable interference fringe visibility throughout the drilling process. The system achieves a depth resolution of 11.7 μm through Fourier transform analysis of dynamic interference patterns. With a customized optical path design specifically engineered for through-hole-drilling applications, the technique demonstrates exceptional sensitivity, maintaining detection capability even under ultralow reflectivity conditions (0.001%) at the hole bottom. Plasma generation during laser processing is investigated, with plasma density measurements providing optical thickness data for real-time compensation of depth measurement deviations. The demonstrated system represents an advancement in non-destructive in-process monitoring for high-precision laser machining applications.
(This article belongs to the Special Issue Advances in Laser Measurement)
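To make the Fourier-transform depth retrieval concrete, the short Python sketch below estimates the dominant reflector depth from a single spectral interferogram and reproduces the quoted ~11.7 μm depth resolution from the 830 nm / 26 nm source parameters; the resampling and peak-picking details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Source parameters quoted in the abstract.
lambda0 = 830e-9   # centre wavelength of the SLD (m)
dlambda = 26e-9    # spectral bandwidth (m)

# Theoretical axial (depth) resolution of a Gaussian-spectrum interferometer,
# consistent with the ~11.7 um figure quoted in the abstract.
depth_resolution = (2 * np.log(2) / np.pi) * lambda0**2 / dlambda
print(f"depth resolution ~ {depth_resolution * 1e6:.1f} um")

def depth_from_spectrum(wavelengths, intensity):
    """Estimate the dominant reflector depth from one spectral fringe pattern.

    Assumes `wavelengths` is in increasing order; the interference term varies
    as cos(2*k*z), so its FFT over wavenumber k peaks at a bin proportional to z.
    """
    k = 2 * np.pi / wavelengths                    # wavenumber axis (decreasing)
    k_lin = np.linspace(k.min(), k.max(), k.size)  # uniform k grid
    fringe = np.interp(k_lin, k[::-1], intensity[::-1])
    fringe -= fringe.mean()                        # suppress the DC term
    profile = np.abs(np.fft.rfft(fringe))          # depth-domain profile
    depth_axis = np.pi * np.arange(profile.size) / (k_lin.max() - k_lin.min())
    return depth_axis[np.argmax(profile[1:]) + 1]  # skip the residual DC bin
```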

19 pages, 620 KiB  
Article
Software-Based Transformation of White Light Endoscopy Images to Hyperspectral Images for Improved Gastrointestinal Disease Detection
by Chien-Wei Huang, Chang-Chao Su, Chu-Kuang Chou, Arvind Mukundan, Riya Karmakar, Tsung-Hsien Chen, Pranav Shukla, Devansh Gupta and Hsiang-Chen Wang
Diagnostics 2025, 15(13), 1664; https://doi.org/10.3390/diagnostics15131664 - 30 Jun 2025
Viewed by 461
Abstract
Background/Objectives: Gastrointestinal diseases (GID), such as oesophagitis, polyps, and ulcerative colitis, contribute significantly to global morbidity and mortality. Conventional diagnostic methods employing white light imaging (WLI) in wireless capsule endoscopy (WCE) provide limited spectral information, which constrains classification performance. Methods: A new technique called the Spectrum Aided Vision Enhancer (SAVE) was developed, which converts traditional WLI images into hyperspectral imaging (HSI)-like representations, thereby improving diagnostic accuracy. HSI involves the acquisition of image data across numerous wavelengths of light, extending beyond the visible spectrum, to deliver comprehensive information regarding the material composition and attributes of the imaged objects. This technique facilitates improved tissue characterisation, rendering it especially effective for identifying abnormalities in medical imaging. Using a carefully selected dataset of 6000 annotated images taken from the KVASIR and ETIS-Larib Polyp databases, this work classifies normal tissue, ulcers, polyps, and oesophagitis. The performance of both the original WLI and SAVE-transformed images was assessed using advanced deep learning architectures. The principal outcome was the overall classification accuracy for the normal, ulcer, polyp, and oesophagitis categories, contrasting SAVE-enhanced images with standard WLI across five deep learning models. Results: The principal outcome of this study was the enhancement of diagnostic accuracy for gastrointestinal disease classification, assessed through classification accuracy, precision, recall, and F1 score. The findings illustrate the efficacy of the SAVE method in improving diagnostic performance without requiring specialised equipment. Experimental data show that SAVE increases classification metrics across all models, with the best accuracy of 98% attained using EfficientNetB7, compared to 97% with WLI; VGG16 showed the largest relative improvement, from 85% (WLI) to 92% (SAVE). Conclusions: These results confirm that the SAVE algorithm significantly improves the early identification and classification of GID, offering a promising step towards more accurate, non-invasive GID diagnostics with WCE.
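For a concrete sense of the comparison the abstract describes, a minimal sketch follows: the same ImageNet-pretrained EfficientNetB7 is fine-tuned for the four categories (normal, ulcer, polyp, oesophagitis), once on the original WLI images and once on the SAVE-transformed images, so their metrics can be compared. The training settings are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNetB7 and replace the classification
# head with a 4-way output for normal / ulcer / polyp / oesophagitis.
model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 4)

criterion = nn.CrossEntropyLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings

# Train one copy of this model on the WLI images and another on the
# SAVE-transformed images, then compare accuracy, precision, recall and F1
# on the same held-out split.
```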

13 pages, 1109 KiB  
Technical Note
Detection of Bacterial Leaf Spot Disease in Sesame (Sesamum indicum L.) Using a U-Net Autoencoder
by Minju Lee, Jeseok Lee, Amit Ghimire, Yegyeong Bae, Tae-An Kang, Youngnam Yoon, In-Jung Lee, Choon-Wook Park, Byungwon Kim and Yoonha Kim
Remote Sens. 2025, 17(13), 2230; https://doi.org/10.3390/rs17132230 - 29 Jun 2025
Viewed by 306
Abstract
Hyperspectral imaging (HSI) integrates spectroscopy and imaging, providing detailed spectral–spatial information, and the selection of task-relevant wavelengths can streamline data acquisition and processing for field deployment. Anomaly detection aims to identify observations that deviate from normal patterns, typically in a one-class classification framework. In this study, we extend this framework to binary classification by employing a U-Net-based deterministic autoencoder augmented with attention blocks to analyze HSI data of sesame plants inoculated with Pseudomonas syringae pv. sesami. Single-band grayscale images across the full spectral range were used to train the model on healthy samples, while the presence of disease was classified by assessing the reconstruction error, which we refer to as the anomaly score. The average classification accuracy in the visible spectral region (430–689 nm) exceeded 0.8, with peaks at 641 nm and 689 nm. In comparison, the near-infrared region (>700 nm) attained an accuracy of approximately 0.6. Several visible bands demonstrated potential for early disease detection. Some lesion samples showed a gradual increase in anomaly scores over time, and notably, Band 23 (689 nm) exhibited elevated anomaly scores even at early stages, before visible symptoms appeared. This supports the potential of this wavelength for the early-stage detection of bacterial leaf spot in sesame.
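The reconstruction-error scoring step can be illustrated with a short PyTorch sketch; here `model` stands for any autoencoder trained only on healthy single-band images, and the decision threshold is an assumed placeholder rather than a value from the paper.

```python
import torch

def anomaly_scores(model, images):
    """Mean squared reconstruction error per image (the 'anomaly score').

    `images` is a batch of single-band grayscale images, shape (N, 1, H, W);
    higher scores indicate stronger deviation from the learned healthy pattern.
    """
    model.eval()
    with torch.no_grad():
        recon = model(images)
        return ((recon - images) ** 2).flatten(1).mean(dim=1)

# Classify a sample as diseased when its score exceeds a threshold chosen on a
# validation set (the value below is only an illustrative placeholder).
# predictions = anomaly_scores(autoencoder, batch) > 0.02
```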

23 pages, 8395 KiB  
Review
Revisiting Fat Content in Bone Lesions: Paradigms in Bone Lesion Detection
by Ali Shah, Neel R. Raja, Hasaam Uldin, Sonal Saran and Rajesh Botchu
Diseases 2025, 13(7), 197; https://doi.org/10.3390/diseases13070197 - 27 Jun 2025
Viewed by 810
Abstract
Bone lesions encountered in radiology practice can pose diagnostic challenges, whether found incidentally, suspected as primary bone lesions, or detected in patients at risk of metastases or marrow-based malignancies. Differentiating benign from malignant bone marrow lesions is critical, yet can be challenging due to overlapping imaging characteristics. One key imaging feature that can assist with diagnosis is the presence of fat within the lesion. Fat can be present either macroscopically (i.e., visible on radiographs, computed tomography (CT), and conventional magnetic resonance imaging (MRI)) or microscopically, detected through specialised MRI techniques such as chemical shift imaging (CSI). This comprehensive review explores the diagnostic significance of both macroscopic and microscopic fat in bone lesions and discusses how its presence can point towards benignity. We illustrate the spectrum of fat-containing bone lesions, encompassing both typical and atypical presentations, and provide practical imaging strategies to increase diagnostic accuracy by utilising radiographs, CT, and MRI in characterising these lesions. Specifically, CSI is highlighted as a non-invasive method for evaluating intralesional fat content, distinguishing benign marrow entities from malignant marrow-replacing conditions based on quantifiable signal drop-off. Furthermore, we detail imaging pitfalls, with a focus on conditions that can mimic malignancy (such as aggressive haemangiomas) and collision lesions. Through a detailed discussion and illustrative examples, we aim to guide radiologists and clinicians in recognising reassuring imaging features while also identifying scenarios where further investigation may be warranted.
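As a rough illustration of the chemical shift imaging signal drop-off mentioned above, the sketch below computes the percentage drop between in-phase and opposed-phase signal within a region of interest; the ~20% cut-off for suggesting intralesional fat is a commonly cited rule of thumb and is included here only as an assumption, not a value taken from this review.

```python
def csi_signal_drop(si_in_phase: float, si_opposed_phase: float) -> float:
    """Percentage signal drop from in-phase to opposed-phase images (ROI means)."""
    return 100.0 * (si_in_phase - si_opposed_phase) / si_in_phase

# Example: a lesion ROI measuring 420 in-phase and 300 opposed-phase
# drops by ~28.6%, which (using an assumed ~20% threshold) would suggest
# intralesional fat and favour a benign marrow entity.
drop = csi_signal_drop(420.0, 300.0)
likely_fat_containing = drop > 20.0
```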

20 pages, 1173 KiB  
Article
Validation of an Eye-Tracking Algorithm Based on Smartphone Videos: A Pilot Study
by Wanzi Su, Damon Hoad, Leandro Pecchia and Davide Piaggio
Diagnostics 2025, 15(12), 1446; https://doi.org/10.3390/diagnostics15121446 - 6 Jun 2025
Viewed by 615
Abstract
Introduction: This study aimed to develop and validate an efficient eye-tracking algorithm suitable for the analysis of images captured in the visible-light spectrum using a smartphone camera. Methods: The investigation primarily focused on comparing two algorithms, named CHT_TM and CHT_ACM after their core functions: the Circular Hough Transform (CHT), Active Contour Models (ACMs), and Template Matching (TM). Results: CHT_TM significantly improved the running speed of the CHT_ACM algorithm, with comparable resource consumption, and improved accuracy on the x axis, reducing execution time by 79%. CHT_TM performed with an average mean percentage error of 0.34% and 0.95% in the x and y directions across the 19 manually validated videos, compared to 0.81% and 0.85% for CHT_ACM. Different conditions, such as manually holding the eyelids open with a finger versus not, were also compared across four tasks. Conclusions: This study shows that applying TM improves on the original CHT_ACM eye-tracking algorithm. The new algorithm has the potential to help track eye movement, which can facilitate the early screening and diagnosis of neurodegenerative diseases.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
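A minimal OpenCV sketch of the CHT-plus-template-matching idea: the Circular Hough Transform locates the iris once, and normalised cross-correlation template matching follows it in subsequent frames. All parameter values are illustrative assumptions, not those validated in the study.

```python
import cv2
import numpy as np

def init_template(gray):
    """Locate the iris with the Circular Hough Transform and cut out a template."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                               param1=100, param2=30, minRadius=10, maxRadius=60)
    x, y, r = np.round(circles[0, 0]).astype(int)   # assumes a circle was found
    return gray[y - r:y + r, x - r:x + r]

def track(gray, template):
    """Return the iris centre (x, y) in a later frame via template matching."""
    res = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(res)
    h, w = template.shape
    return top_left[0] + w // 2, top_left[1] + h // 2
```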

14 pages, 2757 KiB  
Article
Highly Efficient Inverted Organic Light-Emitting Devices with Li-Doped MgZnO Nanoparticle Electron Injection Layer
by Hwan-Jin Yoo, Go-Eun Kim, Chan-Jun Park, Su-Been Lee, Seo-Young Kim and Dae-Gyu Moon
Micromachines 2025, 16(6), 617; https://doi.org/10.3390/mi16060617 - 24 May 2025
Viewed by 502
Abstract
Inverted organic light-emitting devices (OLEDs) have been attracting considerable attention due to their advantages such as high stability, low image sticking, and low operating stress in display applications. To address the charge imbalance that is a known critical issue of inverted OLEDs, Li-doped MgZnO nanoparticles were synthesized as an electron-injection layer. Hexagonal wurtzite-structured Li-doped MgZnO nanoparticles were synthesized at room temperature via a solution precipitation method using LiCl, magnesium acetate tetrahydrate, zinc acetate dihydrate, and tetramethylammonium hydroxide pentahydrate. The Mg concentration was fixed at 10%, while the Li concentration was varied up to 15%. The average particle size decreased with Li doping, with particle sizes of 3.6, 3.0, and 2.7 nm for the undoped, 10% Li-doped, and 15% Li-doped MgZnO nanoparticles, respectively. The band gap, conduction band minimum and valence band maximum energy levels, and the visible emission spectrum of the Li-doped MgZnO nanoparticles were investigated. The surface roughness and electrical conduction properties of the Li-doped MgZnO nanoparticle films were also analyzed. The inverted phosphorescent OLEDs with Li-doped MgZnO nanoparticles exhibited higher external quantum efficiency (EQE) than the undoped MgZnO nanoparticle devices, owing to better charge balance resulting from suppressed electron conduction. A maximum EQE of 21.7% was achieved in the 15% Li-doped MgZnO nanoparticle devices.
(This article belongs to the Special Issue Photonic and Optoelectronic Devices and Systems, Third Edition)

18 pages, 3160 KiB  
Article
Ultrasonic Beamforming-Based Visual Localisation of Minor and Multiple Gas Leaks Using a Microelectromechanical System (MEMS) Microphone Array
by Tao Wang, Jiawen Ji, Jianglong Lan and Bo Wang
Sensors 2025, 25(10), 3190; https://doi.org/10.3390/s25103190 - 19 May 2025
Viewed by 676
Abstract
The development of a universal method for real-time gas leak localisation imaging is crucial for preventing substantial financial losses and hazardous incidents. To achieve this objective, this study integrates array signal processing and electronic techniques to construct an ultrasonic sensor array for gas leak detection and localisation. A digital microelectromechanical system microphone array is used to capture spatial ultrasonic information. By processing the array signals using beamforming algorithms, an acoustic spatial power spectrum is obtained, which facilitates the estimation of the locations of potential gas leak sources. In the pre-processing of beamforming, the Hilbert transform is employed instead of the fast Fourier transform to save computational resources. Subsequently, the spatial power spectrum is fused with visible-light images to generate acoustic localisation images, which enables the visualisation of gas leak sources. Experimental validation demonstrates that the system detects minor and multiple gas leaks in real time, meeting the sensitivity and accuracy requirements of embedded industrial applications. These findings contribute to the development of practical, cost-effective, and scalable gas leak detection systems for industrial and environmental safety applications.
(This article belongs to the Section Physical Sensors)
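The delay-and-sum step with Hilbert-transform pre-processing can be sketched as follows; the far-field delay model, planar array geometry, and integer-sample alignment are simplifying assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

def steered_power(signals, mic_xy, fs, azimuth, c=343.0):
    """One point of the acoustic power map for a candidate look direction.

    signals: (n_mics, n_samples) ultrasonic recordings
    mic_xy:  (n_mics, 2) microphone positions in metres
    fs:      sample rate in Hz; azimuth: look direction in radians
    """
    analytic = hilbert(signals, axis=1)                  # complex analytic signals
    direction = np.array([np.cos(azimuth), np.sin(azimuth)])
    delays = mic_xy @ direction / c                      # far-field delays (s)
    shifts = np.round(delays * fs).astype(int)
    aligned = np.stack([np.roll(a, -s) for a, s in zip(analytic, shifts)])
    return float(np.mean(np.abs(aligned.sum(axis=0)) ** 2))

# Evaluating steered_power over a grid of directions yields the spatial power
# spectrum that is then overlaid on the visible-light image.
```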

22 pages, 46263 KiB  
Article
The Rapid Detection of Foreign Fibers in Seed Cotton Based on Hyperspectral Band Selection and a Lightweight Neural Network
by Yeqi Fei, Zhenye Li, Dongyi Wang and Chao Ni
Agriculture 2025, 15(10), 1088; https://doi.org/10.3390/agriculture15101088 - 18 May 2025
Viewed by 447
Abstract
Contamination with foreign fibers—such as mulch films and polypropylene strands—during cotton harvesting and processing severely compromises fiber quality. Traditional detection methods often fail to identify fine impurities under visible light, while full-spectrum hyperspectral imaging (HSI) techniques—despite their effectiveness—tend to be prohibitively expensive and computationally intensive. Specifically, the vast amount of redundant spectral information in full-spectrum HSI escalates both system costs and processing challenges. To address these challenges, this study presents an intelligent detection framework that integrates optimized spectral band selection with a lightweight neural network. A novel hybrid Harris Hawks–Whale Optimization Operator (HWOO) is employed to isolate 12 discriminative bands from the original 288 channels, effectively eliminating redundant spectral data. Additionally, a lightweight attention mechanism, combined with a depthwise convolution module, enables real-time inference for online production. The proposed attention-enhanced CNN architecture achieves a 99.75% classification accuracy with real-time processing at 12.201 μs per pixel, surpassing the full-spectrum models in accuracy by 11.57% while drastically reducing the processing time from 370.1 μs per pixel. This approach not only enables the high-speed removal of impurities on harvested seed cotton production lines but also offers a cost-effective pathway to practical multispectral solutions. Moreover, this methodology demonstrates broad applicability for quality control in agricultural product processing.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
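As a rough PyTorch sketch of the lightweight design the abstract points to, the block below combines a depthwise-separable convolution with a simple channel-attention gate over the 12 selected bands; the layer sizes and the form of the attention are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LightweightBlock(nn.Module):
    """Depthwise-separable convolution followed by a channel-attention gate."""

    def __init__(self, channels=12, reduction=4):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        x = torch.relu(self.pointwise(self.depthwise(x)))
        return x * self.attn(x)          # re-weight bands by learned attention

# x = torch.randn(1, 12, 32, 32)         # a patch built from the 12 selected bands
# y = LightweightBlock()(x)
```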

23 pages, 7984 KiB  
Article
A Transfer Learning-Based VGG-16 Model for COD Detection in UV–Vis Spectroscopy
by Jingwei Li, Iqbal Muhammad Tauqeer, Zhiyu Shao and Haidong Yu
J. Imaging 2025, 11(5), 159; https://doi.org/10.3390/jimaging11050159 - 17 May 2025
Viewed by 603
Abstract
Chemical oxygen demand (COD) serves as a key indicator of organic pollution in water bodies, and its rapid and accurate detection is crucial for environmental protection. Recently, ultraviolet–visible (UV–Vis) spectroscopy has gained popularity for COD detection due to its convenience and the absence of chemical reagents. Meanwhile, deep learning has emerged as an effective approach for automatically extracting spectral features and predicting COD. This paper proposes transforming one-dimensional spectra into two-dimensional spectrum images and employing convolutional neural networks (CNNs) to extract features and build models automatically. However, training such deep learning models requires a vast dataset of water samples, alongside the complex task of labeling the data. To address these challenges, we introduce a transfer learning model based on VGG-16 for spectrum images. In this approach, parameters in the initial layers of the model are frozen, while those in the later layers are fine-tuned with the spectrum images. The effectiveness of this method is demonstrated through experiments conducted on our dataset, where the results indicate that it significantly enhances the accuracy of COD prediction compared to traditional methods such as partial least squares regression (PLSR), support vector machines (SVM), and artificial neural networks (ANN), as well as other CNN-based deep learning methods.
(This article belongs to the Section Image and Video Processing)
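A minimal sketch of the transfer-learning recipe described above: load an ImageNet-pretrained VGG-16, freeze the early convolutional layers, and fine-tune the remaining layers with a single-output regression head for COD. The exact split between frozen and fine-tuned layers is an assumption.

```python
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the early feature layers (here the first 10 modules, an assumed split).
for layer in model.features[:10]:
    for p in layer.parameters():
        p.requires_grad = False

# Replace the final classifier layer with a single-output regression head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 1)

# The later layers and the new head are then fine-tuned on the 2-D spectrum
# images with a regression loss such as nn.MSELoss().
```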

30 pages, 12255 KiB  
Article
Unmanned Aerial Vehicle-Based Hyperspectral Imaging for Potato Virus Y Detection: Machine Learning Insights
by Siddat B. Nesar, Paul W. Nugent, Nina K. Zidack and Bradley M. Whitaker
Remote Sens. 2025, 17(10), 1735; https://doi.org/10.3390/rs17101735 - 15 May 2025
Viewed by 1152
Abstract
The potato is the third most important crop in the world, and more than 375 million metric tonnes of potatoes are produced globally each year. Potato Virus Y (PVY) poses a significant threat to the production of seed potatoes, resulting in economic losses and risks to food security. Current detection methods for PVY typically rely on serological assays for leaves and PCR for tubers; however, these processes are labor-intensive, time-consuming, and not scalable. In this proof-of-concept study, we propose the use of unmanned aerial vehicles (UAVs) integrated with hyperspectral cameras, including a downwelling irradiance sensor, to detect PVY in commercial growers' fields. We used a 400–1000 nm visible and near-infrared (Vis-NIR) hyperspectral camera and trained several standard machine learning and deep learning models with optimized hyperparameters on a curated dataset. The performance of the models is promising, with the convolutional neural network (CNN) achieving a recall of 0.831, reliably identifying PVY-infected plants. Notably, UAV-based imaging maintained performance levels comparable to ground-based methods, supporting its practical viability. The hyperspectral camera captures a wide range of spectral bands, many of which are redundant for identifying PVY. Our analysis identified five key spectral regions that are informative for identifying PVY: two in the visible spectrum, two in the near-infrared spectrum, and one in the red-edge spectrum. This research demonstrates that early-season PVY detection is feasible using UAV hyperspectral imaging, offering the potential to minimize economic and yield losses, highlights the most relevant spectral regions that carry the distinctive signatures of PVY, and provides guidance for developing cost-effective multispectral sensors tailored to this task.

20 pages, 7445 KiB  
Article
Synthesis, Structural Characterization, Luminescent Properties, and Antibacterial and Anticancer Activities of Rare Earth-Caffeic Acid Complexes
by Nguyen Thi Hien Lan, Hoang Phu Hiep, Tran Van Quy and Pham Van Khang
Molecules 2025, 30(10), 2162; https://doi.org/10.3390/molecules30102162 - 14 May 2025
Viewed by 529
Abstract
Rare earth elements (Ln: Sm, Eu, Tb, Dy) were complexed with caffeic acid (Caf), a natural phenolic compound, to synthesize novel luminescent complexes with enhanced biological activities. The complexes, formulated as Ln(Caf)3·4H2O, were characterized using infrared spectroscopy (IR), thermogravimetric analysis (TGA/DTA), mass spectrometry (MS), and fluorescence spectroscopy. Structural studies confirmed the coordination of caffeic acid via carboxylate and hydroxyl groups, forming stable hexacoordinate complexes. Luminescence analysis revealed intense emission bands in the visible spectrum (480–700 nm), attributed to f-f transitions of Ln3+ ions, with decay lifetimes ranging from 0.054 to 0.064 ms. Biological assays demonstrated significant antibacterial activity against Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa, with inhibition zones up to 44 mm at 200 µg/mL. The complexes also exhibited potent anticancer activity against MCF7 breast cancer cells, with Sm(Caf)3·4H2O showing the lowest IC50 value (15.5 µM). This study highlights the dual functionality of rare earth–caffeic acid complexes as promising candidates for biomedical imaging and therapeutic applications.

21 pages, 7212 KiB  
Article
Combining Cirrus and Aerosol Corrections for Improved Reflectance Retrievals over Turbid Waters from Visible Infrared Imaging Radiometer Suite Data
by Bo-Cai Gao, Rong-Rong Li, Marcos J. Montes and Sean C. McCarthy
Oceans 2025, 6(2), 28; https://doi.org/10.3390/oceans6020028 - 14 May 2025
Viewed by 500
Abstract
The multi-band atmospheric correction algorithms, now referred to as remote sensing reflectance (Rrs) algorithms, have been implemented on a NASA computing facility for global remote sensing of ocean color and atmospheric aerosol parameters from data acquired with several satellite instruments, including the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi spacecraft platform. These algorithms are based on the 2-band version of the SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) algorithm. The bands centered near 0.75 and 0.865 μm are used for atmospheric corrections. In order to obtain high-quality Rrs values over Case 1 waters (deep clear ocean waters), strict masking criteria are implemented inside these algorithms to mask out thin clouds and very turbid water pixels. As a result, Rrs values are often not retrieved over bright Case 2 waters. Through our analysis of VIIRS data, we have found that spatial features of bright Case 2 waters are observed in VIIRS visible band images contaminated by thin cirrus clouds. In this article, we describe methods of combining cirrus and aerosol corrections to improve spatial coverage in Rrs retrievals over Case 2 waters. One method is to remove cirrus cloud effects using our previously developed operational VIIRS cirrus reflectance algorithm and then to perform atmospheric corrections with our updated version of the spectrum-matching algorithm, which uses shortwave IR (SWIR) bands above 1 μm for retrieving atmospheric aerosol parameters and extrapolates the aerosol parameters to the visible region to retrieve water-leaving reflectances of VIIRS visible bands. Another method is to remove the cirrus effect first and then make empirical atmospheric and sun glint corrections for water-leaving reflectance retrievals. The two methods produce comparable retrieved results, but the second method is about 20 times faster than the spectrum-matching method. We compare our retrieved results with those obtained from the NASA VIIRS Rrs algorithm. We will show that the assumption of zero water-leaving reflectance for the VIIRS band centered at 0.75 μm (M6) over Case 2 waters with the NASA Rrs algorithm can sometimes result in slight underestimates of water-leaving reflectances of visible bands over Case 2 waters, where the M6 band water-leaving reflectances are actually not equal to zero. We will also show conclusively that the assumption of thin cirrus clouds as ‘white’ aerosols during atmospheric correction processes results in overestimates of aerosol optical thicknesses and underestimates of aerosol Ångström coefficients.
(This article belongs to the Special Issue Ocean Observing Systems: Latest Developments and Challenges)
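A highly simplified sketch of the cirrus-removal step, in which the apparent reflectance of a visible band is corrected by subtracting the cirrus reflectance (sensed in a cirrus-sensitive band) scaled by an empirical band-dependent factor; both the functional form and the default factor are illustrative assumptions, not the operational VIIRS algorithm.

```python
import numpy as np

def remove_cirrus(rho_band, rho_cirrus, gamma=1.0):
    """Subtract a scaled cirrus reflectance from one band's apparent reflectance.

    rho_band, rho_cirrus: reflectance arrays for the same scene;
    gamma: assumed band-dependent scaling factor (placeholder value).
    """
    return np.clip(rho_band - rho_cirrus / gamma, 0.0, None)

# Aerosol correction (spectrum matching over SWIR bands, or the empirical
# approach described above) would then be applied to the cirrus-corrected bands.
```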

24 pages, 7653 KiB  
Article
AMamNet: Attention-Enhanced Mamba Network for Hyperspectral Remote Sensing Image Classification
by Chunjiang Liu, Feng Wang, Qinglei Jia, Li Liu and Tianxiang Zhang
Atmosphere 2025, 16(5), 541; https://doi.org/10.3390/atmos16050541 - 2 May 2025
Viewed by 580
Abstract
Hyperspectral imaging, a key technology in remote sensing, captures rich spectral information beyond the visible spectrum, rendering it indispensable for advanced classification tasks. However, with developments in hyperspectral imaging, spatial–spectral redundancy and spectral confusion have increasingly revealed the limitations of convolutional neural networks (CNNs) and vision transformers (ViTs). Recent advancements in state space models (SSMs) have demonstrated their superiority in linear modeling compared to convolution- and transformer-based approaches. Building on this foundation, this study proposes a model named AMamNet that integrates convolutional and attention mechanisms with SSMs. Its core component, the Attention-Bidirectional Mamba Block, leverages self-attention to capture inter-spectral dependencies, while SSMs enhance sequential feature extraction, effectively managing the continuous nature of hyperspectral image bands. Technically, a multi-scale convolution stem block is designed to achieve shallow spatial–spectral feature fusion and reduce information redundancy. Extensive experiments conducted on three benchmark datasets, namely the Indian Pines dataset, Pavia University dataset, and WHU-Hi-LongKou dataset, demonstrate that AMamNet achieves robust, state-of-the-art performance, underscoring its effectiveness in mitigating redundancy and confusion within the spatial–spectral characteristics of hyperspectral images.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
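The inter-spectral self-attention idea can be sketched in a few lines of PyTorch by treating the bands of one spectral vector as a token sequence; the dimensions are assumptions, and the state space (Mamba) part of the Attention-Bidirectional Mamba Block is omitted here.

```python
import torch
import torch.nn as nn

bands, dim = 200, 64                              # assumed band count and width
proj = nn.Linear(1, dim)                          # embed each band's scalar value
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

x = torch.randn(8, bands)                         # a batch of 8 pixel spectra
tokens = proj(x.unsqueeze(-1))                    # (8, bands, dim) band tokens
out, weights = attn(tokens, tokens, tokens)       # inter-spectral dependencies
```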

26 pages, 4974 KiB  
Article
Artificial Intelligence-Based Prediction Model for Maritime Vessel Type Identification
by Hrvoje Karna, Maja Braović, Anita Gudelj and Kristian Buličić
Information 2025, 16(5), 367; https://doi.org/10.3390/info16050367 - 29 Apr 2025
Cited by 1 | Viewed by 1047
Abstract
This paper presents an artificial intelligence-based model for the classification of maritime vessel images obtained by cameras operating in the visible part of the electromagnetic spectrum. It incorporates both deep learning techniques for initial image representation and traditional image processing and machine learning methods for subsequent image classification. The presented model is therefore a hybrid approach that uses the Inception v3 deep learning model for image vectorization and a combination of SVM, kNN, logistic regression, Naïve Bayes, neural network, and decision tree algorithms for final image classification. The model is trained and tested on a custom dataset consisting of a total of 2915 images of maritime vessels. These images were split into three subsets: training (2444 images), validation (271 images), and testing (200 images). The images encompassed 11 distinctive classes: cargo, container, cruise, fishing, military, passenger, pleasure, sailing, special, tanker, and non-class (objects that can be encountered at sea but do not represent maritime vessels). The presented model accurately classified 86.5% of the images used for training purposes and therefore demonstrated how a relatively straightforward model can still achieve high accuracy and potentially be useful in real-world operational environments aimed at sea surveillance and automatic situational awareness at sea.
(This article belongs to the Section Artificial Intelligence)
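A minimal sketch of the hybrid pipeline: Inception v3 acts as a fixed feature extractor ('image vectorization'), and a classical classifier such as an SVM is trained on the resulting vectors. Preprocessing and classifier hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

backbone = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()        # expose the 2048-D pooled features
backbone.eval()

def vectorize(batch):
    """batch: (N, 3, 299, 299) normalised image tensors -> (N, 2048) features."""
    with torch.no_grad():
        return backbone(batch).numpy()

# features/labels are built from the training images, then e.g.:
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
# predictions = clf.predict(vectorize(test_batch))
```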

36 pages, 26652 KiB  
Article
Low-Light Image Enhancement for Driving Condition Recognition Through Multi-Band Images Fusion and Translation
by Dong-Min Son and Sung-Hak Lee
Mathematics 2025, 13(9), 1418; https://doi.org/10.3390/math13091418 - 25 Apr 2025
Viewed by 527
Abstract
When objects are obscured by shadows or dim surroundings, image quality can be improved by fusing near-infrared and visible-light images. At night, when visible and NIR light is insufficient, long-wave infrared (LWIR) imaging can be utilized, necessitating the attachment of a visible-light sensor to an LWIR camera to simultaneously capture both LWIR and visible-light images. This camera configuration enables the acquisition of infrared images at various wavelengths depending on the time of day. To effectively fuse clear visible regions from the visible-light spectrum with those from the LWIR spectrum, a multi-band fusion method is proposed. The proposed fusion process combines detailed information from infrared and visible-light images, enhancing object visibility. Additionally, this process compensates for color differences in visible-light images, resulting in a natural and visually consistent output. The fused images are further enhanced using a night-to-day image translation module, which improves overall brightness and reduces noise. This module is a trained CycleGAN-based network that adjusts object brightness in nighttime images to levels comparable to daytime images. The effectiveness and superiority of the proposed method are validated using image quality metrics: the proposed method achieves the best average scores compared to other methods, with a BRISQUE of 30.426 and a PIQE of 22.186. This study improves the accuracy of human and object recognition in CCTV systems and provides a potential image-processing tool for autonomous vehicles.
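As a much-simplified stand-in for the proposed multi-band fusion, the sketch below blends the infrared luminance detail into the visible-light image while keeping its colour channels; the blend weight and colour-space choice are assumptions, and the CycleGAN night-to-day translation step is not included.

```python
import cv2
import numpy as np

def fuse(visible_bgr, infrared_gray, alpha=0.6):
    """Blend an infrared image into the luminance of a visible-light image.

    visible_bgr: uint8 BGR image; infrared_gray: uint8 single-channel image;
    alpha: assumed blend weight for the infrared contribution.
    """
    ycrcb = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0].astype(np.float32)
    ir = cv2.resize(infrared_gray, (y.shape[1], y.shape[0])).astype(np.float32)
    ycrcb[:, :, 0] = np.clip(alpha * ir + (1 - alpha) * y, 0, 255).astype(np.uint8)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```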
