Search Results (1,129)

Search Parameters:
Keywords = single-pixel imaging

17 pages, 3275 KB  
Article
3D Reconstruction Method for GM-APD Array LiDAR Based on Intensity Image Guidance
by Ye Liu, Kehao Chi, Ruikai Xue and Genghua Huang
Photonics 2026, 13(4), 323; https://doi.org/10.3390/photonics13040323 - 26 Mar 2026
Abstract
Geiger-mode avalanche photodiode (GM-APD) array light detection and ranging (LiDAR) has significant advantages in low-light scenes due to its single-photon-level detection sensitivity. However, it is susceptible to noise, which leads to a decrease in target localization accuracy. Traditional methods rely on long-term accumulation to distinguish signal photons from noise photons, making it difficult to achieve efficient processing, especially in scenarios with sparse echo photons and low signal-to-noise ratio (SNR), where performance is limited. To quickly and accurately obtain three-dimensional (3D) information of the target under such extreme conditions, this paper proposes a method for target detection and temporal window depth estimation based on intensity information guidance. First, noise suppression is performed on the intensity image according to its statistical characteristics, and an outlier detection mechanism based on neighborhood sparsity is introduced to remove outliers, thereby completing the target detection. Next, by exploiting the spatial continuity and reflectivity similarity of the target, local fusion of photon data within the target neighborhood is performed to construct highly consistent “superpixels”. Finally, according to the distribution difference between signal photons and noise photons on the time axis, temporal window screening is applied to the superpixels to extract depth information, and empty pixels are filled using a convex segmentation method to achieve depth estimation of the target. The experimental results demonstrate that under conditions of low photon counts and strong noise, the proposed method significantly outperforms traditional and existing methods in target recovery and depth estimation by effectively integrating target intensity information. Furthermore, this method achieves faster reconstruction speed, enabling high-precision and high-efficiency 3D target reconstruction.
(This article belongs to the Special Issue Advances in Photon-Counting Imaging and Sensing)
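
The temporal screening step lends itself to a short illustration. Below is a minimal Python sketch (not the authors' code) of window-based depth extraction: signal photons cluster near the true time of flight while noise arrivals spread uniformly in time, so the densest time window marks the signal. The bin width, window width, and uniform-noise toy data are assumptions.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def window_depth(timestamps_s, t_max_s, bin_s=1e-9, win_bins=5):
    """Estimate depth from photon arrival times pooled over a superpixel."""
    edges = np.arange(0.0, t_max_s + bin_s, bin_s)
    hist, _ = np.histogram(timestamps_s, bins=edges)
    # Sliding-window sum: the window holding the most counts is the signal.
    scores = np.convolve(hist, np.ones(win_bins), mode="valid")
    start = int(np.argmax(scores))
    lo, hi = edges[start], edges[start + win_bins]
    in_win = timestamps_s[(timestamps_s >= lo) & (timestamps_s < hi)]
    t_flight = in_win.mean()           # centroid of the screened photons
    return C * t_flight / 2.0          # round-trip time to one-way range

# Toy example: 20 signal photons at ~66.7 ns (10 m) among 200 noise photons.
rng = np.random.default_rng(0)
noise = rng.uniform(0, 200e-9, 200)
signal = rng.normal(66.7e-9, 0.3e-9, 20)
print(f"depth = {window_depth(np.concatenate([noise, signal]), 200e-9):.2f} m")
```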

19 pages, 3508 KB  
Article
Scalable One-Pixel Attacks on Deep Neural Networks for High-Resolution Images
by Wonhong Nam, Hyunwoo Moon, Kunha Kim and Hyunyoung Kil
Mathematics 2026, 14(7), 1095; https://doi.org/10.3390/math14071095 - 24 Mar 2026
Abstract
Recent studies have shown that deep neural networks can be misled by adversarial examples that involve only imperceptible perturbations. Among these, one-pixel attacks (OPA) represent an extreme yet powerful threat, as they alter only a single pixel of an input image while causing misclassification. While prior work has demonstrated the effectiveness of OPAs on low-resolution datasets, extending these attacks to high-resolution images poses a significant challenge due to the dramatic increase in the number of pixels and the resulting expansion of the search space. In this paper, we address this challenge by proposing a scalable one-pixel attack framework for deep neural networks on high-resolution images. The key difficulty in high-resolution OPAs lies in identifying a vulnerable pixel among tens of thousands of candidates under a black-box setting, where exhaustive pixel-wise probing is prohibitively expensive. To overcome this limitation, we decompose the attack into two phases. In the first phase, we efficiently identify a small set of promising pixel locations using a hierarchical patch-based search strategy, which iteratively prunes large image regions via coarse-grained patch perturbations, thereby substantially reducing the number of required model queries. In the second phase, for each selected pixel candidate, we search for adversarial RGB values using a black-box optimization method based on momentum-accelerated finite-difference gradient estimation. We evaluate our method on popular deep neural network architectures using high-resolution ImageNet images. The experimental results demonstrate that our approach achieves high attack success rates while significantly reducing query cost and improving the quality of the resulting adversarial perturbations compared to existing strategies.
(This article belongs to the Section E1: Mathematics and Computer Science)
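
The first phase can be sketched compactly. The toy Python below (assumptions: a black-box `model(img)` callable returning class probabilities, a grayscale image, and blanking as the coarse perturbation) shows the hierarchical patch pruning idea: score sub-patches by how much perturbing them lowers the true-class probability, then recurse into the most vulnerable one.

```python
import numpy as np

def patch_search(model, img, true_cls, x=0, y=0, size=None, min_size=4):
    """Return the (x, y, size) patch whose perturbation hurts the model most."""
    size = size or img.shape[0]
    if size <= min_size:
        return x, y, size
    half = size // 2
    best, best_score = None, np.inf
    for dy in (0, half):
        for dx in (0, half):
            probe = img.copy()
            # Coarse perturbation: blank the candidate sub-patch.
            probe[y + dy:y + dy + half, x + dx:x + dx + half] = 0.0
            score = model(probe)[true_cls]   # lower = more vulnerable
            if score < best_score:
                best, best_score = (x + dx, y + dy), score
    return patch_search(model, img, true_cls, best[0], best[1], half, min_size)

# Toy black-box: "class 0" probability tracks brightness of one corner.
def toy_model(img):
    p0 = img[:8, :8].mean()
    return np.array([p0, 1 - p0])

img = np.full((64, 64), 0.5)
img[:8, :8] = 0.9
print(patch_search(toy_model, img, true_cls=0))  # homes in on the bright corner
```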

16 pages, 21672 KB  
Article
Ultra-Fast Digital Silicon Photomultiplier with Timestamping Capability in a 110 nm CMOS Process
by Tommaso Maria Floris, Marcello Campajola, Gianmaria Collazuol, Manuel Dionísio Da Rocha Rolo, Giuliana Fiorillo, Francesco Licciulli, Mario Nicola Mazziotta, Lucio Pancheri, Lodovico Ratti, Luigi Pio Rignanese, Davide Falchieri, Romualdo Santoro, Fatemeh Shojaei and Carla Vacchi
Electronics 2026, 15(6), 1300; https://doi.org/10.3390/electronics15061300 - 20 Mar 2026
Abstract
A monolithic digital Silicon Photomultiplier (SiPM) featuring 1024 microcells with a 30-micrometer pitch and a 50% fill factor has been designed in a 110-nanometer CMOS image sensor technology. The device under consideration integrates both SPAD sensors and front-end electronics in the same substrate. It can count up to 1024 photons in less than 22 ns, while assigning timestamps to the first and last detected photons with a time resolution of less than 100 ps. A parallel counter structure combined with a fast adder tree provides photon counting in digital form with low latency, whereas a carefully balanced fast NAND tree ensures a fixed-pattern time uncertainty not exceeding 26 ps. The architecture incorporates in-pixel memory for individual cell disabling and configurable thresholding on the timing signal for noise mitigation. In order to optimize the fill factor, a part of the electronics is placed outside the array, while the most sensitive elements of the timing and counting circuits are laid out close to the sensor, in the SPAD array. A serial readout is employed to provide a single output connection per SiPM, thereby simplifying system integration.
(This article belongs to the Section Microelectronics)
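
A behavioural sketch can make the counting architecture concrete. The Python below is a software analogy, not the chip's RTL: it sums 1024 one-bit microcell outputs through a pairwise adder tree, showing why the digital count emerges after only log2(1024) = 10 addition levels rather than a serial scan.

```python
import numpy as np

def adder_tree_count(cell_hits):
    """Sum 1-bit cell outputs by pairwise addition, level by level."""
    level = list(cell_hits)
    levels = 0
    while len(level) > 1:
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        levels += 1
    return level[0], levels

rng = np.random.default_rng(1)
hits = rng.integers(0, 2, 1024)          # fired / not fired per microcell
count, depth = adder_tree_count(hits)
assert count == hits.sum() and depth == 10
print(f"{count} photons counted through {depth} adder levels")
```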

13 pages, 4543 KB  
Data Descriptor
HyCervix: In Vivo Hyperspectral Cervix Dataset for Non-Invasive Detection of Precancerous and Cancerous Lesions
by Carlos Vega, Norberto Medina, Raquel Leon, Himar Fabelo, Alicia Martín and Gustavo M. Callico
Data 2026, 11(3), 62; https://doi.org/10.3390/data11030062 - 18 Mar 2026
Abstract
Hyperspectral (HS) imaging has emerged as a promising tool for improving the non-invasive detection of different diseases, offering spatial and spectral information in a single imaging modality. In this work, we present a dataset of HS images of the in vivo human cervix, including different precancerous and cancerous lesions. The dataset comprises 77 HS images acquired from 77 patients during routine colposcopic examination. All images were captured using a clinical colposcope equipped with an HS camera, covering the spectral range from 470 to 900 nm. Each HS image is accompanied by detailed pixel-level annotations for different clinically relevant tissue classes: ectocervix, endocervix, cervical intraepithelial neoplasia lesions, and invasive carcinoma. These labels were established through expert colposcopic assessment and confirmed by cytology or biopsy. The dataset contains clinical data from these patients, including demographic information, colposcopy and biopsy findings, and clinical diagnoses.

23 pages, 2962 KB  
Article
Feasibility of Infrared-Based Pedestrian Detectability in Unlit Urban and Rural Road Sections Using Consumer Thermal Cameras
by Yordan Stoyanov, Atanasi Tashev and Penko Mitev
Vehicles 2026, 8(3), 61; https://doi.org/10.3390/vehicles8030061 - 16 Mar 2026
Abstract
This study evaluates the feasibility of using two affordable thermal cameras (UNI-T UTi260M and UTi260T), which are not designed as automotive sensors, for observing pedestrians and warm objects during night-time driving under low-illumination conditions. The experimental setup includes mounting the camera on the vehicle body (e.g., side mirror area/roof), recording road scenes in urban and rural environments, and selecting representative frames for qualitative and quantitative analysis. The study assesses: (i) observable pedestrian detectability in unlit road sections and under oncoming headlight glare, where visible cameras often lose contrast; (ii) the influence of low ambient temperature and strong cold wind on image appearance (including “whitening”/contrast shifts); and (iii) workflow differences, where UTi260M relies on a smartphone application for streaming/recording, while UTi260T supports PC-based image analysis and temperature-profile visualization. In addition, a calibration-based geometric method is proposed for approximate pedestrian distance estimation from single frames using silhouette pixel height and a regression model based on 1/h_px, valid for a specific mounting configuration and a known subject height. Results indicate that both cameras can highlight warm objects relative to the background and support visual pedestrian identification at low illumination, including in the presence of oncoming headlights, with UTi260M showing more stable behavior in parts of the tests. This work is a feasibility study and does not claim Advanced Driver Assist Systems (ADAS) functionality; it outlines limitations, repeatability considerations, and a minimal set of metrics and procedures for future extension. All quantitative indicators derived from exported frames are explicitly treated as image-level proxy metrics, not as physical sensor characteristics.
(This article belongs to the Special Issue Novel Solutions for Transportation Safety, 2nd Edition)
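
The 1/h_px regression is simple enough to state in full. Below is a minimal sketch of the calibration idea: under a pinhole model, distance scales roughly with the reciprocal of silhouette pixel height, so a linear least-squares fit d = a + b * (1/h_px) over measured calibration pairs yields single-frame estimates. The calibration pairs here are invented for illustration, and, as the paper stresses, any such fit is valid only for one mounting geometry and subject height.

```python
import numpy as np

# (pixel height of a pedestrian silhouette, measured distance in metres)
h_px = np.array([120.0, 80.0, 60.0, 40.0, 30.0])
d_m  = np.array([10.0, 15.0, 20.0, 30.0, 40.0])

# Least-squares fit of d = a + b * (1/h_px).
A = np.column_stack([np.ones_like(h_px), 1.0 / h_px])
(a, b), *_ = np.linalg.lstsq(A, d_m, rcond=None)

def estimate_distance(h_pixels):
    """Valid only for the mounting geometry and subject height calibrated above."""
    return a + b / h_pixels

print(f"silhouette of 50 px -> ~{estimate_distance(50.0):.1f} m")
```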

16 pages, 4714 KB  
Article
Metasurface-Enabled Dual-Channel Optical Image Authentication Based on Polarization Multiplexing
by Yanfeng Su, Biao Zhu, Wenming Chen, Ruijie Xue, Zijing Li, Zhijian Cai, Qibin Feng and Guoqiang Lv
Photonics 2026, 13(3), 280; https://doi.org/10.3390/photonics13030280 - 15 Mar 2026
Abstract
In this paper, a metasurface-enabled dual-channel optical image authentication scheme based on polarization multiplexing is proposed. During encryption, authentication phases corresponding to the dual-channel plaintext images are first calculated using a sparse-constraint-driven authentication-holography (SCDAH) algorithm. The target transmission phase and geometric phase of the metasurface to be designed are then obtained via the composite phase modulation (CPM) principle. Next, parameter scanning is performed on the nanopillar-type metasurface unit to establish the transmission and geometric phase databases. Finally, the structural parameters of each nanopillar are determined on a pixel-by-pixel basis to complete the construction of the polarization-multiplexing authentication metasurface (PMAM). During authentication, the PMAM is illuminated by left-handed circularly polarized (LCP) and right-handed circularly polarized (RCP) light, respectively, to obtain pseudo-random images produced by far-field diffraction; the nonlinear correlation distribution between each diffraction image and the corresponding channel plaintext image is then calculated, and the authentication result of each channel is determined by whether the signal-to-noise ratio of the nonlinear correlation distribution meets the standard. In effect, a new physical-characteristic-driven dual-channel optical image authentication technology is formed, in which the two identities of the user holding the PMAM can be verified simultaneously, overcoming the rigid one-metasurface-to-one-image constraint of conventional schemes while improving the capacity and efficiency of the authentication metasurface at the level of the physical mechanism. Numerical simulations demonstrate the feasibility of the proposed method, and the simulation results show high feasibility and security as well as strong robustness against cropping attacks, indicating promising application potential in identity recognition and authentication.
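
The verification step rests on nonlinear correlation. The sketch below uses the common k-th law form as a stand-in for the paper's exact formulation (the exponent k and the SNR threshold are assumptions): a matching diffraction image yields a sharp correlation peak that clears the threshold, while a wrong plaintext does not.

```python
import numpy as np

def nonlinear_correlation(f, g, k=0.3):
    """k-th law nonlinear correlation map of two images."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    spec = F * np.conj(G)
    nc = np.fft.ifft2(np.abs(spec) ** k * np.exp(1j * np.angle(spec)))
    return np.abs(nc) ** 2

def authenticates(f, g, snr_db_threshold=20.0):
    nc = nonlinear_correlation(f, g)
    peak = nc.max()
    noise = (nc.sum() - peak) / (nc.size - 1)   # mean off-peak energy
    return 10 * np.log10(peak / noise) > snr_db_threshold

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(authenticates(img, img))                  # True: matching images
print(authenticates(img, rng.random((64, 64)))) # False: wrong plaintext
```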

13 pages, 901 KB  
Article
Comparative Testicular Echotexture and Scrotal Thermography Before and After Electroejaculation in Beef Bulls
by Carlos C. Pérez-Marín and Luis Quevedo-Sánchez
Appl. Sci. 2026, 16(6), 2780; https://doi.org/10.3390/app16062780 - 13 Mar 2026
Abstract
Testicular echotexture and scrotal thermography were evaluated in beef bulls before and after electroejaculation (EE) to assess their response to semen collection and to standardize echotexture assessment methodology. Twelve Limousin bulls (12 months to 5 years of age) were included in the study. Semen analysis revealed that 41.6% of the bulls exhibited low semen quality. Testicular echotexture values recorded before ejaculation were significantly lower (p < 0.01) than those obtained after semen collection, and differences between testes were observed in some bulls, suggesting reduced reproductive efficiency. For methodological standardization, echotexture values obtained from a single large region of interest were compared with those from six smaller parenchymal regions; the larger region yielded lower pixel intensity values. Echotexture did not differ among imaging planes (proximal, middle, and distal), although probe–testis distance significantly affected measurements. Thermographic analysis showed that the proximal scrotal region was approximately 4 °C warmer than the distal region, and both regions exhibited a temperature decrease of approximately 3 °C following ejaculation. No correlations were identified between semen quality parameters and imaging variables. In conclusion, testicular echotexture increased, whereas scrotal surface temperature decreased after ejaculation. Although ultrasonography and thermography were not associated with semen quality, they provide complementary information for the detection of subclinical testicular alterations. Further studies are warranted to determine whether disparities between testes may serve as indicators of subfertility in bulls.
(This article belongs to the Special Issue Applications of Ultrasonic Technology in Biomedical Sciences)
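
The ROI comparison is straightforward to mock up. The sketch below computes mean grey-level ("echotexture") from one large parenchymal region and from six smaller regions on a stand-in image; the ROI coordinates are invented placeholders, as real placement follows the ultrasound anatomy.

```python
import numpy as np

def roi_mean(img, y, x, h, w):
    """Mean grey-level inside a rectangular region of interest."""
    return float(img[y:y + h, x:x + w].mean())

rng = np.random.default_rng(3)
scan = rng.integers(0, 256, (480, 640)).astype(float)   # stand-in B-mode image

large = roi_mean(scan, 140, 220, 200, 200)              # one large ROI
small_rois = [(150 + 60 * (i // 3), 240 + 70 * (i % 3), 40, 40) for i in range(6)]
small = np.mean([roi_mean(scan, *r) for r in small_rois])

print(f"large-ROI mean {large:.1f} vs six-small-ROI mean {small:.1f}")
```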

18 pages, 2199 KB  
Article
Brain-OCT-PVT: A Physics-Guided Transformer with Radial Prior and Deformable Alignment for Neurovascular Segmentation
by Quan Lan, Jianuo Huang, Chenxi Huang, Songyuan Song, Yuhao Shi, Zijun Zhao, Wenwen Wu, Hongbin Chen and Nan Liu
Bioengineering 2026, 13(3), 332; https://doi.org/10.3390/bioengineering13030332 - 13 Mar 2026
Abstract
The primary objective of this study is to develop a specialized deep learning framework adapted to the unique physical characteristics of neurovascular Optical Coherence Tomography (OCT) imaging. Although Polyp-PVT, originally designed for polyp segmentation, shows promise for OCT analysis, it faces limitations in neurovascular applications. The default RGB input wastes resources on duplicated grayscale data, while its fixed-scale fusion struggles with vascular curvature variations. Furthermore, the attention mechanism fails to capture radial vessel patterns, and geometric constraints limit thin boundary detection. To address these challenges, we propose Brain-OCT-PVT with key innovations: a single-channel input stem reducing parameters by two-thirds; a Radial Intensity Module (RIM) using polar transforms and angular convolution to model annular structures; and a Deformable Cross-scale Fusion Module (D-CFM) with learnable offsets. The Boundary-aware Attention Module (BAM) combines Laplace edge detection with a Swin-Transformer for sub-pixel consistency. A specialized loss function combines the Dice Similarity Coefficient (Dice), BoundaryIoU on 2-pixel dilated edges, and Focal Tversky to handle extreme class imbalance. Evaluation on 13 clinical cases achieves a Dice score of 95.06% and a 95% Hausdorff Distance (HD95) of 0.269 mm, demonstrating superior performance compared to existing approaches.
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)
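
The composite loss is the most transferable piece, and a PyTorch sketch makes it concrete. This is not the authors' implementation: the Boundary-IoU term on dilated edges is omitted for brevity, and the weights and Tversky parameters are assumptions.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()        # penalised more heavily via alpha
    fp = (pred * (1 - target)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma           # focal exponent sharpens hard cases

def combined_loss(logits, target, w_dice=0.5, w_ft=0.5):
    pred = torch.sigmoid(logits)
    return w_dice * dice_loss(pred, target) + w_ft * focal_tversky_loss(pred, target)

logits = torch.randn(1, 1, 64, 64, requires_grad=True)
mask = (torch.rand(1, 1, 64, 64) > 0.9).float()   # sparse foreground, as in vessels
combined_loss(logits, mask).backward()             # differentiable end to end
```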

26 pages, 2632 KB  
Article
Automated Malaria Ring Form Classification in Blood Smear Images Using Ensemble Parallel Neural Networks
by Pongphan Pongpanitanont, Naparat Suttidate, Manit Nuinoon, Natthida Khampeeramao, Sakhone Laymanivong and Penchom Janwan
J. Imaging 2026, 12(3), 127; https://doi.org/10.3390/jimaging12030127 - 12 Mar 2026
Abstract
Manual microscopy for malaria diagnosis is labor-intensive and prone to inter-observer variability. This study presents an automated binary classification approach for detecting malaria ring-form infections in thin blood smear single-cell images using a parallel neural network framework. Utilizing a balanced Kaggle dataset of 27,558 erythrocyte crops, images were standardized to 128 × 128 pixels and subjected to on-the-fly augmentation. The proposed architecture employs a dual-branch fusion strategy, integrating a convolutional neural network for local morphological feature extraction with a multi-head self-attention branch to capture global spatial relationships. Performance was rigorously evaluated using 10-fold stratified cross-validation and an independent 10% hold-out test set. Results demonstrated high-level discrimination, with all models achieving an ROC–AUC of approximately 0.99. The primary model (Model#1) attained a peak mean accuracy of 0.9567 during cross-validation and 0.97 accuracy (macro F1-score: 0.97) on the independent test set. In contrast, increasing architectural complexity in Model#3 led to a performance decline (0.95 accuracy) due to higher false-positive rates. These findings suggest that moderate-capacity feature fusion, combining convolutional descriptors with attention-based aggregation, provides a robust and generalizable solution for automated malaria screening without the risks associated with over-parameterization. Despite strong performance, immediate clinical use remains limited because the model was developed on pre-segmented single-cell images, and external validation is still required before routine implementation.
(This article belongs to the Section AI in Imaging)
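
The dual-branch fusion can be sketched in a few lines of PyTorch (layer sizes and fusion by concatenation are assumptions, not the paper's exact model): a small convolutional branch extracts local morphology while multi-head self-attention over patch tokens captures global spatial relationships, and the two are concatenated before the binary head.

```python
import torch
import torch.nn as nn

class DualBranch(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # local morphology branch
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Conv2d(3, dim, 16, stride=16)  # 16x16 patch tokens
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 2)              # infected / uninfected

    def forward(self, x):
        local = self.cnn(x)                            # (B, dim)
        tok = self.embed(x).flatten(2).transpose(1, 2) # (B, 64, dim) for 128px input
        glob, _ = self.attn(tok, tok, tok)             # global self-attention
        fused = torch.cat([local, glob.mean(dim=1)], dim=1)
        return self.head(fused)

model = DualBranch()
print(model(torch.randn(2, 3, 128, 128)).shape)        # torch.Size([2, 2])
```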

24 pages, 8525 KB  
Article
Consistency-Driven Dual-Teacher Framework for Semi-Supervised Zooplankton Microscopic Image Segmentation
by Zhongwei Li, Yinglin Wang, Dekun Yuan, Yanping Qi and Xiaoli Song
J. Imaging 2026, 12(3), 125; https://doi.org/10.3390/jimaging12030125 - 12 Mar 2026
Abstract
In-depth research on marine biodiversity is essential for understanding and protecting marine ecosystems, where semantic segmentation of marine species plays a crucial role. However, segmenting microscopic zooplankton images remains challenging due to highly variable morphologies, complex boundaries, and the scarcity of high-quality pixel-level annotations that require expert knowledge. Existing semi-supervised methods often rely on single-model perspectives, producing unreliable pseudo-labels and limiting performance in such complex scenarios. To address these challenges, this paper proposes a consistency-driven dual-teacher framework tailored for zooplankton segmentation. Two heterogeneous teacher networks are employed: one captures global morphological features, while the other focuses on local fine-grained details, providing complementary and diverse supervision and alleviating overfitting under limited annotations. In addition, a dynamic fusion-based pseudo-label filtering strategy is introduced to adaptively integrate hard and soft labels by jointly considering prediction consistency and confidence scores, thereby enhancing supervision flexibility. Extensive experiments on the Zooplankton-21 Microscopic Segmentation Dataset (ZMS-21), a self-constructed microscopic zooplankton dataset, demonstrate that the proposed method consistently outperforms existing semi-supervised segmentation approaches under various annotation ratios, achieving mIoU scores of 64.80%, 69.58%, 70.32%, and 73.92% with 1/16, 1/8, 1/4, and 1/2 labeled data, respectively.
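
The pseudo-label fusion step can be illustrated directly. The PyTorch sketch below (thresholds and the confidence-weighting scheme are assumptions) keeps a hard one-hot label where the two teachers agree with high confidence, and falls back to a confidence-weighted soft average elsewhere, so disputed pixels still provide gentle supervision instead of a confidently wrong target.

```python
import torch
import torch.nn.functional as F

def fuse_pseudo_labels(p1, p2, conf_thresh=0.9):
    """p1, p2: (B, C, H, W) softmax maps from the two teachers."""
    conf1, lab1 = p1.max(dim=1)
    conf2, lab2 = p2.max(dim=1)
    agree = (lab1 == lab2) & (conf1 > conf_thresh) & (conf2 > conf_thresh)
    hard = F.one_hot(lab1, p1.shape[1]).permute(0, 3, 1, 2).float()
    # Confidence-weighted soft average for the disputed pixels.
    w1 = conf1 / (conf1 + conf2)
    soft = w1.unsqueeze(1) * p1 + (1 - w1).unsqueeze(1) * p2
    mask = agree.unsqueeze(1).float()
    return mask * hard + (1 - mask) * soft

p1 = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)
p2 = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)
print(fuse_pseudo_labels(p1, p2).shape)   # torch.Size([2, 4, 32, 32])
```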

23 pages, 7611 KB  
Article
Design and Optimization of a Twisted Photodiode Pixel Structure for All-Directional Phase-Detection Autofocus CMOS Image Sensors
by Daiki Shirahige, Koichi Fukuda, Hajime Ikeda, Yusuke Onuki, Ginjiro Toyoguchi, Kohei Okamoto, Shunichi Wakashima, Hiroshi Sekine, Shuhei Hayashi, Ryo Yoshida, Junji Iwata, Yasushi Matsuno, Katsuhito Sakurai, Hiroshi Yuzurihara and Takeshi Ichikawa
Sensors 2026, 26(6), 1758; https://doi.org/10.3390/s26061758 - 10 Mar 2026
Abstract
To achieve an all-directional and high-speed, high-accuracy autofocus (AF) function, we propose a CMOS image sensor with a Twisted Photodiode (PD) structure. The developed 3D-stacked back-side illuminated (BSI) sensor employs the Twisted PD, which enables equivalent angular response characteristics in both the horizontal and vertical directions for the two PDs integrated within a single pixel, thereby realizing AF detection for all pixels and all directions. This paper describes the Twisted PD structure that enables all-directional AF and presents an analysis of charge transfer behavior in this unique 3D configuration. In this paper, “all-directional” refers to robustness with respect to subject direction.

27 pages, 15861 KB  
Article
Explorable 3D Hyperspectral Models from Multi-Angle Gimballed LWIR Pushbroom Imagery
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2026, 18(5), 781; https://doi.org/10.3390/rs18050781 - 4 Mar 2026
Abstract
Hyperspectral imaging in the long-wave infrared (LWIR) range enables identification of chemical compositions and material properties, but reconstructing 3D models from gimballed pushbroom sensors remains challenging because their unique acquisition geometry is incompatible with conventional photogrammetric software designed for frame cameras. This study presents a workflow for creating explorable 3D models from multi-angle LWIR hyperspectral imagery by co-registering hyperspectral line-scan data with simultaneously acquired RGB frame camera imagery using deep learning-based image matching. The co-registered images are processed in commercial photogrammetric software (Agisoft Metashape), and a texture-to-image mapping algorithm preserves correspondences between 3D model coordinates and original hyperspectral pixels across multiple viewing angles. Quantitative evaluation against reference data demonstrates that co-registration reduces geometric error, approaching the accuracy of models built from high-resolution RGB imagery. The resulting models enable the retrieval of 8–50 spectral signatures per surface point, captured from different viewing geometries. This approach facilitates interactive exploration of angular variations in thermal infrared spectra, supporting material identification for non-Lambertian surfaces where single-angle observations may be insufficient for reliable classification.
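
The co-registration step can be approximated with classical tooling. The sketch below substitutes ORB feature matching for the paper's deep learning matcher (a stand-in, run here on synthetic data): it estimates the homography from a simulated line-scan band to the simultaneous frame-camera image and warps the band into the photogrammetric frame.

```python
import cv2
import numpy as np

rng = np.random.default_rng(4)
rgb = np.zeros((480, 640), np.uint8)
for _ in range(300):                                   # synthetic scene with corners
    x, y = int(rng.integers(0, 600)), int(rng.integers(0, 440))
    w, h = int(rng.integers(8, 40)), int(rng.integers(8, 40))
    cv2.rectangle(rgb, (x, y), (x + w, y + h), int(rng.integers(60, 255)), -1)

H_true = np.array([[1.0, 0.02, 15.0], [-0.01, 1.0, 8.0], [0.0, 0.0, 1.0]])
hsi_band = cv2.warpPerspective(rgb, H_true, (640, 480))  # simulated line-scan band

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(hsi_band, None)
k2, d2 = orb.detectAndCompute(rgb, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H_est, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

registered = cv2.warpPerspective(hsi_band, H_est, (640, 480))
print(np.round(H_est, 3))   # should approximate the inverse of H_true
```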

22 pages, 27725 KB  
Article
A Shadow Geometry Approach for Olive Tree Canopy Volume Estimation Using WorldView-3 Multispectral Imagery
by Raffaella Brigante, Valerio Baiocchi, Laura Marconi, Alessandra Vinci, Roberto Calisti, Luca Regni, Fabio Radicioni and Primo Proietti
Remote Sens. 2026, 18(5), 779; https://doi.org/10.3390/rs18050779 - 4 Mar 2026
Abstract
The accurate estimation of tree canopy volume is fundamental in precision agriculture for quantifying vegetation structure, biomass, and productivity in perennial cropping systems. This study investigates a shadow geometry approach for estimating olive tree canopy volumes from a single, very high-resolution WorldView-3 multispectral image. The method integrates multispectral classification for canopy and shadow delineation with a geometric model that infers canopy height from shadow measurements, accounting for solar position and terrain morphology. Two classification strategies were evaluated: object-based image analysis (OBIA) and pixel-based (PB) classification, each applied to the original eight-band multispectral image and to a derived dataset enriched with vegetation indices (NDVI—Normalized Difference Vegetation Index; NDRE—Normalized Difference Red Edge Index) and principal component analysis (PCA) components. The canopy volume was estimated by integrating classified canopy and shadow areas with shadow-derived canopy height. The methodology was tested in a Mediterranean olive orchard and validated against UAV-derived point clouds for approximately 700 trees. The results indicate that the approach captures spatial variability in canopy structure. OBIA applied to filtered PCA-enhanced imagery achieved the highest accuracy in canopy volume estimation (RMSE = 2.04 m³; R² = 0.56), outperforming the alternative PB classification applied to the original multispectral data. Overall, the study demonstrates the potential of single-image WorldView-3 data for rapid and scalable three-dimensional canopy characterization in precision agriculture.
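
The geometric core reduces to elementary trigonometry. The sketch below covers the flat-terrain case (the paper additionally corrects for terrain morphology, and the spheroidal crown model used for volume here is an assumption): height follows from shadow length and solar elevation, and volume from height plus the classified crown area.

```python
import math

def canopy_height_m(shadow_len_m, solar_elev_deg):
    """h = L * tan(solar elevation); flat-terrain case."""
    return shadow_len_m * math.tan(math.radians(solar_elev_deg))

def canopy_volume_m3(crown_area_m2, height_m, crown_base_m=0.8):
    """Spheroidal crown assumption: V = (2/3) * crown area * crown depth."""
    return (2.0 / 3.0) * crown_area_m2 * max(height_m - crown_base_m, 0.0)

# Example: 3.1 m shadow at 52 deg solar elevation, 9.5 m2 classified crown area.
h = canopy_height_m(3.1, 52.0)
print(f"height = {h:.2f} m, volume = {canopy_volume_m3(9.5, h):.2f} m3")
```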

25 pages, 6670 KB  
Article
A Novel Clustering-Based Methodology for Mapping Lunar Surface Minerals Using Moon Mineralogy Mapper (M3) Hyperspectral Data
by George Messinios, Konstantinos Koutroumbas and Olga Sykioti
Remote Sens. 2026, 18(5), 776; https://doi.org/10.3390/rs18050776 - 4 Mar 2026
Abstract
In this study, we introduce a novel clustering-based methodology for mapping lunar surface mineralogy using hyperspectral data from the Moon Mineralogy Mapper (M3). The proposed methodology utilizes the Hapke photometric model to convert reflectance values to Single Scattering Albedo (SSA) values and constructs the distribution/histogram of the positions where the SSA signatures of the pixels in the entire image exhibit absorptions. Then, based on this histogram, an appropriate feature representation for each pixel is defined, oriented to the characterization of the pixel's mineral composition. The k-means clustering algorithm is then applied to the new pixel representations. Mineral composition analysis for each cluster is then performed by evaluating whether the absorption features of each pixel match predefined absorption rules based on the spectral characteristics of eight selected typical minerals found on the Moon. Moreover, the proposed methodology provides mineral compositions within each pixel. The methodology is applied to an M3 dataset covering an area at the eastern Mare Serenitatis and the western Mare Tranquillitatis. The main results reveal spatial and compositional variations among clusters, and they are compatible with prior knowledge of the basaltic compositions of various lunar maria, demonstrating the reliability of the proposed methodology for planetary mineral exploration.
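
The feature construction and clustering can be miniaturized as follows. This is a simplification: a straight-line continuum hull and a fixed depth threshold stand in for the paper's absorption-position histogram, and the toy cube is low-noise by construction. Each pixel is encoded by which bands show an absorption, and k-means groups those indicator vectors.

```python
import numpy as np
from sklearn.cluster import KMeans

def absorption_bins(ssa, depth_thresh=0.02):
    """Indicator vector: 1 where the continuum-removed spectrum dips."""
    x = np.linspace(0.0, 1.0, ssa.size)
    continuum = ssa[0] + (ssa[-1] - ssa[0]) * x       # straight-line hull
    removed = ssa / np.maximum(continuum, 1e-9)
    return (removed < 1.0 - depth_thresh).astype(float)

rng = np.random.default_rng(5)
n_pix, n_bands = 500, 85                               # M3-like band count
cube = 0.2 + 0.002 * rng.random((n_pix, n_bands))      # SSA-like toy spectra
cube[:250, 40:48] -= 0.03                              # absorption in half the pixels
features = np.array([absorption_bins(s) for s in cube])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels[:250]), np.bincount(labels[250:]))  # two clean clusters
```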

15 pages, 1404 KB  
Article
A Deep Learning-Based Decision Support System for Cholelithiasis in MRI Data
by Ebru Hasbay, Caglar Cengizler, Mahmut Ucar, Nagihan Durgun, Hayriye Ulkucan Disli and Deniz Bolat
J. Clin. Med. 2026, 15(5), 1891; https://doi.org/10.3390/jcm15051891 - 2 Mar 2026
Abstract
Background: Cholelithiasis can lead to significant complications if not diagnosed and treated promptly. Recent advances in deep learning and the improved ability of computer systems to detect clinically significant textural and morphological patterns in magnetic resonance imaging (MRI) can help reduce the time and resources required for the radiological evaluation of the gallbladder and cholelithiasis. Objective: To detect cholelithiasis, a support system with a graphical user interface for magnetic resonance (MR) images of the gallbladder was implemented to reduce the manual effort and time required to identify gallstones. Method: A commonly used deep learning model for pixel-level mask generation and instance segmentation, the Mask Region-Based Convolutional Neural Network (Mask R-CNN), was modified, trained, and evaluated to provide a robust pipeline for automated analysis. The primary aim was to automatically locate and label the gallbladder in T2-weighted axial MR images to detect gallstones and highlight the visual characteristics of the target region, thereby supporting radiologists. All automation was designed to operate on a single optimal slice instead of the entire volume. While this approach limits generalisability, it offers a practical starting point for method development. This setup reflects a feasibility-oriented design, rather than a comprehensive diagnostic capability. The dataset included 788 axial MR images from different patients. Each image was labeled and segmented by an experienced radiologist to train and test the models at the image level. Results: The proposed model with squeeze-and-excitation (SE) modification improved classification accuracy, and at the image level, stone detection improved in terms of accuracy, precision, and specificity, although recall and F1 scores slightly decreased. Conclusions: The results show that the modified Mask R-CNN model can detect gallstones with up to 0.89 accuracy, supporting the clinical applicability of the proposed method.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
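
The squeeze-and-excitation (SE) modification is a standard block, sketched below in PyTorch: global average pooling "squeezes" each channel to a statistic, a bottleneck MLP "excites" per-channel gates, and the feature map is rescaled channel-wise. Where exactly the authors inserted it inside Mask R-CNN is not reproduced here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))                 # squeeze: per-channel statistic
        w = self.fc(w).view(b, c, 1, 1)        # excitation: channel gates in (0, 1)
        return x * w                           # channel-wise recalibration

feats = torch.randn(2, 256, 32, 32)            # e.g. one backbone feature level
print(SEBlock(256)(feats).shape)               # torch.Size([2, 256, 32, 32])
```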
