Search Results (4,920)

Search Parameters:
Keywords = image classification algorithms

17 pages, 2811 KB  
Article
Efficacy of Spectral-Aided Visual Enhancer in Classification of Esophageal Cancer
by Kok-Yean Koh, Arvind Mukundan, Riya Karmakar, Chaudhary Tirth Atulbhai, Tsung-Hsien Chen, Wei-Chun Weng and Hsiang-Chen Wang
Cancers 2026, 18(10), 1609; https://doi.org/10.3390/cancers18101609 - 15 May 2026
Abstract
Background/Objectives: Esophageal cancer is one of the major global causes of cancer mortality, and the 5-year survival rate remains below 20% because many cases are detected late. In this study, a Spectral-Aided Vision Enhancer (SAVE) algorithm was utilized to convert conventional white-light endoscopic images (WLI) into hyperspectral-like narrow-band imaging (NBI) images for machine-learning classification of Dysplasia, Normal, and Squamous Cell Carcinoma (SCC). Methods: A total of 762 WLI images obtained from Kaohsiung Medical University were augmented to 1074 using the Albumentations library, employing vertical flipping, horizontal flipping, and rotations. The SAVE conversion pipeline employs a 24-patch Macbeth color checker for calibration, γ-correction, CIE XYZ transformation, and multivariate regression to interpolate spectral bands, yielding an average color difference of 2.79 (CIEDE2000) from true NBI. Three machine learning/deep learning models, Random Forest (RF), Support Vector Machine (SVM), and a Convolutional Neural Network (CNN), were trained and evaluated on both the original WLI and SAVE datasets, and their performance was analyzed based on precision, recall, accuracy, and F1-score. Results: The CNN achieved an accuracy of 100% on SAVE data, compared to 93% for WLI. The accuracy of RF improved from 91% (WLI) to 96% (SAVE), while SVM increased from 79% to 84%. These improvements indicate that SAVE amplifies diagnostically valuable spectral variations, resulting in significant enhancements in pre-cancer/SCC sensitivity. Conclusions: The proposed SAVE method demonstrates significant potential for enhancing endoscopic imaging and advancing computer-aided diagnosis in esophageal cancer screening, with applicability in other gastrointestinal imaging scenarios as well. Full article
(This article belongs to the Special Issue Advances in Endoscopic Management of Esophageal Cancer)
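As an illustration of the γ-correction and CIE XYZ steps in the SAVE pipeline described above, a standard sRGB-to-XYZ conversion can be sketched as follows. This is a minimal sketch using the published sRGB/D65 constants; the paper's Macbeth-chart calibration and spectral-band regression are not reproduced here, and the function names are illustrative.

```python
# Standard sRGB (D65) linearization followed by the CIE XYZ matrix.
# Only the gamma-correction + XYZ step of a SAVE-like pipeline is shown;
# the Macbeth-chart regression to spectral bands is paper-specific.

SRGB_TO_XYZ = [
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
]

def srgb_to_linear(c: float) -> float:
    """Invert the sRGB gamma encoding (input in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    """Convert one gamma-encoded sRGB triplet to CIE XYZ (D65 white)."""
    lin = [srgb_to_linear(c) for c in rgb]
    return tuple(sum(m * v for m, v in zip(row, lin)) for row in SRGB_TO_XYZ)
```

For example, pure white (1.0, 1.0, 1.0) maps to approximately the D65 white point (0.9505, 1.0000, 1.0890), which is a quick sanity check for any such calibration step.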

32 pages, 14314 KB  
Review
Benchmark Datasets for Satellite Image Time Series Classification: A Review
by Anming Zhang, Zheng Zhang, Keli Shi and Ping Tang
Remote Sens. 2026, 18(10), 1581; https://doi.org/10.3390/rs18101581 - 15 May 2026
Abstract
Recent advances in satellite missions, particularly the Landsat, Sentinel, and Gaofen series, have led to the rapid accumulation of high-quality remote sensing data with frequent revisits. As these data have become more widely available, Satellite Image Time Series (SITS) have become an important tool for monitoring Earth surface dynamics. SITS now supports a wide range of applications, including precision agriculture, Land Use/Cover Change (LULCC) monitoring, environmental management, and disaster response. This growth has also promoted the development of advanced SITS classification datasets. However, existing reviews have mainly focused on SITS classification algorithms or specific applications, while systematic comparisons of public SITS benchmark datasets remain limited. This lack of synthesis makes it difficult for researchers to navigate fragmented resources and select datasets that match specific scientific or operational tasks. To address this gap, this paper provides a comprehensive review and analysis of 29 publicly available medium-to-high-resolution SITS classification benchmark datasets released between 2017 and 2025. These datasets are intended for training, testing, and validating land-cover classification algorithms, rather than for direct use as operational map products. We conduct a detailed statistical and comparative analysis of these datasets, focusing on their key characteristics across spectral, temporal, and spatial dimensions, as well as their labeling systems. In addition, this review summarizes the SITS classification algorithms that have been developed and benchmarked using these datasets. Finally, we identify the main challenges in constructing and applying SITS classification datasets and discuss future research directions, particularly in data reconstruction, multimodal fusion, change analysis, and advanced model architectures. 
This survey provides the research community with a systematic overview of SITS classification benchmark datasets and aims to support continued progress in this rapidly developing field. Full article

11 pages, 388 KB  
Article
Accuracy of Deep Learning Models in Detecting Mandibular Furcation Defects on Panoramic Radiographs
by Meric Kurumlu, Fatma Karacaoglu, Mürüvvet Kalkan, Irem Ulku, Erdem Akagunduz and Kaan Orhan
Diagnostics 2026, 16(10), 1500; https://doi.org/10.3390/diagnostics16101500 - 15 May 2026
Abstract
Background/Objectives: Furcation defects pose a significant challenge in the diagnosis and treatment planning of periodontal diseases. Accurate clinical identification of furcation involvement is essential for improving treatment outcomes. This study aimed to evaluate the accuracy and effectiveness of various artificial intelligence (AI) algorithms in detecting furcation defects (FD) in mandibular molars. Methods: A total of 654 panoramic radiographs were randomly selected from patients who visited the Department of Oral and Maxillofacial Radiology at the Faculty of Dentistry, Ankara University. Each image was labeled as either “healthy” or “FD” and subsequently preprocessed. The performance of different deep learning algorithms in identifying FD was subsequently evaluated. Results: In the classification models employed, the highest scores were calculated as accuracy 97.9%, precision 97.10%, sensitivity 97.08%, and F1 score 97.09% in the Xception model. In the segmentation tests, the highest scores were calculated as accuracy 99.96%, precision 99.26%, sensitivity 97.57%, and F1 score 98.41% in the ENet model. Conclusions: Results of this study indicated that the use of artificial intelligence systems in detecting furcation involvement in mandibular molar teeth in panoramic radiography images is promising. Further studies covering larger data sets, including maxillary molar teeth, will increase the success rates in detecting furcation involvement. Full article
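The accuracy, precision, sensitivity (recall), and F1 figures reported in entries like the one above follow the standard confusion-matrix definitions. A minimal sketch with hypothetical counts (not the study's data; `binary_metrics` is an illustrative name):

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard confusion-matrix metrics for a binary classifier."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # also called sensitivity
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}
```

With 90 true positives, 10 false positives, 10 false negatives, and 90 true negatives, all four metrics come out at 0.9, which matches the intuition that F1 sits between precision and recall.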

19 pages, 1186 KB  
Review
Applications of Artificial Intelligence in Endobronchial Ultrasound for Lung Cancer Diagnosis and Staging: A Scoping Review
by Jacobo Echeverri-Hoyos, Jaime A. Echeverri-Franco, Nicole Bonilla, Gustavo Monsalve-Morales and Eduardo Tuta-Quintero
Curr. Oncol. 2026, 33(5), 287; https://doi.org/10.3390/curroncol33050287 - 13 May 2026
Abstract
Introduction: Lung cancer remains highly lethal. Endobronchial ultrasound (EBUS) enables minimally invasive diagnosis and staging. Artificial intelligence (AI) improves image analysis and diagnostic accuracy, though current evidence is limited by retrospective, small, single-center studies. Methods: A scoping review following the Arksey–O’Malley, Levac, and JBI frameworks was reported in accordance with PRISMA-ScR. Databases were searched for studies (2015–2026) on AI in EBUS. Two reviewers screened studies, extracted standardized data, and performed a narrative synthesis grouped by algorithm type, application, and performance metrics. Results: A total of 26 studies were included. Of these, 73.1% (19/26) employed deep learning-based models, while 26.9% (7/26) used traditional or hybrid machine learning approaches. The most frequent clinical objective was diagnostic classification of malignancy (14/26; 53.8%), followed by segmentation or cytological analysis (5/26; 19.2%), anatomical navigation or lymph node station classification (3/26; 11.5%), and multimodal predictive or staging support models (4/26; 15.4%). Most studies were based on EBUS-derived images or videos (18/26; 69.2%), including both convex-probe and radial-probe applications. Studies were distributed among convex-probe EBUS for mediastinal staging, radial-probe EBUS for peripheral lesion assessment, and rapid on-site evaluation-based cytology analysis, reflecting diverse clinical contexts. Most models were developed using static images. Conclusions: AI applications in EBUS are predominantly based on deep learning and mainly focused on diagnostic classification, with growing but still limited exploration of segmentation, navigation, and multimodal approaches. The evidence reflects diverse clinical contexts and data sources, particularly image-based inputs, but remains unevenly distributed across applications. Full article
51 pages, 5699 KB  
Review
A Review of Crop Attribute Detection for Agricultural Harvesting Machinery
by Qian Zhang, Zhenxiang Wang, Wenfei Wu, Lizhang Xu, Zhenghui Zhao and Shaowei Liang
Agronomy 2026, 16(10), 973; https://doi.org/10.3390/agronomy16100973 - 13 May 2026
Abstract
Crop attribute detection, as a key component of intelligent agricultural harvesting machinery, plays a crucial role in harvesting efficiency, loss reduction, and autonomous operation control. Compared with existing reviews on artificial intelligence and sensing technologies in agriculture, this review focuses on crop attribute detection scenarios oriented toward the intelligent decision-making and control requirements of agricultural harvesting machinery. It mainly analyzes crop attributes that affect harvesting operations, as well as the sensors and algorithms involved in detecting these attributes, and further clarifies the relationship between detection methods and control decisions in agricultural harvesting machinery. For grain crops, the key attributes relevant to harvesting operations include plant height, plant density, spike number, crop lodging, canopy structure, and crop position. For fruit and vegetable crops, the key attributes relevant to harvesting operations include maturity, position, and quality. From the perspectives of multi-source data acquisition, data analysis, and attribute detection algorithms, the key technologies in the field of crop attribute detection are systematically summarized and analyzed, including sensors used in crop attribute detection, such as RGB, spectral, near-infrared, and LiDAR sensors, as well as data analysis and recognition approaches, such as image classification, object detection, and point cloud analysis. The complexity of field environments and the dynamics of machine operation are analyzed, highlighting the technical bottlenecks of current detection systems in environmental adaptability, real-time responsiveness, and resistance to interference. To address these challenges, feasible optimization directions were proposed, including multi-sensor fusion, weakly supervised learning, and few-shot learning. 
This review aims to provide systematic references and theoretical support for the coordinated development of crop detection and control decision-making in intelligent agricultural harvesting systems. Full article
(This article belongs to the Section Precision and Digital Agriculture)
31 pages, 8109 KB  
Article
An Explainable and Robust Framework for Telkari Jewelry Recognition Using Deep Feature Representations and ANOVA-Based Feature Selection
by Sukru Aykat, Sabahattin Akgul, Fevzi Cakmak and Tarik Demir
Appl. Sci. 2026, 16(10), 4874; https://doi.org/10.3390/app16104874 - 13 May 2026
Abstract
Telkari is a decorative art based on the handcrafting of fine silver wires. Identifying Telkari jewelry is quite challenging due to its diverse designs and styles. Recognizing these jewelry items, which come in thousands of varieties, requires experts in Telkari. In this study, we propose an approach for Telkari recognition using computer vision techniques, aiming to simulate the analysis of Telkari jewelry by Telkari experts. In the first stage, we created a dataset of Telkari by collecting images of Telkari products produced by Telkari masters in the Midyat district of Mardin, Turkey. In the second stage, the performance of the MobileNetV2 and ResNet50 deep learning models was examined using both direct classification and hyperparameter-tuned approaches. Furthermore, the feature vectors extracted from the deep learning models were trained using traditional machine learning algorithms, SVM, KNN, XGB, and RF, after selecting discriminative features with ANOVA. Among the hyperparameter-tuned models, ResNet50 outperformed MobileNetV2. Among the hybrid approaches, the SVM model trained on features obtained from ResNet50 achieved the highest performance with an overall accuracy of 99.56%. Furthermore, the interpretability techniques GradCAM, t-SNE, and LIME were used to examine the model’s decision-making processes, confirming that they focused on relevant visual regions and that predictions were based on the complex interactions among multiple features. In conclusion, this study provides a robust methodology for developing a highly accurate and transparent classification system for fields such as cultural heritage preservation and digitization. Full article
18 pages, 2017 KB  
Article
Optical Remote Sensing Image Classification Based on Quantum Statistics
by Xiaoli Li, Longlong Zhao, Hongzhong Li, Pan Chen, Luyi Sun, Shanxin Guo, Xuemei Zhao and Jinsong Chen
Electronics 2026, 15(10), 2075; https://doi.org/10.3390/electronics15102075 - 13 May 2026
Abstract
To address the difficulty of finely classifying complex optical remote sensing images, this paper innovatively proposes a new image classification method based on quantum statistics (QS) inspired by quantum physics. Each pixel in the image is regarded as a fermion, which is one of the fundamental particles in quantum systems. The energy of the energy level where fermions are located is described using the negative logarithm of the distribution that the spectrum of the pixel follows. The Fermi-Dirac distribution, a quantum statistics model used to describe the complex occupation pattern of energy levels by fermions, is employed to characterize the membership relationship between pixels and classes, instead of traditional distance measures and probability measures. Then, the cost function guiding the convergence of classification is defined based on free energy, which is used to describe whether a system is in a state of thermal equilibrium according to energy, temperature, and entropy. To minimize the free energy, the derivative method and the simulated annealing algorithm are adopted to estimate the optimal solution for model parameters. The proposed method can describe complex features more effectively, obtain fine classification results, and overcome the curse of dimensionality in high-dimensional image classification. Finally, the feasibility and effectiveness are verified through qualitative and quantitative analysis of multispectral and hyperspectral image classification experiments. Full article
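The Fermi-Dirac membership idea described above can be sketched as follows, assuming the pixel energy is the negative log of the class likelihood as stated in the abstract. The parameters `mu` (chemical potential) and `temperature` here are illustrative, not the paper's fitted values, and the function names are hypothetical:

```python
import math

def pixel_energy(likelihood: float) -> float:
    """Energy of a pixel: the negative log of its class likelihood, so
    high-likelihood pixels occupy low-energy levels."""
    return -math.log(likelihood)

def fermi_dirac(energy: float, mu: float, temperature: float) -> float:
    """Fermi-Dirac occupancy 1 / (exp((E - mu)/T) + 1), used as a soft
    class-membership score: low-energy pixels approach 1, high-energy
    pixels approach 0, with mu marking the 0.5 crossover."""
    return 1.0 / (math.exp((energy - mu) / temperature) + 1.0)
```

At `energy == mu` the membership is exactly 0.5, and it decays smoothly toward 0 as the energy rises, which is what replaces hard distance-based assignment in the proposed scheme.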

15 pages, 2078 KB  
Article
What You Read Is What You Classify: Highlighting Attributions to Text and Text-like Inputs
by Daniel S. Berman, Brian Merritt, Stanley Ta, Dana Udwin, Amanda Ernlund, Jeremy Ratcliff and Vijay Narayan
AI 2026, 7(5), 168; https://doi.org/10.3390/ai7050168 - 13 May 2026
Abstract
At present, there are no easily understood explainable artificial intelligence (AI) methods for discrete token inputs, like text. Most explainable AI techniques do not extend well to token sequences, where both local and global features matter, because state-of-the-art models, like transformers, tend to focus on global connections. Therefore, existing explainable AI algorithms fail by (i) identifying disparate tokens of importance, or (ii) assigning a large number of tokens a low value of importance. This method for explainable AI for token-based classifiers generalizes a mask-based explainable AI algorithm designed originally for images. It starts with an Explainer neural network that is trained to create masks to hide information not relevant for classification. Then, the Hadamard product of the mask and the continuous values of the classifier’s embedding layer is taken and passed through the classifier, changing the magnitude of the embedding vector but keeping the orientation unchanged. The Explainer is trained for a taxonomic classifier for nucleotide sequences, and it is shown that the masked segments are less relevant to classification than the unmasked ones. The method focuses on the importance of the token as a whole (i.e., a segment of the input sequence), producing a human-readable explanation. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
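The magnitude-scaling step described above, where the Hadamard product of a per-token mask and the embedding changes a vector's length but not its orientation, can be sketched as follows (`apply_token_mask` is an illustrative name, not the paper's API):

```python
def apply_token_mask(embeddings, mask):
    """Scale each token's embedding vector by its mask value in [0, 1].

    Because every component of a token's vector is multiplied by the
    same scalar, the vector's direction is preserved; only its
    magnitude shrinks. This is how masked (less relevant) tokens are
    down-weighted before being passed back through the classifier."""
    return [[m * x for x in vec] for vec, m in zip(embeddings, mask)]
```

With a mask of [1.0, 0.2, 0.0], the first token is untouched, the second keeps its direction at 20% of its length, and the third is suppressed entirely.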

27 pages, 8481 KB  
Article
High-to-Low Spectral Mapping for Cross-System Feature Adaptation in Medical Hyperspectral Imaging
by Javier Santana-Nunez, Max Verbers, Carlos Vega, Francesca Manni, Raquel Leon, Jesús Morera Molina, Juan F. Piñeiro, Alfonso Lagares, Luis Jimenez-Roldan, Gustavo M. Callico, Svitlana Zinger and Himar Fabelo
Bioengineering 2026, 13(5), 549; https://doi.org/10.3390/bioengineering13050549 - 13 May 2026
Abstract
Hyperspectral (HS) imaging has proven to be a promising intraoperative tool for tissue discrimination. However, obtaining representative datasets for intraoperative imaging remains challenging due to the complexity of surgical workflows and the sensitivity of the operating environments. Hence, developing new methods for cross-system feature adaptation could address this limitation. This work proposes a method for mapping high-resolution spectral data into lower-resolution sensor-conditioned domains, generating synthetic HS data that replicate the spectral features of the target system. We assessed the mapped data using public HS datasets and quantified spectral similarities using different metrics. Additionally, we evaluated the method with an HS classification framework for an intraoperative brain tumour classification problem. Results demonstrate that the synthetic data achieve high spectral alignment with the original, real data captured by the target system. The brain tumour classification results show comparable performance between data modalities. Overall, this work provides a way to adapt existing HS datasets to complement newly acquired data, accelerating the development of artificial intelligence algorithms. This is particularly relevant in medical research, and especially in neurosurgery, where the complexity of acquisition environments limits the collection of large datasets. Full article

26 pages, 3081 KB  
Article
Radiologic Evaluation of Odontogenic Sinusitis and Its Etiologic Factors: Lessons Learned from a Retrospective Study with a Proposed Imaging-Guided Management Pathway
by Kamil Nelke, Monika Morawska-Kochman, Maciej Janeczek, Agata Małyszek, Ömer Uranbey, Klaudiusz Łuczak, Jan Nienartowicz, India Maag, Angela Rosa Caso and Maciej Dobrzyński
J. Clin. Med. 2026, 15(10), 3724; https://doi.org/10.3390/jcm15103724 - 12 May 2026
Abstract
Introduction: Odontogenic sinusitis (ODS) is an underrecognized cause of maxillary sinus inflammation and is frequently associated with dental, periodontal, endodontic, and iatrogenic factors. Accurate identification of the odontogenic source is essential for appropriate treatment planning. Cone-beam computed tomography (CBCT) allows detailed evaluation of the maxillary sinus, adjacent teeth, alveolar bone, and periodontal structures, and may improve the radiologic differentiation of ODS. Materials and Methods: This retrospective observational study analyzed radiologic data from patients evaluated and treated by the authors for suspected odontogenic sinusitis between 2019 and 2026. The final study group included 85 patients with CBCT-based evidence of odontogenic pathology affecting the maxillary sinus. CBCT scans were reviewed to identify tooth-related and treatment-related etiologic factors associated with ODS. Based on the radiologic findings, the authors developed a CBCT-based classification of odontogenic etiologies and proposed an imaging-guided management algorithm. Results: CBCT identified a broad spectrum of odontogenic factors associated with maxillary sinus disease. The most relevant radiologic patterns included endodontic and periapical pathology, periodontal or combined endo-periodontal disease, post-extraction inflammatory changes, odontogenic cysts, oro-antral communication or fistula, retained roots or teeth, displaced endodontic materials, and grafting or implant-related complications. These findings were organized into 16 radiologic categories reflecting the principal etiologic pathways of ODS. The proposed classification facilitated correlation between radiologic presentation and the recommended dental, surgical, and otolaryngologic treatment approach. 
Conclusions: CBCT is a valuable imaging modality for identifying odontogenic causes of maxillary sinus inflammation and provides more precise diagnostic information than conventional radiography alone. A structured CBCT-based evaluation may improve etiologic diagnosis, support multidisciplinary decision-making, and help guide individualized management of patients with ODS. Full article
18 pages, 1505 KB  
Article
CAEP: Cross-Modal Adaptive Embedding Prediction for Self-Supervised Modulation Classification
by Yuanfeng Wu, Yuhang Hong, Zuqi Ma, Ao Wu, Xiang Huang, Mengfan Xue and Shuyuan Yang
Electronics 2026, 15(10), 2062; https://doi.org/10.3390/electronics15102062 - 12 May 2026
Abstract
Although self-supervised learning methods have shown promising progress in addressing the issue of scarce labeled data in automatic modulation classification, they remain constrained by heavy reliance on extensive negative samples and an inability to effectively capture inter-modal feature correlations. To overcome these limitations, we propose a novel self-supervised automatic modulation classification algorithm based on multi-path embedding prediction, termed CAEP. In CAEP, the raw signal is first dynamically segmented into current and future sub-series. Then, dedicated encoders are utilized to extract embeddings for both sub-series and leverage current information to predict future states, while randomly masking the corresponding time–frequency images transformed from the time-domain signal to predict the obscured spectral components. Furthermore, latent temporal embeddings are deployed to predict information within the time–frequency domain to achieve cross-modal retrieval. Finally, a classification head is connected alongside a temporal modal encoder, which is fine-tuned using a limited set of labeled samples to accomplish modulation classification. Experimental results on two benchmark datasets demonstrate that the proposed method achieves robust performance across varying noise conditions. Full article

24 pages, 7047 KB  
Article
Non-Contact Detection of Apnea-like Breathing Cessations Using Laser Speckle Pattern Analysis
by Ayuushi Dutta, Amir Shemer, Ariel Schwarz, Yossef Danan and Yevgeny Beiderman
Sensors 2026, 26(10), 3042; https://doi.org/10.3390/s26103042 - 12 May 2026
Abstract
Sleep apnea is a prevalent sleep-related breathing disorder characterized by recurrent cessations or reductions in airflow during sleep. It significantly impacts quality of life, yet current diagnostic methods like polysomnography (PSG) are expensive and uncomfortable, limiting accessibility and ease of use. We developed a novel non-contact biosensing system using secondary laser speckle pattern analysis and dedicated image processing algorithms for detecting apnea-like breathing cessations. The proposed method was tested on 14 healthy subjects with diverse body characteristics, aged 22–50 years (mean 33.1 ± 9.3 years) and body mass index (BMI) ranging from 19.6 to 28.7 kg/m² (mean 24.6 ± 3.0 kg/m²), at different ‘simulated’ sleeping positions (back-lying, stomach-lying and side-lying), using voluntary breath-holding protocols to simulate apnea-like cessations lasting 10–20 s (short duration) and 20–30 s (long duration). To evaluate the performance of the system without selection bias, two complementary five-fold cross-validation procedures were applied: a participant-level and a class-level stratification. Using class-wise stratification, the system achieved an overall accuracy of 87.0 ± 3.0% (95% CI: [85.3%, 88.7%]), a long-cessation sensitivity of 91.0 ± 12.4% (95% CI: [83.8%, 98.2%]), and a short-cessation sensitivity of 88.0 ± 11.0% (95% CI: [81.6%, 94.4%]). The two-class classification strategy confirms the robustness of the approach, supporting the potential of secondary laser speckle pattern analysis as a low-cost, non-contact alternative for home-based sleep apnea screening. Full article
(This article belongs to the Special Issue Unobtrusive Sensing for Continuous Health Monitoring)
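The participant-level stratification mentioned above guarantees that no subject's recordings are split between training and test folds, avoiding identity leakage. A minimal, illustrative sketch of such grouping (the study's actual fold construction is not specified in the abstract; `participant_folds` is a hypothetical helper, and sklearn's `GroupKFold` is the usual library route):

```python
def participant_folds(sample_subjects, n_folds=5):
    """Assign every sample to a fold by its subject identifier, so that
    all of a participant's recordings land in the same fold (no subject
    appears in both the training and the test split)."""
    subjects = sorted(set(sample_subjects))
    fold_of = {s: i % n_folds for i, s in enumerate(subjects)}
    return [fold_of[s] for s in sample_subjects]
```

Round-robin assignment keeps fold sizes roughly balanced by subject count; class-level stratification, the other procedure the study mentions, would instead balance the cessation classes across folds.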

38 pages, 5046 KB  
Article
Using Sentinel-2 Time Series to Monitor the Loss of Individual Large Trees in Humanized Landscapes
by João Gonçalo Soutinho, Kerri T. Vierling, Lee A. Vierling, Jörg Müller and João F. Gonçalves
Remote Sens. 2026, 18(10), 1519; https://doi.org/10.3390/rs18101519 - 12 May 2026
Abstract
Large trees are keystone ecological structures that sustain biodiversity and ecosystem services, particularly in human-altered landscapes. However, their persistence is increasingly threatened by land-use change, urban expansion, and inadequate monitoring. This study develops and validates a scalable, automated framework for monitoring the loss of large individual trees using satellite image time series and breakpoint detection. We compared four spectral indices (SIs) derived from Sentinel-2 imagery (2015–2025), the Enhanced Vegetation Index 2 (EVI2), Normalized Burn Ratio (NBR), Normalized Difference Red Edge (NDRE), and Normalized Difference Vegetation Index (NDVI), for 691 georeferenced trees in Lousada, northern Portugal. Data were accessed and processed in Google Earth Engine and analyzed using a custom R-based workflow, including cloud masking, gap-filling, temporal interpolation, upper-envelope smoothing, deseasonalization, and break detection. Five breakpoint detection algorithms were compared: BFAST, energy-divisive, linear regression of structural changes, wild-binary segmentation, and change point models. Detected breakpoints were subsequently post-validated to determine whether they were associated with declines in SIs, using three pre-/post-breakpoint methods: comparisons of short- and long-term medians and a randomized trend analysis. As a baseline, these algorithms and post-validation logic were compared against the Continuous Change Detection and Classification (CCDC) approach. The results indicate moderate but consistent break detection performance, with a maximum balanced accuracy of 73% (for EVI2 or NDVI, using the energy-divisive algorithm coupled with the long-term median post-validator) under conservative validation criteria and high specificity for surviving trees. CCDC ranked comparatively lower at 62%.
Algorithm performance varied substantially, with the energy-divisive algorithm providing the most conservative detection and wild-binary segmentation yielding higher sensitivity. Performance was further influenced by tree structural attributes and species identity: larger, taller, and more isolated trees, as well as particular genera, showed higher detection accuracy, with the genera Eucalyptus, Tilia, and Celtis yielding the top results (65–79%) and Quercus, Castanea, and Platanus the lowest (60–62%). By integrating satellite observations with large-tree inventory data from the Green Giants citizen science project, this study demonstrates the potential of decentralized, Earth observation-based monitoring to support tree-level loss assessments in fragmented landscapes. The proposed framework provides a transferable foundation for wide-scale monitoring of large trees in peri-urban and mixed-use environments. Full article
(This article belongs to the Special Issue Urban Ecology Monitoring Using Remote Sensing)
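The spectral indices and the median-based post-validation described in this abstract can be sketched briefly. The snippet below is an illustrative Python/numpy reconstruction (the study's actual workflow is an R-based pipeline fed from Google Earth Engine); the `min_drop` threshold and function names are assumptions chosen for demonstration, not values from the paper:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index: 2.5 (NIR - Red) / (NIR + 2.4 Red + 1)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def median_drop_validator(series, bp, min_drop=0.1):
    """Post-validate a candidate breakpoint at index `bp`: keep it as a
    loss event only if the post-break median falls below the pre-break
    median by at least `min_drop` (an illustrative threshold)."""
    pre = np.median(series[:bp])
    post = np.median(series[bp:])
    return (pre - post) >= min_drop

# Toy SI time series: a stable canopy signal, then a drop after index 6.
si = np.array([0.80, 0.82, 0.79, 0.81, 0.80, 0.83, 0.30, 0.28, 0.31, 0.29])
print(median_drop_validator(si, 6))  # the clear decline is confirmed: True
```

A stable series (no decline across the candidate break) would fail the same check, which is how the post-validators in the paper filter spurious breakpoints reported by the detection algorithms.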
25 pages, 1656 KB  
Article
Federated Learning with Differential Privacy for Ultrasound Breast Cancer Classification: An Empirical Study
by Nursultan Makhanov, Beibit Abdikenov, Tomiris Zhaksylyk and Temirlan Karibekov
J. Imaging 2026, 12(5), 205; https://doi.org/10.3390/jimaging12050205 - 11 May 2026
Abstract
Breast cancer is a critical global health challenge, and deep learning shows transformative potential for medical image classification. However, privacy regulations such as HIPAA and GDPR create barriers to centralized data aggregation across institutions. This paper presents an empirical evaluation of federated learning (FL) for breast cancer classification in ultrasound images, systematically comparing seven deep learning architectures (ResNet-50, VGG16, VGG19, DenseNet-121, MobileNetV2, Vision Transformer, CoAtNet) across three FL algorithms (FedAvg, FedProx, FedOpt) with client-side differential privacy (DP). Using a simulated federation of eight institutions, we evaluate three clinically relevant classification scenarios. Federated models achieve performance comparable to centralized baselines—98.52% accuracy for normal/abnormal screening, 89.53% for three-class classification—with ViT-small and DenseNet-121 exceeding their centralized counterparts in several configurations. Under strong DP constraints (noise multiplier η = 2.0, yielding conservative privacy budget estimates of ε < 1.0 with δ = 10⁻⁵), screening accuracy remains above 82%, though diagnostic tasks incur substantial degradation (best: 68.42%). Our findings provide empirical guidance on architecture selection, FL algorithm choice, and privacy-utility trade-offs for privacy-preserving breast cancer diagnosis, while identifying key challenges for clinical deployment. Full article
(This article belongs to the Section Medical Imaging)
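The FedAvg aggregation and client-side DP mechanism referenced in this abstract can be sketched as follows. This is a minimal numpy illustration under standard definitions of the two techniques, not the paper's implementation; the clipping norm and function names are assumptions:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client model parameters, weighting each client
    by its local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def dp_clip_and_noise(update, clip_norm=1.0, noise_multiplier=2.0, rng=None):
    """Client-side Gaussian DP mechanism: clip the update to L2 norm
    `clip_norm`, then add noise with std = noise_multiplier * clip_norm
    (the paper's strongest setting uses noise multiplier 2.0)."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

# One aggregation round over two clients of unequal size.
avg = fedavg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 3])
print(avg)  # weighted toward the larger client: [2.5 2.5]
```

Larger noise multipliers tighten the privacy budget ε but degrade accuracy, which is the privacy-utility trade-off the paper quantifies per task.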
28 pages, 8354 KB  
Article
Research on Fracture Identification of Tunnel Face Based on the CBAM-UNet Model
by Wenfeng Tu, Qingpeng Ma, Weiting Wang, Chuan Wang, Xinbo Jiang, Ning Zhang, Fan Yang and Hao Zou
Electronics 2026, 15(10), 2037; https://doi.org/10.3390/electronics15102037 - 11 May 2026
Abstract
The extraction of fracture parameters and the classification of surrounding strata are crucial criteria for assessing the stability of a tunnel face. To overcome the limitations of conventional manual sketching, this paper proposes a deep learning-based technique for tunnel face fracture identification, extraction, and surrounding strata classification. From on-site tunnel face images, we construct a comprehensive database of 20,000 samples. By refining the conventional UNet network and incorporating channel and spatial attention (the Convolutional Block Attention Module, CBAM), we achieve automated identification of fracture traces on the tunnel face with strong recognition results. Through training and testing the CBAM-UNet model on this database, we conduct a comparative analysis against alternative deep learning approaches and conventional edge detection algorithms. The results demonstrate the superior performance of the CBAM-UNet model in fracture recognition. Subsequently, we perform statistical analysis and grouping of the identified fractures and calculate integrity indices of the surrounding rock mass, enabling rapid assessment of the tunnel face's surrounding rock grade. Full article
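The channel and spatial attention that CBAM adds to UNet can be sketched in numpy. This simplified illustration shows only the attention mechanics: the full module applies a 7×7 convolution to the stacked channel-pooled maps (omitted here), and all names and shapes are illustrative rather than taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM channel attention: a shared two-layer MLP is applied to
    global average- and max-pooled descriptors; the summed outputs are
    squashed into per-channel scales. x has shape (C, H, W)."""
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))             # (C,) in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x):
    """CBAM spatial attention: average- and max-pool along the channel
    axis to get two (H, W) maps; the full module convolves their stack
    with a 7x7 kernel, which this sketch replaces by a simple sum."""
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    return x * sigmoid(avg + mx)[None, :, :]
```

In CBAM-UNet the two sub-modules are applied in sequence (channel, then spatial) inside the encoder/decoder blocks, letting the network emphasize fracture-relevant channels and image regions before segmentation.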