Search Results (803)

Search Parameters:
Keywords = X-ray classification

53 pages, 29053 KB  
Article
Integration of Multispectral and Hyperspectral Satellite Imagery for Mineral Mapping of Bauxite Mining Wastes in Amphissa Region, Greece
by Evlampia Kouzeli, Ioannis Pantelidis, Konstantinos G. Nikolakopoulos, Harilaos Tsikos and Olga Sykioti
Remote Sens. 2026, 18(2), 342; https://doi.org/10.3390/rs18020342 - 20 Jan 2026
Abstract
The mineral-mapping capability of three spaceborne sensors with different spatial and spectral resolutions, the Environmental Mapping and Analysis Program (EnMAP), Sentinel-2, and WorldView-3 (WV3), is assessed for bauxite mining wastes in Amphissa, Greece, with validation based on ground samples. We applied the well-established Linear Spectral Unmixing (LSU) and Spectral Angle Mapping (SAM) classification techniques utilizing endmembers of two established spectral libraries and incorporated ground data through geochemical and mineralogical analyses, X-ray fluorescence (XRF), Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), and X-ray Diffraction (XRD), to assess classification performance. The main lithologies in this area are bauxites and limestones; therefore, aluminum oxyhydroxides, calcite, and iron oxide minerals were the dominant phases as indicated by the XRF/XRD results. Almost all target minerals were mapped with the three sensors and both methods. With both methods, the performance of EnMAP is limited by its coarser spatial resolution despite its higher spectral resolution. Sentinel-2 is most effective for mapping iron-bearing minerals, particularly hematite, due to its higher spatial resolution and the presence of diagnostic iron oxide absorption features in the VNIR. WorldView-3 Shortwave Infrared (WV3-SWIR) performs better when mapping calcite, benefiting from its eight SWIR spectral bands and very high spatial resolution (3.7 m). Hematite and calcite yield the highest accuracy, especially with SAM, reaching 0.80 with Sentinel-2 (10 m) for hematite and 0.87 with WV3-SWIR (3.7 m) for calcite. AlOOH shows higher accuracy with SAM, ranging from 0.57 to 0.80 across the sensors, while LSU shows lower accuracy, ranging from 0.20 to 0.73 across the sensors. This study showcases each sensor’s ability to map minerals while also demonstrating that spectral coverage, spatial and spectral resolution, and the characteristics of the selected endmembers exert a critical influence on the accuracy of mineral mapping in mine waste.
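As context for the Spectral Angle Mapping technique named in this abstract, here is a minimal NumPy sketch of the SAM decision rule, assuming a reflectance cube and a small matrix of library endmember spectra; the angle threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np

def spectral_angle_mapper(cube, endmembers, max_angle=0.10):
    """Assign each pixel to the endmember with the smallest spectral angle.

    cube:       (rows, cols, bands) reflectance array
    endmembers: (n_classes, bands) library spectra (e.g., AlOOH, calcite, hematite)
    max_angle:  pixels whose best angle (radians) exceeds this stay unclassified (-1)
    """
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    # Normalising both sides makes the dot product equal to cos(theta)
    p = pixels / (np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12)
    e = endmembers / (np.linalg.norm(endmembers, axis=1, keepdims=True) + 1e-12)
    angles = np.arccos(np.clip(p @ e.T, -1.0, 1.0))      # (n_pixels, n_classes)
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1          # leave weak matches unclassified
    return labels.reshape(cube.shape[:2])
```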
38 pages, 16831 KB  
Article
Hybrid ConvNeXtV2–ViT Architecture with Ontology-Driven Explainability and Out-of-Distribution Awareness for Transparent Chest X-Ray Diagnosis
by Naif Almughamisi, Gibrael Abosamra, Adnan Albar and Mostafa Saleh
Diagnostics 2026, 16(2), 294; https://doi.org/10.3390/diagnostics16020294 - 16 Jan 2026
Abstract
Background: Chest X-ray (CXR) is widely used for the assessment of thoracic diseases, yet automated multi-label interpretation remains challenging due to subtle visual patterns, overlapping anatomical structures, and frequent co-occurrence of abnormalities. While recent deep learning models have shown strong performance, limitations in interpretability, anatomical awareness, and robustness continue to hinder their clinical adoption. Methods: The proposed framework employs a hybrid ConvNeXtV2–Vision Transformer (ViT) architecture that combines convolutional feature extraction for capturing fine-grained local patterns with transformer-based global reasoning to model long-range contextual dependencies. The model is trained exclusively using image-level annotations. In addition to classification, three complementary post hoc components are integrated to enhance model trust and interpretability. A segmentation-aware Gradient-weighted class activation mapping (Grad-CAM) module leverages CheXmask lung and heart segmentations to highlight anatomically relevant regions and quantify predictive evidence inside and outside the lungs. An ontology-driven neuro-symbolic reasoning layer translates Grad-CAM activations into structured, rule-based explanations aligned with clinical concepts such as “basal effusion” and “enlarged cardiac silhouette”. Furthermore, a lightweight out-of-distribution (OOD) detection module based on confidence scores, energy scores, and Mahalanobis distance scores is employed to identify inputs that deviate from the training distribution. Results: On the VinBigData test set, the model achieved a macro-AUROC of 0.9525 and a micro-AUROC of 0.9777 when trained solely with image-level annotations. External evaluation further demonstrated strong generalisation, yielding macro-AUROC scores of 0.9106 on NIH ChestX-ray14 and 0.8487 on CheXpert (frontal views). Both Grad-CAM visualisations and ontology-based reasoning remained coherent on unseen data, while the OOD module successfully flagged non-thoracic images. Conclusions: Overall, the proposed approach demonstrates that hybrid convolutional neural network (CNN)–vision transformer (ViT) architectures, combined with anatomy-aware explainability and symbolic reasoning, can support automated chest X-ray diagnosis in a manner that is accurate, transparent, and safety-aware.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
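The abstract above mentions confidence, energy, and Mahalanobis scores for out-of-distribution detection without giving details; as one hedged illustration (not the authors' implementation), an energy score can be computed directly from the classifier logits, with the flagging threshold assumed to come from validation data.

```python
import torch

def energy_ood_score(logits, temperature=1.0):
    """Energy-style OOD score: larger values indicate inputs that look less like
    the training distribution. logits: (batch, n_classes) raw model outputs."""
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

def flag_ood(model, images, threshold):
    """Boolean mask of inputs whose energy exceeds a validation-chosen threshold."""
    model.eval()
    with torch.no_grad():
        scores = energy_ood_score(model(images))
    return scores > threshold
```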

18 pages, 1289 KB  
Article
Machine Learning-Based Automatic Diagnosis of Osteoporosis Using Bone Mineral Density Measurements
by Nilüfer Aygün Bilecik, Levent Uğur, Erol Öten and Mustafa Çapraz
J. Clin. Med. 2026, 15(2), 549; https://doi.org/10.3390/jcm15020549 - 9 Jan 2026
Abstract
Background: Osteoporosis and osteopenia are prevalent bone diseases characterized by reduced bone mineral density (BMD) and an increased risk of fractures, particularly in postmenopausal women. While dual-energy X-ray absorptiometry (DXA) remains the gold standard for diagnosis, it has limitations regarding accessibility, cost, and predictive capacity for fracture risk. Machine learning (ML) approaches offer an opportunity to develop automated and more accurate diagnostic models by incorporating both BMD values and clinical variables. Method: This study retrospectively analyzed BMD data from 142 postmenopausal women, classified into 3 diagnostic groups: normal, osteopenia, and osteoporosis. Various supervised ML algorithms—including Support Vector Machines (SVM), k-Nearest Neighbors (k-NN), Decision Trees (DT), Naive Bayes (NB), Linear Discriminant Analysis (LDA), and Artificial Neural Networks (ANN)—were applied. Feature selection techniques such as ANOVA, CHI2, MRMR, and Kruskal–Wallis were used to enhance model performance, reduce dimensionality, and improve interpretability. Model performance was evaluated using 10-fold cross-validation based on accuracy, true positive rate (TPR), false negative rate (FNR), and AUC values. Results: Among all models and feature selection combinations, SVM with ANOVA-selected features achieved the highest classification accuracy (94.30%) and 100% TPR for the normal class. Feature sets based on traditional diagnostic regions (L1–L4, femoral neck, total femur) also showed high accuracy (up to 90.70%) but were generally outperformed by statistically selected features. CHI2 and MRMR methods also yielded robust results, particularly when paired with SVM and k-NN classifiers. The results highlight the effectiveness of combining statistical feature selection with ML to enhance diagnostic precision for osteoporosis and osteopenia. Conclusions: Machine learning algorithms, when integrated with data-driven feature selection strategies, provide a promising framework for automated classification of osteoporosis and osteopenia based on BMD data. ANOVA emerged as the most effective feature selection method, yielding superior accuracy across all classifiers. These findings support the integration of ML-based decision support tools into clinical workflows to facilitate early diagnosis and personalized treatment planning. Future studies should explore more diverse and larger datasets, incorporating genetic, lifestyle, and hormonal factors for further model enhancement.
(This article belongs to the Section Orthopedics)
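For readers who want a concrete picture of the ANOVA-filtered SVM pipeline evaluated above, a hypothetical scikit-learn sketch follows; the feature count k, kernel, and random seed are illustrative assumptions rather than the study's exact settings.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_svm_anova(X, y, k_features=10):
    """10-fold cross-validated accuracy of an ANOVA-filtered SVM on BMD features.

    X: (n_patients, n_features) BMD/clinical variables
    y: labels (0 = normal, 1 = osteopenia, 2 = osteoporosis)
    """
    pipeline = make_pipeline(
        StandardScaler(),                                   # SVMs are scale-sensitive
        SelectKBest(score_func=f_classif, k=k_features),    # ANOVA F-test feature filter
        SVC(kernel="rbf", C=1.0),
    )
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
    scores = cross_val_score(pipeline, X, y, cv=cv, scoring="accuracy")
    return scores.mean(), scores.std()
```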

9 pages, 1301 KB  
Article
The Impact of CT Imaging on the Diagnosis of Fragility Fractures of the Pelvis: An Observational Prospective Multicenter Study
by Michał Kułakowski, Karol Elster, Wojciech Iluk, Dawid Pacek, Tomasz Gieroba, Michał Wojciechowski, Łukasz Pruffer, Magdalena Krupka, Jarosław Witkowski, Magdalena Grzonkowska and Mariusz Baumgart
J. Clin. Med. 2026, 15(2), 531; https://doi.org/10.3390/jcm15020531 - 9 Jan 2026
Abstract
Background/Objectives: Fragility fractures of the pelvis (FFPs) are a significant concern in the elderly population, often leading to severe morbidity and mortality. This study aims to evaluate the diagnostic challenges, clinical outcomes, and mortality rates associated with FFPs in patients referred to multiple hospitals. Methods: A total of 99 patients with suspected pelvic fragility fractures were enrolled between January 2023 and June 2025. Initial diagnoses were made using plain X-rays, with computed tomography (CT) utilized to assess posterior ring fractures. Data on demographics, fracture types according to the Fragility Fracture of the Pelvis (FFP) Classification, hemoglobin levels, and mortality rates were collected and analyzed. Results: The findings revealed that while plain X-rays identified only anterior pelvic ring fractures, CT scans detected posterior ring fractures in 60.6% of cases. Patients with Nakatani II and III pelvic ramus fractures exhibited the most significant decreases in hemoglobin levels. The overall mortality rate was found to be 13.13%, with the highest rates observed in FFP I (13.5%) and FFP II (11.9%) groups. Conclusions: The findings of this study underscore the importance of CT imaging in the diagnosis of FFPs and highlight the need for close monitoring of hemoglobin levels in affected patients. This study also emphasizes the increased mortality risk associated with more complex fracture types. Future research should focus on evaluating functional independence and treatment outcomes to guide clinical decision-making in managing fragility fractures of the pelvis.

28 pages, 3824 KB  
Article
Comparison Between Early and Intermediate Fusion of Multimodal Techniques: Lung Disease Diagnosis
by Ahad Alloqmani and Yoosef B. Abushark
AI 2026, 7(1), 16; https://doi.org/10.3390/ai7010016 - 7 Jan 2026
Abstract
Early and accurate diagnosis of lung diseases is essential for effective treatment and patient management. Conventional diagnostic models trained on a single data type often miss important clinical information. This study explored a multimodal deep learning framework that integrates cough sounds, chest radiographs (X-rays), and computed tomography (CT) scans to enhance disease classification performance. Two fusion strategies, early and intermediate fusion, were implemented and evaluated against three single-modality baselines. The datasets were collected from different sources. Each dataset underwent preprocessing steps, including noise removal, grayscale conversion, image cropping, and class balancing, to ensure data quality. Convolutional neural network (CNN) and Extreme Inception (Xception) architectures were used for feature extraction and classification. The results show that multimodal learning achieves superior performance compared with single-modality models. The intermediate fusion model achieved 98% accuracy, while the early fusion model reached 97%. In contrast, single CXR and CT models achieved 94%, and the cough sound model achieved 79%. These results confirm that multimodal integration, particularly intermediate fusion, offers a more reliable framework for automated lung disease diagnosis.
(This article belongs to the Section Medical & Healthcare AI)
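To make the early-versus-intermediate fusion distinction concrete, here is a simplified PyTorch sketch of intermediate (feature-level) fusion under assumed encoder sizes and modalities (cough features, CXR, CT); it is an illustrative reading of the abstract, not the paper's architecture.

```python
import torch
import torch.nn as nn

class IntermediateFusionNet(nn.Module):
    """Each modality has its own encoder; features are concatenated before the head."""

    def __init__(self, cough_dim=128, img_feat_dim=256, n_classes=4):
        super().__init__()
        self.cough_encoder = nn.Sequential(nn.Linear(cough_dim, 64), nn.ReLU())
        self.cxr_encoder = self._tiny_image_encoder(img_feat_dim)
        self.ct_encoder = self._tiny_image_encoder(img_feat_dim)
        self.classifier = nn.Linear(64 + 2 * img_feat_dim, n_classes)

    @staticmethod
    def _tiny_image_encoder(out_dim):
        # Stand-in for a CNN/Xception backbone
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, out_dim), nn.ReLU())

    def forward(self, cough, cxr, ct):
        fused = torch.cat([self.cough_encoder(cough),
                           self.cxr_encoder(cxr),
                           self.ct_encoder(ct)], dim=1)   # intermediate fusion step
        return self.classifier(fused)
```

Early fusion would instead combine the raw (or minimally processed) inputs before a single shared encoder, which is the design choice the study contrasts against.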

41 pages, 2644 KB  
Article
Anatomy-Guided Hybrid CNN–ViT Model with Neuro-Symbolic Reasoning for Early Diagnosis of Thoracic Diseases Multilabel
by Naif Almughamisi, Gibrael Abosamra, Adnan Albar and Mostafa Saleh
Diagnostics 2026, 16(1), 159; https://doi.org/10.3390/diagnostics16010159 - 4 Jan 2026
Abstract
Background/Objectives: The clinical adoption of AI in radiology requires models that balance high accuracy with interpretable, anatomically plausible reasoning. This study presents an integrated diagnostic framework that addresses this need by unifying a hybrid deep-learning architecture with explicit anatomical guidance and neuro-symbolic inference. Methods: The proposed system employs a dual-path model: an enhanced EfficientNetV2 backbone extracts hierarchical local features, whereas a refined Vision Transformer captures global contextual dependencies across the thoracic cavity. These representations are fused and critically disciplined through auxiliary segmentation supervision using CheXmask. This anchors the learned features to lung and cardiac anatomy, reducing reliance on spurious artifacts. This anatomical basis is fundamental to the interpretability pipeline. It confines Gradient-weighted Class Activation Mapping (Grad-CAM) visual explanations to clinically valid regions. Then, a novel neuro-symbolic reasoning layer is introduced. Using a fuzzy logic engine and radiological ontology, this module translates anatomically aligned neural activations into structured, human-readable diagnostic statements that explicitly articulate the model’s clinical rationale. Results: Evaluated on the NIH ChestX-ray14 dataset, the framework achieved a macro-AUROC of 0.9056 and a macro-accuracy of 93.9% across 14 pathologies, with outstanding performance on emphysema (0.9694), hernia (0.9711), and cardiomegaly (0.9589). The model’s generalizability was confirmed through external validation on the CheXpert dataset, yielding a macro-AUROC of 0.85. Conclusions: This study demonstrates a cohesive path toward clinically transparent and trustworthy AI by seamlessly integrating data-driven learning with anatomical knowledge and symbolic reasoning.
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
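As a small illustration of the anatomy-guided explainability idea described above, the sketch below scores how much of a Grad-CAM heatmap falls inside a lung/heart segmentation mask; the function name and usage are assumptions, not the authors' code.

```python
import numpy as np

def anatomical_evidence_ratio(cam, anatomy_mask, eps=1e-8):
    """Fraction of Grad-CAM activation inside the segmented lung/heart region.

    cam:          (H, W) non-negative class-activation heatmap
    anatomy_mask: (H, W) boolean mask from a CheXmask-style segmentation
    """
    cam = np.clip(cam, 0.0, None)
    return float(cam[anatomy_mask].sum() / (cam.sum() + eps))

# A prediction with a low ratio draws most of its evidence from outside the
# anatomy and could be flagged as anatomically implausible.
```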

20 pages, 1611 KB  
Article
Portable X-Ray Fluorescence as a Proxy for Aerinite in Pigments of Medieval Alto Aragón Cultural Heritage
by José Antonio Manso-Alonso, María Puértolas-Clavero, Sheila Ayerbe-Lalueza, Pablo Martín-Ramos and José Antonio Cuchí-Oterino
Spectrosc. J. 2026, 4(1), 1; https://doi.org/10.3390/spectroscj4010001 - 3 Jan 2026
Abstract
Aerinite is a rare blue aluminosilicate mineral whose identification as a pigment in Pyrenean medieval artworks typically requires invasive microsampling. This study evaluates portable X-ray fluorescence spectroscopy (pXRF) as a noninvasive screening tool for aerinite in Alto Aragón (Spain) cultural heritage. Elemental compositions of aerinite and lapis lazuli references, ceramics, polychromed capitals, and thirteenth- to fifteenth-century painted panels were measured with a Niton XL3t GOLDD+ spectrometer. Data were analyzed using log-ratio linear discriminant analysis (LDA), with silicon as an internal normalizer. Aerinite references showed Cu and Co levels below instrumental detection limits, along with Fe (6.99 ± 1.04 wt%), Al (4.91 ± 1.38 wt%), and Si (15.95 ± 1.60 wt%). High-confidence aerinite classifications were obtained for Cu-free and Co-free blue pigments in the Barbastro Chrismon, the Buira altar frontal, and other panels. Extension of the protocol to green pigments revealed that two samples—from the Saint Anthony Abbot panel and Portaspana retable—were also classified as aerinite, providing the analytical evidence for “verde de Juseu” as a naturally occurring greenish aerinite variety. Despite known pXRF limitations, this technique effectively screens candidate aerinite-containing passages for subsequent microanalytical confirmation.
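A hedged sketch of the log-ratio LDA screening step described above is given below: element concentrations are divided by Si (the internal normalizer), log-transformed, and fed to a linear discriminant model; the element list and data layout are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

ELEMENTS = ["Fe", "Al", "Ca", "Cu", "Co", "K"]   # measured wt%; Si handled separately

def log_ratio_features(concentrations, si):
    """concentrations: (n_samples, n_elements) wt%; si: (n_samples,) Si wt% normalizer."""
    return np.log((concentrations + 1e-6) / si[:, None])

def train_pigment_classifier(concentrations, si, labels):
    """labels e.g. 'aerinite', 'lapis lazuli', 'other blue'."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(log_ratio_features(concentrations, si), labels)
    return lda
```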

22 pages, 1494 KB  
Article
Leveraging Large-Scale Public Data for Artificial Intelligence-Driven Chest X-Ray Analysis and Diagnosis
by Farzeen Khalid Khan, Waleed Bin Tahir, Mu Sook Lee, Jin Young Kim, Shi Sub Byon, Sun-Woo Pi and Byoung-Dai Lee
Diagnostics 2026, 16(1), 146; https://doi.org/10.3390/diagnostics16010146 - 1 Jan 2026
Abstract
Background: Chest X-ray (CXR) imaging is crucial for diagnosing thoracic abnormalities; however, the rising demand burdens radiologists, particularly in resource-limited settings. Method: We used large-scale, diverse public CXR datasets with noisy labels to train general-purpose deep learning models (ResNet, DenseNet, EfficientNet, and DLAD-10) for multi-label classification of thoracic conditions. Uncertainty quantification was incorporated to assess model reliability. Performance was evaluated on both internal and external validation sets, with analyses of data scale, diversity, and fine-tuning effects. Result: EfficientNet achieved the highest overall area under the receiver operating characteristic curve (0.8944) with improved sensitivity and F1-score. Moreover, as training data volume increased—particularly using multi-source datasets—both diagnostic performance and generalizability were enhanced. Although larger datasets reduced predictive uncertainty, conditions such as tuberculosis remained challenging due to limited high-quality samples. Conclusions: General-purpose deep learning models can achieve robust CXR diagnostic performance when trained on large-scale, diverse public datasets despite noisy labels. However, further targeted strategies are needed for underrepresented conditions.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
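The abstract reports uncertainty quantification without naming the method; as one common stand-in (an assumption, not necessarily what the authors used), Monte Carlo dropout can give per-label predictive uncertainty for a multi-label CXR model.

```python
import torch

def mc_dropout_predict(model, images, n_samples=20):
    """Mean sigmoid probability and per-label standard deviation over stochastic passes.

    Assumes the model contains dropout layers; keeping them active at inference
    yields a crude approximation of predictive uncertainty."""
    model.train()                       # keep dropout active
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(images)) for _ in range(n_samples)])
    model.eval()
    return probs.mean(dim=0), probs.std(dim=0)   # each (batch, n_labels)
```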

20 pages, 7543 KB  
Article
Contrastive Learning with Feature Space Interpolation for Retrieval-Based Chest X-Ray Report Generation
by Zahid Ur Rahman, Gwanghyun Yu, Lee Jin and Jin Young Kim
Appl. Sci. 2026, 16(1), 470; https://doi.org/10.3390/app16010470 - 1 Jan 2026
Abstract
Automated radiology report generation from chest X-rays presents a critical challenge in medical imaging. Traditional image-captioning models struggle with clinical specificity and rare pathologies. Recently, contrastive vision language learning has emerged as a robust alternative that learns joint visual–textual representations. However, applying contrastive learning (CL) to radiology remains challenging due to severe data scarcity. Prior work has employed input space augmentation, but these approaches incur computational overhead and risk distorting diagnostic features. This work presents CL with feature space interpolation for retrieval (CLFIR), a novel CL framework operating on learned embeddings. The method generates interpolated pairs in the feature embedding space by mixing original and shuffled embeddings in batches using a mixing coefficient λ ∼ U(0.85, 0.99). This approach increases batch diversity via synthetic samples, addressing the limitations of CL on medical data while preserving diagnostic integrity. Extensive experiments demonstrate state-of-the-art performance across critical clinical validation tasks. For report generation, CLFIR achieves BLEU-1/ROUGE/METEOR scores of 0.51/0.40/0.26 (Indiana University [IU] X-ray) and 0.45/0.34/0.22 (MIMIC-CXR). Moreover, CLFIR excels at image-to-text retrieval with R@1 scores of 4.14% (IU X-ray) and 24.3% (MIMIC-CXR) and achieves 0.65 accuracy in zero-shot classification on the CheXpert5×200 dataset, surpassing established vision-language models.
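The mixing rule described above (each embedding blended with a shuffled batch member using λ ∼ U(0.85, 0.99)) can be sketched in a few lines of PyTorch; this is an illustrative reading of the abstract, not the released CLFIR code.

```python
import torch

def interpolate_embeddings(embeddings, lam_low=0.85, lam_high=0.99):
    """Create synthetic embeddings by convexly mixing each row with a randomly
    permuted row of the same batch. embeddings: (batch, dim) tensor."""
    batch = embeddings.size(0)
    perm = torch.randperm(batch, device=embeddings.device)
    lam = torch.empty(batch, 1, device=embeddings.device).uniform_(lam_low, lam_high)
    return lam * embeddings + (1.0 - lam) * embeddings[perm]
```

Because λ stays close to 1, each synthetic embedding remains dominated by its original sample, which is how the abstract motivates enlarging batch diversity while preserving diagnostic integrity.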

23 pages, 4108 KB  
Article
Adaptive Normalization Enhances the Generalization of Deep Learning Model in Chest X-Ray Classification
by Jatsada Singthongchai and Tanachapong Wangkhamhan
J. Imaging 2026, 12(1), 14; https://doi.org/10.3390/jimaging12010014 - 28 Dec 2025
Abstract
This study presents a controlled benchmarking analysis of min–max scaling, Z-score normalization, and an adaptive preprocessing pipeline that combines percentile-based ROI cropping with histogram standardization. The evaluation was conducted across four public chest X-ray (CXR) datasets and three convolutional neural network architectures under controlled experimental settings. The adaptive pipeline generally improved accuracy, F1-score, and training stability on datasets with relatively stable contrast characteristics while yielding limited gains on MIMIC-CXR due to strong acquisition heterogeneity. Ablation experiments showed that histogram standardization provided the primary performance contribution, with ROI cropping offering complementary benefits, and the full pipeline achieving the best overall performance. The computational overhead of the adaptive preprocessing was minimal (+6.3% training-time cost; 5.2 ms per batch). Friedman–Nemenyi and Wilcoxon signed-rank tests confirmed that the observed improvements were statistically significant across most dataset–model configurations. Overall, adaptive normalization is positioned not as a novel algorithmic contribution, but as a practical preprocessing design choice that can enhance cross-dataset robustness and reliability in chest X-ray classification workflows.
(This article belongs to the Special Issue Advances in Machine Learning for Medical Imaging Applications)
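For orientation, the sketch below approximates the two preprocessing steps named above, percentile-based ROI cropping and histogram standardization (here approximated with histogram equalization); the percentile values and border threshold are assumptions, not the paper's settings.

```python
import numpy as np
from skimage import exposure

def adaptive_preprocess(cxr, low_pct=2, high_pct=98, border_thresh=0.1):
    """Percentile contrast stretch, crop away dark borders, then equalize the histogram."""
    img = cxr.astype(float)
    lo, hi = np.percentile(img, (low_pct, high_pct))
    img = np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    rows = np.where(img.mean(axis=1) > border_thresh)[0]   # keep informative rows/cols
    cols = np.where(img.mean(axis=0) > border_thresh)[0]
    if rows.size and cols.size:
        img = img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    return exposure.equalize_hist(img)                      # histogram standardization step
```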

17 pages, 7231 KB  
Article
Feasibility Study for Determination of Trace Iron in Red Sandstone via O-Phenanthroline Spectrophotometry
by Dajuan Wang, Genlan Yang, Wenbing Shi and Yong Wang
Appl. Sci. 2026, 16(1), 243; https://doi.org/10.3390/app16010243 - 25 Dec 2025
Abstract
Fe³⁺ and Fe²⁺ are widely present in red sandstone. However, systematic studies on the establishment of a quantitative relationship between the Fe³⁺/Fe²⁺ ratio and weathering degree of rock are scarce. In this study, on the basis of the coexistence characteristics of Fe²⁺ and Fe³⁺, as well as the ability of Fe²⁺ to form a stable orange–red complex with o-phenanthroline, an optimized o-phenanthroline spectrophotometric method for the multitarget determination of total iron, Fe²⁺, and Fe³⁺ was proposed and used to measure trace iron in the vertical profile of red sandstone. The effectiveness and reliability of the proposed method were validated via X-ray fluorescence spectroscopy (XRFS) and potassium dichromate titration. The results demonstrate that o-phenanthroline spectrophotometry offers advantages such as a low detection limit, high precision, and simple operation for the determination of trace iron in red sandstone. The vertical distribution pattern of the Fe²⁺/Fe³⁺ ratio is generally consistent with the macroscopic weathering intervals indicated by traditional chemical weathering indices. These findings suggest that the Fe²⁺/Fe³⁺ ratio has the potential to characterize vertical weathering zones and can serve as a simple auxiliary indicator for the rapid preliminary identification and classification of the relative weathering zones of red sandstone.
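The quantitation behind o-phenanthroline spectrophotometry reduces to a linear Beer–Lambert calibration; the sketch below (with illustrative variable names, and Fe³⁺ obtained by difference after reducing total iron) shows the arithmetic, not the authors' exact procedure.

```python
import numpy as np

def fit_calibration(standard_conc_mg_l, absorbance):
    """Least-squares slope and intercept of the absorbance-vs-concentration line."""
    slope, intercept = np.polyfit(standard_conc_mg_l, absorbance, 1)
    return slope, intercept

def iron_speciation(a_fe2, a_total, slope, intercept):
    """Fe2+ from the untreated sample, total Fe after reduction, Fe3+ by difference."""
    fe2 = (a_fe2 - intercept) / slope
    fe_total = (a_total - intercept) / slope
    return fe2, fe_total - fe2          # (Fe2+ mg/L, Fe3+ mg/L)
```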

15 pages, 4191 KB  
Article
Assessment of Optical Light Microscopy for Classification of Real Coal Mine Dust Samples
by Nestor Santa, Lizeth Jaramillo and Emily Sarver
Minerals 2026, 16(1), 15; https://doi.org/10.3390/min16010015 - 23 Dec 2025
Abstract
Occupational exposure to respirable coal mine dust remains a significant health risk, especially for underground workers. Rapid dust monitoring methods are sought to support timely identification of hazards and corrective actions. Recent research has investigated how optical light microscopy (OLM) with automated image processing might meet this need. In laboratory studies, this approach has been demonstrated to classify particles into three primary classes—coal, silicates and carbonates. If the same is achievable in the field, results could support both hazard monitoring and dust source apportionment. The objective of the current study is to evaluate the performance of OLM with image processing to classify real coal mine dust particles, employing scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDX) as a reference method. The results highlight two possible challenges for field implementation. First, particle agglomeration can effectively yield mixed particles that are difficult to classify, so integration of a dispersion method into the dust collection or sample preparation should be considered. Second, optical differences can exist between dust particles used for classification model development (i.e., typically generated in the lab from high-purity materials) versus real mine dust, so our results demonstrate the necessity of site-specific model calibration.

20 pages, 1554 KB  
Article
Impact of Soil Profile Mineralogy on the Elemental Composition of Chardonnay Grapes and Wines in the Anapa Region
by Zaual Temerdashev, Aleksey Abakumov, Mikhail Bolshov, Alexan Khalafyan, Evgeniy Gipich, Aleksey Lukyanov and Alexander Vasilev
Beverages 2026, 12(1), 1; https://doi.org/10.3390/beverages12010001 - 22 Dec 2025
Abstract
The aim of this work is to study the correlations of the elemental composition in the “soil–grape–wine” chain to determine the regional origin of Chardonnay grapes and wine. Soil samples (n = 40) from five vineyards in the Anapa region, Russia, taken from eight different depths, grapes from these vineyards (n = 75), and wines obtained from these grapes (n = 5) were analyzed using inductively coupled plasma atomic emission spectrometry and inductively coupled plasma mass spectrometry. The mineralogical composition of the soils was determined using thermal and X-ray phase analysis. The mineralogical composition of vineyard soils mainly consists of calcite, quartz, nontronite, vermiculite, and muscovite. According to spectrometric analysis, the distribution of both the total content and the mobile forms of elements in soil profiles turned out to be similar. The content of Na, Ca, and Sr increased with increasing sampling depth, while the content of Co, Cu, Fe, Ni, Mn, Pb, and Zn decreased. Regardless of the area of cultivation, the predominant elements in grapes are K, Ca, Na, and Mg. It is established that the elemental profiles of grapes and wine are correlated. At the same time, during the winemaking process, a decrease in the concentration of most elements (Al, Ba, Ca, Cu, K, Mg, Mn, Ni, Rb, Sr, Ti, and Zn) is observed. It has been shown that the vine is able to accumulate not only mobile but also less bioavailable forms of metals from the soil (Cu, Fe, K, Rb, Ti, and Zn), while the migration of Ca and Na remains low (<7%). Using discriminant analysis, a model of grape identification based on the concentrations of Al, Li, Mn, Na, Pb, and Rb was developed. This model demonstrated a high accuracy (100% for training and test datasets) in grape classification by region, confirming that the elemental “fingerprint” is a reliable marker of terroir.
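As a rough illustration of the discriminant-analysis step reported above (a classifier over Al, Li, Mn, Na, Pb, and Rb concentrations), a scikit-learn sketch might look as follows; the DataFrame layout, split, and seed are assumptions.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

ELEMENTS = ["Al", "Li", "Mn", "Na", "Pb", "Rb"]

def classify_grape_origin(df):
    """df: pandas DataFrame with element concentration columns and a 'region' label."""
    X, y = df[ELEMENTS].values, df["region"].values
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    return lda, lda.score(X_train, y_train), lda.score(X_test, y_test)
```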

27 pages, 22957 KB  
Article
Lung Disease Classification Using Deep Learning and ROI-Based Chest X-Ray Images
by Antonio Nadal-Martínez, Lidia Talavera-Martínez, Marc Munar and Manuel González-Hidalgo
Technologies 2026, 14(1), 1; https://doi.org/10.3390/technologies14010001 - 19 Dec 2025
Abstract
Deep learning applied to chest X-ray (CXR) images has gained wide attention for its potential to improve diagnostic accuracy and accessibility in resource-limited healthcare settings. This study compares two deep learning strategies for lung disease classification: a Two-Stage approach that first detects abnormalities before classifying specific pathologies and a Direct multiclass classification approach. Using a curated database of CXR images covering diverse lung diseases, including COVID-19, pneumonia, pulmonary fibrosis, and tuberculosis, we evaluate the performance of various convolutional neural network architectures, the impact of lung segmentation, and explainability techniques. Our results show that the Two-Stage framework achieves higher diagnostic performance and fewer false positives than the Direct approach. Additionally, we highlight the limitations of segmentation and data augmentation techniques, emphasizing the need for further advancements in explainability and robust model design to support real-world diagnostic applications. Finally, we conduct a complementary evaluation of bone suppression techniques to assess their potential impact on disease classification performance.
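The Two-Stage strategy compared above can be pictured as a binary abnormality detector gating a multiclass pathology classifier; the sketch below is a hypothetical inference path with assumed model objects, labels, and threshold.

```python
import torch

PATHOLOGIES = ["covid19", "pneumonia", "pulmonary_fibrosis", "tuberculosis"]

def two_stage_predict(abnormality_model, disease_model, image, threshold=0.5):
    """Return 'normal' or the most probable pathology for one preprocessed CXR tensor."""
    with torch.no_grad():
        p_abnormal = torch.sigmoid(abnormality_model(image.unsqueeze(0)))[0, 0]
        if p_abnormal < threshold:
            return "normal"                       # stage 1: no abnormality detected
        probs = torch.softmax(disease_model(image.unsqueeze(0)), dim=1)[0]
    return PATHOLOGIES[int(probs.argmax())]       # stage 2: specific pathology
```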

22 pages, 2503 KB  
Article
COPD Multi-Task Diagnosis on Chest X-Ray Using CNN-Based Slot Attention
by Wangsu Jeon, Hyeonung Jang, Hongchang Lee and Seongjun Choi
Appl. Sci. 2026, 16(1), 14; https://doi.org/10.3390/app16010014 - 19 Dec 2025
Abstract
This study proposes a unified deep-learning framework for the concurrent classification of Chronic Obstructive Pulmonary Disease (COPD) severity and regression of the FEV₁/FVC ratio from chest X-ray (CXR) images. We integrated a ConvNeXt-Large backbone with a Slot Attention mechanism to effectively disentangle and refine disease-relevant features for multi-task learning. Evaluation on a clinical dataset demonstrated that the proposed model with a 5-slot configuration achieved superior performance compared to standard CNN and Vision Transformer baselines. On the independent test set, the model attained an Accuracy of 0.9107, Sensitivity of 0.8603, and Specificity of 0.9324 for three-class severity stratification. Simultaneously, it achieved a Mean Absolute Error (MAE) of 8.2649, a Mean Squared Error (MSE) of 151.4704, and an R² of 0.7591 for FEV₁/FVC ratio estimation. Qualitative analysis using saliency maps also suggested that the slot-based approach contributes to attention patterns that are more constrained to clinically relevant pulmonary structures. These results suggest that our slot-attention-based multi-task model offers a robust solution for automated COPD assessment from standard radiographs.
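To make the multi-task setup tangible, here is a simplified PyTorch sketch of a shared feature vector feeding a three-class severity head and an FEV₁/FVC regression head with a weighted joint loss; the feature dimension and loss weight are assumptions, not the paper's values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim=1536, n_severity_classes=3):
        super().__init__()
        self.cls_head = nn.Linear(feat_dim, n_severity_classes)  # severity classification
        self.reg_head = nn.Linear(feat_dim, 1)                   # FEV1/FVC regression

    def forward(self, features):
        return self.cls_head(features), self.reg_head(features).squeeze(-1)

def multitask_loss(cls_logits, reg_pred, severity_labels, ratio_targets, reg_weight=0.5):
    """Weighted sum of cross-entropy (classification) and MSE (regression) terms."""
    return (F.cross_entropy(cls_logits, severity_labels)
            + reg_weight * F.mse_loss(reg_pred, ratio_targets))
```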
