Search Results (522)

Search Parameters:
Keywords = Matthew 5:17

25 pages, 2859 KiB  
Article
Feature-Based Normality Models for Anomaly Detection
by Hui Yie Teh, Kevin I-Kai Wang and Andreas W. Kempa-Liehr
Sensors 2025, 25(15), 4757; https://doi.org/10.3390/s25154757 - 1 Aug 2025
Viewed by 238
Abstract
Detecting previously unseen anomalies in sensor data is a challenging problem for artificial intelligence when sensor-specific and deployment-specific characteristics of the time series need to be learned from a short calibration period. From the application point of view, this challenge becomes increasingly important because many applications are gravitating towards utilising low-cost sensors for Internet of Things deployments. While these sensors offer cost-effectiveness and customisation, their data quality does not match that of their high-end counterparts. To improve sensor data quality while addressing the challenges of anomaly detection in Internet of Things applications, we present an anomaly detection framework that learns a normality model of sensor data. The framework models the typical behaviour of individual sensors, which is crucial for the reliable detection of sensor data anomalies, especially when dealing with sensors observing significantly different signal characteristics. Our framework learns sensor-specific normality models from a small set of anomaly-free training data while employing an unsupervised feature engineering approach to select statistically significant features. The selected features are subsequently used to train a Local Outlier Factor anomaly detection model, which adaptively determines the boundary separating normal data from anomalies. The proposed anomaly detection framework is evaluated on three real-world public environmental monitoring datasets with heterogeneous sensor readings. The sensor-specific normality models are learned from extremely short calibration periods (as short as the first 3 days or 10% of the total recorded data) and outperform four other state-of-the-art anomaly detection approaches with respect to F1-score (between 5.4% and 9.3% better) and Matthews correlation coefficient (between 4.0% and 7.6% better).
(This article belongs to the Special Issue Innovative Approaches to Cybersecurity for IoT and Wireless Networks)
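
A minimal sketch of the normality-model idea under stated assumptions: simple per-window summary statistics stand in for the paper's statistically selected features, and the window size, neighbour count, and data below are illustrative, not the authors' configuration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def window_features(signal, width=60):
    """Summarize non-overlapping windows with simple statistics
    (a stand-in for the paper's unsupervised feature selection)."""
    n = len(signal) // width
    w = signal[:n * width].reshape(n, width)
    return np.column_stack([w.mean(1), w.std(1), w.min(1), w.max(1)])

rng = np.random.default_rng(0)
calibration = rng.normal(20.0, 0.5, 6000)   # anomaly-free calibration period

# novelty=True fits the LOF boundary on normal data only, so the model can
# score unseen windows afterwards, mirroring the short-calibration setting.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
lof.fit(window_features(calibration))

new_data = np.concatenate([rng.normal(20.0, 0.5, 600),   # normal behaviour
                           rng.normal(35.0, 0.5, 120)])  # injected anomaly
print(lof.predict(window_features(new_data)))            # +1 normal, -1 anomaly
```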

16 pages, 1471 KiB  
Article
Leveraging Machine Learning Techniques to Predict Cardiovascular Heart Disease
by Remzi Başar, Öznur Ocak, Alper Erturk and Marcelle de la Roche
Information 2025, 16(8), 639; https://doi.org/10.3390/info16080639 - 27 Jul 2025
Viewed by 367
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death globally, underscoring the urgent need for data-driven early diagnostic tools. This study proposes a multilayer artificial neural network (ANN) model for heart disease prediction, developed using a real-world clinical dataset comprising 13,981 patient records. Implemented on the Orange data mining platform, the ANN was trained using backpropagation and validated through 10-fold cross-validation. Dimensionality reduction via principal component analysis (PCA) enhanced computational efficiency, while Shapley additive explanations (SHAP) were used to interpret model outputs. Despite achieving 83.4% accuracy and high specificity, the model exhibited poor sensitivity to disease cases, identifying only 76 of 2233 positive samples, with a Matthews correlation coefficient (MCC) of 0.058. Comparative benchmarks showed that random forest and support vector machines significantly outperformed the ANN in terms of discrimination (AUC up to 91.6%). SHAP analysis revealed serum creatinine, diabetes, and hemoglobin levels to be the dominant predictors. To address the current study’s limitations, future work will explore LIME, Grad-CAM, and ensemble techniques like XGBoost to improve interpretability and balance. This research emphasizes the importance of explainability, data representativeness, and robust evaluation in the development of clinically reliable AI tools for heart disease detection.
(This article belongs to the Special Issue Information Systems in Healthcare)
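
The contrast reported above between 83.4% accuracy and an MCC of 0.058 follows directly from the definition of the Matthews correlation coefficient over the binary confusion matrix:

```latex
\mathrm{MCC} =
\frac{TP \cdot TN - FP \cdot FN}
     {\sqrt{(TP + FP)\,(TP + FN)\,(TN + FP)\,(TN + FN)}}
```

With only 76 of 2233 positive cases detected, TP is tiny, so the numerator stays near zero no matter how many majority-class samples are classified correctly; accuracy, dominated by the negative class, stays high. This is why MCC is the more informative metric on imbalanced clinical data.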

24 pages, 6552 KiB  
Article
Assessing Flooding from Changes in Extreme Rainfall: Using the Design Rainfall Approach in Hydrologic Modeling
by Anna M. Jalowska, Daniel E. Line, Tanya L. Spero, J. Jack Kurki-Fox, Barbara A. Doll, Jared H. Bowden and Geneva M. E. Gray
Water 2025, 17(15), 2228; https://doi.org/10.3390/w17152228 - 26 Jul 2025
Viewed by 390
Abstract
Quantifying future changes in extreme events and associated flooding is challenging yet fundamental for stormwater managers. Along the U.S. Atlantic Coast, Eastern North Carolina (ENC) is frequently exposed to catastrophic floods from extreme rainfall that is typically associated with tropical cyclones. This study presents a novel approach that uses rainfall data from five dynamically and statistically downscaled (DD and SD) global climate models under two scenarios to visualize a potential future extent of flooding in ENC. Here, we use DD data (at 36-km grid spacing) to compute future changes in precipitation intensity–duration–frequency (PIDF) curves at the end of the 21st century. These PIDF curves are further applied to observed rainfall from Hurricane Matthew—a landfalling storm that created widespread flooding across ENC in 2016—to project versions of “Matthew 2100” that reflect changes in extreme precipitation under those scenarios. Each Matthew-2100 rainfall distribution was then used in hydrologic models (HEC-HMS and HEC-RAS) to simulate “2100” discharges and flooding extents in the Neuse River Basin (4686 km²) in ENC. The results show that DD datasets better represented historical changes in extreme rainfall than SD datasets. The projected changes in ENC rainfall (up to 112%) exceed values published for the U.S. but do not exceed historical values. The peak discharges for Matthew-2100 could increase by 23–69%, with 0.4–3 m increases in water surface elevation and 8–57% increases in flooded area. The projected increases in flooding would threaten people, ecosystems, agriculture, infrastructure, and the economy throughout ENC.
(This article belongs to the Section Water and Climate Change)
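
The design-rainfall step (perturbing an observed hyetograph by a projected change factor before re-running the hydrologic models) reduces to a simple scaling; the values below are illustrative, not the study's data.

```python
import numpy as np

# Observed storm hyetograph in mm per 6-h step (illustrative values only).
matthew_observed = np.array([5.0, 22.0, 61.0, 118.0, 74.0, 30.0, 9.0])

# PIDF-derived change in extreme-rainfall intensity; the abstract reports
# projected increases of up to 112% for ENC.
change_factor = 1.0 + 1.12

matthew_2100 = matthew_observed * change_factor  # input for HEC-HMS/HEC-RAS
print(f"{matthew_observed.sum():.0f} mm observed -> "
      f"{matthew_2100.sum():.0f} mm projected")
```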

16 pages, 291 KiB  
Article
Praying for the Coming of the Kingdom, Crystallizing Biblical Themes in Second Temple Prayers: The Shema, the Qaddish, and the Lord’s Prayer
by Pino Di Luccio
Religions 2025, 16(8), 969; https://doi.org/10.3390/rel16080969 - 26 Jul 2025
Viewed by 360
Abstract
Some studies have pointed to the Jewish background of the prayer that, according to the gospels of Matthew and Luke, Jesus taught his disciples. However, the formulations of the words of the Lord’s Prayer (LP) do not necessarily presuppose that the formation of Jewish prayers had already concluded, nor a unidirectional influence of Jewish prayers on the formation of the LP. This prayer and its “midrash” in John 17 may have influenced the formulation and final formation of some Jewish prayers. The differences between these prayers may indicate the mutual influence that, in some cases, took place throughout the history of their formation. This reciprocity may be due to the intention to establish and define the differences between the religious groups of Judaic origin that inherited these prayers and between the communities that recited them. The crystallization of biblical themes in these prayers highlights the common heritage of these groups and a different understanding of the fulfilment of God’s word in relation to the coming of his kingdom. While this process, characterized by a conflict of interpretations, took place “within Judaism,” it also led to the parting of the ways of Judeo-Christians from the Synagogue.
(This article belongs to the Special Issue The Hebrew Bible: A Journey Through History and Literature)
31 pages, 7723 KiB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Viewed by 473
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening.
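
One way to stack the three layer types named above, sketched in Keras; the input shape (512 samples x 19 EEG channels), layer widths, and kernel size are assumptions for illustration, not the authors' architecture.

```python
from tensorflow.keras import layers, models

# Assumed EEG input: 512 time samples x 19 electrodes per trial.
model = models.Sequential([
    layers.Input(shape=(512, 19)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),  # local waveform features
    layers.MaxPooling1D(2),
    layers.GRU(64, return_sequences=True),                # short-term dynamics
    layers.LSTM(32),                                      # longer-range dependencies
    layers.Dense(1, activation="sigmoid"),                # ADHD vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```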

14 pages, 4981 KiB  
Article
Integrating Graph Convolution and Attention Mechanism for Kinase Inhibition Prediction
by Hamza Zahid, Kil To Chong and Hilal Tayara
Molecules 2025, 30(13), 2871; https://doi.org/10.3390/molecules30132871 - 6 Jul 2025
Viewed by 479
Abstract
A kinase is an enzyme responsible for cell signaling and other complex processes. Mutations or changes in kinases can cause cancer and other diseases in humans, including leukemia, neuroblastomas, glioblastomas, and more. Considering these concerns, inhibiting overexpressed or dysregulated kinases through small drug molecules is very important. In the past, many machine learning and deep learning approaches have been used to inhibit dysregulated kinase enzymes. In this work, we employ a Graph Neural Network (GNN) to predict the inhibition activities of kinases. A separate Graph Convolution Network (GCN) and a combined Graph Convolution and Graph Attention Network (GCN_GAT) are developed and trained on two large datasets (Kinase Datasets 1 and 2) consisting of small drug molecules against the targeted kinases, using 10-fold cross-validation. Furthermore, a wide range of molecules are used as independent datasets on which the performance of the models is evaluated. On both independent kinase datasets, our model combining GCN and GAT provides the best evaluation and outperforms previous models in terms of accuracy, Matthews Correlation Coefficient (MCC), sensitivity, specificity, and precision. On the independent Kinase Dataset 1, the values of accuracy, MCC, sensitivity, specificity, and precision are 0.96, 0.89, 0.90, 0.98, and 0.91, respectively. Similarly, the performance of our model combining GCN and GAT on the independent Kinase Dataset 2 is 0.97, 0.90, 0.91, 0.99, and 0.92 in terms of accuracy, MCC, sensitivity, specificity, and precision, respectively.
(This article belongs to the Special Issue Molecular Modeling: Advancements and Applications, 3rd Edition)
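
A compact sketch of the combined graph-convolution/graph-attention idea using PyTorch Geometric; layer widths, the number of attention heads, and the mean-pooling choice are illustrative assumptions, not the paper's GCN_GAT specification.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, GATConv, global_mean_pool

class GCN_GAT(torch.nn.Module):
    """Graph convolution followed by graph attention, pooled per molecule."""
    def __init__(self, num_node_features, hidden=64):
        super().__init__()
        self.gcn = GCNConv(num_node_features, hidden)
        self.gat = GATConv(hidden, hidden, heads=4, concat=False)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.gcn(x, edge_index))      # local neighbourhood features
        x = F.relu(self.gat(x, edge_index))      # attention-weighted messages
        x = global_mean_pool(x, batch)           # one vector per molecule
        return self.out(x)                       # logit: inhibitor vs. not

x = torch.randn(5, 16)                           # 5 atoms, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # bonds
batch = torch.zeros(5, dtype=torch.long)         # all atoms in one molecule
print(GCN_GAT(16)(x, edge_index, batch))
```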

21 pages, 3919 KiB  
Article
Comparative Analysis of Resampling Techniques for Class Imbalance in Financial Distress Prediction Using XGBoost
by Guodong Hou, Dong Ling Tong, Soung Yue Liew and Peng Yin Choo
Mathematics 2025, 13(13), 2186; https://doi.org/10.3390/math13132186 - 4 Jul 2025
Viewed by 403
Abstract
One of the key challenges in financial distress data is class imbalance, where the data are characterized by a highly imbalanced ratio between the number of distressed and non-distressed samples. This study examines eight resampling techniques for improving distress prediction using the XGBoost algorithm. The study was performed on a dataset acquired from the CSMAR database, containing 26,383 firm-quarter samples from 639 Chinese A-share listed companies (2007–2024), with only 12.1% of the cases being distressed. Results show that standard Synthetic Minority Oversampling Technique (SMOTE) enhanced F1-score (up to 0.73) and Matthews Correlation Coefficient (MCC, up to 0.70), while SMOTE-Tomek and Borderline-SMOTE further boosted recall, slightly sacrificing precision. These oversampling and hybrid methods also maintained reasonable computational efficiency. However, Random Undersampling (RUS), though yielding high recall (0.85), suffered from low precision (0.46) and weaker generalization, but was the fastest method. Among all techniques, Bagging-SMOTE achieved balanced performance (AUC 0.96, F1 0.72, PR-AUC 0.80, MCC 0.68) using a minority-to-majority ratio of 0.15, demonstrating that ensemble-based resampling can improve robustness with minimal impact on the original class distribution, albeit with higher computational cost. The compared findings highlight that no single approach fits all use cases, and technique selection should align with specific goals. Techniques favoring recall (e.g., Bagging-SMOTE, SMOTE-Tomek) are suited for early warning, while conservative techniques (e.g., Tomek Links) help reduce false positives in risk-sensitive applications, and efficient methods such as RUS are preferable when computational speed is a priority.
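
A hedged sketch of the core comparison: oversample the minority class with SMOTE, train XGBoost, and score with F1 and MCC. The synthetic data only mimics the 12.1% minority share; the dataset and hyperparameters are stand-ins, not the CSMAR setup.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for the ~12.1%-minority distress data.
X, y = make_classification(n_samples=20000, weights=[0.879], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split, never the held-out test data.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = XGBClassifier(eval_metric="logloss").fit(X_res, y_res)

pred = clf.predict(X_te)
print(f"F1={f1_score(y_te, pred):.2f}  MCC={matthews_corrcoef(y_te, pred):.2f}")
```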

30 pages, 5474 KiB  
Article
Multiclass Fault Diagnosis in Power Transformers Using Dissolved Gas Analysis and Grid Search-Optimized Machine Learning
by Andrew Adewunmi Adekunle, Issouf Fofana, Patrick Picher, Esperanza Mariela Rodriguez-Celis, Oscar Henry Arroyo-Fernandez, Hugo Simard and Marc-André Lavoie
Energies 2025, 18(13), 3535; https://doi.org/10.3390/en18133535 - 4 Jul 2025
Viewed by 437
Abstract
Dissolved gas analysis remains the most widely utilized non-intrusive diagnostic method for detecting incipient faults in insulating liquid-immersed transformers. Despite their prevalence, conventional ratio-based methods often suffer from ambiguity and limited potential for automation applications. To address these limitations, this study proposes a unified multiclass classification model that integrates traditional gas ratio features with supervised machine learning algorithms to enhance fault diagnosis accuracy. The performance of six machine learning classifiers was systematically evaluated using training and testing data generated through four widely recognized gas ratio schemes. Grid search optimization was employed to fine-tune the hyperparameters of each model, while model evaluation was conducted using 10-fold cross-validation and six performance metrics. Across all the diagnostic approaches, ensemble models, namely random forest, XGBoost, and LightGBM, consistently outperformed non-ensemble models. Notably, random forest and LightGBM classifiers demonstrated the most robust and superior performance across all schemes, achieving accuracy, precision, recall, and F1 scores between 0.99 and 1, along with Matthews correlation coefficient values exceeding 0.98 in all cases. This robustness suggests that ensemble models are effective at capturing complex decision boundaries and relationships among gas ratio features. Furthermore, beyond numerical classification, the integration of physicochemical and dielectric properties in this study revealed degradation signatures that strongly correlate with thermal fault indicators. Particularly, the CIGRÉ-based classification using a random forest classifier demonstrated high sensitivity in detecting thermally stressed units, corroborating trends observed in chemical deterioration parameters such as interfacial tension and CO₂/CO ratios. Access to over 80 years of operational data provides a rare and invaluable perspective on the long-term performance and degradation of power equipment. This extended dataset enables a more accurate assessment of ageing trends, enhances the reliability of predictive maintenance models, and supports informed decision-making for asset management in legacy power systems.
(This article belongs to the Section F: Electrical Engineering)
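
The grid-search-plus-10-fold-cross-validation protocol can be sketched as follows for one of the six classifiers; the features, grid values, and scoring choice are illustrative assumptions, not the study's exact search space.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Stand-in for gas-ratio features (e.g., CH4/H2, C2H2/C2H4) and fault classes.
X, y = make_classification(n_samples=1000, n_classes=4, n_informative=6,
                           random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=10,                # 10-fold cross-validation, as in the study
    scoring="f1_macro",   # one of several metrics one might report
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```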

22 pages, 3232 KiB  
Article
From Clusters to Communities: Enhancing Wetland Vegetation Mapping Using Unsupervised and Supervised Synergy
by Li Wen, Shawn Ryan, Megan Powell and Joanne E. Ling
Remote Sens. 2025, 17(13), 2279; https://doi.org/10.3390/rs17132279 - 3 Jul 2025
Viewed by 368
Abstract
High thematic resolution vegetation mapping is essential for monitoring wetland ecosystems, supporting conservation, and guiding water management. However, producing accurate, fine-scale vegetation maps in large, heterogeneous floodplain wetlands remains challenging due to complex hydrology, spectral similarity among vegetation types, and the high cost of extensive field surveys. This study addresses these challenges by developing a scalable vegetation classification framework that integrates cluster-guided sample selection, Random Forest modelling, and multi-source remote-sensing data. The approach combines multi-temporal Sentinel-1 SAR, Sentinel-2 optical imagery, and hydro-morphological predictors derived from LiDAR and hydrologically enforced SRTM DEMs. Applied to the Great Cumbung Swamp, a structurally and hydrologically complex terminal wetland in the lower Lachlan River floodplain of Australia, the framework produced vegetation maps at three hierarchical levels: formations (9 classes), functional groups (14 classes), and plant community types (PCTs; 23 classes). The PCT-level classification achieved an overall accuracy of 93.2%, a kappa coefficient of 0.91, and a Matthews correlation coefficient (MCC) of 0.89, with broader classification levels exceeding 95% accuracy. These results demonstrate that, through targeted sample selection and integration of spectral, structural, and terrain-derived data, high-accuracy, high-resolution wetland vegetation mapping is achievable with reduced field data requirements. The hierarchical structure further enables broader vegetation categories to be efficiently derived from detailed PCT outputs, providing a practical, transferable tool for wetland monitoring, habitat assessment, and conservation planning.
(This article belongs to the Section Environmental Remote Sensing)
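
A minimal sketch of the unsupervised/supervised synergy described above: clusters propose spectrally homogeneous strata from which training samples are drawn, and a Random Forest generalizes the labels to the full image. The data, cluster count, and sample sizes are illustrative; in the real workflow the labels would come from field survey, not from the clusters themselves.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pixels = rng.normal(size=(5000, 12))   # stacked S1/S2/terrain predictors per pixel

# Step 1: unsupervised clustering proposes homogeneous strata.
clusters = KMeans(n_clusters=9, n_init=10, random_state=0).fit_predict(pixels)

# Step 2: a few samples per cluster are selected for labelling ...
idx = np.concatenate([np.where(clusters == c)[0][:20] for c in range(9)])
labels = clusters[idx]  # stand-in: real labels would come from field data

# Step 3: ... and a Random Forest generalizes them to every pixel.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(pixels[idx], labels)
vegetation_map = rf.predict(pixels)
print(np.bincount(vegetation_map))
```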

16 pages, 1687 KiB  
Article
Towards Precision Medicine in Sinonasal Tumors: Low-Dimensional Radiomic Signature Extraction from MRI
by Riccardo Biondi, Giacomo Gravante, Daniel Remondini, Sara Peluso, Serena Cominetti, Francesco D’Amore, Maurizio Bignami, Alberto Daniele Arosio and Nico Curti
Diagnostics 2025, 15(13), 1675; https://doi.org/10.3390/diagnostics15131675 - 30 Jun 2025
Viewed by 410
Abstract
Background: Sinonasal tumors are rare, accounting for 3–5% of head and neck neoplasms. Machine learning (ML) and radiomics have shown promise in tumor classification, but current models lack detailed morphological and textural characterization. Methods: This study analyzed MRI data from 145 patients (76 malignant and 69 benign) across multiple centers. Radiomic features were extracted from T1-weighted (T1-w) images with contrast and T2-weighted (T2-w) images based on manually annotated tumor volumes. A dedicated ML pipeline assessed the effectiveness of different radiomic features and their integration with clinical variables. The DNetPRO algorithm was used to extract signatures combining radiomic and clinical data. Results: The results showed that ML classification using both data types achieved a median Matthews Correlation Coefficient (MCC) of 0.60 ± 0.07. The best-performing DNetPRO models reached an MCC of 0.73 (T1-w + T2-w) and 0.61 (T1-w only). Key clinical features included symptoms and tumor size, while radiomic features provided additional diagnostic insights, particularly regarding gray-level distribution in T2-w and texture complexity in T1-w images. Conclusions: Despite its potential, ML-based radiomics faces challenges in clinical adoption due to data variability and model diversity. Standardization and interpretability are crucial for reliability. The DNetPRO approach helps explain feature importance and relationships, reinforcing the clinical relevance of integrating radiomic and clinical data for sinonasal tumor classification.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
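
The DNetPRO signature extraction itself is not reproduced here; as a generic sketch of the evaluation idea, radiomic and clinical features can be fused at the feature level and scored with cross-validated MCC. All data and model choices below are stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(145, 100))   # T1-w/T2-w texture features (stand-in)
clinical = rng.normal(size=(145, 5))     # symptoms, tumor size, ... (stand-in)
y = (radiomic[:, 0] + clinical[:, 0] > 0).astype(int)  # benign vs. malignant

X = np.hstack([radiomic, clinical])      # simple feature-level fusion
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
mcc = cross_val_score(clf, X, y, cv=5, scoring=make_scorer(matthews_corrcoef))
print(f"median MCC: {np.median(mcc):.2f}")
```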

67 pages, 482 KiB  
Article
King Jesus of Nazareth: An Evidential Inquiry
by Joshua Sijuwade
Religions 2025, 16(7), 808; https://doi.org/10.3390/rel16070808 - 20 Jun 2025
Viewed by 1744
Abstract
This article examines the ‘King Jesus Gospel’ concept proposed by Matthew Bates and Scot McKnight, which frames the biblical gospel as a proclamation of Jesus’ kingship. It addresses the ‘Failure Objection’ that Jesus was merely a failed apocalyptic prophet who died without fulfilling his predictions. Drawing on N.T. Wright’s work, this article constructs the ‘King Jesus Hypothesis’ and evaluates it using evidence from religious transformation, cultural values, and human progress. Employing the Criterion of Predictive Power, it argues that historical religious innovations (drawing on the work of Larry Hurtado), Western moral values (drawing on the work of Tom Holland), and measurable human flourishing (drawing on the work of Steven Pinker) are best explained by Jesus successfully inaugurating God’s Kingdom through cultural transformation rather than apocalyptic intervention. Through this analysis, the article demonstrates that compelling evidence supports Jesus’ kingship despite the Failure Objection.
(This article belongs to the Special Issue Spirituality in Action: Perspectives on New Evangelization)
39 pages, 30587 KiB  
Article
Hierarchical Swin Transformer Ensemble with Explainable AI for Robust and Decentralized Breast Cancer Diagnosis
by Md. Redwan Ahmed, Hamdadur Rahman, Zishad Hossain Limon, Md Ismail Hossain Siddiqui, Mahbub Alam Khan, Al Shahriar Uddin Khondakar Pranta, Rezaul Haque, S M Masfequier Rahman Swapno, Young-Im Cho and Mohamed S. Abdallah
Bioengineering 2025, 12(6), 651; https://doi.org/10.3390/bioengineering12060651 - 13 Jun 2025
Cited by 1 | Viewed by 886
Abstract
Early and accurate detection of breast cancer is essential for reducing mortality rates and improving clinical outcomes. However, deep learning (DL) models used in healthcare face significant challenges, including concerns about data privacy, domain-specific overfitting, and limited interpretability. To address these issues, we propose BreastSwinFedNetX, a federated learning (FL)-enabled ensemble system that combines four hierarchical variants of the Swin Transformer (Tiny, Small, Base, and Large) with a Random Forest (RF) meta-learner. By utilizing FL, our approach ensures collaborative model training across decentralized and institution-specific datasets while preserving data locality and preventing raw patient data exposure. The model exhibits strong generalization and performs exceptionally well across five benchmark datasets—BreakHis, BUSI, INbreast, CBIS-DDSM, and a Combined dataset—achieving an F1 score of 99.34% on BreakHis, a PR AUC of 98.89% on INbreast, and a Matthews Correlation Coefficient (MCC) of 99.61% on the Combined dataset. To enhance transparency and clinical adoption, we incorporate explainable AI (XAI) through Grad-CAM, which highlights class-discriminative features. Additionally, we deploy the model in a real-time web application that supports uncertainty-aware predictions and clinician interaction and ensures compliance with GDPR and HIPAA through secure federated deployment. Extensive ablation studies and paired statistical analyses further confirm the significance and robustness of each architectural component. By integrating transformer-based architectures, secure collaborative training, and explainable outputs, BreastSwinFedNetX provides a scalable and trustworthy AI solution for real-world breast cancer diagnostics.
(This article belongs to the Special Issue Breast Cancer: From Precision Medicine to Diagnostics)
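
The federated ingredient can be illustrated with plain FedAvg parameter averaging; this is a generic sketch, not the BreastSwinFedNetX training loop, and the two-client linear model below is purely illustrative.

```python
import copy
import torch.nn as nn

def federated_average(client_states, client_sizes):
    """FedAvg: average parameters, weighting each client by its data size."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key].float() * (size / total)
                       for state, size in zip(client_states, client_sizes))
    return avg

# Two institutions train local copies; only weights travel, never images.
clients = [nn.Linear(4, 2) for _ in range(2)]
global_state = federated_average([c.state_dict() for c in clients], [800, 200])
global_model = nn.Linear(4, 2)
global_model.load_state_dict(global_state)
```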

14 pages, 4450 KiB  
Article
Somatostatin Receptor Scintigraphy in Autoimmune Syndrome Induced by Silicone Breast Implants: Pre- and Postexplantation Findings
by Luz Kelly Anzola, Sara Ramirez, Sergio Moreno, Camilo Vargas, Sebastian Rojas and José Nelson Rivera
J. Clin. Med. 2025, 14(12), 4141; https://doi.org/10.3390/jcm14124141 - 11 Jun 2025
Viewed by 412
Abstract
Background: Silicone breast implants have been linked to autoimmune/inflammatory syndrome induced by adjuvants (ASIA). This study evaluates the role of 99mTc-HYNIC-TOC somatostatin receptor scintigraphy in assessing somatostatin-mediated inflammation and the impact of explantation on inflammatory activity. Methods: Fifty patients with silicone breast implants and symptoms suggestive of ASIA were evaluated. Pre- and postexplantation imaging was performed using 99mTc-HYNIC-TOC scintigraphy. Matthews correlation coefficients quantified associations between clinical symptoms and imaging findings, and autoantibody profiles were analysed. Results: Scintigraphy identified a significant uptake in organs associated with autoimmune symptoms, particularly joints and salivary glands. Strong correlations were found between imaging findings and symptoms, including knee pain (MCC = 0.81) and sicca syndrome (MCC = 0.96). Explantation resolved abnormal uptake in the surgical bed, though variable uptake persisted in other organs, reflecting systemic inflammatory heterogeneity. Autoantibody analysis revealed positivity in 66% of patients, with antinuclear antibodies being most frequent (30%). Conclusions: 99mTc-HYNIC-TOC scintigraphy effectively evaluates organ-specific inflammation in ASIA. Explantation reduces localized inflammation but does not consistently address systemic autoimmune responses. Larger prospective studies are needed to validate these findings and improve management strategies for ASIA.
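
The symptom-to-uptake agreement is an ordinary binary MCC; a toy example with illustrative per-patient vectors (not study data):

```python
from sklearn.metrics import matthews_corrcoef

# Per patient: 1 = symptom reported / abnormal uptake seen (illustrative data).
knee_pain   = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
knee_uptake = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(matthews_corrcoef(knee_pain, knee_uptake))  # ~0.82; 1.0 = perfect match
```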

21 pages, 2335 KiB  
Article
The Spatial Correlation Network of China’s Urban Digital Economy and Its Formation Mechanism
by Jing Huang and Kai Liu
Sustainability 2025, 17(12), 5382; https://doi.org/10.3390/su17125382 - 11 Jun 2025
Viewed by 436
Abstract
Based on digital patent data from 359 Chinese cities between 2006 and 2022, this paper calculates the gravitational value of the digital economy using a modified gravity model and employs social network analysis and QAP analysis to investigate the correlation network of cities’ digital economy and its influencing factors. The study found the following: (1) Chinese cities show a high level of digital-economy development with a consistently increasing growth rate; network density and connectedness are rising, but no distinct hierarchical network structure has yet emerged. (2) The internal economic network demonstrates a significant imbalance, as illustrated by the “Matthew effect”. Core cities like Shenzhen and Beijing show greater net spillover, indicating their role as network hubs, while less developed cities have lower net spillover, necessitating improvements in interconnection capacity. (3) Differences in economic scale, population quality, scientific and technological innovation, and infrastructure construction have a positive effect and are the main sources of linkage-network formation. At the same time, the influence of differences in urbanization rates is stage-specific, reflecting the dual logic of factor complementarity and policy synergy. Overall, this study reveals the dynamic evolution of the digital-economy spatial network through city-scale innovation and provides theoretical support for promoting the region’s sustainable and coordinated development.
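
The abstract does not spell out the modification, but modified gravity models in this literature commonly take a directional form such as the following (a representative version, not necessarily the paper's):

```latex
g_{ij} = k_{ij}\,\frac{M_i M_j}{d_{ij}^{2}},
\qquad
k_{ij} = \frac{M_i}{M_i + M_j}
```

Here M_i is city i's digital-economy mass (patent-based in this study) and d_ij the inter-city distance; the asymmetric weight k_ij makes g_ij differ from g_ji, which is what allows net-spillover roles (hub versus periphery) to be read off the resulting network.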

21 pages, 2609 KiB  
Article
Assessing the Role of EEG Biosignal Preprocessing to Enhance Multiscale Fuzzy Entropy in Alzheimer’s Disease Detection
by Pasquale Arpaia, Maria Cacciapuoti, Andrea Cataldo, Sabatina Criscuolo, Egidio De Benedetto, Antonio Masciullo, Marisa Pesola and Raissa Schiavoni
Biosensors 2025, 15(6), 374; https://doi.org/10.3390/bios15060374 - 10 Jun 2025
Viewed by 631
Abstract
Quantitative electroencephalography (QEEG) has emerged as a promising tool for detecting Alzheimer’s disease (AD). Among QEEG measures, Multiscale Fuzzy Entropy (MFE) shows great potential in identifying AD-related changes in EEG complexity. However, MFE is intrinsically linked to signal amplitude, which can vary substantially among EEG systems, and this hinders the adoption of this metric for AD detection. To overcome this issue, this study investigates different preprocessing strategies to make the calculation of MFE less dependent on the specific amplitude characteristics of the EEG signals at hand. This contributes to generalizing and making more robust the adoption of MFE for AD detection. To demonstrate the robustness of the proposed preprocessing methods, binary classification tasks with Support Vector Machines (SVMs), Random Forest (RF), and K-Nearest Neighbor (KNN) classifiers are used. Performance metrics, such as classification accuracy and Matthews Correlation Coefficient (MCC), are employed to assess the results. The methodology is validated on two public EEG datasets. Results show that amplitude transformation, particularly normalization, significantly enhances AD detection, achieving mean classification accuracy values exceeding 80% with an uncertainty of 10% across all classifiers. These results highlight the importance of preprocessing in improving the accuracy and the reliability of EEG-based AD diagnostic tools, offering potential advancements in patient management and treatment planning.
(This article belongs to the Section Biosensors and Healthcare)
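
A sketch of the pipeline's two key steps, amplitude normalization followed by multiscale fuzzy entropy. The parameter values (m, r, n, scales) are common defaults, and the implementation is a compact illustration rather than the authors' code; after z-scoring, the tolerance r is effectively a fraction of the signal's standard deviation, which is what decouples MFE from the recording system's amplitude.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.15, n=2):
    """Fuzzy entropy with exponential membership exp(-(d/r)^n)."""
    def phi(dim):
        t = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        t -= t.mean(axis=1, keepdims=True)        # remove local baseline
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)  # Chebyshev
        mu = np.exp(-(d / r) ** n)
        np.fill_diagonal(mu, 0.0)
        return mu.sum() / (len(t) * (len(t) - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def multiscale_fuzzy_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the signal at each scale, then compute fuzzy entropy."""
    out = []
    for tau in scales:
        g = x[: len(x) // tau * tau].reshape(-1, tau).mean(axis=1)
        out.append(fuzzy_entropy(g))
    return out

eeg = np.random.default_rng(0).normal(size=1000)  # stand-in EEG channel
eeg = (eeg - eeg.mean()) / eeg.std()   # z-score amplitude normalization
print(multiscale_fuzzy_entropy(eeg))
```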
