Search Results (1,484)

Search Parameters:
Keywords = benchmark tool

15 pages, 252 KiB  
Article
Mortal vs. Machine: A Compact Two-Factor Model for Comparing Trust in Humans and Robots
by Andrew Prahl
Robotics 2025, 14(8), 112; https://doi.org/10.3390/robotics14080112 - 16 Aug 2025
Abstract
Trust in robots is often analyzed with scales built for either humans or automation, making cross-species comparisons imprecise. Addressing that gap, this paper distils decades of trust scholarship, from clinical vs. actuarial judgement to modern human–robot teaming, into a lean two-factor framework: Mortal vs. Machine (MvM). We first surveyed classic technology-acceptance and automation-reliance research and then integrated empirical findings in human–robot interaction to identify diagnostic cues that can be instantiated by both human and machine agents. The model includes (i) ability—perceived task competence and reliability—and (ii) value congruence—alignment of decision weights and trade-off priorities. Benevolence, oft-included in trust studies, was excluded because current robots cannot manifest genuine goodwill and existing items elicit high dropout. The resulting scale travels across contexts, allowing researchers to benchmark a robot against a human co-worker on identical terms and enabling practitioners to pinpoint whether performance deficits or priority clashes drive acceptance. By reconciling anthropocentric and technocentric trust literature in a deployable diagnostic, MvM offers a field-ready tool and a conceptual bridge for future studies of AI-empowered robotics.
(This article belongs to the Section Humanoid and Human Robotics)
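A minimal sketch of how such a two-factor comparison might be scored in practice; the item split, the five-point scale, and all response values below are illustrative assumptions, not the published MvM instrument:

```python
import numpy as np

# Hypothetical responses on a 1-5 scale: rows = respondents, columns = items.
# The first three items probe ability, the last three value congruence
# (this item split is an assumption for illustration).
human_ratings = np.array([
    [4, 5, 4, 3, 4, 3],
    [5, 4, 4, 4, 3, 4],
    [3, 4, 5, 2, 3, 3],
])
robot_ratings = np.array([
    [5, 5, 4, 2, 2, 3],
    [4, 5, 5, 3, 2, 2],
    [5, 4, 5, 2, 3, 2],
])

def mvm_scores(ratings, n_ability_items=3):
    """Mean ability and value-congruence scores per respondent."""
    ability = ratings[:, :n_ability_items].mean(axis=1)
    value_congruence = ratings[:, n_ability_items:].mean(axis=1)
    return ability, value_congruence

for label, data in [("human", human_ratings), ("robot", robot_ratings)]:
    ability, values = mvm_scores(data)
    print(f"{label}: ability={ability.mean():.2f}, "
          f"value congruence={values.mean():.2f}")
```

Averaging within each factor keeps the human and the robot on identical terms, so a gap in one factor but not the other indicates whether performance deficits or priority clashes drive acceptance.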
30 pages, 2797 KiB  
Article
Global Sustainability Performance and Regional Disparities: A Machine Learning Approach Based on the 2025 SDG Index
by Sadullah Çelik, Ömer Faruk Öztürk, Ulas Akkucuk and Mahmut Ünsal Şaşmaz
Sustainability 2025, 17(16), 7411; https://doi.org/10.3390/su17167411 - 15 Aug 2025
Abstract
Sustainability performance varies significantly across countries, yet global assessments overlook the underlying structural trends. This study bridges this gap using machine learning to uncover meaningful clustering in global sustainability outcomes based on the 2025 Sustainable Development Goals (SDG) Index. We applied K-Means clustering to group 166 countries into five clusters based on five standardized indicators: SDG score, spillover effects, regional score, population size, and recent progress. The five-cluster solution was confirmed by the Elbow and Silhouette procedures, with ANOVA and MANOVA tests subsequently indicating statistically significant cluster differences. For the validation and interpretation of the results, six supervised learning algorithms were employed. Random Forest, SVM, and ANN performed best in classification accuracy (97.7%) with perfect ROC-AUC scores (AUC = 1.0). Feature importance analysis showed that SDG and regional scores were most predictive of cluster membership, while population size was the least. This supervised–unsupervised hybrid approach offers a reproducible blueprint for cross-country benchmarking of sustainability. It also offers actionable insights for tailoring policy to groups of countries, whether high-income OECD nations, emerging markets, or resource-scarce countries. Our findings demonstrate that machine learning is a useful tool for revealing structural disparities in sustainability and informing cluster-specific policy interventions toward the 2030 Agenda.
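The clustering-and-validation procedure described above maps onto a few lines of scikit-learn. A hedged sketch on synthetic stand-in data (the random matrix X replaces the real 166-country indicator table):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for the 166-country table with the five indicators named in the
# abstract; real SDG Index data would be loaded here.
X = rng.normal(size=(166, 5))  # SDG score, spillover, regional score,
                               # population, recent progress
X_std = StandardScaler().fit_transform(X)

# Elbow (inertia) and silhouette scan over candidate k, as in the study.
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_std)
    print(k, round(km.inertia_, 1),
          round(silhouette_score(X_std, km.labels_), 3))

# Final five-cluster assignment for downstream supervised validation.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_std)
```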
33 pages, 11121 KiB  
Review
LEED v4 Adoption Patterns and Regional Variations Across US-Based Projects
by Tayyab Ahmad, Muhammad Shoaib and Razal Abdul Kadar
Sustainability 2025, 17(16), 7403; https://doi.org/10.3390/su17167403 - 15 Aug 2025
Abstract
Despite the widespread adoption of the Leadership in Energy and Environmental Design (LEED) rating system, there is limited empirical research examining how different sustainability categories are implemented in practice or how methodological patterns influence certification outcomes. This study contributes to this understanding by analysing LEED v4 Building Design + Construction certification patterns across 1252 newly constructed buildings in the United States to understand the methodological foundations and identify improvement opportunities for the LEED framework. Using credit achievement degree (CAD) analysis, regional variation assessment, and correlation analysis, we examined category adoption patterns across nine US climate regions, investigated relationships between LEED categories, and analysed certification level influences. The analysis reveals significant disparities in category adoption, with innovation (80.7%) and regional priority (66.6%) achieving high implementation rates while the category of material and resources (41.1%) consistently underperforms. Statistically significant regional variations exist across eight of nine categories (p < 0.05), with location and transportation showing the highest variability (CV = 20.1%). The category of energy and atmosphere demonstrates the strongest relationship with overall project performance (R2 = 0.38), explaining 43% of total score variation and serving as the primary driver of higher certification levels. Most critically, inter-category correlations are weak (typically R2 < 0.05), indicating that projects treat sustainability domains as separate challenges rather than integrated systems. Positive skewness across all certification levels (z-scores > 1.96) provides statistical evidence of strategic “point-chasing” behaviour, where teams target minimum thresholds rather than maximising comprehensive sustainability performance. These findings reveal fundamental methodological patterns that may limit LEED’s effectiveness in promoting holistic sustainability approaches. The compartmentalised implementation patterns and threshold-focused strategies suggest opportunities for structural refinements, including enhanced integration incentives, region-sensitive benchmarking, and certification frameworks that reward comprehensive rather than minimal compliance. This research contributes empirical evidence for evidence-based improvements to green building certification methodology and provides insights for more effective sustainability assessment tools.
(This article belongs to the Special Issue Building Sustainability within a Smart Built Environment)
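Credit achievement degree (CAD) is essentially the share of available points a category earns. A small pandas sketch on invented numbers (category names, point totals, and values are illustrative, not the study's data):

```python
import pandas as pd

# Toy stand-in for the 1252-project credit table: points earned per
# LEED v4 category in three hypothetical projects.
projects = pd.DataFrame({
    "energy_atmosphere":   [18, 22, 10],
    "materials_resources": [4, 6, 3],
    "innovation":          [5, 6, 4],
})
# Points available per category (illustrative values).
available = pd.Series({"energy_atmosphere": 33,
                       "materials_resources": 13,
                       "innovation": 6})

# CAD: mean share of available points actually earned in each
# category, expressed as a percentage.
cad = (projects / available).mean() * 100
print(cad.round(1))
```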
17 pages, 1623 KiB  
Article
Accelerating Neoantigen Discovery: A High-Throughput Approach to Immunogenic Target Identification
by Lena Pfitzer, Gitta Boons, Lien Lybaert, Wim van Criekinge, Cedric Bogaert and Bruno Fant
Vaccines 2025, 13(8), 865; https://doi.org/10.3390/vaccines13080865 - 15 Aug 2025
Abstract
Background: Antigen-targeting immunotherapies hinge on the accurate identification of immunogenic epitopes that elicit robust T-cell responses. However, current computational approaches focus primarily on MHC binding affinity, leading to high false-positive rates and limiting the clinical utility of antigen selection methods. Methods: We developed the neoIM (for “neoantigen immunogenicity”) model, a first-in-class, high-precision immunogenicity prediction tool that overcomes these limitations by focusing exclusively on overall CD8 T-cell response rather than MHC binding. neoIM, a random forest classifier, was trained solely on MHC-presented non-self peptides (n = 61,829). Its performance was assessed against currently existing alternatives on several in vitro immunogenicity datasets. In addition, its clinical impact was investigated in two retrospective analyses of clinical trial data by assessing the effect of neoIM-based antigen selection on the positive immunogenicity rate of personal vaccine designs. Finally, the potential of neoIM as a biomarker was investigated by assessing the correlation between neoIM scores and overall survival in a melanoma patient cohort treated with checkpoint inhibitors (CPI). Results: neoIM was found to substantially outperform publicly available tools on in vitro benchmarks based on ELISpot assays, with an increase in predictive power of at least 30%, reducing false positives and improving target selection efficiency. In addition, using neoIM scores during patient-specific antigen prioritization and selection was shown to yield up to 50% more clinically actionable antigens for individual patients in two recent clinical trials. Finally, we showed that neoIM could further refine response prediction to checkpoint inhibition therapy, demonstrating the importance of evaluating neoantigen immunogenicity. Conclusions: These findings establish neoIM as the first computational tool capable of accurately predicting epitope immunogenicity beyond MHC affinity. By enabling more precise target discovery and prioritization, neoIM has the potential to accelerate the development of next-generation antigen-based immunotherapies.
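A hedged sketch of the core modeling step, a random forest classifier evaluated by ROC-AUC, using synthetic peptide features; the feature encoding, labels, and dataset size are invented stand-ins, not the neoIM training set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Placeholder feature matrix (e.g., encoded physicochemical properties of
# MHC-presented peptides) with binary immunogenicity labels; the real model
# was trained on 61,829 non-self peptides.
X = rng.normal(size=(2000, 40))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC-AUC: {auc:.3f}")
```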
22 pages, 6785 KiB  
Article
Spatiality–Frequency Domain Video Forgery Detection System Based on ResNet-LSTM-CBAM and DCT Hybrid Network
by Zihao Liao, Sheng Hong and Yu Chen
Appl. Sci. 2025, 15(16), 9006; https://doi.org/10.3390/app15169006 - 15 Aug 2025
Abstract
As information technology advances, digital content has become widely adopted across diverse fields such as news broadcasting, entertainment, commerce, and forensic investigation. However, the availability of sophisticated multimedia editing tools has significantly increased the risk of video and image forgery, raising serious concerns about content authenticity at both societal and individual levels. To address the growing need for robust and accurate detection methods, this study proposes a novel video forgery detection model that integrates both spatial and frequency-domain features. The model is built on a ResNet-LSTM framework enhanced by a Convolutional Block Attention Module (CBAM) for spatial feature extraction, and further incorporates Discrete Cosine Transform (DCT) to capture frequency domain information. Comprehensive experiments were conducted on several mainstream benchmark datasets, encompassing a wide range of forgery scenarios. The results demonstrate that the proposed model achieves superior performance in distinguishing between authentic and manipulated videos. Additional ablation and comparative studies confirm the contribution of each component in the architecture, offering deeper insight into the model’s capacity. Overall, the findings support the proposed approach as a promising solution for enhancing the reliability of video authenticity analysis under complex conditions.
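A sketch of the frequency branch's key idea: take a 2-D DCT of each frame and fuse the low-frequency coefficients with spatial features. The block size, embedding dimension, and fusion by concatenation are assumptions, not the paper's exact architecture:

```python
import numpy as np
from scipy.fft import dctn

def frequency_features(frame, keep=8):
    """2-D DCT of a grayscale frame; keep the low-frequency block, where
    compression and splicing artifacts tend to concentrate."""
    coeffs = dctn(frame, norm="ortho")
    return coeffs[:keep, :keep].ravel()

def fused_features(frame, spatial_vec):
    """Concatenate spatial (e.g., CNN-derived) and DCT features, mirroring
    the two-branch spatial/frequency fusion at a high level."""
    return np.concatenate([spatial_vec, frequency_features(frame)])

frame = np.random.rand(224, 224)   # stand-in grayscale video frame
spatial = np.random.rand(512)      # stand-in CNN embedding
print(fused_features(frame, spatial).shape)  # (576,)
```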
17 pages, 3520 KiB  
Article
A Hybrid Air Quality Prediction Model Integrating KL-PV-CBGRU: Case Studies of Shijiazhuang and Beijing
by Sijie Chen, Qichao Zhao, Zhao Chen, Yongtao Jin and Chao Zhang
Atmosphere 2025, 16(8), 965; https://doi.org/10.3390/atmos16080965 - 15 Aug 2025
Abstract
Accurate prediction of the Air Quality Index (AQI) is crucial for protecting public health; however, the inherent instability and high volatility of AQI present significant challenges. To address this, the present study introduces a novel hybrid deep learning model, KL-PV-CBGRU, which utilizes Kalman filtering to decompose AQI data into features and residuals, effectively mitigating volatility at the initial stage. For residual components that continue to exhibit substantial fluctuations, a secondary decomposition is conducted using variational mode decomposition (VMD), further optimized by the particle swarm optimization (PSO) algorithm to enhance stability. To overcome the limited predictive capabilities of single models, this hybrid framework integrates bidirectional gated recurrent units (BiGRU) with convolutional neural networks (CNNs) and convolutional attention modules, thereby improving prediction accuracy and feature fusion. Experimental results demonstrate the superior performance of KL-PV-CBGRU, achieving R2 values of 0.993, 0.963, 0.935, and 0.940 and corresponding MAE values of 2.397, 8.668, 11.001, and 14.035 at 1 h, 8 h, 16 h, and 24 h intervals, respectively, in Shijiazhuang—surpassing all benchmark models. Ablation studies further confirm the critical roles of both the secondary decomposition process and the hybrid architecture in enhancing predictive accuracy. Additionally, comparative experiments conducted in Beijing validate the model’s strong transferability and consistent outperformance over competing models, highlighting its robust generalization capability. These findings underscore the potential of the KL-PV-CBGRU model as a powerful and reliable tool for air quality forecasting across varied urban settings.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
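The first-stage decomposition can be illustrated with a scalar random-walk Kalman filter that splits an AQI series into a smooth level plus a residual; the noise parameters q and r and the synthetic series are assumptions for illustration:

```python
import numpy as np

def kalman_smooth(series, q=1e-3, r=1.0):
    """Scalar random-walk Kalman filter: returns the filtered level (the
    low-volatility 'feature') and the residual left for secondary
    decomposition, echoing the first stage of KL-PV-CBGRU."""
    x, p = series[0], 1.0
    level = np.empty_like(series, dtype=float)
    for t, z in enumerate(series):
        p = p + q            # predict: grow state uncertainty
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with observation z
        p = (1 - k) * p
        level[t] = x
    return level, series - level

rng = np.random.default_rng(0)
aqi = 80 + 20 * np.sin(np.linspace(0, 6, 500)) + rng.normal(0, 8, 500)
feature, residual = kalman_smooth(aqi)
print(feature[:3].round(1), residual.std().round(2))
```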
36 pages, 2065 KiB  
Article
Methodological Framework for the Integrated Technical, Economic, and Environmental Evaluation of Solar Photovoltaic Systems in Agroindustrial Environments
by Reinier Jiménez Borges, Yoisdel Castillo Alvarez, Luis Angel Iturralde Carrera, Mariano Garduño Aparicio, Berlan Rodríguez Pérez and Juvenal Rodríguez-Reséndiz
Technologies 2025, 13(8), 360; https://doi.org/10.3390/technologies13080360 - 14 Aug 2025
Abstract
The transition to sustainable energy systems in the agroindustrial sector requires rigorous methodologies that enable a comprehensive and quantitative assessment of the technical and economic viability and environmental impact of photovoltaic integration. This study develops and validates a hybrid multi-criteria methodology structured in three phases: (i) analytical modeling of the load profile and preliminary sizing, (ii) advanced energy simulation using PVsyst for operational optimization and validation against empirical data, and (iii) environmental assessment using life cycle analysis (LCA) under ISO 14040/44 standards. The methodology is applied to a Cuban agroindustrial plant with an annual electricity demand of 290,870 kWh, resulting in the design of a 200 kWp photovoltaic system capable of supplying 291,513 kWh/year, thereby achieving total coverage of the electricity demand. The economic analysis yields an LCOE of 0.064 USD/kWh and an NPV of USD 139,408, while the environmental component allows for a mitigation of 113 t CO2-eq/year. The robustness of the model is validated by comparison with historical records, yielding an MBE of 0.65%, an RMSE of 2.87%, an MAPE of 2.62%, and an R2 of 0.98. This comprehensive approach demonstrates its superiority over previous methodologies by effectively integrating the three pillars of sustainability in an agroindustrial context, thus offering a scientifically sound, replicable, and adaptable tool for decision-making in advanced energy projects. The results position this methodology as a benchmark for future research and applications in emerging production scales.
(This article belongs to the Special Issue Sustainable Water and Environmental Technologies of Global Relevance)
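The reported LCOE and NPV follow from standard discounted-cash-flow formulas. A sketch with illustrative inputs (capex, opex, discount rate, and lifetime are assumptions; only the annual generation of 291,513 kWh comes from the abstract):

```python
import numpy as np

def lcoe(capex, opex_per_year, energy_kwh_per_year, rate, years):
    """Levelized cost of energy: discounted lifetime cost divided by
    discounted lifetime generation."""
    t = np.arange(1, years + 1)
    disc = (1 + rate) ** -t
    costs = capex + np.sum(opex_per_year * disc)
    energy = np.sum(energy_kwh_per_year * disc)
    return costs / energy

def npv(capex, net_cashflow_per_year, rate, years):
    """Net present value of the investment."""
    t = np.arange(1, years + 1)
    return -capex + np.sum(net_cashflow_per_year * (1 + rate) ** -t)

# Illustrative inputs loosely scaled to a 200 kWp system; the actual
# tariffs, costs, and rates in the study are not reproduced here.
print(f"LCOE: {lcoe(180_000, 3_000, 291_513, 0.08, 25):.3f} USD/kWh")
print(f"NPV:  {npv(180_000, 20_000, 0.08, 25):.0f} USD")
```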
22 pages, 2608 KiB  
Article
Fast Buckling Analysis of Stiffened Composite Structures for Preliminary Aerospace Design
by Dimitrios G. Stamatelos and George N. Labeas
Aerospace 2025, 12(8), 726; https://doi.org/10.3390/aerospace12080726 - 14 Aug 2025
Abstract
Predicting buckling in large-scale composite structures is hindered by the need for highly detailed Finite Element (FE) models, which are computationally expensive and impractical for early-stage design iterations. This study introduces a macromodelling buckling framework that reduces those models to plate-level size without sacrificing accuracy. An equivalent bending stiffness matrix is derived from strain–energy equivalence, rigorously retaining orthotropic in-plane terms, bending–extensional coupling, and—crucially—the eccentricity of compressive loads about an unsymmetrically stiffened mid-plane, effects overlooked by conventional Parallel-Axis smearing. These stiffness terms contribute to closed-form analytical solutions for homogeneous orthotropic plates, providing millisecond-level evaluations ideal for gradient-based design optimisation. The method is benchmarked against detailed FE simulations of panels with three to ten stringers under longitudinal and transverse compression, showing less than 5% deviation in critical load prediction. Its utility is demonstrated in the sizing optimisation of the upper cover of a scaled Airbus A330 composite wing-box, where the proposed model explores the design space in minutes on a standard workstation, orders of magnitude faster than full FE analyses. By combining analytical plate theory, enhanced smearing, and rapid optimisation capability, the framework provides an accurate, ultra-fast tool for buckling analysis and the preliminary design of large-scale stiffened composite structures.
(This article belongs to the Section Aeronautics)
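Once equivalent bending stiffnesses are known, the critical load of a simply supported specially orthotropic plate has a classical closed-form solution, minimized over the half-wave number m. A sketch with invented stiffness values (the D-matrix entries and panel dimensions are assumptions, not values from the paper):

```python
import numpy as np

def critical_load_Nx(D11, D12, D22, D66, a, b, m_max=10):
    """Critical uniaxial compressive load (N/m) of a simply supported
    specially orthotropic plate a x b, from the classical closed-form
    solution with one transverse half-wave (n = 1), minimized over the
    longitudinal half-wave number m."""
    loads = []
    for m in range(1, m_max + 1):
        lam = m / a
        N = (np.pi**2 / lam**2) * (D11 * lam**4
             + 2 * (D12 + 2 * D66) * lam**2 / b**2
             + D22 / b**4)
        loads.append(N)
    return min(loads)

# Illustrative smeared bending stiffnesses (N*m); real values would come
# from the strain-energy-equivalent stiffness matrix the paper derives.
print(f"{critical_load_Nx(2.0e4, 5.0e3, 1.2e4, 4.0e3, 1.5, 0.5):.3e} N/m")
```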
24 pages, 430 KiB  
Article
A Proposed Typology for the Validation of Corporate Sustainability
by Joan R. Sanchis, Vanessa Campos and Ana Ejarque
Sustainability 2025, 17(16), 7358; https://doi.org/10.3390/su17167358 - 14 Aug 2025
Abstract
Corporate sustainability is a multi-stakeholder approach with a Triple or Quadruple Bottom Line focused on long-term horizons and the creation of shared or triple value. The objective of this study is to present a theoretical framework for the implementation and measurement of corporate sustainability within companies and to propose a set of measurement scales for assessing levels of sustainability implementation. This study offers a novel conceptual model that enables the systematic classification and benchmarking of sustainability maturity levels in companies. The methodology involves a literature review and the application of the Dyllick–Muff matrix (2016), which identifies three distinct levels of sustainability: Sustainability 1.0, Sustainability 2.0, and Sustainability 3.0. This framework uses three dimensions: objectives (the “what”), value creation (the “why”), and organizational perspective (the “how”). The study highlights the scarcity of research on the practical implementation of sustainability. Furthermore, there is no widely adopted framework for measuring sustainability implementation. The diversity and fragmentation of existing sustainability measurement models make comparative analysis particularly challenging. This study concludes that the Dyllick–Muff matrix is a suitable tool for determining a company’s level of sustainability and for conducting comparative analyses across organizations.
18 pages, 775 KiB  
Review
Machine Learning for the Optimization of the Bioplastics Design
by Neelesh Ashok, Pilar Garcia-Diaz, Marta E. G. Mosquera and Valentina Sessini
Macromol 2025, 5(3), 38; https://doi.org/10.3390/macromol5030038 - 14 Aug 2025
Abstract
Biodegradable polyesters have gained attention due to their sustainability benefits, considering the escalating environmental challenges posed by synthetic polymers. Advances in artificial intelligence (AI), including machine learning (ML) and deep learning (DL), are expected to significantly accelerate research in polymer science. This review article explores “bio” polymer informatics by harnessing insights from the AI techniques used to predict structure–property relationships and to optimize the synthesis of bioplastics. This review also discusses PolyID, a machine learning-based tool that employs message-passing graph neural networks to provide a framework capable of accelerating the discovery of bioplastics. An extensive literature review is conducted on explainable AI (XAI) and generative AI techniques, as well as on benchmarking data repositories in polymer science. The current state of the art in ML methods for ring-opening polymerizations and the synthesizability of biodegradable polyesters is also presented. This review offers in-depth, comprehensive knowledge of current AI-based models for polymerizations, molecular descriptors, structure–property relationships, predictive modeling, and open-source benchmarked datasets for sustainable polymers. This study serves as a reference and provides critical insights into the capabilities of AI for the accelerated design and discovery of green polymers aimed at achieving a sustainable future.
23 pages, 1227 KiB  
Review
Comparative Assessment of LEED, BREEAM, and WELL: Advancing Sustainable Built Environments
by Elias Tsirovasilis, Martha Katafygiotou and Chrystala Psathiti
Energies 2025, 18(16), 4322; https://doi.org/10.3390/en18164322 - 14 Aug 2025
Abstract
This study compares the LEED, BREEAM, and WELL certification systems using the Triple Bottom Line (TBL) framework to assess their performance across environmental, social, and economic dimensions and their alignment with sustainable development goals. A structured secondary analysis was conducted on over 50 peer-reviewed articles, case studies, and official certification manuals. Inclusion criteria required documented design targets and post-occupancy outcomes for certified buildings (2014–2024). A two-phase analytical model was applied: first, evaluating each system’s structure and priorities; then benchmarking them using the TBL framework to assess how holistically each addresses sustainability. Results show that LEED leads in energy optimization, BREEAM in lifecycle integration, and WELL in occupant health and indoor environmental quality. However, all systems exhibit post-occupancy performance gaps: LEED and BREEAM buildings underperform their design targets by 15–30% in energy use, while WELL-certified projects may exceed predicted energy use by more than 30% due to stringent indoor comfort demands. These findings highlight the need to integrate real-time post-occupancy evaluation into certification protocols. To improve overall effectiveness, the study proposes enhancements such as adaptive performance tracking, occupant feedback loops, and dynamic benchmarking aligned with actual building use. By identifying both the comparative strengths and systemic limitations of the three frameworks, this research contributes to the refinement of green building assessment tools. Practical implications include (1) integrating post-occupancy evaluation into certification renewal cycles, (2) adopting hybrid certification strategies to improve sustainability coverage, and (3) designing benchmarking tools that reflect real-world operational data.
(This article belongs to the Special Issue Advanced Technologies for Energy-Efficient Buildings)
30 pages, 5536 KiB  
Article
Explainable Artificial Intelligence for the Rapid Identification and Characterization of Ocean Microplastics
by Dimitris Kalatzis, Angeliki I. Katsafadou, Eleni I. Katsarou, Dimitrios C. Chatzopoulos and Yiannis Kiouvrekis
Microplastics 2025, 4(3), 51; https://doi.org/10.3390/microplastics4030051 - 14 Aug 2025
Abstract
Accurate identification of microplastic polymers in marine environments is essential for tracing pollution sources, understanding ecological impacts, and guiding mitigation strategies. This study presents a comprehensive, explainable-AI framework that uses Raman spectroscopy to classify pristine and weathered microplastics versus biological materials. Using a curated spectral library of 78 polymer specimens—including pristine, weathered, and biological materials—we benchmark seven supervised machine learning models (Decision Trees, Random Forest, k-Nearest Neighbours, Neural Networks, LightGBM, XGBoost, and Support Vector Machines), with and without Principal Component Analysis, for binary classification. Although k-Nearest Neighbours and Support Vector Machines achieved the highest single-metric accuracy (82.5%), k-NN also recorded the highest recall both with and without PCA, thereby offering the most balanced overall performance. To enhance interpretability, we employed SHapley Additive exPlanations (SHAP), which revealed chemically meaningful spectral regions (notably near 700 cm−1 and 1080 cm−1) as critical to model predictions. Notably, models trained without Principal Component Analysis provided clearer feature attributions, suggesting improved interpretability in raw spectral space. This pipeline surpasses traditional spectral matching techniques and also delivers transparent insights into classification logic. Our findings can support scalable, real-time deployment of AI-based tools for oceanic microplastic monitoring and environmental policy development.
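The with/without-PCA benchmark described above is straightforward to reproduce in scikit-learn. A hedged sketch on random stand-in spectra (bin count, component count, and hyperparameters are assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in Raman spectra: 78 specimens x 500 wavenumber bins, with labels
# plastic (1) vs. biological (0); real spectra would be loaded from the
# curated library.
X = rng.normal(size=(78, 500))
y = rng.integers(0, 2, size=78)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in models.items():
    raw = make_pipeline(StandardScaler(), clf)
    pca = make_pipeline(StandardScaler(), PCA(n_components=20), clf)
    print(name,
          "raw:", cross_val_score(raw, X, y, cv=5).mean().round(3),
          "PCA:", cross_val_score(pca, X, y, cv=5).mean().round(3))
```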
26 pages, 4766 KiB  
Article
RetinoDeep: Leveraging Deep Learning Models for Advanced Retinopathy Diagnostics
by Sachin Kansal, Bajrangi Kumar Mishra, Saniya Sethi, Kanika Vinayak, Priya Kansal and Jyotindra Narayan
Sensors 2025, 25(16), 5019; https://doi.org/10.3390/s25165019 - 13 Aug 2025
Abstract
Diabetic retinopathy (DR), a leading cause of vision loss worldwide, poses a critical challenge to healthcare systems due to its silent progression and the reliance on labor-intensive, subjective manual screening by ophthalmologists, especially amid a global shortage of eye care specialists. Addressing the pressing need for scalable, objective, and interpretable diagnostic tools, this work introduces RetinoDeep—deep learning frameworks integrating hybrid architectures and explainable AI to enhance the automated detection and classification of DR across seven severity levels. Specifically, we propose four novel models: an EfficientNetB0 combined with an SPCL transformer for robust global feature extraction; a ResNet50 ensembled with Bi-LSTM to synergize spatial and sequential learning; a Bi-LSTM optimized through genetic algorithms for hyperparameter tuning; and a Bi-LSTM with SHAP explainability to enhance model transparency and clinical trustworthiness. The models were trained and evaluated on a curated dataset of 757 retinal fundus images, augmented to improve generalization, and benchmarked against state-of-the-art baselines (including EfficientNetB0, Hybrid Bi-LSTM with EfficientNetB0, Hybrid Bi-GRU with EfficientNetB0, ResNet with filter enhancements, Bi-LSTM optimized using Random Search Algorithm (RSA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and a standard Convolutional Neural Network (CNN)), using metrics such as accuracy, F1-score, and precision. Notably, the Bi-LSTM with Particle Swarm Optimization (PSO) outperformed other configurations, achieving superior stability and generalization, while SHAP visualizations confirmed alignment between learned features and key retinal biomarkers, reinforcing the system’s interpretability. By combining cutting-edge neural architectures, advanced optimization, and explainable AI, this work sets a new standard for DR screening systems, promising not only improved diagnostic performance but also potential integration into real-world clinical workflows.
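A minimal sketch of PSO-based hyperparameter search of the kind the best configuration relied on; the toy quadratic stands in for the real objective (training and validating a Bi-LSTM), and the swarm constants are conventional defaults, not the paper's settings:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained
    hyperparameters (e.g., log learning rate, hidden units)."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()]
    return gbest, pbest_f.min()

# Stand-in validation-loss surface over (log10 learning rate, hidden units);
# the real objective would train and score the Bi-LSTM.
loss = lambda p: (p[0] + 3) ** 2 + ((p[1] - 128) / 64) ** 2
print(pso(loss, bounds=[(-5, -1), (16, 256)]))
```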
20 pages, 2448 KiB  
Article
CCESC: A Crisscross-Enhanced Escape Algorithm for Global and Reservoir Production Optimization
by Youdao Zhao and Xiangdong Li
Biomimetics 2025, 10(8), 529; https://doi.org/10.3390/biomimetics10080529 - 12 Aug 2025
Abstract
Global optimization problems, ubiquitous in scientific research and engineering applications, necessitate sophisticated algorithms adept at navigating intricate, high-dimensional search landscapes. The Escape (ESC) algorithm, inspired by the complex dynamics of crowd evacuation behavior—where individuals exhibit calm, herding, or panic responses—offers a compelling nature-inspired paradigm for addressing these challenges. While ESC demonstrates a strong intrinsic balance between exploration and exploitation, opportunities exist to enhance its inter-agent communication and search trajectory diversification. This paper introduces an advanced bio-inspired algorithm, termed the Crisscross Escape Algorithm (CCESC), which strategically incorporates a Crisscross (CC) information exchange mechanism. This CC strategy, by promoting multi-directional interaction and information sharing among individuals irrespective of their behavioral group (calm, herding, panic), fosters a richer exploration of the solution space, helps to circumvent local optima, and accelerates convergence towards superior solutions. CCESC's performance is extensively validated on the demanding CEC2017 benchmark suite, alongside several standard engineering design problems, and compared against a comprehensive set of prominent metaheuristic algorithms. Experimental results consistently reveal CCESC's superior or highly competitive performance across a wide array of benchmark functions. Furthermore, CCESC is effectively applied to a complex reservoir production optimization problem, demonstrating its capacity to achieve significantly improved Net Present Value (NPV) over other established methods. This successful application underscores CCESC's robustness and efficacy as a powerful optimization tool for tackling multifaceted real-world problems, particularly in reservoir production optimization within complex sedimentary environments.
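The crisscross (CC) mechanism pairs a horizontal crossover between randomly matched individuals with a vertical crossover between dimensions. A sketch of one generic formulation (the exact CCESC update equations may differ):

```python
import numpy as np

def horizontal_crossover(pop, rng):
    """Arithmetic crossover between randomly paired individuals, promoting
    information exchange across the population regardless of group."""
    idx = rng.permutation(len(pop))
    out = pop.copy()
    for i, j in zip(idx[0::2], idx[1::2]):
        r = rng.random(pop.shape[1])
        c = rng.uniform(-1, 1, pop.shape[1])
        out[i] = r * pop[i] + (1 - r) * pop[j] + c * (pop[i] - pop[j])
        out[j] = r * pop[j] + (1 - r) * pop[i] + c * (pop[j] - pop[i])
    return out

def vertical_crossover(pop, rng):
    """Crossover between two randomly chosen dimensions of each individual,
    helping stagnant dimensions escape local optima."""
    d1, d2 = rng.choice(pop.shape[1], size=2, replace=False)
    r = rng.random(len(pop))
    out = pop.copy()
    out[:, d1] = r * pop[:, d1] + (1 - r) * pop[:, d2]
    return out

rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(30, 10))  # candidate solutions
pop = vertical_crossover(horizontal_crossover(pop, rng), rng)
print(pop.shape)
```

In practice these operators would be interleaved with the ESC update and a greedy selection step that keeps an offspring only if it improves the objective.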
14 pages, 591 KiB  
Review
Artificial Intelligence and Extended Reality in the Training of Vascular Surgeons: A Narrative Review
by Joanna Halman, Sonia Tencer and Mariusz Siemiński
Med. Sci. 2025, 13(3), 126; https://doi.org/10.3390/medsci13030126 - 12 Aug 2025
Abstract
Background: The rapid shift from open to endovascular techniques in vascular surgery has significantly decreased trainee exposure to high-stakes open procedures. Simulation-based training, especially that incorporating virtual reality (VR) and artificial intelligence (AI), provides a promising way to bridge this skill gap. Objective: This narrative review aims to assess the current evidence on the integration of extended reality (XR) and AI into vascular surgeon training, focusing on technical skill development, performance evaluation, and educational results. Methods: We reviewed the literature on AI- and XR-enhanced surgical education across various specialties, focusing on validated cognitive learning theories, simulation methods, and procedure-specific training. This review covered studies on general, neurosurgical, orthopedic, and vascular procedures, along with recent systematic reviews and consensus statements. Results: VR-based training speeds up skill learning, reduces procedural mistakes, and enhances both technical and non-technical skills. AI-powered platforms provide real-time feedback, performance benchmarking, and objective skill evaluations. In vascular surgery, high-fidelity simulations have proven effective for training in carotid artery stenting, EVAR, rAAA management, and peripheral interventions. Patient-specific rehearsal, haptic feedback, and mixed-reality tools further improve realism and readiness. However, challenges like cost, data security, algorithmic bias, and the absence of long-term outcome data remain. Conclusions: XR and AI technologies are transforming vascular surgical education by providing scalable, evidence-based alternatives to traditional training methods. Future integration into curricula should focus on ethical use, thorough validation, and alignment with cognitive learning frameworks. A structured approach that combines VR, simulation, cadaver labs, and supervised practice may be the safest and most effective way to train the next generation of vascular surgeons.
(This article belongs to the Section Cardiovascular Disease)