Search Results (457)

Search Parameters:
Keywords = Open Repositories

12 pages, 1559 KB  
Article
TCEPVDB: Artificial Intelligence-Based Proteome-Wide Screening of Antigens and Linear T-Cell Epitopes in the Poxviruses and the Development of a Repository
by Mansi Dutt, Anuj Kumar, Ali Toloue Ostadgavahi, David J. Kelvin and Gustavo Sganzerla Martinez
Proteomes 2025, 13(4), 58; https://doi.org/10.3390/proteomes13040058 - 6 Nov 2025
Abstract
Background: Poxviruses constitute a family of large dsDNA viruses that can infect a plethora of species including humans. Historically, poxviruses have caused a health burden in multiple outbreaks. The large genome of poxviruses favors reverse vaccinology approaches that can determine potential antigens and epitopes. Here, we propose the modeling of a user-friendly database containing the predicted antigens and epitopes of a large cohort of poxvirus proteomes using the existing PoxiPred method for reverse vaccinology of poxviruses. Methods: In the present study, we obtained the whole proteomes of 37 distinct poxviruses. We utilized each proteome to predict both antigenic proteins and T-cell epitopes of poxviruses with the aid of an Artificial Intelligence method, namely the PoxiPred method. Results: In total, we predicted 3966 proteins as potential antigen targets. Of note, we considered that these proteins may exist as sets of proteoforms. Subsets of these proteins constituted a comprehensive repository of 54,291 linear T-cell epitopes. We combined the outcome of the predictions into a web tool that delivers a database of antigens and epitopes of poxviruses, giving end-users access to AI-based screened antigens and T-cell epitopes in a user-friendly manner. These antigens and epitopes can be utilized to design experiments for the development of effective vaccines against a plethora of poxviruses. Conclusions: The TCEPVDB repository, already deployed to the web under an open-source coding philosophy, is free to use, requires no login, and does not store any information about its users. Full article

41 pages, 3250 KB  
Article
OpenAM-SimCCX: An Open-Source Framework for Thermo-Mechanical Analysis of Additive Manufacturing with CalculiX
by Jesus Romero-Hdz, Baidya Nath Saha, Jobish Vallikavungal and Patricia Zambrano-Robledo
Materials 2025, 18(21), 4990; https://doi.org/10.3390/ma18214990 - 31 Oct 2025
Viewed by 292
Abstract
Additive Manufacturing (AM) has emerged as a transformative technology for rapid prototyping and fabrication of geometrically complex structures. However, the inherent thermal cycling and rapid solidification in processes such as Selective Laser Sintering (SLS) frequently induce deformation and residual stresses, leading to dimensional deviations and potential premature failure. This paper presents OpenAM-SimCCX, an open-source workflow for finite element-based thermo-mechanical simulation of AM processes using CalculiX 2.21. The framework employs a time-dependent thermo-mechanical model with layer-by-layer element activation to capture key aspects of SLS, including laser–material interaction and scanning strategy effects. Systematic comparisons of different scanning strategies demonstrate clear correlations between path planning, residual stress distributions, and distortion, while computational time analyses confirm the framework’s efficiency. By providing comprehensive documentation, implementation guides, and open repositories, OpenAM-SimCCX offers an accessible and economically viable alternative to commercial software, particularly for academic institutions and small- to medium-sized enterprises. This framework advances open-source simulation tools for AM and promotes broader adoption in both research and industry. Full article

8 pages, 1274 KB  
Brief Report
Identification and Full-Genome Characterisation of Genomoviruses in Cassava Leaves Infected with Cassava Mosaic Disease
by Olabode Onile-ere, Oluwagboadurami John, Oreoluwa Sonowo, Pakyendou Estel Name, Ezechiel Bionimian Tibiri, Fidèle Tiendrébéogo, Justin Pita, Solomon Oranusi and Angela O. Eni
Viruses 2025, 17(11), 1418; https://doi.org/10.3390/v17111418 - 25 Oct 2025
Viewed by 456
Abstract
This study identified and characterised three Genomoviruses during a circular DNA-enriched sequencing project aimed at assessing the evolution of Cassava mosaic begomoviruses in Nigeria. Using a combination of rolling circle amplification, Oxford Nanopore Sequencing and targeted amplicon sequencing, three full-length Genomovirus genomes were recovered. The recovered genomes ranged from 2090 to 2188 nucleotides in length, contained two open reading frames (Rep and CP) in an ambisense orientation and shared between 84.81 and 95.37% nucleotide similarity with isolates in the NCBI GenBank repository. Motif analyses confirmed the presence of conserved rolling circle replication (RCR) and helicase motifs in all three isolates; however, one isolate lacked the RCR II motif. Phylogenetic inference using Rep and CP nucleotide sequences suggested that the isolates belonged to a divergent lineage within the Genomovirus family. These findings expand current knowledge of Genomovirus diversity and highlight the potential of cassava as a source for identifying novel CRESS-DNA viruses. Full article
(This article belongs to the Special Issue Economically Important Viruses in African Crops)

28 pages, 1247 KB  
Systematic Review
Systematic Review of Environmental Education in Morocco: Policies, Practices, and Post-Pandemic Challenges in the Context of the Sustainable Development Goals
by Abderrahmane Riouch and Saad Benamar
Sustainability 2025, 17(21), 9494; https://doi.org/10.3390/su17219494 - 25 Oct 2025
Viewed by 580
Abstract
Environmental education (EE) is central to achieving the Sustainable Development Goals (SDGs), particularly where inequalities constrain access to quality learning. Following PRISMA 2020, this review synthesizes 35 peer-reviewed studies and policy documents to examine Morocco’s EE policies and practices against global frameworks and post-pandemic challenges. A systematic search was conducted in Scopus, Web of Science, ERIC, ProQuest/EBSCO, Google Scholar, and national repositories (January 2000–December 2024; executed 15–17 March 2024). Findings show strong discursive alignment with SDG 4.7 and UNESCO’s ESD 2030 Roadmap but persistent implementation gaps: rural and peri-urban schools face resource shortages; teacher preparation for participatory, interdisciplinary approaches remains limited; and environmental clubs often rely on short-term projects without stable institutional support. The COVID-19 period exacerbated these pressures yet opened opportunities to integrate health–environment linkages, digital tools, and adaptive pedagogy. Equity reporting was limited (31% gender; 37% residence; 9% socio-economic status). Arabic-only records were identified (n = 42) and title/abstract-screened (n = 17) but excluded due to translation constraints (language bias). To advance transformative EE, we recommend prioritizing participatory, place-based teacher education; institutionalizing school clubs with light monitoring and baseline grants; targeting support to reduce territorial inequities; and developing an SDG-aligned national dashboard. Expanding longitudinal, quasi-experimental, and participatory designs is critical to strengthen causal claims and inform policy. Full article
(This article belongs to the Section Environmental Sustainability and Applications)

32 pages, 2758 KB  
Article
A Hybrid Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM)–Attention Model Architecture for Precise Medical Image Analysis and Disease Diagnosis
by Md. Tanvir Hayat, Yazan M. Allawi, Wasan Alamro, Salman Md Sultan, Ahmad Abadleh, Hunseok Kang and Aymen I. Zreikat
Diagnostics 2025, 15(21), 2673; https://doi.org/10.3390/diagnostics15212673 - 23 Oct 2025
Viewed by 718
Abstract
Background: Deep learning (DL)-based medical image classification is becoming increasingly reliable, enabling physicians to make faster and more accurate decisions in diagnosis and treatment. A plethora of algorithms have been developed to classify and analyze various types of medical images. Among them, Convolutional Neural Networks (CNNs) have proven highly effective, particularly in medical image analysis and disease detection. Methods: To further enhance these capabilities, this research introduces MediVision, a hybrid DL-based model that integrates a vision backbone based on CNNs for feature extraction, capturing detailed patterns and structures essential for precise classification. These features are then processed through Long Short-Term Memory (LSTM), which identifies sequential dependencies to better recognize disease progression. An attention mechanism is then incorporated that selectively focuses on salient features detected by the LSTM, improving the model’s ability to highlight critical abnormalities. Additionally, MediVision utilizes a skip connection, merging attention outputs with LSTM outputs along with Grad-CAM heatmaps to visualize the most important regions of the analyzed medical image and further enhance feature representation and classification accuracy. Results: Tested on ten diverse medical image datasets (including Alzheimer’s disease, breast ultrasound, blood cell, chest X-ray, chest CT scans, diabetic retinopathy, kidney diseases, bone fracture multi-region, retinal OCT, and brain tumor), MediVision consistently achieved classification accuracies above 95%, with a peak of 98%. Conclusions: The proposed MediVision model offers a robust and effective framework for medical image classification, improving interpretability, reliability, and automated disease diagnosis. To support research reproducibility, the code and datasets used in this study have been made publicly available through an open-access repository. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

35 pages, 1285 KB  
Article
Uncensored AI in the Wild: Tracking Publicly Available and Locally Deployable LLMs
by Bahrad A. Sokhansanj
Future Internet 2025, 17(10), 477; https://doi.org/10.3390/fi17100477 - 18 Oct 2025
Viewed by 1232
Abstract
Open-weight generative large language models (LLMs) can be freely downloaded and modified. Yet, little empirical evidence exists on how these models are systematically altered and redistributed. This study provides a large-scale empirical analysis of safety-modified open-weight LLMs, drawing on 8608 model repositories and evaluating 20 representative modified models on unsafe prompts designed to elicit, for example, election disinformation, criminal instruction, and regulatory evasion. This study demonstrates that modified models exhibit substantially higher compliance: while unmodified models complied with an average of only 19.2% of unsafe requests, modified variants complied at an average rate of 80.0%. Modification effectiveness was independent of model size, with smaller, 14-billion-parameter variants sometimes matching or exceeding the compliance levels of 70B parameter versions. The ecosystem is highly concentrated yet structurally decentralized; for example, the top 5% of providers account for over 60% of downloads and the top 20 providers for nearly 86%. Moreover, more than half of the identified models use GGUF packaging, optimized for consumer hardware, and 4-bit quantization methods proliferate widely, though full-precision and lossless 16-bit models remain the most downloaded. These findings demonstrate how locally deployable, modified LLMs represent a paradigm shift for Internet safety governance, calling for new regulatory approaches suited to decentralized AI. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))

19 pages, 1591 KB  
Systematic Review
A Meta-Analysis of Artificial Intelligence in the Built Environment: High-Efficacy Silos and Fragmented Ecosystems
by Omar Alrasbi and Samuel T. Ariaratnam
Smart Cities 2025, 8(5), 174; https://doi.org/10.3390/smartcities8050174 - 15 Oct 2025
Viewed by 324
Abstract
Cities face mounting pressures to deliver reliable, low-carbon services amid rapid urbanization and budget constraints. Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL), and the Internet of Things (IoT) are widely promoted to automate operations and strengthen decision support across the built environment; however, it remains unclear whether these interventions are both effective and systemically integrated across domains. We conducted a Preferred Reporting Items for Systematic Reviews (PRISMA)-aligned systematic review and meta-analysis (January 2015–July 2025) of empirical AI/ML/DL/IoT interventions in urban infrastructure. Searches across five open-access indices, the Multidisciplinary Digital Publishing Institute (MDPI), the Directory of Open Access Journals (DOAJ), Connecting Repositories (CORE), the Bielefeld Academic Search Engine (BASE), and the Open Access Infrastructure for Research in Europe (OpenAIRE), returned 7432 records; after screening, 71 studies met the inclusion criteria for quantitative synthesis. A random-effects model shows a large pooled effect (Hedges’ g = 0.92; 95% CI: 0.78–1.06; p < 0.001) for within-domain performance/sustainability outcomes. Yet 91.5% of implementations operate at integration Levels 0–1 (isolated or minimal data sharing), and only 1.4% achieve real-time multi-domain integration (Level 3). Publication bias is likely (Egger’s test p = 0.03); a conservative bias-adjusted estimate suggests a still-positive effect of g ≈ 0.68–0.70. Findings indicate a dual reality: high efficacy in silos but pervasive fragmentation that prevents cross-domain synergies. We outline actions: mandating open standards and APIs, establishing city-level data governance, funding Level-2/3 integration pilots, and adopting cross-domain evaluation metrics to translate local gains into system-wide value. Overall certainty of evidence is rated Moderate based on Grading of Recommendations Assessment, Development, and Evaluation (GRADE) due to heterogeneity and small-study effects, offset by the magnitude and consistency of benefits. Full article
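The pooled estimate above (Hedges’ g = 0.92; 95% CI: 0.78–1.06) comes from a random-effects model. As a minimal sketch of how such pooling works, here is a DerSimonian–Laird implementation; this is an illustration only, not the authors' code, and the study effects and variances in the usage line are made up:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    # Cochran's Q heterogeneity statistic around the fixed-effect mean
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    # Re-weight with tau^2 added to each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

pooled, (lo, hi) = random_effects_pool([0.8, 1.1, 0.95], [0.02, 0.05, 0.03])
```

When between-study heterogeneity is negligible (Q below its degrees of freedom), tau² collapses to zero and the estimate reduces to the fixed-effect mean.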

25 pages, 1839 KB  
Article
Modeling the Emergence of Insight via Quantum Interference on Semantic Graphs
by Arianna Pavone and Simone Faro
Mathematics 2025, 13(19), 3171; https://doi.org/10.3390/math13193171 - 3 Oct 2025
Viewed by 249
Abstract
Creative insight is a core phenomenon of human cognition, often characterized by the sudden emergence of novel and contextually appropriate ideas. Classical models based on symbolic search or associative networks struggle to capture the non-linear, context-sensitive, and interference-driven aspects of insight. In this work, we propose a computational model of insight generation grounded in continuous-time quantum walks over weighted semantic graphs, where nodes represent conceptual units and edges encode associative relationships. By exploiting the principles of quantum superposition and interference, the model enables the probabilistic amplification of semantically distant but contextually relevant concepts, providing a plausible account of non-local transitions in thought. The model is implemented using standard Python 3.10 libraries and is available both as an interactive fully reproducible Google Colab notebook and a public repository with code and derived datasets. Comparative experiments on ConceptNet-derived subgraphs, including the Candle Problem, 20 Remote Associates Test triads, and Alternative Uses, show that, relative to classical diffusion, quantum walks concentrate more probability on correct targets (higher AUC and peaks reached earlier) and, in open-ended settings, explore more broadly and deeply (higher entropy and coverage, larger expected radius, and faster access to distant regions). These findings are robust under normalized generators and a common time normalization, align with our formal conditions for transient interference-driven amplification, and support quantum-like dynamics as a principled process model for key features of insight. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
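A continuous-time quantum walk of the kind the abstract describes evolves a state under U(t) = exp(-iHt), with H taken here as the weighted adjacency matrix. The following numpy-only sketch of node-occupation probabilities is an illustration of the general technique, not the paper's released code:

```python
import numpy as np

def ctqw_probabilities(adj, start, t):
    """Continuous-time quantum walk on a weighted graph.

    adj: symmetric adjacency matrix (used as the Hamiltonian H).
    start: index of the source node; t: evolution time.
    Returns |psi(t)|^2, the probability of finding the walker at each node.
    """
    H = np.asarray(adj, dtype=float)
    vals, vecs = np.linalg.eigh(H)      # symmetric H -> real spectrum
    psi0 = np.zeros(len(H))
    psi0[start] = 1.0
    # U(t) = exp(-iHt) applied in the eigenbasis of H
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ psi0))
    return np.abs(psi_t) ** 2
```

Because the evolution is unitary, the probabilities always sum to one; interference between eigenmodes is what can concentrate amplitude on distant nodes faster than classical diffusion.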

19 pages, 29304 KB  
Article
Generating Synthetic Facial Expression Images Using EmoStyle
by Clément Gérard Daniel Darne, Changqin Quan and Zhiwei Luo
Appl. Sci. 2025, 15(19), 10636; https://doi.org/10.3390/app151910636 - 1 Oct 2025
Viewed by 608
Abstract
Synthetic data has emerged as a significant alternative to more costly and time-consuming data collection methods. This assertion is particularly salient in the context of training facial expression recognition (FER) and generation models. The EmoStyle model represents a state-of-the-art method for editing images of facial expressions in the latent space of StyleGAN2, using a continuous valence–arousal (VA) representation of emotions. While the model has demonstrated promising results in terms of high-quality image generation and strong identity preservation, its accuracy in reproducing facial expressions across the VA space remains to be systematically examined. To address this gap, the present study proposes a systematic evaluation of EmoStyle’s ability to generate facial expressions across the full VA space, including four levels of emotional intensity. While prior work on expression manipulation has mainly focused its evaluations on perceptual quality, diversity, identity preservation, or classification accuracy, to the best of our knowledge, no study to date has systematically evaluated the accuracy of generated expressions across the VA space. The evaluation reveals a consistent weakness in the VA direction range of 242–329°, where EmoStyle is unable to produce distinct expressions. Building on these findings, we outline recommendations for enhancing the generation pipeline and release an open-source EmoStyle-based toolkit that integrates fixes to the original EmoStyle repository, an API wrapper, and our experiment scripts. Collectively, these contributions furnish both novel insights into the model’s capacities and practical resources for further research. Full article

10 pages, 2446 KB  
Data Descriptor
A Multi-Class Labeled Ionospheric Dataset for Machine Learning Anomaly Detection
by Aleksandra Kolarski, Filip Arnaut, Sreten Jevremović, Zoran R. Mijić and Vladimir A. Srećković
Data 2025, 10(10), 157; https://doi.org/10.3390/data10100157 - 30 Sep 2025
Viewed by 517
Abstract
The binary anomaly detection (classification) of ionospheric data related to Very Low Frequency (VLF) signal amplitude in prior research demonstrated the potential for development and further advancement. Further data quality improvement is integral for advancing the development of machine learning (ML)-based ionospheric data (VLF signal amplitude) anomaly detection. This paper presents the transition from binary to multi-class classification of ionospheric signal amplitude datasets. The dataset comprises 19 transmitter–receiver pairs and 383,041 manually labeled amplitude instances. The target variable was reclassified from a binary classification (normal and anomalous data points) to a six-class classification that distinguishes between daytime undisturbed signals, nighttime signals, solar flare effects, instrument errors, instrumental noise, and outlier data points. Furthermore, in addition to the dataset, we developed a freely accessible web-based tool designed to facilitate the conversion of MATLAB data files to TRAINSET-compatible formats, thereby establishing a completely free and open data pipeline from the WALDO world data repository to data labeling software. This novel dataset facilitates further research in ionospheric signal amplitude anomaly detection, concentrating on effective and efficient anomaly detection in ionospheric signal amplitude data. The potential outcomes of employing anomaly detection techniques on ionospheric signal amplitude data may be extended to other space weather parameters in the future, such as ELF/LF datasets and other relevant datasets. Full article
(This article belongs to the Section Spatial Data Science and Digital Earth)

14 pages, 2937 KB  
Article
Organization and Community Usage of a Neuron Type Circuitry Knowledge Base of the Hippocampal Formation
by Kasturi Nadella, Diek W. Wheeler and Giorgio A. Ascoli
Biomedicines 2025, 13(10), 2363; https://doi.org/10.3390/biomedicines13102363 - 26 Sep 2025
Viewed by 324
Abstract
Background/Objectives: Understanding the diverse neuron types within the hippocampal formation is essential for advancing our knowledge of its fundamental roles in learning and memory. Hippocampome.org serves as a comprehensive, evidence-based knowledge repository that integrates morphological, electrophysiological, and molecular features of neurons across the rodent dentate gyrus, CA3, CA2, CA1, subiculum, and entorhinal cortex. In addition to these core properties, this open access resource includes detailed information on synaptic connectivity, signal propagation, and plasticity, facilitating sophisticated modeling of hippocampal circuits. A distinguishing feature of Hippocampome.org is its emphasis on quantitative, literature-backed data that can help constrain and validate spiking neural network simulations via an interactive web interface. Methods: To assess and enhance its utility to the neuroscience community, we integrated Google Analytics (GA) into the platform to monitor user behavior, identify high-impact content, and evaluate geographic reach. Results: GA data provided valuable page view metrics, revealing usage trends, frequently accessed neuron properties, and the progressive adoption of new functionalities. Conclusions: These insights directly inform iterative development, particularly in the design of a robust Application Programming Interface (API) to support programmatic access. Ultimately, the integration of GA empowers data-driven optimization of this public resource to better serve the global neuroscience community. Full article

28 pages, 616 KB  
Article
UAVThreatBench: A UAV Cybersecurity Risk Assessment Dataset and Empirical Benchmarking of LLMs for Threat Identification
by Padma Iyenghar
Drones 2025, 9(9), 657; https://doi.org/10.3390/drones9090657 - 18 Sep 2025
Viewed by 906
Abstract
UAVThreatBench introduces the first structured benchmark for evaluating large language models in cybersecurity threat identification for unmanned aerial vehicles operating within industrial indoor settings, aligned with the European Radio Equipment Directive. The benchmark consists of 924 expert-curated industrial scenarios, each annotated with five cybersecurity threats, yielding a total of 4620 threats mapped to directive articles on network and device integrity, personal data and privacy protection, and prevention of fraud and economic harm. Seven state-of-the-art models from the OpenAI GPT family and the LLaMA family were systematically assessed on a representative subset of 100 scenarios from the UAVThreatBench dataset. The evaluation applied a fuzzy matching threshold of 70 to compare model-generated threats against expert-defined ground truth. The strongest model identified nearly nine out of ten threats correctly, with close to half of the scenarios achieving perfect alignment, while other models achieved lower but still substantial alignment. Semantic error analysis revealed systematic weaknesses, particularly in identifying availability-related threats, backend-layer vulnerabilities, and clause-level regulatory mappings. UAVThreatBench therefore establishes a reproducible foundation for regulatory-compliant cybersecurity threat identification in safety-critical unmanned aerial vehicle environments. The complete benchmark dataset and evaluation results are openly released under the MIT license through a dedicated online repository. Full article
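The "fuzzy matching threshold of 70" is a similarity cutoff on a 0–100 scale. The study's actual matcher is not specified here, so the stdlib `difflib.SequenceMatcher` stands in as an assumed approximation of how such a comparison could work:

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    """Similarity of two strings on a 0-100 scale (case-insensitive)."""
    return 100.0 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

def matches(predicted: str, ground_truth: str, threshold: float = 70.0) -> bool:
    """A model-generated threat counts as correct if it scores at or above the cutoff."""
    return fuzzy_score(predicted, ground_truth) >= threshold
```

Near-paraphrases such as "jamming of uplink" versus "jamming of the uplink" clear a 70 cutoff, while unrelated threat descriptions fall well below it.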

37 pages, 3172 KB  
Review
Life Cycle Assessment (LCA) Challenges in Evaluating Emerging Battery Technologies: A Review
by Renata Costa
Materials 2025, 18(18), 4321; https://doi.org/10.3390/ma18184321 - 15 Sep 2025
Viewed by 1738
Abstract
As the demand for more efficient energy storage solutions grows, emerging battery chemistries are being developed to complement or potentially replace conventional lithium-ion technologies. This review explores the circular economy potential of sodium (Na), magnesium (Mg), zinc (Zn), and aluminum (Al) battery systems as alternative post-lithium configurations. Through a comparative literature analysis, it identifies key barriers related to material complexity, recovery efficiency, and regulatory gaps, while highlighting opportunities for design improvements and policy alignment to enhance sustainability across battery life cycles. However, end-of-life (EoL) material recovery remains constrained by complex chemistries, low technology readiness levels, and fragmented regulatory frameworks. Embedding materials/battery design principles, transparent life cycle assessment (LCA) data (e.g., publishing LCAs in open repositories using a standard functional unit), and harmonized policy early could close material loops and transform the rising post-lithium battery stream into a circular-economy resource rather than a waste burden. Full article
(This article belongs to the Special Issue Emerging Trends and Innovations in Engineered Nanomaterials)

17 pages, 1081 KB  
Article
Detection of Fault Events in Software Tools Integrated with Human–Computer Interface Using Machine Learning
by Jasem Alostad, Fayez Eid Alazmi, Ali Alfayly and Abdullah Jasim Alshehab
Appl. Sci. 2025, 15(18), 10030; https://doi.org/10.3390/app151810030 - 14 Sep 2025
Viewed by 771
Abstract
Software defect prediction (SDP) has emerged as a crucial task in ensuring software quality and reliability. The early and accurate identification of defect-prone modules significantly reduces maintenance costs and improves system performance. In this study, we introduce a novel hybrid model that combines Restricted Boltzmann Machines (RBM) for nonlinear feature extraction with Logistic Regression (LR) for classification. The model is validated across 21 benchmark datasets from the PROMISE and OpenML repositories. We conducted extensive experiments, including analyses of computational complexity and runtime comparisons, to assess performance in terms of accuracy, precision, recall, F1-score, and AUC. The results indicate that the RBM-LR model consistently outperforms baseline LR, as well as other leading classifiers such as Random Forest, XGBoost, and SVM. Statistical significance was confirmed using paired t-tests (p < 0.05). The proposed framework strikes a balance between interpretability and performance, with future work aimed at extending this approach through hybrid deep learning techniques and validation on industrial datasets to enhance scalability. Full article
(This article belongs to the Special Issue Emerging Technologies of Human-Computer Interaction)
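The RBM-plus-LR hybrid follows a well-known pattern that scikit-learn ships out of the box: `BernoulliRBM` features feeding a `LogisticRegression` classifier. The sketch below is an illustration of that general pattern on synthetic binary features, not the authors' model, hyperparameters, or data:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(60, 16)).astype(float)  # toy binary "module metrics"
y = X[:, 0].astype(int)                             # toy defect label

# The RBM learns a nonlinear feature representation; LR classifies on top of it.
model = Pipeline([
    ("rbm", BernoulliRBM(n_components=8, learning_rate=0.05,
                         n_iter=10, random_state=0)),
    ("lr", LogisticRegression(max_iter=200)),
])
model.fit(X, y)
preds = model.predict(X)
```

In practice the RBM's hidden-unit count and learning rate would be tuned per dataset, and evaluation would use held-out splits rather than training-set predictions.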

27 pages, 432 KB  
Article
Refactoring Loops in the Era of LLMs: A Comprehensive Study
by Alessandro Midolo and Emiliano Tramontana
Future Internet 2025, 17(9), 418; https://doi.org/10.3390/fi17090418 - 12 Sep 2025
Viewed by 882
Abstract
Java 8 brought functional programming to the Java language and library, enabling more expressive and concise code to replace loops by using streams. Despite such advantages, for-loops remain prevalent in current codebases as the transition to the functional paradigm requires a significant shift in the developer mindset. Traditional approaches for assisting refactoring loops into streams check a set of strict preconditions to ensure correct transformation, hence limiting their applicability. Conversely, generative artificial intelligence (AI), particularly ChatGPT, is a promising tool for automating software engineering tasks, including refactoring. While prior studies examined ChatGPT’s assistance in various development contexts, none have specifically investigated its ability to refactor for-loops into streams. This paper addresses such a gap by evaluating ChatGPT’s effectiveness in transforming loops into streams. We analyzed 2132 loops extracted from four open-source GitHub repositories and classified them according to traditional refactoring templates and preconditions. We then tasked ChatGPT with the refactoring of such loops and evaluated the correctness and quality of the generated code. Our findings revealed that ChatGPT could successfully refactor many more loops than traditional approaches, although it struggled with complex control flows and implicit dependencies. This study provides new insights into the strengths and limitations of ChatGPT in loop-to-stream refactoring and outlines potential improvements for future AI-driven refactoring tools. Full article
