Search Results (12,027)

Search Parameters:
Keywords = AI-model

25 pages, 13461 KB  
Article
3D Environment Generation from Sparse Inputs for Automated Driving Function Development
by Till Temmen, Jasper Debougnoux, Li Li, Björn Krautwig, Tobias Brinkmann, Markus Eisenbarth and Jakob Andert
Vehicles 2026, 8(3), 47; https://doi.org/10.3390/vehicles8030047 (registering DOI) - 2 Mar 2026
Abstract
The development of AI-driven automated driving functions requires vast amounts of diverse, high-quality data to ensure road safety and reliability. However, both the manual collection of real-world data and creation of 3D environments are costly, time-consuming, and hard to scale. Most automatic environment generation methods still rely heavily on manual effort, and only a few are tailored for Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) training and validation. We propose an automated generative framework that learns ground-truth features to reconstruct 3D environments from a road definition and two simple parameters for country and area type. Environment generation is structured into three modules—map-based data generation, semantic city generation, and final detailing. The overall framework is validated by training a perception network on a mixed set of real and synthetic data, validating it solely on real data, and comparing performance to assess the practical value of the environments we generated. By constructing a Pareto front over combinations of training set sizes and real-to-synthetic data ratios, we show that our synthetic data can replace up to 85% of real data without significant quality degradation. Our results demonstrate how multi-layered environment generation frameworks enable flexible and scalable data generation for perception tasks while incorporating ground-truth 3D environment data. This reduces reliance on costly field data and supports automated rapid scenario exploration for finding safety-critical edge cases. Full article

27 pages, 1246 KB  
Review
Deep Learning-Enabled Multi-Omics Integration: A New Frontier in Precise Drug Target Discovery
by Yufei Ren, Haotian Bai, Jihan Wang, Yanning Yang and Yangyang Wang
Biology 2026, 15(5), 410; https://doi.org/10.3390/biology15050410 (registering DOI) - 2 Mar 2026
Abstract
Precise drug target discovery is pivotal to mitigating the escalating costs and high attrition rates that characterize pharmaceutical research and development. Given that traditional single-omics methods often fail to elucidate the systemic complexity of human diseases, deep learning (DL)-enabled multi-omics integration has emerged as a transformative frontier. This review systematically summarizes the advancements in DL-driven multi-omics integration for drug target discovery. First, the multi-omics data foundation and integration strategies are delineated, followed by an exploration of the DL architectures utilized for processing such data. Subsequently, the efficacy of DL-driven multi-omics integration is examined regarding the identification of novel disease drivers, prediction of synthetic lethality interactions, and prioritization of therapeutic targets. Finally, addressing persistent challenges related to data sparsity, model interpretability, and target druggability and validation hurdles, emerging opportunities driven by Generative AI, Large Multimodal Models (LMMs), Explainable AI (XAI), and multidimensional feasibility assessment frameworks are discussed in the context of advancing precision medicine. Full article
(This article belongs to the Special Issue AI Deep Learning Approach to Study Biological Questions (2nd Edition))

25 pages, 2662 KB  
Review
Optimizing Biomass Feedstock Logistics Using AI for Integrated Multimodal Transport in Bioenergy and Bioproduct Systems: A Review
by Johanna Gonzalez and Jingxin Wang
Logistics 2026, 10(3), 54; https://doi.org/10.3390/logistics10030054 (registering DOI) - 2 Mar 2026
Abstract
Background: The constant growth in demand for sustainable energy products and the development of the circular economy have created a critical need for an efficient supply chain for biomass. However, the inherent challenges of biomass make its harvesting, collection, storage, and transport difficult, impacting logistical efficiency and the viability of bioenergy and bioproduct production. This study analyzes how combining artificial intelligence (AI) with multimodal transport can optimize and improve efficiency, as well as reduce costs, in biomass logistics. Methods: The study uses a tiered research framework that encompasses the physical domain (biomass limitations), the structural domain (mathematical modeling for multimodal transport), the intelligence domain (AI-based decision making), and the strategic approach. Results: The outcomes indicate that while truck transport is ideal for short distances, integrating rail and water transport through AI-driven optimization reduces costs and greenhouse gas emissions for long-distance travel. AI technologies, such as digital twins and machine learning, improve demand forecasting, real-time routing, and cargo consolidation, leading to enhanced prediction accuracy for transport costs. Conclusions: The integration of AI and multimodal networks builds resilient and sustainable biomass supply chains. However, full implementation requires addressing data fragmentation and investing in digital infrastructure to enable seamless coordination between supply chain stakeholders. Full article

39 pages, 16079 KB  
Review
Laboratory Synthesis and Characterization of Natural Gas Hydrates for Sustainable Gas Production from Hydrate-Bearing Sediments
by Naser Golsanami, Emmanuel Gyimah, Guanlin Wu, Shanilka G. Fernando, Zhi Zhang, Xinqi Wang, Bin Gong, Huaimin Dong, Behzad Saberali, Mahmoud Behnia, Fan Feng and Madusanka Nirosh Jayasuriya
Sustainability 2026, 18(5), 2401; https://doi.org/10.3390/su18052401 (registering DOI) - 2 Mar 2026
Abstract
Natural gas hydrate (NGH) deposits represent a vast and clean energy source. However, sustainable gas production from these resources remains an unsolved technical problem due to potential geohazards and climate challenges. A critical issue in this regard is the difficulty of obtaining in situ samples, which are essential for detailed laboratory studies of NGH’s geomechanical and chemical behavior for safe and green gas production after hydrate dissociation. Currently, the retrieval of representative samples from NGH reservoirs is hindered by significant technological limitations and high costs. Consequently, laboratory-synthesized gas hydrate-bearing sediment (HBS) samples are crucial for controlled research and for validating numerical simulation models, and they are used in the majority of research studies. With this in mind, and considering the complexity of synthesizing HBS samples, this study comprehensively reviews methods of synthesizing gas hydrates in porous media, including excess-gas, excess-water, dissolved-gas, spray, bubble injection, and hybrid techniques. Each method produces distinct hydrate morphologies (e.g., pore-filling, cementing, and grain-coating) and saturation levels, with trade-offs in speed, uniformity, reproducibility, and ease of control. Furthermore, the review details the synergistic application of non-invasive characterization techniques, i.e., X-ray Computed Tomography (CT) and Nuclear Magnetic Resonance (NMR), in studying gas hydrates. CT provides high-resolution three-dimensional (3D) structural images of pore geometry and hydrate distribution, while NMR/MRI (Magnetic Resonance Imaging) quantifies fluid saturations and tracks hydrate formation/dissociation dynamics in real time. The synergistic use of CT and NMR offers a powerful multimodal approach, overcoming individual limitations such as CT’s poor hydrate–water contrast and NMR’s indirect hydrate inference, which could help in the sustainable synthesis of particular hydrate morphologies. Finally, current technological challenges and gaps are critically analyzed, and emerging trends and future directions in the study of HBS, including advanced imaging techniques, AI-assisted analysis, and standardization efforts, are discussed. The most appropriate method for natural gas hydrate synthesis was found to be largely task-specific, and emerging technologies have enabled the synthesis of HBS samples with more precise control of morphology, saturation, and related properties. This review provides the insights required for the sustainable synthesis and characterization of hydrate-bearing sediment samples and serves sustainable gas production from natural gas hydrate reservoirs. Full article

25 pages, 1239 KB  
Article
Human–AI Collaboration in Programming Education: Student Perspectives on LLM-Based Coding Assistants
by Hebah Alquran and Shadi Banitaan
Computers 2026, 15(3), 154; https://doi.org/10.3390/computers15030154 - 2 Mar 2026
Abstract
The integration of large language models (LLMs) such as GitHub Copilot, ChatGPT, and DeepSeek into programming education has introduced a new form of human–AI collaboration. These tools provide real-time code suggestions, debugging assistance, and design support, yet their effects on learning, trust, productivity, and coding practices remain underexplored. We surveyed 248 students to examine relationships among these constructs, usage patterns by programming experience and academic level, the most frequently used assistants and programming languages, group differences in perceived learning and coding practices, and the extent to which learning, trust, and coding practices predict productivity. Students reported high adoption of ChatGPT and Python, generally positive perceptions of learning and productivity, and significant positive correlations among all constructs. Kruskal–Wallis tests indicated no significant differences in perceived learning across Basic, Intermediate, and Expert programmers, nor in coding practices across academic years (Years 1–4). Multiple regression showed that learning, trust, and coding practices jointly explained a substantial proportion of productivity variance (R2 = 0.628). These findings emphasize both opportunities and risks of AI integration and offer guidance for educators aiming to integrate AI tools while maintaining pedagogical rigor. Full article

24 pages, 4694 KB  
Article
AI-Driven Thermal Management Optimization for Lithium-Ion Battery Packs: A Surrogate Model Approach to Cell Spacing Design
by Florin Mariasiu, Ioan Szabo and George E. Mariasiu
Batteries 2026, 12(3), 86; https://doi.org/10.3390/batteries12030086 (registering DOI) - 2 Mar 2026
Abstract
The article presents the possibilities of integrating artificial intelligence (through specific machine learning techniques) into the design and construction process of a battery in order to optimize its thermal management. The workflow starts from CFD thermal simulations (1C-rate) of a battery (16 Li-ion cells, type 18650, 4 × 4 arrangement), and based on the results, a complex thermal landscape is created through radial basis function (RBF) interpolation. Furthermore, a robust neural network (NN) model is proposed and validated against the obtained performance, and is then used for design space optimization (DSO) and multi-objective optimization (MOO). The results show that DSO proposes a cell spacing of 1.37 mm for a maximum cell temperature of 25.53 °C, while MOO proposes a cell spacing of 2.64 mm (for minimum fan energy consumption). The main conclusion is that using the NN model as a surrogate (the Digital Twin of a physical model) presents two great advantages when designing a battery: running a CFD simulation for each point on the 2D grid would take hours, whereas the NN model can generate the entire map and find the optimum in less than 2 s; moreover, thousands of additional points can be evaluated to delineate the narrow boundary of optimal designs, effectively filtering out thousands of energy-consuming “suboptimal” configurations. Full article

17 pages, 1676 KB  
Article
Construction Accident Prediction via Generative AI and AutoML Approaches
by Sungchul Seo, Dahyun Oh, Kyubyung Kang, HyunJung Park and JungHo Jeon
Appl. Sci. 2026, 16(5), 2412; https://doi.org/10.3390/app16052412 (registering DOI) - 2 Mar 2026
Abstract
The construction industry remains one of the most hazardous sectors, with a high incidence of injuries and fatalities, making accurate accident prediction essential for improving safety performance. Although machine learning and deep learning approaches have been widely applied to construction accident prediction, most prior studies have primarily focused on optimizing predictive accuracy within structured modeling pipelines under internal validation settings. In contrast, the application of Generative Artificial Intelligence (Generative AI) for accident prediction remains relatively underexplored, and systematic comparisons between Generative AI and Automated Machine Learning (AutoML), particularly under standardized and external validation conditions, are limited. To address this research gap, this study provides a structured comparative evaluation of AutoML and a fine-tuned Generative Pre-trained Transformer (GPT) model in terms of predictive performance, training efficiency, robustness under external validation, and operational usability. A dataset comprising construction accident cases obtained from Korea’s Construction Safety Management Integrated Information (CSI) was used. AutoML was employed to evaluate multiple machine learning classifiers, while a GPT-based model was fine-tuned to classify accident severity. Model performance was assessed using accuracy, precision, recall, and F1-score metrics. The results indicate that AutoML achieved higher predictive accuracy (97.48%) under controlled training conditions, whereas the Generative AI model achieved 75.6%. However, AutoML required substantial preprocessing and optimization efforts. In contrast, the GPT-based model demonstrated greater deployment flexibility with minimal data preparation. External validation using newly observed imbalanced data revealed that AutoML experienced performance degradation, whereas the Generative AI model maintained relatively stable performance. These findings suggest that Generative AI may serve as a complementary and deployment-friendly alternative in construction accident prediction contexts where adaptability, external validation robustness, and usability are prioritized. Full article

22 pages, 340 KB  
Article
Regulating AI-Driven Triage: Fundamental Rights and Compliance Challenges in the European Union
by Guillermo Lazcoz, Josu Maiora, Íñigo de Miguel and Begoña Sanz
AI 2026, 7(3), 86; https://doi.org/10.3390/ai7030086 (registering DOI) - 2 Mar 2026
Abstract
Emergency triage is a critical healthcare activity that could be improved through the use of artificial intelligence (AI) systems, which have been shown to achieve accuracy rates of approximately 70–90% for LLMs and AUC values ranging from 0.75 to 0.95 for common AI models. However, these systems pose challenges for the rights and interests of the individuals involved. The European Union’s normative framework, including not only data protection regulations but also the AI Act and medical device regulations, imposes conditions on the use of AI, and these are analyzed here. Our conclusions reveal that Article 22 of the General Data Protection Regulation (GDPR) makes it difficult to justify the establishment of fully automated decision-making models for triage, and that accountability obligations for implementers (Fundamental Rights Impact Assessments, FRIAs) and data controllers (Data Protection Impact Assessments, DPIAs) can contribute to the better design of AI-based decision-making in triage. Furthermore, the information rights set out in the GDPR have been complemented by the right to an explanation under Art. 86 of the AI Act for the use of high-risk AI systems. Unfortunately, the regulation of general-purpose AI models may create some gaps in this framework. The implementation of AI systems for automated decision-making in triage has the potential to improve medical care, but their use requires clarification of the applicable regulations and safeguards for patients’ rights. Full article
41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 (registering DOI) - 2 Mar 2026
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles, and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic, intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest, toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. By linking explanation reports, drift statistics, and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations. Full article
(This article belongs to the Section Security Engineering & Applications)

33 pages, 900 KB  
Article
Limits of Computational Selection and Their Implications for Human–AI Divergence in Convergent Creativity
by Sungwook Jung and Ken Nah
Information 2026, 17(3), 243; https://doi.org/10.3390/info17030243 (registering DOI) - 2 Mar 2026
Abstract
This study investigated whether humans and generative Large Language Models (LLMs) exhibit similar performance in divergent ideation but diverge in convergent selection. To address the critical oversight in current AI creativity research, which predominantly focuses on generative output, this study introduces the original conceptual framework of ‘Selection Alignment’ and a ‘novel dual-phase experimental protocol.’ This research transcends traditional generation-centric evaluations to establish a new paradigm for assessing the evaluative stage of creativity. A controlled experiment involved 240 design professionals (120 idea generators, 120 independent selectors) and two LLM agents (GPT-4o, Gemini 1.5 Pro). Participants and LLMs responded to identical divergent prompts, including 10 Alternative Uses Task-style prompts and 10 design problems. Both humans and LLMs generated candidate idea pools, then performed convergent selection by choosing the top five items per prompt. Idea generation was evaluated based on Fluency, Flexibility, and Semantic Breadth. Selection outcomes were compared using top-5 overlap rates derived from semantic clustering. The results indicated near-parity in generation metrics, showing no statistically significant differences between human and AI outputs. However, a substantial divergence was observed in convergent selection: the mean human–AI top-5 overlap was 19.2% for Model-A and 22.4% for Model-B, both significantly below permutation-based chance levels (null mean overlap ≈ 35%). AI selections were strongly predicted by embedding- and probability-based metrics, while human choices were better predicted by context- and experience-based criteria, highlighting a fundamental mechanistic divide. This suggests that convergent selection amplifies human–AI divergence, carrying significant implications for designing co-creative interfaces that integrate human experience into AI’s selection mechanisms. Full article

27 pages, 2569 KB  
Article
A Combined Kalman Filter–LSTM to Forecast Downside Risk of BWP/USD Returns: A Bottom-Up Hierarchical Approach
by Katleho Makatjane and Diteboho Xaba
Forecasting 2026, 8(2), 21; https://doi.org/10.3390/forecast8020021 - 2 Mar 2026
Abstract
This paper offers a hybrid forecasting approach that merges a local-level state space Kalman filter with a Long Short-Term Memory (LSTM) neural network to assess the downside risk of the Botswana Pula versus the US Dollar (BWP/USD). Motivated by the inability of conventional econometric models to capture complex latent structural shifts and nonlinear patterns, our architecture uses a bottom-up hierarchical methodology in which the smoothed level component of the exchange rate is isolated by the Kalman filter and subsequently fed into the LSTM architecture. Three key indicators for assessing downside risk—Maximum Drawdown (MDD), Conditional Drawdown-at-Risk (CDaR), and Downside Deviation—are used to assess model performance across various time frames (7, 30, 90, 180, and 365 days). As confirmed by Kupiec and Christoffersen backtesting procedures, the findings show a high degree of alignment between projected and actual values, with negligible downside deviation bias and robust calibration. Moreover, global economic and geopolitical shocks, such as the COVID-19 pandemic, the Russia–Ukraine conflict, and the 2015–2016 Shanghai Stock Exchange crash, are important factors influencing exchange rate volatility, according to explainable artificial intelligence techniques, particularly SHAP (SHapley Additive exPlanations) analysis. Downside risk is also greatly increased by regional currency links, especially the influence of the ZAR/BWP exchange rate. By contrast, domestic temporal variables, such as week, quarter, and month, have very little impact. These results emphasise how Botswana’s exchange rate is structurally vulnerable to external shocks and how crucial it is to include both global and regional considerations in risk analysis. The research concludes that the accuracy and transparency of exchange rate risk projections improve significantly when practical filtering is combined with deep learning and explainable AI. To improve macroeconomic resilience and guide successful financial risk management plans in emerging market environments, policymakers are advised to employ AI-driven forecasting techniques, enhance regional monetary coordination, and set up real-time learning systems. Full article
(This article belongs to the Section AI Forecasting)

22 pages, 344 KB  
Review
A Review of Crime at Machine Speed: Criminological Aspects of Artificial Intelligence’s Industrialisation of Deception
by Paolo Bailo, Ascanio Sirignano, Giulio Nittari, Giuseppe Visconti, Giuliano Pesel, Tommaso Spasari and Giovanna Ricci
Sci 2026, 8(3), 54; https://doi.org/10.3390/sci8030054 (registering DOI) - 2 Mar 2026
Abstract
Artificial intelligence (AI) is transforming criminal practice by industrialising deception, compressing attack cycles, and corroding evidentiary trust. This narrative review synthesises recent technical and criminological literature with institutional reporting to explain how generative models, predictive analytics, and automation enable convincing synthetic media, highly targeted social engineering, document forgery, identity synthesis, and adaptive evasion. Attention is given to the convergence with organised networks that use AI to coordinate logistics, mimic normal behaviour, and launder proceeds across platforms. Furthermore, a review of the grey literature was carried out to identify applied cases and to show how heterogeneous they are. Defensive efforts are advancing, yet detection remains brittle under laundering, increasing media realism, and adversarial adaptation. Regulatory and policy responses are surveyed across jurisdictions without claiming exhaustiveness; they appear fragmented and often lag operational innovation. The objective is pragmatic: to raise attacker costs and preserve information integrity while safeguarding fundamental rights and forensic reliability. Full article
38 pages, 3007 KB  
Systematic Review
Generative AI Integration in Education: Theoretical Review and Future Directions Informed by the ADO Framework
by Raghu Raman, Krishnashree Achuthan and Prema Nedungadi
Information 2026, 17(3), 241; https://doi.org/10.3390/info17030241 (registering DOI) - 2 Mar 2026
Abstract
The accelerated integration of Generative Artificial Intelligence (GenAI) tools such as ChatGPT is transforming learner engagement, instructional design, and institutional governance in education. This systematic literature review synthesizes theory-driven scholarship on GenAI adoption and pedagogical use through the Antecedents–Decisions–Outcomes (ADO) framework, examining how cognitive, motivational, technological, and institutional factors collectively shape implementation and learning outcomes. Drawing primarily on the Technology Acceptance Model (TAM), Self-Determination Theory (SDT), and Institutional Theory, the review integrates complementary insights from Constructivist Learning and Diffusion of Innovations perspectives to conceptualize how antecedents influence decision-making and outcomes across educational settings. The findings indicate that learner motivation, perceived usefulness, digital literacy, and institutional readiness constitute key antecedents affecting GenAI adoption. Decision processes—spanning instructional design, ethical regulation, and pedagogical adaptation—mediate how these antecedents translate into practice. Outcomes reveal a dual trajectory: GenAI enhances personalization, feedback, and self-regulated learning, yet introduces challenges related to ethical ambiguity and overreliance. The review offers a conceptually integrated synthesis that bridges motivational, technological, and organizational perspectives, advancing a theoretical roadmap for ethical and sustainable GenAI adoption. For educators and policymakers, the findings emphasize transparent governance, faculty capacity-building, and equitable access to ensure that innovation remains aligned with pedagogical integrity and human-centered values. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
17 pages, 1309 KB  
Article
Path Loss Considering Atmospheric Impact in 5G Networks: A Comparison of Machine Learning Models
by Vasileios P. Rekkas, Leandro dos Santos Coelho, Viviana Cocco Mariani, Adamantini Peratikou and Sotirios K. Goudos
Technologies 2026, 14(3), 151; https://doi.org/10.3390/technologies14030151 - 2 Mar 2026
Abstract
Accurate estimation of wireless propagation characteristics is essential for guiding the design and deployment of fifth-generation (5G) communication systems. As network demand increases and 5G infrastructure is introduced in progressive phases, reliable path loss (PL) prediction models are required to refine deployment strategies and improve network efficiency. Conventional propagation models frequently display limited flexibility when applied to diverse environmental conditions and often entail considerable computational expense, reducing their practicality for large-scale 5G planning. Recent developments in data-centric artificial intelligence (AI) have enabled more adaptive and analytically powerful approaches to propagation modeling, resulting in notable gains in PL prediction accuracy. This study employs a comprehensive dataset produced using the NYUSIM channel simulator, integrating a wide spectrum of atmospheric parameters and seasonal variations within South Asian urban microcell environments, complemented by broad empirical observations. The core objective is to construct, optimize, and evaluate four machine learning (ML) models capable of accurately predicting PL at high-frequency bands critical to 5G performance. A fully automated hyperparameter tuning pipeline, based on the Optuna framework, is applied to twelve regression algorithms, including advanced ensemble methods, regularized linear techniques, and classical baseline models. Performance assessment emphasizes predictive reliability, stability, and cross-model generalization. Furthermore, statistical analysis utilizing bootstrap confidence intervals and paired t-tests indicates that all ML methods perform equivalently (p > 0.4), while SHapley Additive exPlanations (SHAP) analysis across all models reveals a consistent feature importance distribution, supporting the statistical results. To showcase the superiority of the ML approaches, a comparison with conventional free-space PL modeling methods is presented, with the AI methodology demonstrating robust performance across seasonal variations and a 95.3% improvement. Full article
(This article belongs to the Section Information and Communication Technologies)
19 pages, 695 KB  
Article
Generative AI in Participatory Urban Planning: Synthetic Inhabitants and Experts
by Jussi S. Jauhiainen, Sanni Hakanpää, Heikki-Pekka Honkasaari, Niilas Kivilompolo, Matias Kurri, Luukas Lehtiranta and Mirva Nurminen
Land 2026, 15(3), 407; https://doi.org/10.3390/land15030407 (registering DOI) - 2 Mar 2026
Abstract
Generative AI (GenAI) is increasingly applied in urban planning for text production, visualization, analytics, stakeholder communication, and participatory engagement. Large language models (LLMs) enable the creation of synthetic participants to support the early-stage design, analysis, and testing of participatory tools. This article demonstrates an innovative use of GenAI through synthetic inhabitants and experts in an immersive digital urban planning environment. DigitalTurku serves as a proof-of-concept for an immersive planning tool within an urban digital twin. The case relies on synthetic personas—residents and expert stakeholders—to evaluate how a GenAI-assisted urban platform may shape participation experiences and trust in local urban planning. The findings indicate that synthetic experts expressed a reduced bureaucratic distance, enhanced transparency, and more meaningful participation. However, assessments of tool and digital-environment usability varied according to the digital skills and demographic characteristics embedded in the personas. The use of synthetic personas helps identify opportunities and challenges in immersive urban planning environments and supports the design of digital tools in smart cities to strengthen human residents’ spatial understanding and experiential engagement in planning processes. LLMs make the creation of synthetic data and participants convenient; despite their limitations, such tools can play a valuable role in piloting participatory planning processes to support and complement human-based participation. Full article