Search Results (181)

Search Parameters:
Keywords = modular AI

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
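The profiling-then-scoring idea in the DQMAF abstract can be illustrated with a minimal sketch. The field names, validators, and thresholds below are hypothetical stand-ins, not taken from the paper: each record is scored on completeness and consistency, the dimension scores are averaged, and the aggregate is bucketed into the high/medium/low classes the abstract describes.

```python
def profile_record(record, required_fields, validators):
    """Score one record on two quality dimensions, each in [0, 1]."""
    # Completeness: fraction of required fields present and non-empty.
    present = sum(1 for f in required_fields if record.get(f) not in (None, ""))
    completeness = present / len(required_fields)
    # Consistency: fraction of per-field validity checks that pass.
    checks = [check(record.get(field)) for field, check in validators.items()]
    consistency = sum(checks) / len(checks)
    return (completeness + consistency) / 2  # equal-weight aggregate

def quality_label(score, high=0.8, low=0.5):
    """Bucket an aggregate score into the three classes used downstream."""
    if score >= high:
        return "high"
    if score >= low:
        return "medium"
    return "low"

# Hypothetical Airbnb-like record: all fields present, but the price is invalid.
required = ["name", "price", "city"]
validators = {"price": lambda v: isinstance(v, (int, float)) and v > 0}
record = {"name": "Cosy flat", "price": -10, "city": "Riyadh"}
score = profile_record(record, required, validators)
print(quality_label(score))  # completeness 1.0, consistency 0.0 -> 0.5 -> "medium"
```

In the paper the labels feed supervised classifiers (Decision Trees, Random Forest, XGBoost, AdaBoost, CatBoost); the rule-based bucketing here only stands in for that labeling step.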

50 pages, 3979 KB  
Review
Single-Molecule Detection Technologies: Advances in Devices, Transduction Mechanisms, and Functional Materials for Real-World Biomedical and Environmental Applications
by Sampa Manoranjan Barman, Arpita Parakh, A. Anny Leema, P. Balakrishnan, Ankita Avthankar, Dhiraj P. Tulaskar, Purshottam J. Assudani, Shon Nemane, Prakash Rewatkar, Madhusudan B. Kulkarni and Manish Bhaiyya
Biosensors 2025, 15(10), 696; https://doi.org/10.3390/bios15100696 - 14 Oct 2025
Viewed by 166
Abstract
Single-molecule detection (SMD) has reformed analytical science by enabling the direct observation of individual molecular events, thus overcoming the limitations of ensemble-averaged measurements. This review presents a comprehensive analysis of the principles, devices, and emerging materials that have shaped the current landscape of SMD. We explore a wide range of sensing mechanisms, including surface plasmon resonance, mechanochemical transduction, transistor-based sensing, optical microfiber platforms, fluorescence-based techniques, Raman scattering, and recognition tunneling, which offer distinct advantages in terms of label-free operation, ultrasensitivity, and real-time responsiveness. Each technique is critically examined through representative case studies, revealing how innovations in device architecture and signal amplification strategies have collectively pushed the detection limits into the femtomolar to attomolar range. Beyond the sensing principles, this review highlights the transformative role of advanced nanomaterials such as graphene, carbon nanotubes, quantum dots, MnO2 nanosheets, upconversion nanocrystals, and magnetic nanoparticles. These materials enable new transduction pathways and augment the signal strength, specificity, and integration into compact and wearable biosensing platforms. We also detail the multifaceted applications of SMD across biomedical diagnostics, environmental monitoring, food safety, neuroscience, materials science, and quantum technologies, underscoring its relevance to global health, safety, and sustainability. Despite significant progress, the field faces several critical challenges, including signal reproducibility, biocompatibility, fabrication scalability, and data interpretation complexity. To address these barriers, we propose future research directions involving multimodal transduction, AI-assisted signal analytics, surface passivation techniques, and modular system design for field-deployable diagnostics. 
By providing a cross-disciplinary synthesis of device physics, materials science, and real-world applications, this review offers a comprehensive roadmap for the next generation of SMD technologies, poised to impact both fundamental research and translational healthcare. Full article
22 pages, 1443 KB  
Article
AI and IoT-Driven Monitoring and Visualisation for Optimising MSP Operations in Multi-Tenant Networks: A Modular Approach Using Sensor Data Integration
by Adeel Rafiq, Muhammad Zeeshan Shakir, David Gray, Julie Inglis and Fraser Ferguson
Sensors 2025, 25(19), 6248; https://doi.org/10.3390/s25196248 - 9 Oct 2025
Viewed by 778
Abstract
Despite the widespread adoption of network monitoring tools, Managed Service Providers (MSPs), specifically small- and medium-sized enterprises (SMEs), continue to face persistent challenges in achieving predictive, multi-tenant-aware visibility across distributed client networks. Existing monitoring systems lack integrated predictive analytics and edge intelligence. To address this, we propose an AI- and IoT-driven monitoring and visualisation framework that integrates edge IoT nodes (Raspberry Pi Prometheus modules) with machine learning models to enable predictive anomaly detection, proactive alerting, and reduced downtime. This system leverages Prometheus, Grafana, and Mimir for data collection, visualisation, and long-term storage, while incorporating Simple Linear Regression (SLR), K-Means clustering, and Long Short-Term Memory (LSTM) models for anomaly prediction and fault classification. These AI modules are containerised and deployed at the edge or centrally, depending on tenant topology, with predicted risk metrics seamlessly integrated back into Prometheus. A one-month deployment across five MSP clients (500 nodes) demonstrated significant operational benefits, including a 95% reduction in downtime and a 90% reduction in incident resolution time relative to historical baselines. The system ensures secure tenant isolation via VPN tunnels and token-based authentication, while providing GDPR-compliant data handling. Unlike prior monitoring platforms, this work introduces a fully edge-embedded AI inference pipeline, validated through live deployment and operational feedback. Full article
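Of the three models the abstract names, Simple Linear Regression is the easiest to sketch as an anomaly detector: fit a trend to a metric series and flag points whose residual exceeds a threshold. The metric, sample values, and threshold below are illustrative, not from the deployment.

```python
def fit_slr(xs, ys):
    """Least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def flag_anomalies(xs, ys, threshold):
    """Flag points whose residual from the fitted trend exceeds the threshold."""
    a, b = fit_slr(xs, ys)
    return [abs(y - (a + b * x)) > threshold for x, y in zip(xs, ys)]

# Hypothetical node-load samples: a steady upward trend with one spike at t=3.
t = [0, 1, 2, 3, 4]
load = [10.0, 12.0, 14.0, 40.0, 18.0]
print(flag_anomalies(t, load, threshold=10.0))  # → [False, False, False, True, False]
```

In the paper's pipeline the predicted risk metrics are written back into Prometheus; a flag like the one above would become a scrapeable metric rather than a printed list.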

38 pages, 2502 KB  
Review
A Modular Perspective on the Evolution of Deep Learning: Paradigm Shifts and Contributions to AI
by Yicheng Wei, Yifu Wang and Junzo Watada
Appl. Sci. 2025, 15(19), 10539; https://doi.org/10.3390/app151910539 - 29 Sep 2025
Viewed by 753
Abstract
The rapid development of deep learning (DL) has demonstrated its modular contributions to artificial intelligence (AI) techniques, such as large language models (LLMs). DL variants have proliferated across domains such as feature extraction, normalization, lightweight architecture design, and module integration, yielding substantial advancements in these subfields. However, the absence of a unified review framework to contextualize DL’s modular evolutions within AI development complicates efforts to pinpoint future research directions. Existing review papers often focus on narrow technical aspects or lack systemic analysis of modular relationships, leaving gaps in our understanding of how these innovations collectively drive AI progress. This work bridges this gap by providing a roadmap for researchers to navigate DL’s modular innovations, with a focus on balancing scalability and sustainability amid evolving AI paradigms. To address this, we systematically analyze extensive literature from databases including Web of Science, Scopus, arXiv, ACM Digital Library, IEEE Xplore, SpringerLink, Elsevier, etc., with the aim of (1) summarizing and updating recent developments in DL algorithms, with performance benchmarks on standard datasets; (2) identifying innovation trends in DL from a modular viewpoint; and (3) evaluating how these modular innovations contribute to broader advances in artificial intelligence, with particular attention to scalability and sustainability amid shifting AI paradigms. Full article
(This article belongs to the Special Issue Advances in Deep Learning and Intelligent Computing)

30 pages, 2339 KB  
Systematic Review
Artificial Intelligence-Enabled Heating, Ventilation, and Air Conditioning Systems Toward Zero-Emission Buildings: A Systematic Review of Applications, Challenges, and Future Directions
by Abdo Abdullah Ahmed Gassar and Raed Jafar
Appl. Sci. 2025, 15(19), 10497; https://doi.org/10.3390/app151910497 - 28 Sep 2025
Viewed by 625
Abstract
Heating, ventilation, and air conditioning (HVAC) systems are among the largest energy consumers in buildings, making their intelligent operation fundamental to achieving zero-emission performance and advancing climate neutrality. With recent progress in artificial intelligence (AI), new opportunities have emerged to optimize HVAC operations by enabling predictive, adaptive, and autonomous control. Several studies have explored aspects of AI-driven net-zero emission performance for building HVAC systems. However, a systematic assessment that consolidates these findings and identifies future directions is still needed. This review addresses this gap by analyzing the current state of research on AI-enabled HVAC systems in the context of zero-emission building performance, with particular attention to residential, commercial, and educational settings. In addition, it provides recommendations for future research while underscoring the importance of AI methods in achieving zero-emission performance of building HVAC systems. Based on this review, five primary application domains of AI-enabled building HVAC systems were identified and analyzed: predictive maintenance, scheduling, adaptive optimization, renewable energy integration, and IoT-enabled control. Existing research gaps are identified, including privacy-preserving AI methods, modular and interoperable frameworks, climate-adaptive and occupant-aware strategies, and computationally efficient architectures. Future directions in the field of the AI-enabled HVAC system integrations, along with lifecycle assessment, are highlighted to enable resilient, zero-emission building performance. Full article
(This article belongs to the Special Issue Advancements in HVAC Technologies and Zero-Emission Buildings)

17 pages, 1548 KB  
Article
Hybrid Deep-Ensemble Network with VAE-Based Augmentation for Imbalanced Tabular Data Classification
by Sang-Jeong Lee and You-Suk Bae
Appl. Sci. 2025, 15(19), 10360; https://doi.org/10.3390/app151910360 - 24 Sep 2025
Viewed by 330
Abstract
Background: Severe class imbalance limits reliable tabular AI in manufacturing, finance, and healthcare. Methods: We built a modular pipeline comprising correlation-aware seriation; a hybrid convolutional neural network (CNN)–transformer–Bidirectional Long Short-Term Memory (BiLSTM) encoder; variational autoencoder (VAE)-based minority augmentation; and deep/tree ensemble heads (XGBoost and Support Vector Machine, SVM). We benchmarked the Synthetic Minority Oversampling Technique (SMOTE) and ADASYN under identical protocols. Focal loss and ensemble weights were tuned per dataset. The primary metric was the Area Under the Precision–Recall Curve (AUPRC), with receiver operating characteristic area under the curve (ROC AUC) as complementary. Synthetic-data fidelity was quantified by train-on-synthetic/test-on-real (TSTR) utility, two-sample discriminability (ROC AUC of a real-vs-synthetic classifier), and Maximum Mean Discrepancy (MMD2). Results: Across five datasets (SECOM, CREDIT, THYROID, APS, and UCI), augmentation was data-dependent: VAE led on APS (+3.66 pp AUPRC vs. SMOTE) and was competitive on CREDIT (+0.10 pp vs. None); the SMOTE dominated SECOM; no augmentation performed best for THYROID and UCI. Positional embedding (PE) with seriation helped when strong local correlations were present. Ensembles typically favored XGBoost while benefiting from the hybrid encoder. Efficiency profiling and a slim variant supported latency-sensitive use. Conclusions: A data-aware recipe emerged: prefer VAE when fidelity is high, the SMOTE on smoother minority manifolds, and no augmentation when baselines suffice; apply PE/seriation selectively and tune per dataset for robust, reproducible deployment. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
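The study's primary metric, AUPRC, is often computed as average precision: the mean of the precision values at each rank where a true positive is retrieved. A minimal stdlib sketch (the labels and scores are a toy imbalanced example, not the paper's data):

```python
def average_precision(y_true, scores):
    """AUPRC as average precision over positives, in descending score order."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            precisions.append(tp / rank)  # precision at this recall step
    return sum(precisions) / sum(y_true)

# Toy imbalanced case: 2 positives among 5 records.
y = [1, 0, 0, 1, 0]
s = [0.9, 0.8, 0.7, 0.6, 0.5]
print(average_precision(y, s))  # hits at ranks 1 and 4 -> (1.0 + 0.5) / 2 = 0.75
```

Unlike ROC AUC, this quantity degrades sharply when the minority class is rare, which is why the paper treats it as the primary metric for imbalanced tabular data.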

36 pages, 2812 KB  
Article
Strategic Readiness for AI and Smart Technology Adoption in Emerging Hospitality Markets: A Tri-Lens Assessment of Barriers, Benefits, and Segments in Albania
by Majlinda Godolja, Tea Tavanxhiu and Kozeta Sevrani
Tour. Hosp. 2025, 6(4), 187; https://doi.org/10.3390/tourhosp6040187 - 19 Sep 2025
Viewed by 966
Abstract
The adoption of artificial intelligence (AI) and smart technologies is reshaping global hospitality. However, in emerging markets, uptake remains limited by financial, organizational, and infrastructural barriers. This study examines the digital readiness of 1821 licensed accommodation providers in Albania, a rapidly expanding tourism economy, using an integrated framework that combines the Technology Acceptance Model (TAM), technology–organization–environment (TOE) framework, and Diffusion of Innovations (DOI). Data were collected via a structured survey and analyzed using descriptive statistics, exploratory factor analysis, cluster analysis, and structural equation modeling. Exploratory factor analysis identified a single robust readiness dimension, covering smart automation, environmental controls, and AI-driven systems. K-means segmentation revealed three adopter profiles: Tech Leaders (17.7%), Selective Adopters (43.5%), and Skeptics (38.8%), with statistically distinct but modest mean differences in readiness, reflecting stronger adoption in central urban and coastal hubs compared to weaker uptake in cultural heritage and non-urban regions. Structural modeling showed that environmental competitive pressure strongly enhanced perceived usefulness, which, in turn, drove behavioral intention, whereas perceived ease of use (operationalized as implementation complexity) had negligible effects. Innovation readiness was consistently associated with broader adoption, although intention was translated into actual use only among Tech Leaders. The findings highlight a fragmented digital ecosystem in which enthusiasm for AI exceeds its feasibility, underscoring the need for differentiated policy support, modular vendor solutions, and targeted capacity building to foster inclusive digital transformation. Full article

32 pages, 3609 KB  
Article
BPMN-Based Design of Multi-Agent Systems: Personalized Language Learning Workflow Automation with RAG-Enhanced Knowledge Access
by Hedi Tebourbi, Sana Nouzri, Yazan Mualla, Meryem El Fatimi, Amro Najjar, Abdeljalil Abbas-Turki and Mahjoub Dridi
Information 2025, 16(9), 809; https://doi.org/10.3390/info16090809 - 17 Sep 2025
Viewed by 797
Abstract
The intersection of Artificial Intelligence (AI) and education is revolutionizing learning and teaching in this digital era, with Generative AI and large language models (LLMs) providing even greater possibilities for the future. The digital transformation of language education demands innovative approaches that combine pedagogical rigor with explainable AI (XAI) principles, particularly for low-resource languages. This paper presents a novel methodology that integrates Business Process Model and Notation (BPMN) with Multi-Agent Systems (MAS) to create transparent, workflow-driven language tutors. Our approach uniquely embeds XAI through three mechanisms: (1) BPMN’s visual formalism that makes agent decision-making auditable, (2) Retrieval-Augmented Generation (RAG) with verifiable knowledge provenance from textbooks of the National Institute of Languages of Luxembourg, and (3) human-in-the-loop validation of both content and pedagogical sequencing. To ensure realism in learner interaction, we integrate speech-to-text and text-to-speech technologies, creating an immersive, human-like learning environment. The system simulates intelligent tutoring through agents’ collaboration and dynamic adaptation to learner progress. We demonstrate this framework through a Luxembourgish language learning platform where specialized agents (Conversational, Reading, Listening, QA, and Grammar) operate within BPMN-modeled workflows. The system achieves high response faithfulness (0.82) and relevance (0.85) according to RAGA metrics, while speech integration using Whisper STT and Coqui TTS enables immersive practice. Evaluation with learners showed 85.8% satisfaction with contextual responses and 71.4% engagement rates, confirming the effectiveness of our process-driven approach. This work advances AI-powered language education by showing how formal process modeling can create pedagogically coherent and explainable tutoring systems. 
The architecture’s modularity supports extension to other low-resource languages while maintaining the transparency critical for educational trust. Future work will expand curriculum coverage and develop teacher-facing dashboards to further improve explainability. Full article
(This article belongs to the Section Information Applications)

45 pages, 12590 KB  
Article
An End-to-End Data and Machine Learning Pipeline for Energy Forecasting: A Systematic Approach Integrating MLOps and Domain Expertise
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2025, 16(9), 805; https://doi.org/10.3390/info16090805 - 16 Sep 2025
Viewed by 731
Abstract
Energy forecasting is critical for modern power systems, enabling proactive grid control and efficient resource optimization. However, energy forecasting projects require systematic approaches that span project inception to model deployment while ensuring technical excellence, domain alignment, regulatory compliance, and reproducibility. Existing methodologies such as CRISP-DM provide a foundation but lack explicit mechanisms for iterative feedback, decision checkpoints, and continuous energy-domain-expert involvement. This paper proposes a modular end-to-end framework for energy forecasting that integrates formal decision gates in each phase, embeds domain-expert validation, and produces fully traceable artifacts. The framework supports controlled iteration, rollback, and automation within an MLOps-compatible structure. A comparative analysis demonstrates its advantages in functional coverage, workflow logic, and governance over existing approaches. A case study on short-term electricity forecasting for a 2560 m2 office building validates the framework, achieving 24-h-ahead predictions with an RNN, reaching an RMSE of 1.04 kWh and an MAE of 0.78 kWh. The results confirm that the framework enhances forecast accuracy, reliability, and regulatory readiness in real-world energy applications. Full article
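The RMSE and MAE figures reported for the case study (1.04 kWh and 0.78 kWh) follow the standard definitions; a minimal sketch, with hypothetical hourly consumption values rather than the building's data:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error: penalises large forecast misses quadratically."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: the average miss in the original units (kWh here)."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly consumption (kWh) vs. a 24-h-ahead forecast (4 hours shown).
actual = [5.0, 6.0, 7.0, 8.0]
predicted = [5.5, 5.5, 7.5, 7.0]
print(round(rmse(actual, predicted), 3), round(mae(actual, predicted), 3))
```

RMSE exceeding MAE, as in both this toy example and the paper's results, indicates that a few hours contribute disproportionately large errors.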

20 pages, 3921 KB  
Article
Design of an Experimental Teaching Platform for Flow-Around Structures and AI-Driven Modeling in Marine Engineering
by Hongyang Zhao, Bowen Zhao, Xu Liang and Qianbin Lin
J. Mar. Sci. Eng. 2025, 13(9), 1761; https://doi.org/10.3390/jmse13091761 - 11 Sep 2025
Viewed by 562
Abstract
Flow past bluff bodies (e.g., circular cylinders) forms a canonical context for teaching external flow separation, vortex shedding, and the coupling between surface pressure and hydrodynamic forces in offshore engineering. Conventional laboratory implementations, however, often fragment local and global measurements, delay data feedback, and omit intelligent modeling components, thereby limiting the development of higher-order cognitive skills and data literacy. We present a low-cost, modular, data-enabled instructional hydrodynamics platform that integrates a transparent recirculating water channel, multi-point synchronous circumferential pressure measurements, global force acquisition, and an artificial neural network (ANN) surrogate. Using feature vectors composed of Reynolds number, angle of attack, and submergence depth, we train a lightweight AI model for rapid prediction of drag and lift coefficients, closing a loop of measurement, prediction, deviation diagnosis, and feature refinement. In the subcritical Reynolds regime, the measured circumferential pressure distribution for a circular cylinder and the drag and lift coefficients for a rectangular cylinder agree with empirical correlations and published benchmarks. The ANN surrogate attains a mean absolute percentage error of approximately 4% for both drag and lift coefficients, indicating stable, physically interpretable performance under limited feature inputs. This platform will facilitate students’ cross-domain transfer spanning flow physics mechanisms, signal processing, feature engineering, and model evaluation, thereby enhancing inquiry-driven and critical analytical competencies. Key contributions include the following: (i) a synchronized local pressure and global force dataset architecture; (ii) embedding a physics-interpretable lightweight ANN surrogate in a foundational hydrodynamics experiment; and (iii) an error-tracking, iteration-oriented instructional workflow. 
The platform provides a replicable pathway for transitioning offshore hydrodynamics laboratories toward an integrated intelligence-plus-data literacy paradigm and establishes a foundation for future extensions to higher Reynolds numbers, multiple body geometries, and physics-constrained neural networks. Full article

29 pages, 651 KB  
Systematic Review
Retrieval-Augmented Generation (RAG) in Healthcare: A Comprehensive Review
by Fnu Neha, Deepshikha Bhati and Deepak Kumar Shukla
AI 2025, 6(9), 226; https://doi.org/10.3390/ai6090226 - 11 Sep 2025
Viewed by 4377
Abstract
Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by integrating external knowledge retrieval to improve factual consistency and reduce hallucinations. Despite growing interest, its use in healthcare remains fragmented. This paper presents a Systematic Literature Review (SLR) following PRISMA guidelines, synthesizing 30 peer-reviewed studies on RAG in clinical domains, focusing on three of its most prevalent and promising applications in diagnostic support, electronic health record (EHR) summarization, and medical question answering. We synthesize the existing architectural variants (naïve, advanced, and modular) and examine their deployment across these applications. Persistent challenges are identified, including retrieval noise (irrelevant or low-quality retrieved information), domain shift (performance degradation when models are applied to data distributions different from their training set), generation latency, and limited explainability. Evaluation strategies are compared using both standard metrics and clinical-specific metrics, FactScore, RadGraph-F1, and MED-F1, which are particularly critical for ensuring factual accuracy, medical validity, and clinical relevance. This synthesis offers a domain-focused perspective to guide researchers, healthcare providers, and policymakers in developing reliable, interpretable, and clinically aligned AI systems, laying the groundwork for future innovation in RAG-based healthcare solutions. Full article
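The "naïve" RAG variant the review distinguishes can be sketched as term-overlap retrieval feeding a grounded prompt; real systems use embedding similarity, but the structure is the same. The corpus, query, and scoring below are illustrative, not drawn from the reviewed studies.

```python
def retrieve(query, corpus, k=2):
    """Naive retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query, corpus):
    """Ground the generator by prepending retrieved context to the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical clinical snippets standing in for an EHR / literature corpus.
docs = [
    "Aspirin is used to reduce fever and relieve mild pain.",
    "The EHR records the patient's medication history.",
    "MRI imaging visualises soft tissue.",
]
prompt = build_prompt("What is aspirin used to relieve?", docs)
print(prompt.splitlines()[1])  # the top-ranked context line
```

The "retrieval noise" challenge the review identifies is visible even here: the second-ranked document is retrieved despite being irrelevant to the question, and it is the generator that must ignore it.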

26 pages, 4054 KB  
Article
Multi-Time-Scale Demand Response Optimization in Active Distribution Networks Using Double Deep Q-Networks
by Wei Niu, Jifeng Li, Zongle Ma, Wenliang Yin and Liang Feng
Energies 2025, 18(18), 4795; https://doi.org/10.3390/en18184795 - 9 Sep 2025
Viewed by 541
Abstract
This paper presents a deep reinforcement learning-based demand response (DR) optimization framework for active distribution networks under uncertainty and user heterogeneity. The proposed model utilizes a Double Deep Q-Network (Double DQN) to learn adaptive, multi-period DR strategies across residential, commercial, and electric vehicle (EV) participants in a 24 h rolling horizon. By incorporating a structured state representation—including forecasted load, photovoltaic (PV) output, dynamic pricing, historical DR actions, and voltage states—the agent autonomously learns control policies that minimize total operational costs while maintaining grid feasibility and voltage stability. The physical system is modeled via detailed constraints, including power flow balance, voltage magnitude bounds, PV curtailment caps, deferrable load recovery windows, and user-specific availability envelopes. A case study based on a modified IEEE 33-bus distribution network with embedded PV and DR nodes demonstrates the framework’s effectiveness. Simulation results show that the proposed method achieves significant cost savings (up to 35% over baseline), enhances PV absorption, reduces load variance by 42%, and maintains voltage profiles within safe operational thresholds. Training curves confirm smooth Q-value convergence and stable policy performance, while spatiotemporal visualizations reveal interpretable DR behavior aligned with both economic and physical system constraints. This work contributes a scalable, model-free approach for intelligent DR coordination in smart grids, integrating learning-based control with physical grid realism. The modular design allows for future extension to multi-agent systems, storage coordination, and market-integrated DR scheduling. The results position Double DQN as a promising architecture for operational decision-making in AI-enabled distribution networks. Full article
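The defining detail of Double DQN, as opposed to vanilla DQN, is that the online network selects the next action while the target network values it, which damps overestimation. A minimal sketch of the two target computations (the Q-values and the three demand-response actions are made up for illustration):

```python
def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN: the online net picks the action, the target net values it."""
    best_action = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best_action]

def dqn_target(reward, gamma, q_target_next):
    """Vanilla DQN for comparison: the target net both picks and values."""
    return reward + gamma * max(q_target_next)

# Hypothetical Q-values for three DR actions (e.g. shed, shift, idle).
q_online = [2.0, 5.0, 1.0]   # the online net overrates action 1
q_target = [2.5, 3.0, 4.0]
print(double_dqn_target(1.0, 0.9, q_online, q_target))  # 1 + 0.9 * 3.0 = 3.7
print(dqn_target(1.0, 0.9, q_target))                   # 1 + 0.9 * 4.0 = 4.6
```

The lower Double DQN target here is exactly the overestimation correction credited with the smooth Q-value convergence the abstract reports.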

14 pages, 353 KB  
Article
Building Geometry Generation Example Applying GPT Models
by Zsolt Ercsey and Tamás Storcz
Architecture 2025, 5(3), 79; https://doi.org/10.3390/architecture5030079 - 9 Sep 2025
Abstract
The emergence of large language models (LLMs) has opened new avenues for integrating artificial intelligence into architectural design workflows. This paper explores the feasibility of applying generative AI to solve a classic combinatorial problem: generating valid building geometries of a modular family house structure. The problem involves identifying all valid placements of six spatial blocks under strict architectural constraints. The study contrasts the conventional algorithmic solution with generative approaches using ChatGPT-3.5, ChatGPT-4o, and a hybrid expert model. While early GPT models struggled with accuracy and solution completeness, the hybrid expert-guided approach demonstrated a successful synergy between LLM-driven code generation and domain-specific corrections. The findings suggest that, while LLMs alone are insufficient for precise combinatorial tasks, hybrid systems combining classical and AI techniques hold great promise for supporting architectural problem solving, including building geometry generation. Full article
(This article belongs to the Special Issue AI as a Tool for Architectural Design and Urban Planning)
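The combinatorial core of such a task, enumerating candidate arrangements and filtering by constraint predicates, can be sketched generically. This is a simplified ordering analogue: the paper's problem places six blocks geometrically, whereas the block names and the single constraint below are hypothetical stand-ins for its architectural rules:

```python
from itertools import permutations

def valid_layouts(blocks, constraints):
    """Exhaustively enumerate orderings of spatial blocks and keep only
    those satisfying every constraint predicate."""
    return [p for p in permutations(blocks) if all(c(p) for c in constraints)]

# Hypothetical example constraint: the 'living' block must come first.
example = valid_layouts(("living", "kitchen", "bedroom"),
                        [lambda p: p[0] == "living"])
```

Exhaustive enumeration of this kind is exactly what a classical algorithm guarantees and an unconstrained LLM does not, which is the gap the hybrid expert-guided approach addresses.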

29 pages, 6873 KB  
Review
Digital Twin Technology for Urban Flood Risk Management: A Systematic Review of Remote Sensing Applications and Early Warning Systems
by Mohammed Hlal, Jean-Claude Baraka Munyaka, Jérôme Chenal, Rida Azmi, El Bachir Diop, Mariem Bounabi, Seyid Abdellahi Ebnou Abdem, Mohamed Adou Sidi Almouctar and Meriem Adraoui
Remote Sens. 2025, 17(17), 3104; https://doi.org/10.3390/rs17173104 - 5 Sep 2025
Abstract
Digital Twin (DT) technology has emerged as a transformative tool in urban flood risk management (UFRM), enabling real-time data integration, predictive modeling, and decision support. This systematic review synthesizes existing literature to evaluate the scientific impact, technological advancements, and practical applications of DTs in UFRM. Using the PRISMA 2020 framework, we retrieved 1085 records (Scopus = 85; Web of Science = 1000), merged and deduplicated them using DOI and fuzzy-matched titles, screened titles/abstracts, and assessed full texts. This process yielded 85 unique peer-reviewed studies published between 2018 and 2025. Key findings highlight the role of remote sensing (e.g., satellite imagery, IoT sensors) in enhancing DT accuracy, the integration of machine learning for predictive analytics, and case studies demonstrating reduced flood response times by up to 40%. Challenges such as data interoperability and computational demands are discussed, alongside future directions for scalable, AI-driven DT frameworks. This review identifies key technical and governance challenges while recommending the development of modular, AI-driven DT frameworks, particularly tailored for resource-constrained regions. Full article
(This article belongs to the Special Issue Remote Sensing in Hazards Monitoring and Risk Assessment)
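The DOI-plus-fuzzy-title deduplication step described in the review can be illustrated with a minimal sketch; the record field names and the 0.9 similarity threshold are assumptions, not details taken from the paper:

```python
from difflib import SequenceMatcher

def deduplicate(records, title_threshold=0.9):
    """Drop records whose DOI was already seen, then drop records whose
    title is near-identical (fuzzy match) to an already-kept record."""
    kept, seen_dois = [], set()
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        if doi and doi in seen_dois:
            continue  # exact DOI duplicate
        title = rec["title"].lower()
        if any(SequenceMatcher(None, title, k["title"].lower()).ratio()
               >= title_threshold for k in kept):
            continue  # fuzzy title duplicate
        if doi:
            seen_dois.add(doi)
        kept.append(rec)
    return kept
```

In the reported screening, a pipeline of this shape reduced 1085 retrieved records to 85 unique studies before full-text assessment.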

18 pages, 1099 KB  
Article
Human–AI Teaming in Structural Analysis: A Model Context Protocol Approach for Explainable and Accurate Generative AI
by Carlos Avila, Daniel Ilbay and David Rivera
Buildings 2025, 15(17), 3190; https://doi.org/10.3390/buildings15173190 - 4 Sep 2025
Abstract
The integration of large language models (LLMs) into structural engineering workflows presents both a transformative opportunity and a critical challenge. While LLMs enable intuitive, natural language interactions with complex data, their limited arithmetic reasoning, contextual fragility, and lack of verifiability constrain their application in safety-critical domains. This study introduces a novel automation pipeline that couples generative AI with finite element modelling through the Model Context Protocol (MCP)—a modular, context-aware architecture that complements language interpretation with structural computation. By interfacing GPT-4 with OpenSeesPy via MCP (JSON schemas, API interfaces, communication standards), the system allows engineers to specify and evaluate 3D frame structures using conversational prompts, while ensuring computational fidelity and code compliance. Across four case studies, the GPT+MCP framework demonstrated predictive accuracy for key structural parameters, with deviations under 1.5% compared to reference solutions produced using conventional finite element analysis workflows. In contrast, unconstrained LLM use produced deviations exceeding 400%. The architecture supports reproducibility, traceability, and rapid analysis cycles (6–12 s), enabling real-time feedback for both design and education. This work establishes a reproducible framework for trustworthy AI-assisted analysis in engineering, offering a scalable foundation for future developments in optimisation and regulatory automation. Full article
(This article belongs to the Special Issue Automation and Intelligence in the Construction Industry)
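Central to an MCP-style pipeline is schema-checking the LLM's structured output before it reaches the solver, so only well-formed jobs are dispatched. A minimal sketch of such a gate; the request fields below are hypothetical, and the actual MCP JSON schemas and OpenSeesPy interface from the paper are not reproduced here:

```python
import json

# Hypothetical, simplified schema for a frame-analysis request.
REQUIRED_FIELDS = {"spans_m": list, "storeys": int, "load_kN_per_m": (int, float)}

def validate_request(payload: str) -> dict:
    """Parse an LLM-produced JSON request and check required fields and
    types; malformed requests fail loudly instead of reaching the solver."""
    req = json.loads(payload)
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in req:
            raise ValueError(f"missing field: {field}")
        if not isinstance(req[field], expected_type):
            raise TypeError(f"bad type for field: {field}")
    return req
```

Gating of this kind is one plausible way a pipeline keeps natural-language flexibility on the input side while preserving the determinism the finite element computation requires.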
