Search Results (48)

Search Parameters:
Keywords = cloud-in-the-loop simulation

25 pages, 3227 KB  
Article
Research and Development of Intelligent Control Systems for High-Frequency Ozone Generators
by Askar Abdykadyrov, Dina Ermanova, Maxat Mamadiyarov, Seidulla Abdullayev, Nurzhigit Smailov and Nurlan Kystaubayev
J. Sens. Actuator Netw. 2026, 15(2), 26; https://doi.org/10.3390/jsan15020026 - 3 Mar 2026
Viewed by 606
Abstract
This paper presents the development and investigation of an intelligent control system for a high-frequency ozone generator integrated into an IoT and telecommunication environment. A cyber-physical nonlinear mathematical model combining the electrical, thermal, gas-dynamic, and chemical subsystems of the ozone generation process is proposed. The model was implemented in discrete-time form and experimentally validated using the corona-discharge-based high-frequency ozonator ETRO-02. The deviation between simulation and experimental results did not exceed 5.3% for settling time, 6.7% for overshoot, 1.6% for steady-state ozone concentration, and 0.9% for gas temperature, confirming the adequacy of the proposed model. Based on this model, a hierarchical two-level intelligent control architecture is synthesized, consisting of a fast local control loop with a cycle time of 1–5 ms and a supervisory monitoring layer. The proposed adaptive state-feedback control law with online gain adjustment ensures stable real-time operation under nonlinear dynamics, ±20% parameter variations, network delays of 1–10 ms, and packet loss probabilities of up to 5%. As a result, the settling time is reduced from 420 ms to 160 ms, the overshoot from 12.5% to 3.1%, and the steady-state error from 6.5% to 1.6%, while the specific energy consumption decreases from 11.8 to 6.2 Wh/m³. Unlike cloud-centric IoT monitoring architectures that operate at second-level update cycles, the proposed system closes the control loop locally at the millisecond scale, enabling stabilization of fast nonlinear electro-plasma dynamics. These results demonstrate that integrating a cyber-physical model with millisecond-level edge-intelligent adaptive control significantly improves the dynamic performance, robustness, and energy efficiency of high-frequency ozone generators compared to classical control and monitoring-oriented IoT systems, confirming the feasibility of millisecond-level cyber-physical regulation for industrial ozone generation. Full article
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)
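The control law is described only qualitatively; below is a minimal sketch, assuming a generic linear plant, of the kind of discrete-time adaptive state-feedback loop with online gain adjustment the abstract refers to. The plant matrices, adaptation rule, and all constants are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Toy discrete-time plant x[k+1] = A x[k] + B u[k], a stand-in for the
# ozone generator's linearized dynamics (matrices are illustrative only).
A = np.array([[1.0, 0.01], [-0.5, 0.95]])
B = np.array([[0.0], [0.1]])

def simulate(steps=500):
    x = np.array([[1.0], [0.0]])   # initial deviation from the setpoint
    K = np.array([[2.0, 0.5]])     # initial state-feedback gains
    gamma = 0.05                   # adaptation rate (assumed)
    for _ in range(steps):
        u = -K @ x                 # control law u[k] = -K[k] x[k]
        x = A @ x + B @ u
        # Online gain adjustment: a crude MIT-rule-style update that raises
        # the gains while the regulation error remains large, then saturates.
        err = float(np.linalg.norm(x))
        K = np.clip(K + gamma * err * np.abs(x.T), 0.0, 10.0)
    return x, K

x_final, K_final = simulate()
print("final state:", x_final.ravel(), "adapted gains:", K_final.ravel())
```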

28 pages, 3530 KB  
Article
A Reinforcement Learning-Based Crushing Method for Robots Operating Within Smart Fully Mechanized Mining Faces
by Yuan Wang, Jun Liu, Zhiyuan Wang and Zhengxiong Lu
Machines 2026, 14(1), 115; https://doi.org/10.3390/machines14010115 - 19 Jan 2026
Viewed by 337
Abstract
The current method of manually handling or using open-loop automation to deal with abnormal coal lumps on the scraper conveyor is inefficient due to constraints such as safety concerns and equipment wear. To address these inefficiencies, a reinforcement-learning-based method is proposed. Because experiments on abnormal coal handling by scraper conveyors are expensive, this paper designs a variational autoencoder model with a U-MLP network at its core to simulate the processing environment. In addition, given the sparse characteristics of coal block point cloud data, a deep reinforcement learning model, LKDG, is designed to control the crushing equipment when dealing with abnormal coal blocks. Using the point cloud data, images, and other information collected in the fully mechanized mining laboratory before and after abnormal coal blocks were processed, we built a simulation environment for abnormal coal blocks and trained the LKDG model in it. To validate the proposed model, we compared LKDG with baseline models in simulation experiments. The results demonstrate that this method can effectively enhance the efficiency of abnormal coal lump processing without human intervention: LKDG achieved a 10.92% higher average reward than existing approaches. In terms of engineering applicability, the trained LKDG delivered excellent performance in laboratory tests conducted in a fully mechanized mining environment, increasing the effective crushing count by 67.11% over conventional automated processing methods. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
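Training inside a learned simulator is the core pattern here; the sketch below illustrates it with a hand-coded stand-in environment and a tabular Q-learning agent. Both are assumptions for clarity: the paper's simulator is a U-MLP-based variational autoencoder and its controller (LKDG) is a deep RL model.

```python
import random

# Stand-in for the learned simulator: the state is a coarse "lump size"
# bucket (0 = cleared) and actions are crushing intensities 1..3; the real
# environment model is a U-MLP-based VAE trained on point clouds and images.
def step(state, action):
    reduction = action + (1 if random.random() < 0.3 else 0)
    next_state = max(0, state - reduction)
    reward = 10.0 if next_state == 0 else -1.0 - 0.5 * action  # energy cost
    return next_state, reward, next_state == 0

N_STATES, N_ACTIONS = 6, 3
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    s = random.randint(1, N_STATES - 1)   # start with an abnormal lump
    done = False
    while not done:
        a = random.randrange(N_ACTIONS) if random.random() < eps \
            else max(range(N_ACTIONS), key=lambda a_: Q[s][a_])
        s2, r, done = step(s, a + 1)       # map action index to intensity
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("greedy intensity per lump size:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) + 1 for s in range(1, N_STATES)])
```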

33 pages, 7152 KB  
Article
DRADG: A Dynamic Risk-Adaptive Data Governance Framework for Modern Digital Ecosystems
by Jihane Gharib and Youssef Gahi
Information 2026, 17(1), 102; https://doi.org/10.3390/info17010102 - 19 Jan 2026
Viewed by 817
Abstract
In today’s volatile digital environments, conventional data governance practices fail to adequately address the dynamic, context-sensitive, and risk-laden nature of data use. This paper introduces DRADG (Dynamic Risk-Adaptive Data Governance), a new paradigm that unites risk-aware decision-making with adaptive data governance mechanisms to enhance resilience, compliance, and trust in complex data environments. Drawing on the convergence of existing data governance models, risk management best practices (DAMA-DMBOK, NIST, and ISO 31000), and real-world enterprise experience, this framework provides a modular, expandable approach to dynamically aligning governance strategy with evolving contextual factors and threats in data management. The contribution is a multi-layered paradigm that combines static policies with dynamic risk indicators through data sensitivity categorization, contextual risk scoring, and feedback loops that adapt continuously. The technical contribution is a governance-risk matrix that maps data lifecycle stages (acquisition, storage, use, sharing, and archival) to corresponding risk mitigation mechanisms. This matrix is embedded in a semi-automated rules-based engine capable of modifying governance controls based on predetermined thresholds and evolving data contexts. Validation was obtained through simulation-based exercises in cross-border data sharing, regulatory adherence, and cloud-based data management. Findings indicate that DRADG enhances governance responsiveness, reduces exposure to compliance risks, and provides a basis for sustainable data accountability. The research concludes with guidelines for implementation and avenues for future research in AI-driven governance automation and policy learning. DRADG sets a precedent for embedding intelligence and responsiveness at the heart of the data governance operations of modern digital enterprises. Full article
(This article belongs to the Special Issue Information Management and Decision-Making)
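The governance-risk matrix and threshold-driven rules engine are described conceptually; one plausible shape for such an engine is sketched below, mapping a data asset's lifecycle stage, sensitivity, and contextual risk score to controls. The stage names, thresholds, and control labels are invented for illustration, not DRADG's actual matrix.

```python
from dataclasses import dataclass

# Illustrative governance-risk matrix: lifecycle stage -> baseline controls.
# DRADG's matrix covers acquisition, storage, use, sharing, and archival.
BASELINE = {
    "acquisition": ["consent_check"],
    "storage":     ["encryption_at_rest"],
    "use":         ["purpose_limitation"],
    "sharing":     ["contract_check"],
    "archival":    ["retention_policy"],
}

@dataclass
class DataAsset:
    stage: str
    sensitivity: int      # e.g. 1 = public .. 4 = restricted (assumed scale)
    risk_score: float     # contextual score in [0, 1] from the scoring layer

def controls_for(asset: DataAsset) -> list[str]:
    """Semi-automated rule evaluation: start from the static baseline policy,
    then tighten controls as dynamic risk indicators cross thresholds."""
    controls = list(BASELINE[asset.stage])
    if asset.sensitivity >= 3:
        controls.append("access_review")
    if asset.risk_score > 0.7:                 # assumed escalation threshold
        controls += ["mfa_required", "dpo_notification"]
    elif asset.risk_score > 0.4:
        controls.append("enhanced_logging")
    return controls

print(controls_for(DataAsset("sharing", sensitivity=4, risk_score=0.82)))
```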

25 pages, 3269 KB  
Article
Dynamic Carbon-Aware Scheduling for Electric Vehicle Fleets Using VMD-BSLO-CTL Forecasting and Multi-Objective MPC
by Hongyu Wang, Zhiyu Zhao, Kai Cui, Zixuan Meng, Bin Li, Wei Zhang and Wenwen Li
Energies 2026, 19(2), 456; https://doi.org/10.3390/en19020456 - 16 Jan 2026
Viewed by 318
Abstract
Accurate perception of dynamic carbon intensity is a prerequisite for low-carbon demand-side response. However, traditional grid-average carbon factors lack the spatio-temporal granularity required for real-time regulation. To address this, this paper proposes a “Prediction-Optimization” closed-loop framework for electric vehicle (EV) fleets. First, a hybrid forecasting model (VMD-BSLO-CTL) is constructed. By integrating Variational Mode Decomposition (VMD) with a CNN-Transformer-LSTM network optimized by the Blood-Sucking Leech Optimizer (BSLO), the model effectively captures multi-scale features. Validation on the UK National Grid dataset demonstrates its superior robustness against prediction horizon extension compared to state-of-the-art baselines. Second, a multi-objective Model Predictive Control (MPC) strategy is developed to guide EV charging. Applied to a real-world station-level scenario, the strategy navigates the trade-offs between user economy and grid stability. Simulation results show that the proposed framework simultaneously reduces economic costs by 4.17% and carbon emissions by 8.82%, while lowering the peak-valley difference by 6.46% and load variance by 11.34%. Finally, a cloud-edge collaborative deployment scheme indicates the engineering potential of the proposed approach for next-generation low-carbon energy management. Full article
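As a much-simplified illustration of the scheduling side, the sketch below allocates a charging-energy requirement across forecast slots by ascending weighted price-plus-carbon score. The weights, forecasts, and power limit are placeholders, and the greedy linear-cost allocation stands in for the paper's full multi-objective MPC with load-variance terms.

```python
import numpy as np

def plan_charging(price, carbon, energy_needed, p_max, w_price=1.0, w_carbon=0.5):
    """Allocate energy (kWh) across hourly slots by ascending weighted cost.
    With a linear objective and only a per-slot power cap, cheapest-slots-first
    is optimal; the paper's MPC additionally handles peak and variance terms."""
    score = w_price * np.asarray(price) + w_carbon * np.asarray(carbon)
    plan = np.zeros(len(score))
    for idx in np.argsort(score):
        take = min(p_max, energy_needed - plan.sum())
        if take <= 0:
            break
        plan[idx] = take
    return plan

# Receding horizon in practice: re-plan each hour as forecasts update.
rng = np.random.default_rng(0)
price = rng.uniform(0.1, 0.4, 12)      # $/kWh forecast (illustrative)
carbon = rng.uniform(100, 400, 12)     # gCO2/kWh forecast (illustrative)
carbon /= carbon.max()                 # normalize so the weights are comparable
plan = plan_charging(price, carbon, energy_needed=30.0, p_max=7.0)
print("charge schedule (kWh per hour):", np.round(plan, 2))
```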

20 pages, 5061 KB  
Article
Research on Orchard Navigation Technology Based on Improved LIO-SAM Algorithm
by Jinxing Niu, Jinpeng Guan, Tao Zhang, Le Zhang, Shuheng Shi and Qingyuan Yu
Agriculture 2026, 16(2), 192; https://doi.org/10.3390/agriculture16020192 - 12 Jan 2026
Viewed by 542
Abstract
To address the challenges in unstructured orchard environments, including high geometric similarity between fruit trees (with the measured average Euclidean distance difference between point cloud descriptors of adjacent trees being less than 0.5 m), significant dynamic interference (e.g., interference from pedestrians or moving equipment can occur every 5 min), and uneven terrain, this paper proposes an improved mapping algorithm named OSC-LIO (Orchard Scan Context Lidar Inertial Odometry via Smoothing and Mapping). The algorithm employs a dynamic point filtering strategy based on Euclidean clustering and spatiotemporal consistency within a 5-frame sliding window to reduce the interference of dynamic objects in point cloud registration. By integrating local semantic features such as fruit tree trunk diameter and canopy height difference, a two-tier verification mechanism combining global and local information is constructed to enhance the distinctiveness and robustness of loop closure detection. Motion compensation is achieved by fusing data from an Inertial Measurement Unit (IMU) and a wheel odometer to correct point cloud distortion. A three-level hierarchical indexing structure consisting of path partitioning, time windows, and a KD-tree (k-dimensional tree) is built to reduce the time required for loop closure retrieval and improve the system's real-time performance. Experimental results show that the improved OSC-LIO system reduces the Absolute Trajectory Error (ATE) by approximately 23.5% compared to the original LIO-SAM (Tightly coupled Lidar Inertial Odometry via Smoothing and Mapping) in a simulated orchard environment, while enabling stable and reliable path planning and autonomous navigation. This study provides a high-precision, lightweight technical solution for autonomous navigation in orchard scenarios. Full article
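The dynamic point filtering step is easy to isolate: cluster each frame with Euclidean clustering, then flag clusters whose centroids drift across the sliding window. The brute-force sketch below illustrates that idea on synthetic points; the thresholds and two-frame window are placeholders for the paper's 5-frame formulation.

```python
import numpy as np

def cluster(points, radius=0.5):
    """Naive Euclidean clustering (brute-force region growing); a LiDAR
    pipeline would use a KD-tree, but this keeps the sketch self-contained."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            near = np.where(np.linalg.norm(points - points[j], axis=1) < radius)[0]
            for k in near:
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

def dynamic_clusters(frames, move_thresh=0.3):
    """Flag clusters whose centroid drifts across a sliding window of frames
    (spatiotemporal consistency); static trunks stay put, moving objects do not."""
    prev, dynamic = None, set()
    for points in frames:
        labels = cluster(points)
        cents = {l: points[labels == l].mean(axis=0) for l in set(labels)}
        if prev is not None:
            for l, c in cents.items():
                nearest = min(prev.values(), key=lambda p: np.linalg.norm(p - c))
                if np.linalg.norm(nearest - c) > move_thresh:
                    dynamic.add(l)
        prev = cents
    return dynamic

# Two synthetic frames: a static "trunk" cluster and one that moved 0.6 m.
f1 = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 5, 0], [5.1, 5, 0]])
f2 = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.6, 5, 0], [5.7, 5, 0]])
print("dynamic cluster labels in last frame:", dynamic_clusters([f1, f2]))
```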

47 pages, 6988 KB  
Article
A Hierarchical Predictive-Adaptive Control Framework for State-of-Charge Balancing in Mini-Grids Using Deep Reinforcement Learning
by Iacovos Ioannou, Saher Javaid, Yasuo Tan and Vasos Vassiliou
Electronics 2026, 15(1), 61; https://doi.org/10.3390/electronics15010061 - 23 Dec 2025
Cited by 1 | Viewed by 702
Abstract
State-of-charge (SoC) balancing across multiple battery energy storage systems (BESS) is a central challenge in renewable-rich mini-grids. Heterogeneous battery capacities, differing states of health, stochastic renewable generation, and variable loads create a high-dimensional uncertain control problem. Conventional droop-based SoC balancing strategies are decentralized and computationally light but fundamentally reactive and limited, whereas model predictive control (MPC) offers foresight but is computationally intensive and prone to modeling errors. This paper proposes a Hierarchical Predictive–Adaptive Control (HPAC) framework for SoC balancing in mini-grids using deep reinforcement learning. The framework consists of two synergistic layers operating on different time scales. A long-horizon Predictive Engine, implemented as a federated Transformer network, provides multi-horizon probabilistic forecasts of net load, enabling multiple mini-grids to collaboratively train a high-capacity model without sharing raw data. A fast-timescale Adaptive Controller, implemented as a Soft Actor-Critic (SAC) agent, uses these forecasts to make real-time charge/discharge decisions for each BESS unit. The forecasts are used both to augment the agent’s state representation and to dynamically shape a multi-objective reward function that balances SoC, economic performance, degradation-aware operation, and voltage stability. The paper formulates SoC balancing as a Markov decision process, details the SAC-based control architecture, and presents a comprehensive evaluation using a MATLAB (R2025a)-based digital-twin simulation environment. A rigorous benchmarking study compares HPAC against fourteen representative controllers spanning rule-based, MPC, and various DRL paradigms. A sensitivity analysis on reward weight selection and ablation studies isolating the contributions of forecasting and dynamic reward shaping are conducted. Stress-test scenarios, including high-volatility net-load conditions and communication impairments, demonstrate the robustness of the approach. Results show that HPAC achieves near-minimal operating cost with essentially zero SoC variance and the lowest voltage variance among all compared controllers, while maintaining moderate energy throughput that implicitly preserves battery lifetime. Finally, the paper discusses a pathway from simulation to hardware-in-the-loop testing and a cloud-edge deployment architecture for practical, real-time deployment in real-world mini-grids. Full article
(This article belongs to the Special Issue Smart Power System Optimization, Operation, and Control)
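Two HPAC ingredients can be shown compactly: augmenting the controller's state with forecast statistics, and letting forecast volatility reshape the multi-objective reward. The sketch below does both under assumed weights and shaping rules; it is not the paper's SAC implementation.

```python
import numpy as np

def build_state(soc, net_load_forecast):
    """Augment per-BESS SoC observations with summary statistics of the
    probabilistic net-load forecast (mean and spread of a sample horizon)."""
    return np.concatenate([soc, [net_load_forecast.mean(), net_load_forecast.std()]])

def shaped_reward(soc, power, price, voltage_dev, forecast):
    """Multi-objective reward balancing SoC spread, cost, degradation, and
    voltage stability; forecast volatility shifts weight toward balancing
    (the shaping rule and all weights are illustrative assumptions)."""
    volatility = float(np.std(forecast))
    w_balance = 1.0 + 2.0 * volatility                         # assumed rule
    r_balance = -w_balance * float(np.var(soc))                # SoC imbalance
    r_cost = -price * float(np.sum(np.clip(power, 0, None)))   # purchase cost
    r_degrade = -0.01 * float(np.sum(np.abs(power)))           # throughput proxy
    r_voltage = -5.0 * voltage_dev ** 2
    return r_balance + r_cost + r_degrade + r_voltage

soc = np.array([0.42, 0.55, 0.61])          # three heterogeneous BESS units
forecast = np.array([3.1, 2.7, 4.0, 5.2])   # kW net-load samples (illustrative)
state = build_state(soc, forecast)
r = shaped_reward(soc, power=np.array([1.0, -0.5, 0.2]),
                  price=0.25, voltage_dev=0.01, forecast=forecast)
print("state:", np.round(state, 3), "reward:", round(r, 4))
```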

25 pages, 1886 KB  
Article
Cyber-Physical Power System Digital Twins—A Study on the State of the Art
by Nathan Elias Maruch Barreto and Alexandre Rasi Aoki
Energies 2025, 18(22), 5960; https://doi.org/10.3390/en18225960 - 13 Nov 2025
Cited by 4 | Viewed by 2039
Abstract
This study explores the transformative role of Cyber-Physical Power System (CPPS) Digital Twins (DTs) in enhancing the operational resilience, flexibility, and intelligence of modern power grids. By integrating physical system models with real-time cyber elements, CPPS DTs provide a synchronized framework for real-time monitoring, predictive maintenance, energy management, and cybersecurity. A structured literature review was conducted using the ProKnow-C methodology, yielding a curated portfolio of 74 publications from 2017 to 2025. This corpus was analyzed to identify key application areas, enabling technologies, simulation methods, and conceptual maturity levels of CPPS DTs. The study highlights seven primary application domains, including real-time decision support and cybersecurity, while emphasizing essential enablers such as data acquisition systems, cloud/edge computing, and advanced simulation techniques like co-simulation and hardware-in-the-loop testing. Despite significant academic interest, real-world implementations remain limited due to interoperability and integration challenges. The paper identifies gaps in standard definitions, maturity models, and simulation frameworks, underscoring the need for scalable, secure, and interoperable architectures and highlighting key areas for scientific development and real-life application of CPPS DTs, such as grid predictive maintenance, forecasting, fault handling, and power system cybersecurity. Full article
(This article belongs to the Special Issue Trends and Challenges in Cyber-Physical Energy Systems)

44 pages, 1049 KB  
Review
Toward Intelligent AIoT: A Comprehensive Survey on Digital Twin and Multimodal Generative AI Integration
by Xiaoyi Luo, Aiwen Wang, Xinling Zhang, Kunda Huang, Songyu Wang, Lixin Chen and Yejia Cui
Mathematics 2025, 13(21), 3382; https://doi.org/10.3390/math13213382 - 23 Oct 2025
Cited by 3 | Viewed by 3040
Abstract
The Artificial Intelligence of Things (AIoT) is rapidly evolving from basic connectivity to intelligent perception, reasoning, and decision making across domains such as healthcare, manufacturing, transportation, and smart cities. Multimodal generative AI (GAI) and digital twins (DTs) provide complementary solutions: DTs deliver high-fidelity virtual replicas for real-time monitoring, simulation, and optimization, while GAI enhances cognition, cross-modal understanding, and the generation of synthetic data. This survey presents a comprehensive overview of DT–GAI integration in the AIoT. We review the foundations of DTs and multimodal GAI and highlight their complementary roles. We further introduce the Sense–Map–Generate–Act (SMGA) framework, illustrating their interaction through the SMGA loop. We discuss key enabling technologies, including multimodal data fusion, dynamic DT evolution, and cloud–edge–end collaboration. Representative application scenarios, including smart manufacturing, smart cities, autonomous driving, and healthcare, are examined to demonstrate their practical impact. Finally, we outline open challenges, including efficiency, reliability, privacy, and standardization, and we provide directions for future research toward sustainable, trustworthy, and intelligent AIoT systems. Full article

21 pages, 771 KB  
Article
LLM-Driven Offloading Decisions for Edge Object Detection in Smart City Deployments
by Xingyu Yuan and He Li
Smart Cities 2025, 8(5), 169; https://doi.org/10.3390/smartcities8050169 - 10 Oct 2025
Cited by 3 | Viewed by 2020
Abstract
Object detection is a critical technology for smart city development. As request volumes surge, inference is increasingly offloaded from centralized clouds to user-proximal edge sites to reduce latency and backhaul traffic. However, heterogeneous workloads, fluctuating bandwidth, and dynamic device capabilities make offloading and scheduling difficult to optimize in edge environments. Deep reinforcement learning (DRL) has proved effective for this problem, but in practice, it relies on manually engineered reward functions that must be redesigned whenever service objectives change. To address this limitation, we introduce an LLM-driven framework that retargets DRL policies for edge object detection directly through natural language instructions. By leveraging the text-understanding and code-generation capabilities of large language models (LLMs), our system (i) interprets the current optimization objective; (ii) generates executable, environment-compatible reward function code; and (iii) iteratively refines the reward via closed-loop simulation feedback. In simulations on a real-world dataset, policies trained with LLM-generated rewards adapt from prompts alone and outperform counterparts trained with expert-designed rewards, while eliminating manual reward engineering. Full article
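The closed reward-generation loop can be sketched as: prompt an LLM for reward-function source code, execute it, evaluate the trained policy in simulation, and feed metrics back. In the sketch below, llm_generate is a hypothetical stub standing in for a real model API call, and the DRL training step is reduced to a single rollout.

```python
import numpy as np

def llm_generate(instruction: str, feedback: str = "") -> str:
    """Hypothetical stand-in for an LLM call (a real system would query a
    model API with the instruction and prior feedback); returns executable
    reward-function source code."""
    return (
        "def reward(latency_ms, accuracy, offloaded):\n"
        "    # generated from: " + instruction + "\n"
        "    return accuracy - 0.01 * latency_ms - 0.05 * offloaded\n"
    )

def refine_reward(instruction, rounds=3):
    """Closed loop: generate reward code, train/evaluate in simulation,
    then feed summary metrics back for the next refinement round."""
    feedback = ""
    for round_ in range(rounds):
        src = llm_generate(instruction, feedback)
        namespace = {}
        exec(src, namespace)              # compile the generated reward code
        reward_fn = namespace["reward"]
        # Stand-in for DRL training plus a simulation rollout:
        latency = np.random.default_rng(round_).uniform(20, 80)
        score = reward_fn(latency, accuracy=0.9, offloaded=1)
        feedback = f"round {round_}: mean reward {score:.3f}, latency {latency:.0f} ms"
        print(feedback)
    return reward_fn

reward = refine_reward("minimize latency while keeping detection accuracy high")
```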

19 pages, 7416 KB  
Article
LiDAR SLAM for Safety Inspection Robots in Large Scale Public Building Construction Sites
by Chunyong Feng, Junqi Yu, Jingdan Li, Yonghua Wu, Ben Wang and Kaiwen Wang
Buildings 2025, 15(19), 3602; https://doi.org/10.3390/buildings15193602 - 8 Oct 2025
Viewed by 2116
Abstract
LiDAR-based Simultaneous Localization and Mapping (SLAM) plays a key role in enabling inspection robots to achieve autonomous navigation. However, at installation construction sites of large-scale public buildings, existing methods often suffer from point-cloud drift, large z-axis errors, and inefficient loop closure detection, limiting their robustness and adaptability in complex environments. To address these issues, this paper proposes an improved algorithm, LeGO-LOAM-LPB (Large-scale Public Building), built upon the LeGO-LOAM framework. The method enhances feature quality through point-cloud preprocessing, stabilizes z-axis pose estimation by introducing ground-residual constraints, improves matching efficiency with an incremental k-d tree, and strengthens map consistency via a two-layer loop closure detection mechanism. Experiments conducted on a self-developed inspection robot platform in both simulated and real construction sites of large-scale public buildings demonstrate that LeGO-LOAM-LPB significantly improves positioning accuracy, reducing the root mean square error by 41.55% compared with the original algorithm. The results indicate that the proposed method offers a more precise and robust SLAM solution for safety inspection robots in construction environments and shows strong potential for engineering applications. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
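The ground-residual constraint is easiest to see in isolation: fit a plane to segmented ground points and penalize pose hypotheses whose implied height disagrees with it. The least-squares sketch below is a generic stand-in for how LeGO-LOAM-style pipelines anchor z, roll, and pitch, not the paper's exact formulation.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane z = a*x + b*y + c through segmented ground points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def ground_residual(pose_z, sensor_height, plane_coeffs, x=0.0, y=0.0):
    """Residual between the ground height implied by the pose (sensor z minus
    mounting height) and the fitted plane; adding this term to scan matching
    constrains z-axis drift during pose optimization."""
    a, b, c = plane_coeffs
    ground_z = a * x + b * y + c
    return (pose_z - sensor_height) - ground_z

# Illustrative segmented ground points (nearly flat, small noise).
ground_pts = np.array([[1, 0, 0.02], [0, 1, -0.01], [2, 2, 0.01], [3, 1, 0.0]])
plane = fit_ground_plane(ground_pts)
print("plane:", np.round(plane, 3),
      "residual at z=1.55:", round(ground_residual(1.55, 1.5, plane), 3))
```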

25 pages, 5957 KB  
Article
Benchmarking IoT Simulation Frameworks for Edge–Fog–Cloud Architectures: A Comparative and Experimental Study
by Fatima Bendaouch, Hayat Zaydi, Safae Merzouk and Saliha Assoul
Future Internet 2025, 17(9), 382; https://doi.org/10.3390/fi17090382 - 26 Aug 2025
Cited by 1 | Viewed by 1966
Abstract
Current IoT systems are structured around Edge, Fog, and Cloud layers to manage data and resource constraints more effectively. Although several studies have examined IoT simulators from a functional angle, few have combined technical comparisons with experimental validation under realistic conditions. This lack of integration limits the practical value of prior results and complicates tool selection for distributed architectures. This work introduces a selection and evaluation methodology for simulators that explicitly represent the Edge–Fog–Cloud continuum. Thirteen open-source tools are analyzed based on functional, technical, and operational features. Among them, iFogSim2 and FogNetSim++ are selected for a detailed experimental comparison on their support of mobility, resource allocation, and energy modeling across all layers. A shared hybrid IoT scenario is simulated using eight key metrics: execution time, application loop delay, CPU processing time per tuple, energy consumption, cloud execution cost, network usage, scalability, and robustness. The analysis reveals distinct modeling strategies: FogNetSim++ reduces loop latency by 48% and maintains stable performance at scale but shows high data loss under overload. In contrast, iFogSim2 consumes up to 80% less energy and preserves message continuity in stressful conditions, albeit with longer execution times. These outcomes reflect the trade-offs between modeling granularity, performance stability, and system resilience. Full article

40 pages, 4344 KB  
Review
Digital Cardiovascular Twins, AI Agents, and Sensor Data: A Narrative Review from System Architecture to Proactive Heart Health
by Nurdaulet Tasmurzayev, Bibars Amangeldy, Baglan Imanbek, Zhanel Baigarayeva, Timur Imankulov, Gulmira Dikhanbayeva, Inzhu Amangeldi and Symbat Sharipova
Sensors 2025, 25(17), 5272; https://doi.org/10.3390/s25175272 - 24 Aug 2025
Cited by 21 | Viewed by 8163
Abstract
Cardiovascular disease remains the world’s leading cause of mortality, yet everyday care still relies on episodic, symptom-driven interventions that detect ischemia, arrhythmias, and remodeling only after tissue damage has begun, limiting the effectiveness of therapy. A narrative review synthesized 183 studies published between 2016 and 2025 that were located through PubMed, MDPI, Scopus, IEEE Xplore, and Web of Science. This review examines CVD diagnostics using innovative technologies such as digital cardiovascular twins, which collect data from wearable IoT devices (electrocardiography (ECG), photoplethysmography (PPG), and mechanocardiography), clinical records, laboratory biomarkers, and genetic markers. It also examines their integration with artificial intelligence (AI): machine learning, deep learning, and graph and transformer networks for interpreting multi-dimensional data streams and creating prognostic models; generative AI, medical large language models (LLMs), and autonomous agents for decision support, personalized alerts, and treatment scenario modeling; and cloud and edge computing for data processing. This multi-layered architecture enables the detection of silent pathologies long before clinical manifestations, transforming continuous observations into actionable recommendations and shifting cardiology from reactive treatment to predictive and preventive care. Evidence converges on four layers: sensors streaming multimodal clinical and environmental data; hybrid analytics that integrate hemodynamic models with deep, graph, and transformer learning while Bayesian and Kalman filters manage uncertainty; decision support delivered by domain-tuned medical LLMs and autonomous agents; and prospective simulations that trial pacing or pharmacotherapy before bedside use, closing the prediction-intervention loop. This stack flags silent pathology weeks in advance and steers proactive, personalized prevention. It also lays the groundwork for software-as-a-medical-device ecosystems and new regulatory guidance for trustworthy AI-enabled cardiovascular care. Full article
(This article belongs to the Section Biomedical Sensors)

38 pages, 7241 KB  
Article
A Coordinated Adaptive Signal Control Method Based on Queue Evolution and Delay Modeling Approach
by Ruochen Hao, Yongjia Wang, Ziyu Wang, Lide Yang and Tuo Sun
Appl. Sci. 2025, 15(17), 9294; https://doi.org/10.3390/app15179294 - 24 Aug 2025
Cited by 2 | Viewed by 1562
Abstract
Coordinated adaptive signal control is a proven strategy for improving traffic efficiency and minimizing vehicular delays; this paper advances it in three ways. First, we develop a Queue Evolution and Delay Model (QEDM) that establishes the relationship between detector-measured queue lengths and model parameters. QEDM accurately characterizes residual queue dynamics (accumulation and dissipation), significantly enhancing delay estimation accuracy under oversaturated conditions. Second, we propose a novel intersection-level signal optimization method that addresses key practical challenges: (1) pedestrian stages and overlap phases; (2) coupling effects between signal cycle and queue length; and (3) stochastic vehicle arrivals in undersaturated conditions. Unlike conventional approaches, this method proactively shortens signal cycles to reduce queues while avoiding suboptimal solutions that artificially “dilute” delays by extending cycles. Third, we introduce an adaptive coordination control framework that maintains arterial-level green-band progression while maximizing intersection-level adaptive optimization flexibility. To bridge theory and practice, we design a cloud–edge–terminal collaborative deployment architecture for scalable signal control implementation and validate the framework through a hardware-in-the-loop simulation platform. Case studies in real-world scenarios demonstrate that the proposed method outperforms existing benchmarks in delay estimation accuracy, average vehicle delay, and travel time in coordinated directions. Additionally, we analyze the influence of coordination constraint update intervals on system performance, providing actionable insights for adaptive control systems. Full article
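QEDM is given only in words; the sketch below shows the kind of cycle-by-cycle queue recurrence it describes, with residual queues carried across cycles and delay taken as the time integral of queue length. The arrival and capacity numbers are invented, and the real model additionally calibrates its parameters from detector-measured queue lengths.

```python
def queue_evolution(arrivals, green, capacity, q0=0.0, dt=1.0):
    """Cycle-by-cycle queue recurrence q[k+1] = max(0, q[k] + a[k] - s*g[k]):
    residual queues accumulate when demand exceeds the discharged capacity
    and dissipate otherwise; total delay is the time integral of queue
    length (veh*s), which grows sharply under oversaturation."""
    q, queues, delay = q0, [], 0.0
    for a_k, g_k in zip(arrivals, green):
        q = max(0.0, q + a_k - capacity * g_k)
        queues.append(q)
        delay += q * dt
    return queues, delay

# Oversaturated then recovering demand (veh/cycle), fixed green ratios.
arrivals = [30, 35, 40, 25, 15, 10]
green = [0.45] * 6                 # effective green ratio per cycle
queues, delay = queue_evolution(arrivals, green, capacity=60, dt=90.0)
print("residual queues per cycle:", queues, "total delay (veh*s):", delay)
```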

23 pages, 3505 KB  
Article
Digital Imaging Simulation and Closed-Loop Verification Model of Infrared Payloads in Space-Based Cloud–Sea Scenarios
by Wen Sun, Yejin Li, Fenghong Li and Peng Rao
Remote Sens. 2025, 17(16), 2900; https://doi.org/10.3390/rs17162900 - 20 Aug 2025
Cited by 1 | Viewed by 1483
Abstract
Driven by the rising demand for digitalization and intelligent development of infrared payloads, next-generation systems must be developed within compressed timelines. High-precision digital modeling and simulation techniques offer essential data sources but often falter in complex space-based scenarios due to the limited availability of infrared characteristic data, hindering evaluation of payload effectiveness. To address this, we propose a digital imaging simulation and verification (DISV) model for high-fidelity infrared image generation and closed-loop validation in the context of cloud–sea target detection. Based on on-orbit infrared imagery, we construct a cloud cluster database via morphological operations and generate physically consistent backgrounds through iterative optimization. The DISV model subsequently calculates scene infrared radiation, integrating radiance computations with an electron-count-based imaging model for radiance-to-grayscale conversion. Closed-loop verification via blackbody radiance inversion is performed to confirm the model's accuracy. The mid-wave infrared (MWIR, 3–5 µm) system achieves root mean square errors (RMSEs) < 0.004, peak signal-to-noise ratios (PSNRs) > 49 dB, and a structural similarity index measure (SSIM) > 0.997. The long-wave infrared (LWIR, 8–12 µm) system yields RMSEs < 0.255, PSNRs > 47 dB, and an SSIM > 0.994. Under 20–40% cloud coverage, the target radiance inversion errors remain below 4.81% and 7.30% for the MWIR and LWIR, respectively. The DISV model enables infrared image simulation across multi-domain scenarios, offering vital support for optimizing on-orbit payload performance. Full article
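Electron-count-based imaging models generally follow the chain radiance to photons to photoelectrons to a quantized digital number, and closed-loop verification reduces to RMSE/PSNR checks; the sketch below shows both steps. Every optical and detector parameter here is a placeholder, not a DISV value.

```python
import numpy as np

H = 6.626e-34          # Planck constant (J*s)
C = 3.0e8              # speed of light (m/s)

def radiance_to_dn(radiance, wavelength=4e-6, t_int=1e-3, area=2.25e-10,
                   solid_angle=0.01, qe=0.7, full_well=1e5, bits=12):
    """Generic electron-count imaging chain: in-band radiance (W/m^2/sr) ->
    photon flux -> photoelectrons -> quantized digital number. Pixel area,
    solid angle, quantum efficiency, and full-well are illustrative."""
    photon_energy = H * C / wavelength
    electrons = radiance * area * solid_angle * t_int * qe / photon_energy
    return np.round(np.clip(electrons / full_well, 0, 1) * (2 ** bits - 1))

def rmse_psnr(ref, test, peak):
    """Closed-loop check: RMSE and PSNR between reference and simulated image."""
    rmse = np.sqrt(np.mean((ref - test) ** 2))
    return rmse, 20 * np.log10(peak / rmse)

scene = np.random.default_rng(1).uniform(0.5, 2.0, (64, 64))   # toy radiance map
img = radiance_to_dn(scene)
noisy = img + np.random.default_rng(2).normal(0, 2.0, img.shape)
rmse, psnr = rmse_psnr(img, noisy, peak=2 ** 12 - 1)
print(f"RMSE={rmse:.3f}, PSNR={psnr:.1f} dB")
```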

15 pages, 10795 KB  
Article
DigiHortiRobot: An AI-Driven Digital Twin Architecture for Hydroponic Greenhouse Horticulture with Dual-Arm Robotic Automation
by Roemi Fernández, Eduardo Navas, Daniel Rodríguez-Nieto, Alain Antonio Rodríguez-González and Luis Emmi
Future Internet 2025, 17(8), 347; https://doi.org/10.3390/fi17080347 - 31 Jul 2025
Cited by 3 | Viewed by 2633
Abstract
The integration of digital twin technology with robotic automation holds significant promise for advancing sustainable horticulture in controlled environment agriculture. This article presents DigiHortiRobot, a novel AI-driven digital twin architecture tailored for hydroponic greenhouse systems. The proposed framework integrates real-time sensing, predictive modeling, task planning, and dual-arm robotic execution within a modular, IoT-enabled infrastructure. DigiHortiRobot is structured into three progressive implementation phases: (i) monitoring and data acquisition through a multimodal perception system; (ii) decision support and virtual simulation for scenario analysis and intervention planning; and (iii) autonomous execution with feedback-based model refinement. The physical layer encompasses crops, infrastructure, and a mobile dual-arm robot; the virtual layer incorporates semantic modeling and simulation environments; and the synchronization layer enables continuous bi-directional communication via a nine-tier IoT architecture inspired by FIWARE standards. A robot task assignment algorithm is introduced to support operational autonomy while maintaining human oversight. The system is designed to optimize horticultural workflows such as seeding and harvesting while allowing farmers to interact remotely through cloud-based interfaces. Compared to previous digital agriculture approaches, DigiHortiRobot enables closed-loop coordination among perception, simulation, and action, supporting real-time task adaptation in dynamic environments. Experimental validation in a hydroponic greenhouse confirmed robust performance in both seeding and harvesting operations, achieving over 90% accuracy in localizing target elements and successfully executing planned tasks. The platform thus provides a strong foundation for future research in predictive control, semantic environment modeling, and scalable deployment of autonomous systems for high-value crop production. Full article
(This article belongs to the Special Issue Advances in Smart Environments and Digital Twin Technologies)
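The robot task assignment algorithm is named but not specified; a common baseline for this kind of scheduling is greedy nearest-task ordering with a human-oversight hook, sketched below under that assumption. Task names and positions are invented.

```python
import math

def assign_tasks(robot_pos, tasks, approved=None):
    """Greedy nearest-task ordering with a human-oversight hook: tasks not in
    the approved set are queued for farmer confirmation instead of execution."""
    approved = approved if approved is not None else {t[0] for t in tasks}
    pending, plan, pos = list(tasks), [], robot_pos
    while pending:
        name, xy = min(pending, key=lambda t: math.dist(pos, t[1]))
        pending.remove((name, xy))
        if name in approved:
            plan.append(name)
            pos = xy                      # robot moves to the task location
        else:
            plan.append(f"HOLD:{name}")   # await human confirmation
    return plan

tasks = [("seed_A", (2, 1)), ("harvest_B", (0.5, 0.5)), ("seed_C", (3, 4))]
print(assign_tasks((0, 0), tasks, approved={"seed_A", "seed_C"}))
```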
