Search Results (327)

Search Parameters:
Keywords = mobile edge cloud

24 pages, 2308 KB  
Review
Review on Application of Machine Vision-Based Intelligent Algorithms in Gear Defect Detection
by Dehai Zhang, Shengmao Zhou, Yujuan Zheng and Xiaoguang Xu
Processes 2025, 13(10), 3370; https://doi.org/10.3390/pr13103370 - 21 Oct 2025
Abstract
Gear defect detection directly affects the operational reliability of critical equipment in fields such as automotive and aerospace. Gear defect detection technology based on machine vision, leveraging the advantages of non-contact measurement, high efficiency, and cost-effectiveness, has become a key support for quality control in intelligent manufacturing. However, it still faces challenges including difficulties in semantic alignment of multimodal data, the imbalance between real-time detection requirements and computational resources, and poor model generalization in few-shot scenarios. This paper takes the paradigm evolution of gear defect detection technology as its main thread, systematically reviews the development from traditional image processing to deep learning, and focuses on the innovative application of intelligent algorithms. A research framework of "technical bottleneck-breakthrough path-application verification" is constructed: for the problem of multimodal fusion, the cross-modal feature alignment mechanism based on Transformer networks is analyzed in depth, clarifying how joint embedding of visual and vibration signals is realized by establishing global correlation mappings; for resource constraints, the performance of lightweight models such as MobileNet and ShuffleNet is quantitatively compared, verifying that these models reduce parameter counts by 40–60% while keeping mean Average Precision essentially unchanged; for small-sample scenarios, few-shot generation models based on contrastive learning are systematically organized, confirming that their accuracy in the 10-shot scenario can reach 90% of that of fully supervised models, thus enhancing generalization ability. Future research can focus on the collaboration between few-shot generation and physical simulation, edge-cloud dynamic scheduling, defect evolution modeling driven by multiphysics fields, and standardization of explainable artificial intelligence, with the aim of constructing a gear detection system with autonomous perception capabilities and promoting the development of industrial quality inspection toward high-precision, high-robustness, and low-cost intelligence.
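The cross-modal alignment path summarized above (joint embedding of visual and vibration signals via global correlation mapping) can be sketched minimally as a cross-attention fusion block. This is an illustrative assumption of how such a joint embedding might be wired, not an architecture taken from the reviewed papers; all dimensions and names are hypothetical.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy cross-attention fusion of visual and vibration features (illustrative only)."""
    def __init__(self, vis_dim=512, vib_dim=128, embed_dim=256, num_heads=4):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, embed_dim)   # project image patch features
        self.vib_proj = nn.Linear(vib_dim, embed_dim)   # project vibration segment features
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, 2)             # defect / no-defect logits

    def forward(self, vis_tokens, vib_tokens):
        # vis_tokens: (B, Nv, vis_dim), vib_tokens: (B, Ns, vib_dim)
        q = self.vis_proj(vis_tokens)
        kv = self.vib_proj(vib_tokens)
        fused, _ = self.cross_attn(q, kv, kv)           # visual queries attend to vibration keys
        return self.head(fused.mean(dim=1))             # pool fused tokens and classify

model = CrossModalFusion()
logits = model(torch.randn(2, 49, 512), torch.randn(2, 100, 128))
print(logits.shape)  # torch.Size([2, 2])
```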

16 pages, 6847 KB  
Article
Edge-Based Autonomous Fire and Smoke Detection Using MobileNetV2
by Dilshod Sharobiddinov, Hafeez Ur Rehman Siddiqui, Adil Ali Saleem, Gerardo Mendez Mezquita, Debora Libertad Ramírez Vargas and Isabel de la Torre Díez
Sensors 2025, 25(20), 6419; https://doi.org/10.3390/s25206419 - 17 Oct 2025
Abstract
Forest fires pose significant threats to ecosystems, human life, and the global climate, necessitating rapid and reliable detection systems. Traditional fire detection approaches, including sensor networks, satellite monitoring, and centralized image analysis, often suffer from delayed response, high false positives, and limited deployment in remote areas. Recent deep learning-based methods offer high classification accuracy but are typically computationally intensive and unsuitable for low-power, real-time edge devices. This study presents an autonomous, edge-based forest fire and smoke detection system using a lightweight MobileNetV2 convolutional neural network. The model is trained on a balanced dataset of fire, smoke, and non-fire images and optimized for deployment on resource-constrained edge devices. The system performs near real-time inference, achieving a test accuracy of 97.98% with an average end-to-end prediction latency of 0.77 s per frame (approximately 1.3 FPS) on the Raspberry Pi 5 edge device. Predictions include the class label, confidence score, and timestamp, all generated locally without reliance on cloud connectivity, thereby enhancing security and robustness against potential cyber threats. Experimental results demonstrate that the proposed solution maintains high predictive performance comparable to state-of-the-art methods while providing efficient, offline operation suitable for real-world environmental monitoring and early wildfire mitigation. This approach enables cost-effective, scalable deployment in remote forest regions, combining accuracy, speed, and autonomous edge processing for timely fire and smoke detection.
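A minimal sketch of the kind of on-device inference loop described here: a MobileNetV2 backbone with its classifier head replaced for three classes (fire, smoke, non-fire), producing a class label, confidence score, and timestamp per frame without any cloud call. The checkpoint path, class names, and preprocessing values are assumptions, not the authors' released code.

```python
import time
import torch
from torchvision import models, transforms
from PIL import Image

CLASSES = ["fire", "non_fire", "smoke"]  # assumed label order

# MobileNetV2 with its classifier replaced for 3 classes; weights assumed fine-tuned offline
model = models.mobilenet_v2(weights=None)
model.classifier[1] = torch.nn.Linear(model.last_channel, len(CLASSES))
model.load_state_dict(torch.load("fire_smoke_mnv2.pt", map_location="cpu"))  # hypothetical checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def classify(frame_path: str) -> dict:
    """Run one local, offline prediction and attach a timestamp."""
    x = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    conf, idx = probs.max(dim=0)
    return {"label": CLASSES[int(idx)], "confidence": float(conf),
            "timestamp": time.strftime("%Y-%m-%d %H:%M:%S")}

print(classify("frame_0001.jpg"))  # e.g. {'label': 'smoke', 'confidence': 0.97, ...}
```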

24 pages, 943 KB  
Review
A Review on AI Miniaturization: Trends and Challenges
by Bin Tang, Shengzhi Du and Antonie Johan Smith
Appl. Sci. 2025, 15(20), 10958; https://doi.org/10.3390/app152010958 - 12 Oct 2025
Abstract
Artificial intelligence (AI) often suffers from high energy consumption and complex deployment in resource-constrained environments, leading to a structural mismatch between capability and deployability. This review takes two representative scenarios—energy-first and performance-first—as the main thread, systematically comparing cloud, edge, and fog/cloudlet/mobile edge computing (MEC)/micro data center (MDC) architectures. Based on a standardized literature search and screening process, three categories of miniaturization strategies are distilled: redundancy compression (e.g., pruning, quantization, and distillation), knowledge transfer (e.g., distillation and parameter-efficient fine-tuning), and hardware–software co-design (e.g., neural architecture search (NAS), compiler-level, and operator-level optimization). The purposes of this review are threefold: (1) to unify the "architecture–strategy–implementation pathway" from a system-level perspective; (2) to establish technology–budget mapping with verifiable quantitative indicators; and (3) to summarize representative pathways for energy- and performance-prioritized scenarios, while highlighting current deficiencies in data disclosure and device-side validation. The findings indicate that, compared with single techniques, cross-layer combined optimization better balances accuracy, latency, and power consumption. Therefore, AI miniaturization should be regarded as a proactive method of structural reconfiguration for large-scale deployment. Future efforts should advance cross-scenario empirical validation and standardized benchmarking, while reinforcing hardware–software co-design. Compared with existing reviews that mostly focus on a single dimension, this review proposes a cross-level framework and design checklist, systematizing scattered optimization methods into reusable engineering pathways.
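As a concrete instance of the redundancy-compression category (pruning, quantization, distillation), the snippet below applies post-training dynamic quantization to a placeholder model, storing Linear weights as int8. It is a generic sketch of the technique, not a pipeline taken from any of the reviewed works.

```python
import os
import torch
import torch.nn as nn

# A placeholder float32 network standing in for whatever model is being miniaturized
model_fp32 = nn.Sequential(
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

def size_mb(model: nn.Module) -> float:
    """Serialized state_dict size, a rough proxy for the deployment footprint."""
    torch.save(model.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32: {size_mb(model_fp32):.2f} MB  ->  int8: {size_mb(model_int8):.2f} MB")
```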

19 pages, 1327 KB  
Article
An IoT Architecture for Sustainable Urban Mobility: Towards Energy-Aware and Low-Emission Smart Cities
by Manuel J. C. S. Reis, Frederico Branco, Nishu Gupta and Carlos Serôdio
Future Internet 2025, 17(10), 457; https://doi.org/10.3390/fi17100457 - 4 Oct 2025
Abstract
The rapid growth of urban populations intensifies congestion, air pollution, and energy demand. Green mobility is central to sustainable smart cities, and the Internet of Things (IoT) offers a means to monitor, coordinate, and optimize transport systems in real time. This paper presents an Internet of Things (IoT)-based architecture integrating heterogeneous sensing with edge–cloud orchestration and AI-driven control for green routing and coordinated Electric Vehicle (EV) charging. The framework supports adaptive traffic management, energy-aware charging, and multimodal integration through standards-aware interfaces and auditable Key Performance Indicators (KPIs). We hypothesize that, relative to a static shortest-path baseline, the integrated green routing and EV-charging coordination reduce (H1) mean travel time per trip by ≥7%, (H2) CO2 intensity (g/km) by ≥6%, and (H3) station peak load by ≥20% under moderate-to-high demand conditions. These hypotheses are tested in Simulation of Urban MObility (SUMO) with Handbook Emission Factors for Road Transport (HBEFA) emission classes, using 10 independent random seeds and reporting means with 95% confidence intervals and formal significance testing. The results confirm the hypotheses: average travel time decreases by approximately 9.8%, CO2 intensity by approximately 8%, and peak load by approximately 25% under demand multipliers ≥1.2 and EV shares ≥20%. Gains are attenuated under light demand, where congestion effects are weaker. We further discuss scalability, interoperability, privacy/security, and the simulation-to-deployment gap, and outline priorities for reproducible field pilots. In summary, a pragmatic edge–cloud IoT stack has the potential to lower congestion, reduce per-kilometer emissions, and smooth charging demand, provided it is supported by reliable data integration, resilient edge services, and standards-compliant interoperability, thereby contributing to sustainable urban mobility in line with the objectives of SDG 11 (Sustainable Cities and Communities).
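The reporting procedure mentioned above (means with 95% confidence intervals over 10 seeds plus a formal significance test) can be sketched as follows; the per-seed travel times below are made up for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-seed mean travel times (s) for baseline vs. integrated control
baseline   = np.array([612, 598, 605, 620, 610, 601, 615, 608, 599, 611], float)
integrated = np.array([549, 545, 551, 560, 548, 543, 556, 550, 541, 552], float)

def mean_ci(x, conf=0.95):
    """Sample mean and two-sided t-based confidence interval."""
    m, sem = x.mean(), stats.sem(x)
    half = sem * stats.t.ppf((1 + conf) / 2, len(x) - 1)
    return m, (m - half, m + half)

for name, x in [("baseline", baseline), ("integrated", integrated)]:
    m, ci = mean_ci(x)
    print(f"{name}: mean={m:.1f} s, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")

t_stat, p = stats.ttest_ind(baseline, integrated)   # formal significance test
reduction = (1 - integrated.mean() / baseline.mean()) * 100
print(f"relative travel-time reduction: {reduction:.1f}%  (p={p:.2e})")
```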

25 pages, 737 KB  
Systematic Review
A Systematic Literature Review on the Implementation and Challenges of Zero Trust Architecture Across Domains
by Sadaf Mushtaq, Muhammad Mohsin and Muhammad Mujahid Mushtaq
Sensors 2025, 25(19), 6118; https://doi.org/10.3390/s25196118 - 3 Oct 2025
Abstract
The Zero Trust Architecture (ZTA) model has emerged as a foundational cybersecurity paradigm that eliminates implicit trust and enforces continuous verification across users, devices, and networks. This study presents a systematic literature review of 74 peer-reviewed articles published between 2016 and 2025, spanning domains such as cloud computing (24 studies), Internet of Things (11), healthcare (7), enterprise and remote work systems (6), industrial and supply chain networks (5), mobile networks (5), artificial intelligence and machine learning (5), blockchain (4), big data and edge computing (3), and other emerging contexts (4). The analysis shows that authentication, authorization, and access control are the most consistently implemented ZTA components, whereas auditing, orchestration, and environmental perception remain underexplored. Across domains, the main challenges include scalability limitations, insufficient lightweight cryptographic solutions for resource-constrained systems, weak orchestration mechanisms, and limited alignment with regulatory frameworks such as GDPR and HIPAA. Cross-domain comparisons reveal that cloud and enterprise systems demonstrate relatively mature implementations, while IoT, blockchain, and big data deployments face persistent performance and compliance barriers. Overall, the findings highlight both the progress and the gaps in ZTA adoption, underscoring the need for lightweight cryptography, context-aware trust engines, automated orchestration, and regulatory integration. This review provides a roadmap for advancing ZTA research and practice, offering implications for researchers, industry practitioners, and policymakers seeking to enhance cybersecurity resilience.

20 pages, 7575 KB  
Article
A Two-Step Filtering Approach for Indoor LiDAR Point Clouds: Efficient Removal of Jump Points and Misdetected Points
by Yibo Cao, Yonghao Huang and Junheng Ni
Sensors 2025, 25(19), 5937; https://doi.org/10.3390/s25195937 - 23 Sep 2025
Abstract
In the simultaneous localization and mapping (SLAM) process of indoor mobile robots, accurate and stable point cloud data are crucial for localization and environment perception. However, in practical applications, indoor mobile robots may encounter glass, smooth floors, edge objects, and similar surfaces. Point cloud data are often misdetected in such environments, especially at the intersections of flat surfaces and the edges of obstacles, which are prone to generating jump points. Smooth planes may also produce misdetected points due to reflective properties or sensor errors. To solve these problems, a two-step filtering method is proposed in this paper. In the first step, a clustering filtering algorithm based on radial distance and tangential span is used to filter out jump points effectively. The algorithm ensures accurate data by analyzing the spatial relationship between each point in the point cloud and its neighboring points, allowing it to identify and remove jump points. In the second step, a filtering algorithm based on a grid penetration model is used to further filter out misdetected points on smooth planes. The model eliminates unrealistic point cloud data and improves the overall quality of the point cloud by simulating how the beam penetrates objects. Experimental results in indoor environments show that this two-step filtering method significantly reduces jump points and misdetected points in the point cloud, leading to improved navigational accuracy and stability for indoor mobile robots.
(This article belongs to the Section Radar Sensors)
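A minimal sketch of the first filtering step, flagging jump points by how sharply the radial distance changes between neighbouring scan points. The threshold, minimum cluster size, and synthetic sweep are assumptions, and the tangential-span check of the full algorithm is omitted for brevity.

```python
import numpy as np

def filter_jump_points(ranges: np.ndarray,
                       radial_thresh: float = 0.15,
                       min_cluster: int = 3) -> np.ndarray:
    """Boolean keep-mask for one 2D LiDAR sweep (ranges in metres, ordered by angle).

    The sweep is split into clusters wherever the radial distance jumps sharply;
    clusters with too few points are treated as jump/misdetected points.
    """
    breaks = np.abs(np.diff(ranges)) > radial_thresh
    cluster_id = np.concatenate([[0], np.cumsum(breaks)])
    keep = np.ones(len(ranges), dtype=bool)
    for cid in np.unique(cluster_id):
        idx = np.where(cluster_id == cid)[0]
        if len(idx) < min_cluster:
            keep[idx] = False
    return keep

# Synthetic sweep: a smooth wall at ~2 m with one spurious spike at index 5
rng = np.random.default_rng(0)
ranges = 2.0 + 0.01 * rng.standard_normal(20)
ranges[5] = 3.5
print(filter_jump_points(ranges))   # index 5 is masked out, the rest are kept
```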

30 pages, 3141 KB  
Article
Lyapunov-Based Deep Deterministic Policy Gradient for Energy-Efficient Task Offloading in UAV-Assisted MEC
by Jianhua Liu, Xudong Zhang, Haitao Zhou, Xia Lei, Huiru Li and Xiaofan Wang
Drones 2025, 9(9), 653; https://doi.org/10.3390/drones9090653 - 16 Sep 2025
Abstract
The demand for low-latency computing from the Internet of Things (IoT) and emerging applications challenges traditional cloud computing. Mobile Edge Computing (MEC) offers a solution by deploying resources at the network edge, yet terrestrial deployments face limitations. Unmanned Aerial Vehicles (UAVs), leveraging their high mobility and flexibility, provide dynamic computation offloading for User Equipments (UEs), especially in areas with poor infrastructure or network congestion. However, UAV-assisted MEC confronts significant challenges, including time-varying wireless channels and the inherent energy constraints of UAVs. We put forward the Lyapunov-based Deep Deterministic Policy Gradient (LyDDPG), a novel computation offloading algorithm. This algorithm innovatively integrates Lyapunov optimization with the Deep Deterministic Policy Gradient (DDPG) method. Lyapunov optimization transforms the long-term, stochastic energy minimization problem into a series of tractable, per-timeslot deterministic subproblems. Subsequently, DDPG is utilized to solve these subproblems by learning a model-free policy through environmental interaction. This policy maps system states to optimal continuous offloading and resource allocation decisions, aiming to minimize the Lyapunov-derived "drift-plus-penalty" term. The simulation outcomes indicate that, compared to several baseline and leading algorithms, the proposed LyDDPG algorithm reduces the total system energy consumption by at least 16% while simultaneously maintaining low task latency and ensuring system stability.
(This article belongs to the Section Drone Communications)
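For readers unfamiliar with the drift-plus-penalty construction referenced in the abstract, the standard Lyapunov formulation (written with generic symbols, not the paper's exact notation) is sketched below.

```latex
% Quadratic Lyapunov function over queue backlogs Q_i(t) and its conditional drift
L(\Theta(t)) = \tfrac{1}{2} \sum_i Q_i(t)^2, \qquad
\Delta(\Theta(t)) = \mathbb{E}\big[\, L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t) \,\big]

% Per-timeslot decision rule: minimize an upper bound on the drift plus V times the
% energy penalty E(t); a larger V weights energy saving over queue (stability) pressure
\min_{\text{offloading, resources}} \; \Delta(\Theta(t)) + V \, \mathbb{E}\big[\, E(t) \mid \Theta(t) \,\big]
```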

28 pages, 7302 KB  
Article
A Prototype of a Lightweight Structural Health Monitoring System Based on Edge Computing
by Yinhao Wang, Zhiyi Tang, Guangcai Qian, Wei Xu, Xiaomin Huang and Hao Fang
Sensors 2025, 25(18), 5612; https://doi.org/10.3390/s25185612 - 9 Sep 2025
Abstract
Bridge Structural Health Monitoring (BSHM) is vital for assessing structural integrity and operational safety. Traditional wired systems are limited by high installation costs and complexity, while existing wireless systems still face issues with cost, synchronization, and reliability. Moreover, cloud-based methods for extreme event detection struggle to meet real-time and bandwidth constraints in edge environments. To address these challenges, this study proposes a lightweight wireless BSHM system based on edge computing, enabling local data acquisition and real-time intelligent detection of extreme events. The system consists of wireless sensor nodes for front-end acceleration data collection and an intelligent hub for data storage, visualization, and earthquake recognition. Acceleration data are converted into time–frequency images to train a MobileNetV2-based model. With model quantization and Neural Processing Unit (NPU) acceleration, efficient on-device inference is achieved. Experiments on a laboratory steel bridge verify the system’s high acquisition accuracy, precise clock synchronization, and strong anti-interference performance. Compared with inference on a general-purpose ARM CPU running the unquantized model, the quantized model deployed on the NPU achieves a 26× speedup in inference, a 35% reduction in power consumption, and less than 1% accuracy loss. This solution provides a cost-effective, reliable BSHM framework for small-to-medium-sized bridges, offering local intelligence and rapid response with strong potential for real-world applications.
(This article belongs to the Section Fault Diagnosis & Sensors)
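A minimal sketch of the acceleration-to-image step described above, turning one acceleration window into a time-frequency image that a MobileNetV2-style classifier can consume. The sampling rate, spectrogram parameters, and synthetic signal are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

FS = 200  # assumed sampling rate of the wireless accelerometer node, Hz

def accel_to_tf_image(accel: np.ndarray, out_path: str) -> None:
    """Render one acceleration window as a time-frequency (spectrogram) image."""
    f, t, Sxx = signal.spectrogram(accel, fs=FS, nperseg=64, noverlap=48)
    plt.figure(figsize=(2.24, 2.24), dpi=100)            # roughly 224x224 px input
    plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight", pad_inches=0)
    plt.close()

# Example: a 10 s window of sensor noise with an 8 Hz burst standing in for an event
t = np.arange(0, 10, 1 / FS)
accel = 0.02 * np.random.randn(len(t))
accel[800:1200] += 0.5 * np.sin(2 * np.pi * 8 * t[800:1200])
accel_to_tf_image(accel, "window_0001.png")
```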

13 pages, 952 KB  
Article
Sensor Fusion for Target Detection Using LLM-Based Transfer Learning Approach
by Yuval Ziv, Barouch Matzliach and Irad Ben-Gal
Entropy 2025, 27(9), 928; https://doi.org/10.3390/e27090928 - 3 Sep 2025
Abstract
This paper introduces a novel sensor fusion approach for the detection of multiple static and mobile targets by autonomous mobile agents. Unlike previous studies that rely on theoretical sensor models assumed to be independent, the proposed methodology leverages real-world sensor data, which is transformed into sensor-specific probability maps (object-detection estimates for optical data, and a dedicated deep learning model that converts averaged point-cloud intensities for LIDAR) before being integrated through a large language model (LLM) framework. We introduce a methodology based on LLM transfer learning (LLM-TLFT) to create a robust global probability map, enabling efficient swarm management and target detection in challenging environments. The paper focuses on real data obtained from two types of sensors, light detection and ranging (LIDAR) sensors and optical sensors, and demonstrates significant improvements over existing methods (Independent Opinion Pool, CNN, and GPT-2 with deep transfer learning) in terms of precision, recall, and computational efficiency, particularly in scenarios with high noise and sensor imperfections. A significant advantage of the proposed approach is its ability to interpret dependencies between different sensors. In addition, model compression using knowledge distillation was performed (distilled TLFT), which yielded satisfactory results for deploying the approach on edge devices.
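For context on the baseline being improved upon, the Independent Opinion Pool mentioned above fuses per-sensor target-probability maps as a renormalized product under an independence assumption; below is a minimal sketch with made-up cell values, not the paper's data or code.

```python
import numpy as np

def independent_opinion_pool(p_maps: list[np.ndarray]) -> np.ndarray:
    """Fuse per-sensor target-probability maps assuming sensor independence.

    Each map holds P(target present) per grid cell; the fused posterior is the
    product over sensors of the target probabilities, renormalized cell by cell
    against the product of the complements.
    """
    p_target = np.prod(np.stack(p_maps), axis=0)
    p_empty = np.prod(np.stack([1.0 - p for p in p_maps]), axis=0)
    return p_target / (p_target + p_empty + 1e-12)

# Hypothetical 3x3 probability maps from an optical detector and a LIDAR model
p_optical = np.array([[0.1, 0.2, 0.9], [0.1, 0.5, 0.8], [0.1, 0.1, 0.2]])
p_lidar   = np.array([[0.2, 0.1, 0.8], [0.2, 0.6, 0.7], [0.1, 0.2, 0.3]])
print(independent_opinion_pool([p_optical, p_lidar]).round(2))
```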

22 pages, 1688 KB  
Article
LumiCare: A Context-Aware Mobile System for Alzheimer’s Patients Integrating AI Agents and 6G
by Nicola Dall’Ora, Lorenzo Felli, Stefano Aldegheri, Nicola Vicino and Romeo Giuliano
Electronics 2025, 14(17), 3516; https://doi.org/10.3390/electronics14173516 - 2 Sep 2025
Abstract
Alzheimer’s disease is a growing global health concern, demanding innovative solutions for early detection, continuous monitoring, and patient support. This article reviews recent advances in Smart Wearable Medical Devices (SWMDs), Internet of Things (IoT) systems, and mobile applications used to monitor physiological, behavioral, and cognitive changes in Alzheimer’s patients. We highlight the role of wearable sensors in detecting vital signs, falls, and geolocation data, alongside IoT architectures that enable real-time alerts and remote caregiver access. Building on these technologies, we present LumiCare, a conceptual, context-aware mobile system that integrates multimodal sensor data, chatbot-based interaction, and emerging 6G network capabilities. LumiCare uses machine learning for behavioral analysis, delivers personalized cognitive prompts, and enables emergency response through adaptive alerts and caregiver notifications. The system includes the LumiCare Companion, an interactive mobile app designed to support daily routines, cognitive engagement, and safety monitoring. By combining local AI processing with scalable edge-cloud architectures, LumiCare balances latency, privacy, and computational load. While promising, this work remains at the design stage and has not yet undergone clinical validation. Our analysis underscores the potential of wearable, IoT, and mobile technologies to improve the quality of life for Alzheimer’s patients, support caregivers, and reduce healthcare burdens.
(This article belongs to the Special Issue Smart Bioelectronics, Wearable Systems and E-Health)

13 pages, 3910 KB  
Proceeding Paper
Grading Support System for Pear Fruit Using Edge Computing
by Ryo Ito, Shutaro Konuma and Tatsuya Yamazaki
Eng. Proc. 2025, 107(1), 45; https://doi.org/10.3390/engproc2025107045 - 1 Sep 2025
Abstract
Le Lectier pears (hereafter, Pears) are graded based on appearance, requiring farmers to inspect tens of thousands of fruit in a short time before shipment. To assist in this process, a grading support system was developed. The existing cloud-based system used mobile devices to capture images and analyzed them with Convolutional Neural Networks (CNNs) and texture-based algorithms. However, communication delays and algorithm inefficiencies resulted in a 30 s execution time, which was too slow for practical use. This paper proposes an edge computing-based system using Mask R-CNN for appearance deterioration detection. Processing on edge servers reduces execution time to 5–10 s, and 39 out of 51 Pears are accurately detected.

25 pages, 5957 KB  
Article
Benchmarking IoT Simulation Frameworks for Edge–Fog–Cloud Architectures: A Comparative and Experimental Study
by Fatima Bendaouch, Hayat Zaydi, Safae Merzouk and Saliha Assoul
Future Internet 2025, 17(9), 382; https://doi.org/10.3390/fi17090382 - 26 Aug 2025
Abstract
Current IoT systems are structured around Edge, Fog, and Cloud layers to manage data and resource constraints more effectively. Although several studies have examined IoT simulators from a functional angle, few have combined technical comparisons with experimental validation under realistic conditions. This lack of integration limits the practical value of prior results and complicates tool selection for distributed architectures. This work introduces a selection and evaluation methodology for simulators that explicitly represent the Edge–Fog–Cloud continuum. Thirteen open-source tools are analyzed based on functional, technical, and operational features. Among them, iFogSim2 and FogNetSim++ are selected for a detailed experimental comparison on their support of mobility, resource allocation, and energy modeling across all layers. A shared hybrid IoT scenario is simulated using eight key metrics: execution time, application loop delay, CPU processing time per tuple, energy consumption, cloud execution cost, network usage, scalability, and robustness. The analysis reveals distinct modeling strategies: FogNetSim++ reduces loop latency by 48% and maintains stable performance at scale but shows high data loss under overload. In contrast, iFogSim2 consumes up to 80% less energy and preserves message continuity in stressful conditions, albeit with longer execution times. These outcomes reflect the trade-offs between modeling granularity, performance stability, and system resilience.

16 pages, 1350 KB  
Article
The Synergistic Impact of 5G on Cloud-to-Edge Computing and the Evolution of Digital Applications
by Saleh M. Altowaijri and Mohamed Ayari
Mathematics 2025, 13(16), 2634; https://doi.org/10.3390/math13162634 - 16 Aug 2025
Abstract
The integration of 5G technology with cloud and edge computing is redefining the digital landscape by enabling ultra-fast connectivity, low-latency communication, and scalable solutions across diverse application domains. This paper investigates the synergistic impact of 5G on cloud-to-edge architectures, emphasizing its transformative role in revolutionizing sectors such as healthcare, smart cities, industrial automation, and autonomous systems. Key advancements in 5G—including Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communication (URLLC), and Massive Machine-Type Communications (mMTC)—are examined for their role in enabling real-time data processing, edge intelligence, and IoT scalability. In addition to conceptual analysis, the paper presents simulation-based evaluations comparing 5G cloud-to-edge systems with traditional 4G cloud models. Quantitative results demonstrate significant improvements in latency, energy efficiency, reliability, and AI prediction accuracy. The study also explores challenges in infrastructure deployment, cybersecurity, and latency management while highlighting the growing opportunities for innovation in AI-driven automation and immersive consumer technologies. Future research directions are outlined, focusing on energy-efficient designs, advanced security mechanisms, and equitable access to 5G infrastructure. Overall, this study offers comprehensive insights and performance benchmarks that will serve as a valuable resource for researchers and practitioners working to advance next-generation digital ecosystems.
(This article belongs to the Special Issue Innovations in Cloud Computing and Machine Learning Applications)

29 pages, 12646 KB  
Article
The IoRT-in-Hand: Tele-Robotic Echography and Digital Twins on Mobile Devices
by Juan Bravo-Arrabal, Zhuoqi Cheng, J. J. Fernández-Lozano, Jose Antonio Gomez-Ruiz, Christian Schlette, Thiusius Rajeeth Savarimuthu, Anthony Mandow and Alfonso García-Cerezo
Sensors 2025, 25(16), 4972; https://doi.org/10.3390/s25164972 - 11 Aug 2025
Abstract
The integration of robotics and mobile networks (5G/6G) through the Internet of Robotic Things (IoRT) is revolutionizing telemedicine, enabling remote physician participation in scenarios where specialists are scarce, where there is a high risk to them, such as in conflicts or natural disasters, or where access to a medical facility is not possible. Nevertheless, touching a human safely with a robotic arm in non-engineered or even out-of-hospital environments presents substantial challenges. This article presents a novel IoRT approach for healthcare in or from remote areas, enabling interaction between a specialist’s hand and a robotic hand. We introduce the IoRT-in-hand: a smart, lightweight end-effector that extends the specialist’s hand, integrating a medical instrument, an RGB camera with servos, a force/torque sensor, and a mini-PC with Internet connectivity. Additionally, we propose an open-source Android app combining MQTT and ROS for real-time remote manipulation, alongside an Edge–Cloud architecture that links the physical robot with its Digital Twin (DT), enabling precise control and 3D visual feedback of the robot’s environment. A proof of concept is presented for the proposed tele-robotic system, using a 6-DOF manipulator with the IoRT-in-hand to perform an ultrasound scan. Teleoperation was conducted over 2300 km via a 5G NSA network on the operator side and a wired network in a laboratory on the robot side. Performance was assessed through human subject feedback, sensory data, and latency measurements, demonstrating the system’s potential for remote healthcare and emergency applications. The source code and CAD models of the IoRT-in-hand prototype are publicly available in an open-access repository to encourage reproducibility and facilitate further developments in robotic telemedicine.
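A minimal sketch of the kind of MQTT telemetry the IoRT-in-hand's mini-PC could publish (force/torque samples that the operator-side app subscribes to). The broker address, topic name, and payload fields are assumptions, not the project's published interface; a persistent client would replace the one-shot publish at realistic telemetry rates.

```python
import json
import time
import paho.mqtt.publish as publish

BROKER = "broker.example.org"             # hypothetical broker reachable over the 5G link
TOPIC = "iort/end_effector/force_torque"  # hypothetical topic name

def read_force_torque():
    """Placeholder for the real force/torque sensor driver on the mini-PC."""
    return {"fx": 0.1, "fy": -0.2, "fz": 4.8, "tx": 0.01, "ty": 0.00, "tz": 0.02}

while True:
    sample = read_force_torque()
    sample["t"] = time.time()
    # One-shot publish for clarity; qos=1 gives at-least-once delivery
    publish.single(TOPIC, json.dumps(sample), qos=1, hostname=BROKER)
    time.sleep(1.0)
```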

51 pages, 4099 KB  
Review
Artificial Intelligence and Digital Twin Technologies for Intelligent Lithium-Ion Battery Management Systems: A Comprehensive Review of State Estimation, Lifecycle Optimization, and Cloud-Edge Integration
by Seyed Saeed Madani, Yasmin Shabeer, Michael Fowler, Satyam Panchal, Hicham Chaoui, Saad Mekhilef, Shi Xue Dou and Khay See
Batteries 2025, 11(8), 298; https://doi.org/10.3390/batteries11080298 - 5 Aug 2025
Abstract
The rapid growth of electric vehicles (EVs) and new energy systems has put lithium-ion batteries at the center of the clean energy transition. Nevertheless, achieving the best battery performance, safety, and sustainability under widely varying operating conditions requires major innovations in Battery Management Systems (BMS). This review paper explores how artificial intelligence (AI) and digital twin (DT) technologies can be integrated to enable the intelligent BMS of the future. It investigates how data-driven approaches such as deep learning, ensembles, and physics-informed models improve the accuracy of predicting state of charge (SOC), state of health (SOH), and remaining useful life (RUL). Additionally, the paper reviews progress in AI-based approaches to cooling, fast charging, fault detection, and interpretable AI models. Cloud and edge computing, working together with DTs, enable better diagnostics, predictive support, and improved management across EV operation, stationary energy storage, and recycling. The review highlights recent successes in AI-driven materials research, sustainable battery production, and end-of-life planning for used systems, along with open problems in cybersecurity, data integration, and mass rollout. We spotlight important research themes, existing problems, and future challenges following careful analysis of up-to-date approaches and systems. Uniting physical modeling with AI-based analytics on cloud-edge-DT platforms supports the development of robust, intelligent, and ecologically responsible batteries aligned with future mobility and the wider use of renewable energy.
