Search Results (2,299)

Search Parameters:
Keywords = intelligent mobility

51 pages, 4099 KiB  
Review
Artificial Intelligence and Digital Twin Technologies for Intelligent Lithium-Ion Battery Management Systems: A Comprehensive Review of State Estimation, Lifecycle Optimization, and Cloud-Edge Integration
by Seyed Saeed Madani, Yasmin Shabeer, Michael Fowler, Satyam Panchal, Hicham Chaoui, Saad Mekhilef, Shi Xue Dou and Khay See
Batteries 2025, 11(8), 298; https://doi.org/10.3390/batteries11080298 - 5 Aug 2025
Abstract
The rapid growth of electric vehicles (EVs) and new energy systems has put lithium-ion batteries at the center of the clean energy transition. Nevertheless, achieving the best battery performance, safety, and sustainability under widely varying operating conditions requires major innovations in Battery Management Systems (BMS). This review explores how artificial intelligence (AI) and digital twin (DT) technologies can be integrated to enable the intelligent BMS of the future. It investigates how powerful data-driven approaches such as deep learning, ensembles, and physics-based models improve the accuracy of predicting state of charge (SOC), state of health (SOH), and remaining useful life (RUL). Additionally, the paper reviews progress in AI-enabled cooling, fast charging, fault detection, and explainable AI models. Combining cloud and edge computing with DTs enables better diagnostics, predictive support, and improved management across EV, stationary storage, and recycling applications. The review highlights recent successes in AI-driven materials research, sustainable battery production, and planning for second-life systems, along with new challenges in cybersecurity, data integration, and mass rollout. We spotlight important research themes, existing problems, and future directions following careful analysis of different up-to-date approaches and systems. Uniting physical modeling with AI-based analytics on cloud-edge-DT platforms supports the development of robust, intelligent, and ecologically responsible batteries aligned with future mobility and the wider adoption of renewable energy. Full article

24 pages, 2357 KiB  
Article
Optimized Intelligent Localization Through Mathematical Modeling and Crow Search Algorithms
by Tamer Ramadan Badawy and Nesreen I. Ziedan
Sensors 2025, 25(15), 4804; https://doi.org/10.3390/s25154804 - 5 Aug 2025
Abstract
Localization has emerged as a critical problem over the past decades, with diverse techniques developed to address robot and mobile localization challenges across varied domains. However, existing localization methods still fall short of achieving the precision needed for certain high-demand applications. The proposed algorithm is designed to enhance localization accuracy by integrating mathematical modeling with the Crow Search Algorithm (CSA). The objective is to identify the most probable position within a designated search space. Anchored by a network of fixed points, the search area is initially defined. A mathematical approach is then applied to reduce this area by calculating the intersections between circles centered at each anchor point. Within this reduced area, an array of candidate points is selected, and their centroid is computed to serve as an initial estimate. The modified CSA iteratively improves upon this estimate by emulating the natural behavior of crows, updating its variables to converge on the optimal position. Experimental evaluations, conducted on both real and simulated datasets, demonstrate that the proposed algorithm achieves better localization accuracy than existing methods, reaching an accuracy of 98%. These results confirm the effectiveness of our approach for applications that require high precision with minimal infrastructure and low computational complexity. Full article
(This article belongs to the Section Navigation and Positioning)
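As an illustration of the pipeline this abstract describes, the following Python sketch computes circle-intersection candidates from range measurements, takes their centroid as the initial estimate, and refines it with a basic Crow Search Algorithm. The anchor layout, fitness function, and CSA parameters (flight length, awareness probability) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (may be empty)."""
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = np.sqrt(max(r1**2 - a**2, 0.0))
    u = (c2 - c1) / d
    p = c1 + a * u
    perp = np.array([-u[1], u[0]])
    return [p + h * perp, p - h * perp]

def csa_localize(anchors, ranges, n_crows=20, iters=200, fl=2.0, ap=0.1, seed=0):
    """Centroid of circle intersections as the initial guess, refined by a basic Crow Search."""
    rng = np.random.default_rng(seed)
    pts = []
    for i in range(len(anchors)):
        for j in range(i + 1, len(anchors)):
            pts += circle_intersections(anchors[i], ranges[i], anchors[j], ranges[j])
    guess = np.mean(pts, axis=0) if pts else anchors.mean(axis=0)

    def fitness(x):  # sum of squared range residuals
        return np.sum((np.linalg.norm(anchors - x, axis=1) - ranges) ** 2)

    crows = guess + rng.normal(scale=1.0, size=(n_crows, 2))
    memory = crows.copy()
    mem_fit = np.array([fitness(m) for m in memory])
    lo, hi = anchors.min(axis=0) - 1, anchors.max(axis=0) + 1
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)
            if rng.random() > ap:           # follow crow j's remembered best position
                cand = crows[i] + rng.random() * fl * (memory[j] - crows[i])
            else:                           # random exploration inside the search box
                cand = rng.uniform(lo, hi)
            crows[i] = np.clip(cand, lo, hi)
            f = fitness(crows[i])
            if f < mem_fit[i]:
                memory[i], mem_fit[i] = crows[i].copy(), f
    return memory[np.argmin(mem_fit)]

# toy example: 4 anchors, noisy ranges to a target at (3, 4)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - target, axis=1) + np.random.default_rng(1).normal(0, 0.1, 4)
print(csa_localize(anchors, ranges))
```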

16 pages, 13514 KiB  
Article
Development of a High-Speed Time-Synchronized Crop Phenotyping System Based on Precision Time Protocol
by Runze Song, Haoyu Liu, Yueyang Hu, Man Zhang and Wenyi Sheng
Appl. Sci. 2025, 15(15), 8612; https://doi.org/10.3390/app15158612 - 4 Aug 2025
Abstract
Aiming to address the problems of asynchronous acquisition time of multiple sensors in the crop phenotype acquisition system and high cost of the acquisition equipment, this paper developed a low-cost crop phenotype synchronous acquisition system based on the PTP synchronization protocol, realizing the synchronous acquisition of three types of crop data: visible light images, thermal infrared images, and laser point clouds. The paper innovatively proposed the Difference Structural Similarity Index Measure (DSSIM) index, combined with statistical indicators (average point number difference, average coordinate error), distribution characteristic indicators (Charm distance), and Hausdorff distance to characterize the stability of the system. After 72 consecutive hours of synchronization testing on the timing boards, it was verified that the root mean square error of the synchronization time for each timing board reached the ns level. The synchronous trigger acquisition time for crop parameters under time synchronization was controlled at the microsecond level. Using pepper as the crop sample, 133 consecutive acquisitions were conducted. The acquisition success rate for the three phenotypic data types of pepper samples was 100%, with a DSSIM of approximately 0.96. The average point number difference and average coordinate error were both about 3%, while the Charm distance and Hausdorff distance were only 1.14 mm and 5 mm. This system can provide hardware support for multi-parameter acquisition and data registration in the fast mobile crop phenotype platform, laying a reliable data foundation for crop growth monitoring, intelligent yield analysis, and prediction. Full article
(This article belongs to the Special Issue Smart Farming: Internet of Things (IoT)-Based Sustainable Agriculture)
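A minimal sketch of the kind of point-cloud stability metrics the abstract lists (point-count difference, mean coordinate error, Hausdorff distance); the paper's DSSIM and Charm-distance definitions are not reproduced, and the toy clouds are assumed for illustration only.

```python
import numpy as np

def pairwise_dist(a, b):
    """Euclidean distance matrix between point sets a (N, 3) and b (M, 3)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def point_count_diff(a, b):
    """Relative difference in point counts between two acquisitions."""
    return abs(len(a) - len(b)) / max(len(a), len(b))

def mean_coordinate_error(a, b):
    """Mean nearest-neighbour distance from cloud a to cloud b."""
    return pairwise_dist(a, b).min(axis=1).mean()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point clouds."""
    d = pairwise_dist(a, b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy example: a plant point cloud and a slightly perturbed repeat acquisition (millimetres)
rng = np.random.default_rng(0)
cloud_a = rng.random((500, 3)) * 100.0
cloud_b = cloud_a[:485] + rng.normal(0, 0.5, (485, 3))
print(point_count_diff(cloud_a, cloud_b), mean_coordinate_error(cloud_a, cloud_b), hausdorff(cloud_a, cloud_b))
```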

12 pages, 480 KiB  
Article
A Novel Deep Learning Model for Predicting Colorectal Anastomotic Leakage: A Pioneer Multicenter Transatlantic Study
by Miguel Mascarenhas, Francisco Mendes, Filipa Fonseca, Eduardo Carvalho, Andre Santos, Daniela Cavadas, Guilherme Barbosa, Antonio Pinto da Costa, Miguel Martins, Abdullah Bunaiyan, Maísa Vasconcelos, Marley Ribeiro Feitosa, Shay Willoughby, Shakil Ahmed, Muhammad Ahsan Javed, Nilza Ramião, Guilherme Macedo and Manuel Limbert
J. Clin. Med. 2025, 14(15), 5462; https://doi.org/10.3390/jcm14155462 - 3 Aug 2025
Abstract
Background/Objectives: Colorectal anastomotic leak (CAL) is one of the most severe postoperative complications in colorectal surgery, impacting patient morbidity and mortality. Current risk assessment methods rely on clinical and intraoperative factors, but no real-time predictive tool exists. This study aimed to develop an artificial intelligence model based on intraoperative laparoscopic recording of the anastomosis for CAL prediction. Methods: A convolutional neural network (CNN) was trained with annotated frames from colorectal surgery videos across three international high-volume centers (Instituto Português de Oncologia de Lisboa, Hospital das Clínicas de Ribeirão Preto, and Royal Liverpool University Hospital). The dataset included a total of 5356 frames from 26 patients, 2007 with CAL and 3349 showing normal anastomosis. Four CNN architectures (EfficientNetB0, EfficientNetB7, ResNet50, and MobileNetV2) were tested. The models’ performance was evaluated using their sensitivity, specificity, accuracy, and area under the receiver operating characteristic (AUROC) curve. Heatmaps were generated to identify key image regions influencing predictions. Results: The best-performing model achieved an accuracy of 99.6%, AUROC of 99.6%, sensitivity of 99.2%, specificity of 100.0%, PPV of 100.0%, and NPV of 98.9%. The model reliably identified CAL-positive frames and provided visual explanations through heatmaps. Conclusions: To our knowledge, this is the first AI model developed to predict CAL using intraoperative video analysis. Its accuracy suggests the potential to redefine surgical decision-making by providing real-time risk assessment. Further refinement with a larger dataset and diverse surgical techniques could enable intraoperative interventions to prevent CAL before it occurs, marking a paradigm shift in colorectal surgery. Full article
(This article belongs to the Special Issue Updates in Digestive Diseases and Endoscopy)
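A hedged sketch of frame-level binary classification in the spirit of the study: fine-tuning a MobileNetV2 head to separate CAL from normal-anastomosis frames. The preprocessing, augmentation, and hyperparameters here are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier over laparoscopic frames: CAL vs. normal anastomosis.
# Hypothetical training step; pretrained ImageNet weights would normally be loaded.
model = models.mobilenet_v2(weights=None)
model.classifier[1] = nn.Linear(model.last_channel, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    """frames: (B, 3, 224, 224) float tensor; labels: (B,) with 0 = normal, 1 = CAL."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# dummy batch to show the expected shapes
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 1, 0])))
```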

23 pages, 2029 KiB  
Systematic Review
Exploring the Role of Industry 4.0 Technologies in Smart City Evolution: A Literature-Based Study
by Nataliia Boichuk, Iwona Pisz, Anna Bruska, Sabina Kauf and Sabina Wyrwich-Płotka
Sustainability 2025, 17(15), 7024; https://doi.org/10.3390/su17157024 - 2 Aug 2025
Abstract
Smart cities are technologically advanced urban environments where interconnected systems and data-driven technologies enhance public service delivery and quality of life. These cities rely on information and communication technologies, the Internet of Things, big data, cloud computing, and other Industry 4.0 tools to support efficient city management and foster citizen engagement. Often referred to as digital cities, they integrate intelligent infrastructures and real-time data analytics to improve mobility, security, and sustainability. Ubiquitous sensors, paired with Artificial Intelligence, enable cities to monitor infrastructure, respond to residents’ needs, and optimize urban conditions dynamically. Given the increasing significance of Industry 4.0 in urban development, this study adopts a bibliometric approach to systematically review the application of these technologies within smart cities. Utilizing major academic databases such as Scopus and Web of Science, the research aims to identify the primary Industry 4.0 technologies implemented in smart cities, assess their impact on infrastructure, economic systems, and urban communities, and explore the challenges and benefits associated with their integration. The bibliometric analysis covered publications from 2016 to 2023, the period since urban researchers’ interest in the technologies of the new industrial revolution emerged. The aim is to contribute to a deeper understanding of how smart cities evolve through the adoption of advanced technological frameworks. Research indicates that IoT and AI are the most commonly used tools in urban spaces, particularly in smart mobility and smart environments. Full article

23 pages, 3153 KiB  
Article
Research on Path Planning Method for Mobile Platforms Based on Hybrid Swarm Intelligence Algorithms in Multi-Dimensional Environments
by Shuai Wang, Yifan Zhu, Yuhong Du and Ming Yang
Biomimetics 2025, 10(8), 503; https://doi.org/10.3390/biomimetics10080503 - 1 Aug 2025
Abstract
Traditional algorithms such as Dijkstra and APF rely on complete environmental information for path planning, which results in numerous constraints during modeling. This not only increases the complexity of the algorithms but also reduces the efficiency and reliability of the planning. Swarm intelligence algorithms possess strong data processing and search capabilities, enabling them to efficiently solve path planning problems in different environments and generate approximately optimal paths. However, swarm intelligence algorithms suffer from issues like premature convergence and a tendency to fall into local optima during the search process. Thus, an improved Artificial Bee Colony-Beetle Antennae Search (IABCBAS) algorithm is proposed. Firstly, Tent chaos and non-uniform variation are introduced into the bee algorithm to enhance population diversity and spatial searchability. Secondly, the stochastic reverse learning mechanism and greedy strategy are incorporated into the beetle antennae search algorithm to improve direction-finding ability and the capacity to escape local optima, respectively. Finally, the weights of the two algorithms are adaptively adjusted to balance global search and local refinement. Results of experiments using nine benchmark functions and four comparative algorithms show that the improved algorithm exhibits superior path point search performance and high stability in both high- and low-dimensional environments, as well as in unimodal and multimodal environments. Ablation experiment results indicate that the optimization strategies introduced in the algorithm effectively improve convergence accuracy and speed during path planning. Results of the path planning experiments show that compared with the comparison algorithms, the average path planning distance of the improved algorithm is reduced by 23.83% in the 2D multi-obstacle environment, and the average planning time is shortened by 27.97% in the 3D surface environment. The improvement in path planning efficiency makes this algorithm of certain value in engineering applications. Full article
(This article belongs to the Section Biological Optimisation and Management)
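To illustrate one of the named ingredients, the sketch below initialises a swarm with a Tent chaotic map rather than uniform random draws; the bee/beetle search phases, stochastic reverse learning, greedy strategy, and adaptive weighting of IABCBAS are not reproduced, and the seed value is an assumption.

```python
import numpy as np

def tent_chaos_population(n_agents, dim, lower, upper, x0=0.37):
    """Initialise a swarm with a Tent chaotic map to spread agents over the search box.

    Only the chaotic-initialisation component of the hybrid algorithm is sketched here.
    """
    seq = np.empty(n_agents * dim)
    x = x0
    for k in range(seq.size):
        x = x / 0.5 if x < 0.5 else (1.0 - x) / 0.5   # Tent map with alpha = 0.5
        seq[k] = x
    chaos = seq.reshape(n_agents, dim)
    return lower + chaos * (upper - lower)            # scale into [lower, upper]

pop = tent_chaos_population(n_agents=30, dim=3, lower=np.zeros(3), upper=np.full(3, 10.0))
print(pop.shape, pop.min(), pop.max())
```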

16 pages, 1873 KiB  
Systematic Review
A Systematic Review of GIS Evolution in Transportation Planning: Towards AI Integration
by Ayda Zaroujtaghi, Omid Mansourihanis, Mohammad Tayarani, Fatemeh Mansouri, Moein Hemmati and Ali Soltani
Future Transp. 2025, 5(3), 97; https://doi.org/10.3390/futuretransp5030097 - 1 Aug 2025
Abstract
Previous reviews have examined specific facets of Geographic Information Systems (GIS) in transportation planning, such as transit-focused applications and open source geospatial tools. However, this study offers the first systematic, PRISMA-guided longitudinal evaluation of GIS integration in transportation planning, spanning thematic domains, data models, methodologies, and outcomes from 2004 to 2024. This study addresses this gap through a longitudinal analysis of GIS-based transportation research from 2004 to 2024, adhering to PRISMA guidelines. By conducting a mixed-methods analysis of 241 peer-reviewed articles, this study delineates major trends, such as increased emphasis on sustainability, equity, stakeholder involvement, and the incorporation of advanced technologies. Prominent domains include land use–transportation coordination, accessibility, artificial intelligence, real-time monitoring, and policy evaluation. Expanded data sources, such as real-time sensor feeds and 3D models, alongside sophisticated modeling techniques, enable evidence-based, multifaceted decision-making. However, challenges like data limitations, ethical concerns, and the need for specialized expertise persist, particularly in developing regions. Future geospatial innovations should prioritize the responsible adoption of emerging technologies, inclusive capacity building, and environmental justice to foster equitable and efficient transportation systems. This review highlights GIS’s evolution from a supplementary tool to a cornerstone of data-driven, sustainable urban mobility planning, offering insights for researchers, practitioners, and policymakers to advance transportation strategies that align with equity and sustainability goals. Full article

28 pages, 4107 KiB  
Article
Channel Model for Estimating Received Power Variations at a Mobile Terminal in a Cellular Network
by Kevin Verdezoto Moreno, Pablo Lupera-Morillo, Roberto Chiguano, Robin Álvarez, Ricardo Llugsi and Gabriel Palma
Electronics 2025, 14(15), 3077; https://doi.org/10.3390/electronics14153077 - 31 Jul 2025
Abstract
This paper introduces a theoretical large-scale radio channel model for the downlink in cellular systems, aimed at estimating variations in received signal power at the user terminal as a function of device mobility. This enables applications such as direction-of-arrival (DoA) estimation, estimating power at subsequent points based on received power, and detection of coverage anomalies. The model is validated using real-world measurements from urban and suburban environments, achieving a maximum estimation error of 7.6%. In contrast to conventional models like Okumura–Hata, COST-231, Third Generation Partnership Project (3GPP) stochastic models, or ray-tracing techniques, which estimate average power under static conditions, the proposed model captures power fluctuations induced by terminal movement, a factor often neglected. Although advanced techniques such as wave-domain processing with intelligent metasurfaces can also estimate DoA, this model provides a simpler, geometry-driven approach based on empirical traces. While it does not incorporate infrastructure-specific characteristics or inter-cell interference, it remains a practical solution for scenarios with limited information or computational resources. Full article
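For orientation only, the following sketch uses a generic log-distance path-loss model to show how received power changes as a terminal moves away from the base station; this is not the paper's geometry-driven model, merely a baseline for the kind of power variation it estimates, and all parameter values are assumptions.

```python
import numpy as np

def received_power_dbm(tx_power_dbm, d_m, d0_m=1.0, pl0_db=30.0, n=3.2):
    """Log-distance path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0)."""
    return tx_power_dbm - (pl0_db + 10.0 * n * np.log10(d_m / d0_m))

d = np.array([50.0, 60.0, 80.0, 120.0])              # terminal positions along a route (m)
p = received_power_dbm(tx_power_dbm=43.0, d_m=d)      # 43 dBm ~ 20 W macro cell, assumed
print(np.round(p, 1), "delta per step:", np.round(np.diff(p), 1))
```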

30 pages, 3898 KiB  
Article
Application of Information and Communication Technologies for Public Services Management in Smart Villages
by Ingrida Kazlauskienė and Vilma Atkočiūnienė
Businesses 2025, 5(3), 31; https://doi.org/10.3390/businesses5030031 - 31 Jul 2025
Abstract
Information and communication technologies (ICTs) are becoming increasingly important for sustainable rural development through the smart village concept. This study aims to model ICT’s potential for public services management in European rural areas. It identifies ICT applications across rural service domains, analyzes how these technologies address specific rural challenges, and evaluates their benefits, implementation barriers, and future prospects for sustainable rural development. A qualitative content analysis method was applied using purposive sampling to analyze 79 peer-reviewed articles from EBSCO and Elsevier databases (2000–2024). A deductive approach employed predefined categories to systematically classify ICT applications across rural public service domains, with data coded according to technology scope, problems addressed, and implementation challenges. The analysis identified 15 ICT application domains (agriculture, healthcare, education, governance, energy, transport, etc.) and 42 key technology categories (Internet of Things, artificial intelligence, blockchain, cloud computing, digital platforms, mobile applications, etc.). These technologies address four fundamental rural challenges: limited service accessibility, inefficient resource management, demographic pressures, and social exclusion. This study provides the first comprehensive systematic categorization of ICT applications in smart villages, establishing a theoretical framework connecting technology deployment with sustainable development dimensions. Findings demonstrate that successful ICT implementation requires integrated urban–rural cooperation, community-centered approaches, and balanced attention to economic, social, and environmental sustainability. The research identifies persistent challenges, including inadequate infrastructure, limited digital competencies, and high implementation costs, providing actionable insights for policymakers and practitioners developing ICT-enabled rural development strategies. Full article

13 pages, 532 KiB  
Article
Medical and Biomedical Students’ Perspective on Digital Health and Its Integration in Medical Curricula: Recent and Future Views
by Srijit Das, Nazik Ahmed, Issa Al Rahbi, Yamamh Al-Jubori, Rawan Al Busaidi, Aya Al Harbi, Mohammed Al Tobi and Halima Albalushi
Int. J. Environ. Res. Public Health 2025, 22(8), 1193; https://doi.org/10.3390/ijerph22081193 - 30 Jul 2025
Abstract
The incorporation of digital health into the medical curricula is becoming more important to better prepare doctors in the future. Digital health comprises a wide range of tools such as electronic health records, health information technology, telemedicine, telehealth, mobile health applications, wearable devices, artificial intelligence, and virtual reality. The present study aimed to explore the medical and biomedical students’ perspectives on the integration of digital health in medical curricula. A cross-sectional study was conducted on the medical and biomedical undergraduate students at the College of Medicine and Health Sciences at Sultan Qaboos University. Data was collected using a self-administered questionnaire. The response rate was 37%. The majority of respondents were in the MD (Doctor of Medicine) program (84.4%), while 29 students (15.6%) were from the BMS (Biomedical Sciences) program. A total of 55.38% agreed that they were familiar with the term ‘e-Health’. Additionally, 143 individuals (76.88%) reported being aware of the definition of e-Health. Specifically, 69 individuals (37.10%) utilize e-Health technologies every other week, 20 individuals (10.75%) reported using them daily, while 44 individuals (23.66%) indicated that they never used such technologies. Despite having several benefits, challenges exist in integrating digital health into the medical curriculum. There is a need to overcome the lack of infrastructure, existing educational materials, and digital health topics. In conclusion, embedding digital health into medical curricula is certainly beneficial for creating a digitally competent healthcare workforce that could help in better data storage, help in diagnosis, aid in patient consultation from a distance, and advise on medications, thereby leading to improved patient care which is a key public health priority. Full article

35 pages, 4940 KiB  
Article
A Novel Lightweight Facial Expression Recognition Network Based on Deep Shallow Network Fusion and Attention Mechanism
by Qiaohe Yang, Yueshun He, Hongmao Chen, Youyong Wu and Zhihua Rao
Algorithms 2025, 18(8), 473; https://doi.org/10.3390/a18080473 - 30 Jul 2025
Abstract
Facial expression recognition (FER) is a critical research direction in artificial intelligence and is widely used in intelligent interaction, medical diagnosis, security monitoring, and other domains. These applications highlight its considerable practical value and social significance. FER models often need to run efficiently on mobile or edge devices, so research on lightweight facial expression recognition is particularly important. However, the feature extraction and classification methods of most current lightweight convolutional neural network expression recognition algorithms are not specifically optimized for the characteristics of facial expression images and fail to make full use of the feature information these images contain. To address the lack of facial expression recognition models that are both lightweight and effectively optimized for expression-specific feature extraction, this study proposes a novel network design tailored to the characteristics of facial expressions. Building on the backbone architecture of MobileNet V2, we design LightExNet, a lightweight convolutional neural network based on deep-shallow layer fusion, an attention mechanism, and a joint loss function adapted to facial expression features. In the LightExNet architecture, deep and shallow features are first fused to fully extract the shallow features of the original image, reduce information loss, alleviate the vanishing-gradient problem as the number of convolutional layers increases, and achieve multi-scale feature fusion; the MobileNet V2 architecture is also streamlined to integrate the deep and shallow networks seamlessly. Secondly, drawing on the intrinsic characteristics of facial expression features, a new channel and spatial attention mechanism is proposed to capture and encode the feature information of different expression regions as fully as possible, which effectively improves recognition accuracy. Finally, an improved center loss function is superimposed to further improve classification accuracy, and corresponding measures are taken to significantly reduce the computational cost of the joint loss function. LightExNet is evaluated on three mainstream facial expression datasets: Fer2013, CK+, and RAF-DB. The experimental results show that LightExNet has 3.27 M parameters and 298.27 M FLOPs and achieves accuracies of 69.17%, 97.37%, and 85.97% on the three datasets, respectively. Its overall performance is better than that of current mainstream lightweight expression recognition algorithms such as MobileNet V2, IE-DBN, Self-Cure Net, Improved MobileViT, MFN, Ada-CM, and Parallel CNN (Convolutional Neural Network). The experimental results confirm that LightExNet effectively improves recognition accuracy and computational efficiency while reducing energy consumption and enhancing deployment flexibility. These advantages underscore its strong potential for real-world applications in lightweight facial expression recognition. Full article
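The joint-loss idea can be illustrated with the standard center loss; the paper superimposes an improved variant that is not reproduced here, and the feature dimension, class count, and weighting factor below are assumptions.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Standard center loss: pulls each feature vector toward its class centre."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return 0.5 * ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# toy joint loss over a batch of 7-class expression features: L = L_softmax + lam * L_center
features = torch.randn(8, 128, requires_grad=True)
logits = torch.randn(8, 7, requires_grad=True)
labels = torch.randint(0, 7, (8,))
center_loss = CenterLoss(num_classes=7, feat_dim=128)
loss = nn.functional.cross_entropy(logits, labels) + 0.01 * center_loss(features, labels)
loss.backward()
print(loss.item())
```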

28 pages, 2959 KiB  
Article
Trajectory Prediction and Decision Optimization for UAV-Assisted VEC Networks: An Integrated LSTM-TD3 Framework
by Jiahao Xie and Hao Hao
Information 2025, 16(8), 646; https://doi.org/10.3390/info16080646 - 29 Jul 2025
Abstract
With the rapid development of intelligent transportation systems (ITSs) and Internet of Things (IoT), vehicle-mounted edge computing (VEC) networks are facing the challenge of handling increasingly growing computation-intensive and latency-sensitive tasks. In the UAV-assisted VEC network, by introducing mobile edge servers, the coverage of ground infrastructure is effectively supplemented. However, there is still the problem of decision-making lag in a highly dynamic environment. This paper proposes a deep reinforcement learning framework based on the long short-term memory (LSTM) network for trajectory prediction to optimize resource allocation in UAV-assisted VEC networks. Uniquely integrating vehicle trajectory prediction with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, this framework enables proactive computation offloading and UAV trajectory planning. Specifically, we design an LSTM network with an attention mechanism to predict the future trajectory of vehicles and integrate the prediction results into the optimization decision-making process. We propose state smoothing and data augmentation techniques to improve training stability and design a multi-objective optimization model that incorporates the Age of Information (AoI), energy consumption, and resource leasing costs. The simulation results show that compared with existing methods, the method proposed in this paper significantly reduces the total system cost, improves the information freshness, and exhibits better environmental adaptability and convergence performance under various network conditions. Full article
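A minimal sketch of the trajectory-prediction stage only, assuming an LSTM with additive attention over past (x, y) positions; feeding the prediction into the TD3 state and the AoI/energy/leasing cost model are not reproduced, and the hidden size and horizon are assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """LSTM + additive attention over past vehicle positions, predicting future ones."""
    def __init__(self, hidden=64, horizon=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, horizon * 2)
        self.horizon = horizon

    def forward(self, past_xy):                           # past_xy: (B, T, 2)
        h, _ = self.lstm(past_xy)                         # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention weights over time steps
        ctx = (w * h).sum(dim=1)                          # (B, hidden) context vector
        return self.head(ctx).view(-1, self.horizon, 2)   # (B, horizon, 2) future positions

model = TrajectoryPredictor()
print(model(torch.randn(4, 10, 2)).shape)                  # torch.Size([4, 5, 2])
```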

17 pages, 3604 KiB  
Article
Binary-Weighted Neural Networks Using FeRAM Array for Low-Power AI Computing
by Seung-Myeong Cho, Jaesung Lee, Hyejin Jo, Dai Yun, Jihwan Moon and Kyeong-Sik Min
Nanomaterials 2025, 15(15), 1166; https://doi.org/10.3390/nano15151166 - 28 Jul 2025
Abstract
Artificial intelligence (AI) has become ubiquitous in modern computing systems, from high-performance data centers to resource-constrained edge devices. As AI applications continue to expand into mobile and IoT domains, the need for energy-efficient neural network implementations has become increasingly critical. To meet this requirement of energy-efficient computing, this work presents a BWNN (binary-weighted neural network) architecture implemented using FeRAM (Ferroelectric RAM)-based synaptic arrays. By leveraging the non-volatile nature and low-power computing of FeRAM-based CIM (computing in memory), the proposed CIM architecture indicates significant reductions in both dynamic and standby power consumption. Simulation results in this paper demonstrate that scaling the ferroelectric capacitor size can reduce dynamic power by up to 6.5%, while eliminating DRAM-like refresh cycles allows standby power to drop by over 258× under typical conditions. Furthermore, the combination of binary weight quantization and in-memory computing enables energy-efficient inference without significant loss in recognition accuracy, as validated using MNIST datasets. Compared to prior CIM architectures of SRAM-CIM, DRAM-CIM, and STT-MRAM-CIM, the proposed FeRAM-CIM exhibits superior energy efficiency, achieving 230–580 TOPS/W in a 45 nm process. These results highlight the potential of FeRAM-based BWNNs as a compelling solution for edge-AI and IoT applications where energy constraints are critical. Full article
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)
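A sketch of the binary weight quantisation underlying binary-weighted networks (BinaryConnect/XNOR-style sign-and-scale); the FeRAM synaptic-array mapping, analogue read-out, and capacitor sizing discussed in the abstract are not modelled, and the layer shape is an assumption.

```python
import numpy as np

def binarize_weights(w):
    """Sign the weights and keep a per-output-row scaling factor (mean absolute value)."""
    alpha = np.abs(w).mean(axis=1, keepdims=True)
    return alpha * np.sign(w)

def dense_forward(x, w, binary=True):
    """y = x @ W^T with either full-precision or binarised weights."""
    wq = binarize_weights(w) if binary else w
    return x @ wq.T

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (10, 784))                      # e.g. an MNIST output layer, assumed
x = rng.random((1, 784))
print(np.abs(dense_forward(x, w, True) - dense_forward(x, w, False)).max())
```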

25 pages, 1343 KiB  
Article
Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control
by Daniel Poul Mtowe, Lika Long and Dong Min Kim
Sensors 2025, 25(15), 4666; https://doi.org/10.3390/s25154666 - 28 Jul 2025
Abstract
This paper proposes a low-latency and scalable architecture for Edge-Enabled Digital Twin networked control systems (E-DTNCS) aimed at multi-robot collision avoidance and remote control in dynamic and latency-sensitive environments. Traditional approaches, which rely on centralized cloud processing or direct sensor-to-controller communication, are inherently limited by excessive network latency, bandwidth bottlenecks, and a lack of predictive decision-making, thus constraining their effectiveness in real-time multi-agent systems. To overcome these limitations, we propose a novel framework that seamlessly integrates edge computing with digital twin (DT) technology. By performing localized preprocessing at the edge, the system extracts semantically rich features from raw sensor data streams, reducing the transmission overhead of the original data. This shift from raw data to feature-based communication significantly alleviates network congestion and enhances system responsiveness. The DT layer leverages these extracted features to maintain high-fidelity synchronization with physical robots and to execute predictive models for proactive collision avoidance. To empirically validate the framework, a real-world testbed was developed, and extensive experiments were conducted with multiple mobile robots. The results revealed a substantial reduction in collision rates when DT was deployed, and further improvements were observed with E-DTNCS integration due to significantly reduced latency. These findings confirm the system’s enhanced responsiveness and its effectiveness in handling real-time control tasks. The proposed framework demonstrates the potential of combining edge intelligence with DT-driven control in advancing the reliability, scalability, and real-time performance of multi-robot systems for industrial automation and mission-critical cyber-physical applications. Full article
(This article belongs to the Section Internet of Things)
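A simplified sketch of the predictive collision checking a digital twin layer might run: constant-velocity rollouts of each robot's pose and a pairwise safety-radius test. The edge feature-extraction pipeline and the paper's actual predictive models are not reproduced, and the horizon, time step, and safety radius are assumptions.

```python
import numpy as np

def predict_positions(pos, vel, horizon=10, dt=0.1):
    """Constant-velocity rollout of each robot's position over the prediction horizon."""
    t = np.arange(1, horizon + 1)[:, None, None] * dt        # (H, 1, 1)
    return pos[None, :, :] + vel[None, :, :] * t             # (H, N, 2)

def collision_pairs(pos, vel, safety_radius=0.5, horizon=10, dt=0.1):
    """Robot index pairs whose predicted paths come within the safety radius."""
    future = predict_positions(pos, vel, horizon, dt)
    n = pos.shape[0]
    risky = set()
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(future[:, i] - future[:, j], axis=1)
            if d.min() < safety_radius:
                risky.add((i, j))
    return risky

pos = np.array([[0.0, 0.0], [2.0, 0.0], [5.0, 5.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 0.1]])        # robots 0 and 1 approach head-on
print(collision_pairs(pos, vel))                              # expected: {(0, 1)}
```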

21 pages, 4738 KiB  
Article
Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space–Air–Marine Network
by Haixiang Gao
Entropy 2025, 27(8), 803; https://doi.org/10.3390/e27080803 - 28 Jul 2025
Abstract
This paper investigates the problem of computation offloading and resource allocation in an integrated space–air–sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud–edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15–60% specifically. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
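To make the latency/energy trade-off concrete, a simplified single-link sketch of the weighted cost of partially offloading a task; all parameter values are assumptions, and the MADDPG training loop, association control, queuing, and satellite backhaul of the paper's system model are omitted.

```python
import numpy as np

def task_cost(split, bits, cpd, f_local, f_edge, rate, p_tx, kappa=1e-27, w_t=0.5, w_e=0.5):
    """Weighted latency-plus-energy cost of partially offloading a task.

    split   : fraction of the task executed locally (1 - split is offloaded)
    bits    : task size in bits; cpd: CPU cycles per bit
    f_local : device CPU frequency (Hz); f_edge: edge/UAV CPU frequency (Hz)
    rate    : uplink rate (bit/s); p_tx: transmit power (W)
    """
    cycles = bits * cpd
    t_local = split * cycles / f_local
    e_local = kappa * split * cycles * f_local**2            # dynamic CPU energy
    t_up = (1 - split) * bits / rate
    t_edge = (1 - split) * cycles / f_edge
    e_tx = p_tx * t_up
    latency = max(t_local, t_up + t_edge)                    # local and remote parts run in parallel
    return w_t * latency + w_e * (e_local + e_tx)

splits = np.linspace(0, 1, 11)
costs = [task_cost(s, bits=2e6, cpd=1000, f_local=1e9, f_edge=10e9, rate=20e6, p_tx=0.5) for s in splits]
print(splits[int(np.argmin(costs))], min(costs))              # best split ratio under this toy model
```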