Search Results (1,536)

Search Parameters:
Keywords = IoT intelligent system

28 pages, 2465 KiB  
Article
Latency-Aware and Energy-Efficient Task Offloading in IoT and Cloud Systems with DQN Learning
by Amina Benaboura, Rachid Bechar, Walid Kadri, Tu Dac Ho, Zhenni Pan and Shaaban Sahmoud
Electronics 2025, 14(15), 3090; https://doi.org/10.3390/electronics14153090 - 1 Aug 2025
Abstract
The exponential proliferation of the Internet of Things (IoT) and optical IoT (O-IoT) has introduced substantial challenges concerning computational capacity and energy efficiency. IoT devices generate vast volumes of aggregated data and require intensive processing, often resulting in elevated latency and excessive energy consumption. Task offloading has emerged as a viable solution; however, many existing strategies fail to adequately optimize both latency and energy usage. This paper proposes a novel task-offloading approach based on deep Q-network (DQN) learning, designed to intelligently and dynamically balance these critical metrics. The proposed framework continuously refines real-time task offloading decisions by leveraging the adaptive learning capabilities of DQN, thereby substantially reducing latency and energy consumption. To further enhance system performance, the framework incorporates optical networks into the IoT–fog–cloud architecture, capitalizing on their high-bandwidth and low-latency characteristics. This integration facilitates more efficient distribution and processing of tasks, particularly in data-intensive IoT applications. Additionally, we present a comparative analysis between the proposed DQN algorithm and the optimal strategy. Through extensive simulations, we demonstrate the superior effectiveness of the proposed DQN framework across various IoT and O-IoT scenarios compared to the BAT and DJA approaches, achieving improvements in energy consumption and latency of 35%, 50%, 30%, and 40%, respectively. These findings underscore the significance of selecting an appropriate offloading strategy tailored to the specific requirements of IoT and O-IoT applications, particularly with regard to environmental stability and performance demands. Full article
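
The abstract gives no implementation details; as a minimal sketch of the DQN mechanism it describes, the snippet below trains an epsilon-greedy Q-network whose action space is a hypothetical set of offloading targets (local, fog, cloud) and whose reward trades latency against energy. The state features, reward weights, and network sizes are illustrative assumptions, not the authors' model.

```python
import random
from collections import deque

import torch
import torch.nn as nn

ACTIONS = ["local", "fog", "cloud"]   # hypothetical offloading targets
STATE_DIM = 4                         # e.g., task size, CPU load, link rate, queue length (assumed)

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)         # stores (state, action, reward, next_state) tuples of tensors
gamma, epsilon = 0.99, 0.1

def reward(latency_s: float, energy_j: float, w: float = 0.5) -> float:
    """Assumed reward: negative weighted sum of latency and energy consumption."""
    return -(w * latency_s + (1 - w) * energy_j)

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy offloading decision for one task."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size: int = 32) -> None:
    """One TD-learning update on a sampled mini-batch of transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(list(replay), batch_size)))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * q_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
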
28 pages, 694 KiB  
Article
Artificial Intelligence-Enabled Digital Transformation in Circular Logistics: A Structural Equation Model of Organizational, Technological, and Environmental Drivers
by Ionica Oncioiu, Diana Andreea Mândricel and Mihaela Hortensia Hojda
Logistics 2025, 9(3), 102; https://doi.org/10.3390/logistics9030102 - 1 Aug 2025
Abstract
Background: Digital transformation is increasingly present in modern logistics, especially in the context of sustainability and circularity pressures. The integration of technologies such as Internet of Things (IoT), Radio Frequency Identification (RFID), and automated platforms involves not only infrastructure but also a strategic vision, a flexible organizational culture, and the ability to support decisions through artificial intelligence (AI)-based systems. Methods: This study proposes an extended conceptual model using structural equation modelling (SEM) to explore the relationships between five constructs: technological change, strategic and organizational readiness, transformation environment, AI-enabled decision configuration, and operational redesign. The model was validated based on a sample of 217 active logistics specialists, coming from sectors such as road transport, retail, 3PL logistics services, and manufacturing. The participants are involved in the digitization of processes, especially in activities related to operational decisions and sustainability. Results: The findings reveal that the analysis confirms statistically significant relationships between organizational readiness, transformation environment, AI-based decision processes, and operational redesign. Conclusions: The study highlights the importance of an integrated approach in which technology, organizational culture, and advanced decision support collectively contribute to the transition to digital and circular logistics chains. Full article
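
As a hedged illustration of how a five-construct structural model of this kind could be specified, the sketch below uses the semopy package with lavaan-style syntax. The indicator names (tc1, or1, ...), the assumed structural paths, and the survey file are placeholders, not the authors' instrument or estimated model.

```python
import pandas as pd
import semopy

# Lavaan-style specification of the five constructs named in the abstract.
# Indicator names are placeholders for the actual survey items.
MODEL_DESC = """
TechChange    =~ tc1 + tc2 + tc3
OrgReadiness  =~ or1 + or2 + or3
TransformEnv  =~ te1 + te2 + te3
AIDecision    =~ ai1 + ai2 + ai3
OpRedesign    =~ op1 + op2 + op3

# structural paths (assumed for illustration)
AIDecision ~ TechChange + OrgReadiness + TransformEnv
OpRedesign ~ AIDecision + OrgReadiness
"""

def fit_sem(survey: pd.DataFrame) -> pd.DataFrame:
    """Fit the SEM on survey responses (one row per respondent) and return path estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)
    return model.inspect()   # estimates, standard errors, p-values

# usage: estimates = fit_sem(pd.read_csv("responses.csv"))
```
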
23 pages, 2888 KiB  
Review
Machine Learning in Flocculant Research and Application: Toward Smart and Sustainable Water Treatment
by Caichang Ding, Ling Shen, Qiyang Liang and Lixin Li
Separations 2025, 12(8), 203; https://doi.org/10.3390/separations12080203 - 1 Aug 2025
Abstract
Flocculants are indispensable in water and wastewater treatment, enabling the aggregation and removal of suspended particles, colloids, and emulsions. However, the conventional development and application of flocculants rely heavily on empirical methods, which are time-consuming, resource-intensive, and environmentally problematic due to issues such as sludge production and chemical residues. Recent advances in machine learning (ML) have opened transformative avenues for the design, optimization, and intelligent application of flocculants. This review systematically examines the integration of ML into flocculant research, covering algorithmic approaches, data-driven structure–property modeling, high-throughput formulation screening, and smart process control. ML models—including random forests, neural networks, and Gaussian processes—have successfully predicted flocculation performance, guided synthesis optimization, and enabled real-time dosing control. Applications extend to both synthetic and bioflocculants, with ML facilitating strain engineering, fermentation yield prediction, and polymer degradability assessments. Furthermore, the convergence of ML with IoT, digital twins, and life cycle assessment tools has accelerated the transition toward sustainable, adaptive, and low-impact treatment technologies. Despite its potential, challenges remain in data standardization, model interpretability, and real-world implementation. This review concludes by outlining strategic pathways for future research, including the development of open datasets, hybrid physics–ML frameworks, and interdisciplinary collaborations. By leveraging ML, the next generation of flocculant systems can be more effective, environmentally benign, and intelligently controlled, contributing to global water sustainability goals. Full article
(This article belongs to the Section Environmental Separations)
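
To make the "data-driven structure–property modeling" idea concrete, here is a minimal scikit-learn sketch that fits a random forest to predict a flocculation performance metric from dosing and water-quality features. The feature names, target column, and CSV file are hypothetical placeholders for whatever jar-test data a study of this kind would use.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Hypothetical jar-test dataset: each row is one flocculation experiment.
df = pd.read_csv("jar_tests.csv")   # placeholder file name
features = ["dose_mg_L", "pH", "temperature_C", "initial_turbidity_NTU", "mixing_rpm"]
X, y = df[features], df["turbidity_removal_pct"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

print("R^2:", r2_score(y_test, model.predict(X_test)))
# Feature importances hint at which operating variables drive flocculation performance.
print(dict(zip(features, model.feature_importances_.round(3))))
```
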
17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 - 1 Aug 2025
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, where Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false positives and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in the blockchain decentralized network. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on the basis of a number of performance metrics including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems. Full article
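
The paper's GAN-assisted feature-selection pipeline is not reproduced here; as a hedged sketch of the final classification stage only, the code below trains a small dense network on feature vectors assumed to have already been selected (synthetic random data stands in for the 40,000-sample dataset) and reports the same kinds of classification metrics the abstract cites.

```python
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# X: GAN-selected feature vectors for smart-contract samples (placeholder random data),
# y: 1 = malicious, 0 = benign. Shapes and feature meanings are assumptions.
rng = np.random.default_rng(0)
X = rng.normal(size=(40_000, 64))
y = rng.integers(0, 2, size=40_000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=50, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))   # accuracy, precision, recall, F1
```
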
40 pages, 18911 KiB  
Article
Twin-AI: Intelligent Barrier Eddy Current Separator with Digital Twin and AI Integration
by Shohreh Kia, Johannes B. Mayer, Erik Westphal and Benjamin Leiding
Sensors 2025, 25(15), 4731; https://doi.org/10.3390/s25154731 - 31 Jul 2025
Abstract
The current paper presents a comprehensive intelligent system designed to optimize the performance of a barrier eddy current separator (BECS), comprising a conveyor belt, a vibration feeder, and a magnetic drum. This system was trained and validated on real-world industrial data gathered directly from the working separator under 81 different operational scenarios. The intelligent models were used to recommend optimal settings for drum speed, belt speed, vibration intensity, and drum angle, thereby maximizing separation quality and minimizing energy consumption. The smart separation module utilizes YOLOv11n-seg and achieves a mean average precision (mAP) of 0.838 across 7163 industrial instances from aluminum, copper, and plastic materials. For shape classification (sharp vs. smooth), the model reached 91.8% accuracy across 1105 annotated samples. Furthermore, the thermal monitoring unit can detect iron contamination by analyzing temperature anomalies. Scenarios with iron showed a maximum temperature increase of over 20 °C compared to clean materials, with a detection response time of under 2.5 s. The architecture integrates a Digital Twin using Azure Digital Twins to virtually mirror the system, enabling real-time tracking, behavior simulation, and remote updates. A full connection with the PLC has been implemented, allowing the AI-driven system to adjust physical parameters autonomously. This combination of AI, IoT, and digital twin technologies delivers a reliable and scalable solution for enhanced separation quality, improved operational safety, and predictive maintenance in industrial recycling environments. Full article
(This article belongs to the Special Issue Sensors and IoT Technologies for the Smart Industry)
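
A hedged sketch of the vision step described above, using the Ultralytics API to run a YOLO segmentation model on a conveyor-belt image: the weights file, the input image, and the resulting class names are placeholders for the authors' trained model and dataset.

```python
from ultralytics import YOLO

# Placeholder weights: in practice this would be the model trained on the
# aluminum/copper/plastic dataset mentioned in the abstract.
model = YOLO("yolo11n-seg.pt")

results = model.predict(source="belt_frame.jpg", conf=0.5)
for r in results:
    for box, cls in zip(r.boxes.xyxy, r.boxes.cls):
        # detected material class and its bounding box on the belt
        print(model.names[int(cls)], box.tolist())
```
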
28 pages, 2959 KiB  
Article
Trajectory Prediction and Decision Optimization for UAV-Assisted VEC Networks: An Integrated LSTM-TD3 Framework
by Jiahao Xie and Hao Hao
Information 2025, 16(8), 646; https://doi.org/10.3390/info16080646 - 29 Jul 2025
Abstract
With the rapid development of intelligent transportation systems (ITSs) and Internet of Things (IoT), vehicle-mounted edge computing (VEC) networks are facing the challenge of handling increasingly growing computation-intensive and latency-sensitive tasks. In the UAV-assisted VEC network, by introducing mobile edge servers, the coverage of ground infrastructure is effectively supplemented. However, there is still the problem of decision-making lag in a highly dynamic environment. This paper proposes a deep reinforcement learning framework based on the long short-term memory (LSTM) network for trajectory prediction to optimize resource allocation in UAV-assisted VEC networks. Uniquely integrating vehicle trajectory prediction with the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, this framework enables proactive computation offloading and UAV trajectory planning. Specifically, we design an LSTM network with an attention mechanism to predict the future trajectory of vehicles and integrate the prediction results into the optimization decision-making process. We propose state smoothing and data augmentation techniques to improve training stability and design a multi-objective optimization model that incorporates the Age of Information (AoI), energy consumption, and resource leasing costs. The simulation results show that compared with existing methods, the method proposed in this paper significantly reduces the total system cost, improves the information freshness, and exhibits better environmental adaptability and convergence performance under various network conditions. Full article
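
As a minimal sketch of the trajectory-prediction component only (not the full LSTM-TD3 framework), the module below applies an LSTM over a window of past vehicle positions and uses a simple additive attention over the hidden states to predict the next position. Dimensions and the attention form are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """LSTM + attention over past (x, y) positions -> predicted next (x, y) position."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # additive attention score per time step
        self.head = nn.Linear(hidden, 2)

    def forward(self, past: torch.Tensor) -> torch.Tensor:
        # past: (batch, window, 2) historical positions
        h, _ = self.lstm(past)                   # (batch, window, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over the window
        context = (w * h).sum(dim=1)             # (batch, hidden)
        return self.head(context)                # predicted next position

# usage: pred = TrajectoryPredictor()(torch.randn(8, 10, 2))  # 8 vehicles, 10-step history
```
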
17 pages, 3604 KiB  
Article
Binary-Weighted Neural Networks Using FeRAM Array for Low-Power AI Computing
by Seung-Myeong Cho, Jaesung Lee, Hyejin Jo, Dai Yun, Jihwan Moon and Kyeong-Sik Min
Nanomaterials 2025, 15(15), 1166; https://doi.org/10.3390/nano15151166 - 28 Jul 2025
Abstract
Artificial intelligence (AI) has become ubiquitous in modern computing systems, from high-performance data centers to resource-constrained edge devices. As AI applications continue to expand into mobile and IoT domains, the need for energy-efficient neural network implementations has become increasingly critical. To meet this requirement of energy-efficient computing, this work presents a BWNN (binary-weighted neural network) architecture implemented using FeRAM (Ferroelectric RAM)-based synaptic arrays. By leveraging the non-volatile nature and low-power computing of FeRAM-based CIM (computing in memory), the proposed CIM architecture indicates significant reductions in both dynamic and standby power consumption. Simulation results in this paper demonstrate that scaling the ferroelectric capacitor size can reduce dynamic power by up to 6.5%, while eliminating DRAM-like refresh cycles allows standby power to drop by over 258× under typical conditions. Furthermore, the combination of binary weight quantization and in-memory computing enables energy-efficient inference without significant loss in recognition accuracy, as validated using MNIST datasets. Compared to prior CIM architectures of SRAM-CIM, DRAM-CIM, and STT-MRAM-CIM, the proposed FeRAM-CIM exhibits superior energy efficiency, achieving 230–580 TOPS/W in a 45 nm process. These results highlight the potential of FeRAM-based BWNNs as a compelling solution for edge-AI and IoT applications where energy constraints are critical. Full article
(This article belongs to the Special Issue Neuromorphic Devices: Materials, Structures and Bionic Applications)
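
The FeRAM circuit itself cannot be shown in software; as a hedged sketch of the binary weight quantization the abstract relies on, the snippet below binarizes a layer's weights to ±1 scaled by their mean magnitude and compares inference against the full-precision layer, which is the usual software model of a binary-weighted layer. The layer shape and values are toy placeholders, not the paper's network.

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """Binary-weight approximation: sign(W) scaled by the mean |W| of the layer."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

# Toy fully connected layer on a flattened 28x28 MNIST image (placeholder values).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 784))   # trained full-precision weights (assumed)
x = rng.random(784)                         # one input image

logits_fp = W @ x
logits_bin = binarize(W) @ x                # what a ±1-weight synaptic array would compute
print("full-precision argmax:", logits_fp.argmax(), "binary-weight argmax:", logits_bin.argmax())
```
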
37 pages, 1037 KiB  
Review
Machine Learning for Flood Resiliency—Current Status and Unexplored Directions
by Venkatesh Uddameri and E. Annette Hernandez
Environments 2025, 12(8), 259; https://doi.org/10.3390/environments12080259 - 28 Jul 2025
Abstract
A systems-oriented review of machine learning (ML) over the entire flood management spectrum, encompassing fluvial flood control, pluvial flood management, and resiliency-risk characterization was undertaken. Deep learners like long short-term memory (LSTM) networks perform well in predicting reservoir inflows and outflows. Convolution neural networks (CNNs) and other object identification algorithms are being explored in assessing levee and flood wall failures. The use of ML methods in pump station operations is limited due to lack of public-domain datasets. Reinforcement learning (RL) has shown promise in controlling low-impact development (LID) systems for pluvial flood management. Resiliency is defined in terms of the vulnerability of a community to floods. Multi-criteria decision making (MCDM) and unsupervised ML methods are used to capture vulnerability. Supervised learning is used to model flooding hazards. Conventional approaches perform better than deep learners and ensemble methods for modeling flood hazards due to paucity of data and large inter-model predictive variability. Advances in satellite-based, drone-facilitated data collection and Internet of Things (IoT)-based low-cost sensors offer new research avenues to explore. Transfer learning at ungauged basins holds promise but is largely unexplored. Explainable artificial intelligence (XAI) is seeing increased use and helps the transition of ML models from black-box forecasters to knowledge-enhancing predictors. Full article
(This article belongs to the Special Issue Hydrological Modeling and Sustainable Water Resources Management)
28 pages, 2918 KiB  
Article
Machine Learning-Powered KPI Framework for Real-Time, Sustainable Ship Performance Management
by Christos Spandonidis, Vasileios Iliopoulos and Iason Athanasopoulos
J. Mar. Sci. Eng. 2025, 13(8), 1440; https://doi.org/10.3390/jmse13081440 - 28 Jul 2025
Abstract
The maritime sector faces escalating demands to minimize emissions and optimize operational efficiency under tightening environmental regulations. Although technologies such as the Internet of Things (IoT), Artificial Intelligence (AI), and Digital Twins (DT) offer substantial potential, their deployment in real-time ship performance analytics is at an emerging state. This paper proposes a machine learning-driven framework for real-time ship performance management. The framework starts with data collected from onboard sensors and culminates in a decision support system that is easily interpretable, even by non-experts. It also provides a method to forecast vessel performance by extrapolating Key Performance Indicator (KPI) values. Furthermore, it offers a flexible methodology for defining KPIs for every crucial component or aspect of vessel performance, illustrated through a use case focusing on fuel oil consumption. Leveraging Artificial Neural Networks (ANNs), hybrid multivariate data fusion, and high-frequency sensor streams, the system facilitates continuous diagnostics, early fault detection, and data-driven decision-making. Unlike conventional static performance models, the framework employs dynamic KPIs that evolve with the vessel’s operational state, enabling advanced trend analysis, predictive maintenance scheduling, and compliance assurance. Experimental comparison against classical KPI models highlights superior predictive fidelity, robustness, and temporal consistency. Furthermore, the paper delineates AI and ML applications across core maritime operations and introduces a scalable, modular system architecture applicable to both commercial and naval platforms. This approach bridges advanced simulation ecosystems with in situ operational data, laying a robust foundation for digital transformation and sustainability in maritime domains. Full article
(This article belongs to the Section Ocean Engineering)
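
As a rough sketch of the fuel-oil-consumption KPI idea described above (not the authors' exact definition), the snippet below trains an ANN baseline model of expected consumption from operating conditions and defines the KPI as the ratio of expected to measured consumption, so values below 1 flag degrading performance. Column names, the log file, and the KPI formula are assumptions for illustration.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor

# Hypothetical high-frequency sensor log; column names are placeholders.
log = pd.read_csv("vessel_log.csv")
features = ["speed_over_ground_kn", "draft_m", "wind_speed_kn", "wave_height_m"]
baseline = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
baseline.fit(log[features], log["fuel_consumption_t_per_day"])

def fuel_kpi(window: pd.DataFrame) -> float:
    """KPI = expected / measured fuel consumption over a time window (assumed definition)."""
    expected = baseline.predict(window[features]).sum()
    measured = window["fuel_consumption_t_per_day"].sum()
    return float(expected / measured)

# KPI < 1 means the vessel burns more fuel than the baseline expects for the same conditions
# (e.g., hull fouling); extrapolating the KPI trend gives the forecasting step the abstract mentions.
```
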
27 pages, 956 KiB  
Article
Boosting Sustainable Urban Development: How Smart Cities Improve Emergency Management—Evidence from 275 Chinese Cities
by Ming Guo and Yang Zhou
Sustainability 2025, 17(15), 6851; https://doi.org/10.3390/su17156851 - 28 Jul 2025
Abstract
Rapid urbanization and escalating disaster risks necessitate resilient urban governance systems. Smart city initiatives that leverage digital technologies—such as the internet of things (IoT), big data analytics, and artificial intelligence (AI)—demonstrate transformative potential in enhancing emergency management capabilities. However, empirical evidence regarding their causal impact and underlying mechanisms remains limited, particularly in developing economies. Drawing on panel data from 275 Chinese prefecture-level cities over the period 2006–2021 and using China’s smart city pilot policy as a quasi-natural experiment, this study applies a multi-period difference-in-differences (DID) approach to rigorously assess the effects of smart city construction on emergency management capabilities. Results reveal that smart city construction produced a statistically significant improvement in emergency management capabilities, which remained robust after conducting multiple sensitivity checks and controlling for potential confounding policies. The benefits exhibit notable heterogeneity: emergency management capability improvements are most pronounced in central China and in cities at the extremes of population size—megacities (>10 million residents) and small cities (<1 million residents)—while effects remain marginal in medium-sized and eastern cities. Crucially, mechanism analysis reveals that digital technology application fully mediates 86.7% of the total effect, whereas factor allocation efficiency exerts only a direct, non-mediating influence. These findings suggest that smart cities primarily enhance emergency management capabilities through digital enablers, with effectiveness contingent upon regional infrastructure development and urban scale. Policy priorities should therefore emphasize investments in digital infrastructure, interagency data integration, and targeted capacity-building strategies tailored to central and western regions as well as smaller cities. Full article
(This article belongs to the Special Issue Advanced Studies in Sustainable Urban Planning and Urban Development)
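
To make the multi-period DID design concrete, here is a hedged sketch of the canonical two-way fixed-effects estimator in statsmodels: city and year fixed effects plus a treatment indicator that switches on when a city enters the pilot. The panel columns are placeholder names, and the study's actual specification includes additional controls, mediation analysis, and robustness checks not shown here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Panel: one row per city-year, 275 cities x 2006-2021 (placeholder column names).
panel = pd.read_csv("city_panel.csv")
# treated_post = 1 from the year a city enters the smart-city pilot onward, else 0.

did = smf.ols(
    "emergency_capability ~ treated_post + C(city) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["city"]})

# The coefficient on treated_post is the DID estimate; report it with its clustered SE.
print(did.params["treated_post"], did.bse["treated_post"])
```
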
21 pages, 4738 KiB  
Article
Research on Computation Offloading and Resource Allocation Strategy Based on MADDPG for Integrated Space–Air–Marine Network
by Haixiang Gao
Entropy 2025, 27(8), 803; https://doi.org/10.3390/e27080803 - 28 Jul 2025
Abstract
This paper investigates the problem of computation offloading and resource allocation in an integrated space–air–sea network based on unmanned aerial vehicle (UAV) and low Earth orbit (LEO) satellites supporting Maritime Internet of Things (M-IoT) devices. Considering the complex, dynamic environment comprising M-IoT devices, UAVs and LEO satellites, traditional optimization methods encounter significant limitations due to non-convexity and the combinatorial explosion in possible solutions. A multi-agent deep deterministic policy gradient (MADDPG)-based optimization algorithm is proposed to address these challenges. This algorithm is designed to minimize the total system costs, balancing energy consumption and latency through partial task offloading within a cloud–edge-device collaborative mobile edge computing (MEC) system. A comprehensive system model is proposed, with the problem formulated as a partially observable Markov decision process (POMDP) that integrates association control, power control, computing resource allocation, and task distribution. Each M-IoT device and UAV acts as an intelligent agent, collaboratively learning the optimal offloading strategies through a centralized training and decentralized execution framework inherent in the MADDPG. The numerical simulations validate the effectiveness of the proposed MADDPG-based approach, which demonstrates rapid convergence and significantly outperforms baseline methods, and indicate that the proposed MADDPG-based algorithm reduces the total system cost by 15–60% specifically. Full article
(This article belongs to the Special Issue Space-Air-Ground-Sea Integrated Communication Networks)
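
A full MADDPG implementation is beyond an abstract; the sketch below shows only the structural idea of centralized training with decentralized execution: each agent's actor sees its own observation, while a shared critic scores the joint observation-action vector. The number of agents and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 4   # per-agent sizes (assumed)

class Actor(nn.Module):
    """Decentralized execution: maps one agent's local observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized training: scores the joint observations and actions of all agents."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

obs = torch.randn(N_AGENTS, OBS_DIM)                      # one observation per agent
acts = torch.stack([a(o) for a, o in zip(actors, obs)])   # decentralized action selection
q = critic(obs.flatten().unsqueeze(0), acts.flatten().unsqueeze(0))  # centralized value estimate
```
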
54 pages, 5068 KiB  
Review
Application of Machine Learning Models in Optimizing Wastewater Treatment Processes: A Review
by Florin-Stefan Zamfir, Madalina Carbureanu and Sanda Florentina Mihalache
Appl. Sci. 2025, 15(15), 8360; https://doi.org/10.3390/app15158360 - 27 Jul 2025
Abstract
The treatment processes from a wastewater treatment plant (WWTP) are known for their complexity and highly nonlinear behavior, which makes them challenging to analyze, model, and especially, to control. This research studies how machine learning (ML) with a focus on deep learning (DL) techniques can be applied to optimize the treatment processes of WWTPs, highlighting those case studies that propose ML and DL methods that directly address this issue. This research aims to study the ML and DL systematic applications in optimizing the wastewater treatment processes from an industrial plant, such as the modeling of complex physical–chemical processes, real-time monitoring and prediction of critical wastewater quality indicators, chemical reactants consumption reduction, minimization of plant energy consumption, plant effluent quality prediction, development of data-driven type models as support in the decision-making process, etc. To perform a detailed analysis, 87 articles were included from an initial set of 324, using criteria such as wastewater combined with ML, DL, and artificial intelligence (AI), for articles from 2010 or newer. From the initial set of 324 scientific articles, 300 were identified using Litmaps, obtained from five important scientific databases, all focusing on addressing the specific problem proposed for investigation. Thus, this paper identifies gaps in the current research, discusses ML and DL algorithms in the context of optimizing wastewater treatment processes, and identifies future directions for optimizing these processes through data-driven methods. As opposed to traditional models, AI models (ML, DL, hybrid and ensemble models, digital twin, IoT, etc.) demonstrated significant advantages in wastewater quality indicator prediction and forecasting, in energy consumption forecasting, in temporal pattern recognition, and in optimal interpretability for normative compliance. Integrating advanced ML and DL technologies into the various processes involved in wastewater treatment improves the plant systems' predictive capabilities and ensures a higher level of compliance with environmental standards. Full article
17 pages, 1850 KiB  
Article
Cloud–Edge Collaborative Model Adaptation Based on Deep Q-Network and Transfer Feature Extraction
by Jue Chen, Xin Cheng, Yanjie Jia and Shuai Tan
Appl. Sci. 2025, 15(15), 8335; https://doi.org/10.3390/app15158335 - 26 Jul 2025
Abstract
With the rapid development of smart devices and the Internet of Things (IoT), the explosive growth of data has placed increasingly higher demands on real-time processing and intelligent decision making. Cloud-edge collaborative computing has emerged as a mainstream architecture to address these challenges. However, in sky-ground integrated systems, the limited computing capacity of edge devices and the inconsistency between cloud-side fusion results and edge-side detection outputs significantly undermine the reliability of edge inference. To overcome these issues, this paper proposes a cloud-edge collaborative model adaptation framework that integrates deep reinforcement learning via Deep Q-Networks (DQN) with local feature transfer. The framework enables category-level dynamic decision making, allowing for selective migration of classification head parameters to achieve on-demand adaptive optimization of the edge model and enhance consistency between cloud and edge results. Extensive experiments conducted on a large-scale multi-view remote sensing aircraft detection dataset demonstrate that the proposed method significantly improves cloud-edge consistency. The detection consistency rate reaches 90%, with some scenarios approaching 100%. Ablation studies further validate the necessity of the DQN-based decision strategy, which clearly outperforms static heuristics. In the model adaptation comparison, the proposed method improves the detection precision of the A321 category from 70.30% to 71.00% and the average precision (AP) from 53.66% to 53.71%. For the A330 category, the precision increases from 32.26% to 39.62%, indicating strong adaptability across different target types. This study offers a novel and effective solution for cloud-edge model adaptation under resource-constrained conditions, enhancing both the consistency of cloud-edge fusion and the robustness of edge-side intelligent inference. Full article
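
As a hedged sketch of the "selective migration of classification head parameters" idea (the DQN decision policy itself is omitted), the snippet below copies only the classifier-head rows for the categories the policy has flagged from a cloud model into the edge model. The head structure, feature dimension, and category indices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

NUM_CLASSES, FEAT_DIM = 20, 256   # assumed detector head: one weight row + bias per category

def make_head() -> nn.Linear:
    return nn.Linear(FEAT_DIM, NUM_CLASSES)

cloud_head, edge_head = make_head(), make_head()

def migrate_categories(selected: list[int]) -> None:
    """Copy classification-head parameters for the selected categories only (cloud -> edge)."""
    with torch.no_grad():
        for c in selected:
            edge_head.weight[c].copy_(cloud_head.weight[c])
            edge_head.bias[c].copy_(cloud_head.bias[c])

# e.g., the decision policy flags two aircraft categories for adaptation (indices are arbitrary here):
migrate_categories([3, 7])
```
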
51 pages, 5654 KiB  
Review
Exploring the Role of Digital Twin and Industrial Metaverse Technologies in Enhancing Occupational Health and Safety in Manufacturing
by Arslan Zahid, Aniello Ferraro, Antonella Petrillo and Fabio De Felice
Appl. Sci. 2025, 15(15), 8268; https://doi.org/10.3390/app15158268 - 25 Jul 2025
Abstract
The evolution of Industry 4.0 and the emerging paradigm of Industry 5.0 have introduced disruptive technologies that are reshaping modern manufacturing environments. Among these, Digital Twin (DT) and Industrial Metaverse (IM) technologies are increasingly recognized for their potential to enhance Occupational Health and Safety (OHS). However, a comprehensive understanding of how these technologies integrate to support OHS in manufacturing remains limited. This study systematically explores the transformative role of DT and IM in creating immersive, intelligent, and human-centric safety ecosystems. Following the PRISMA guidelines, a Systematic Literature Review (SLR) of 75 peer-reviewed studies from the SCOPUS and Web of Science databases was conducted. The review identifies key enabling technologies such as Virtual Reality (VR), Augmented Reality (AR), Extended Reality (XR), Internet of Things (IoT), Artificial Intelligence (AI), Cyber-Physical Systems (CPS), and Collaborative Robots (COBOTS), and highlights their applications in real-time monitoring, immersive safety training, and predictive hazard mitigation. A conceptual framework is proposed, illustrating a synergistic digital ecosystem that integrates predictive analytics, real-time monitoring, and immersive training to enhance the OHS. The findings highlight both the transformative benefits and the key adoption challenges of these technologies, including technical complexities, data security, privacy, ethical concerns, and organizational resistance. This study provides a foundational framework for future research and practical implementation in Industry 5.0. Full article
26 pages, 1234 KiB  
Article
Joint Optimization of DCCR and Energy Efficiency in Active STAR-RIS-Assisted UAV-NOMA Networks
by Yan Zhan, Yi Hong, Deying Li, Chuanwen Luo and Xin Fan
Drones 2025, 9(8), 520; https://doi.org/10.3390/drones9080520 - 24 Jul 2025
Abstract
This paper investigated the issues of unstable data collection links and low efficiency in IoT data collection for smart cities by combining active STAR-RIS with UAVs to enhance channel quality, achieving efficient data collection in complex environments. To this end, we propose an active simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS)-assisted UAV-enabled NOMA data collection system that jointly optimizes active STAR-RIS beamforming, SN power allocation, and UAV trajectory to maximize the system energy efficiency (EE) and the data complete collection rate (DCCR). We apply block coordinate ascent (BCA) to decompose the non-convex problem into three alternating subproblems: combined beamforming optimization of phase shift and amplification gain matrices, power allocation, and trajectory optimization, which are iteratively processed through successive convex approximation (SCA) and fractional programming (FP) methods, respectively. Simulation results demonstrate the proposed algorithm’s rapid convergence and significant advantages over conventional NOMA and OMA schemes in both throughput rate and DCCR. Full article
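
As a high-level sketch of the block coordinate ascent loop described above, with the SCA and FP subproblem solvers abstracted away as callables, the skeleton below alternates over the three variable blocks until the energy-efficiency objective stops improving. Everything here is an illustrative assumption rather than the paper's formulation.

```python
from typing import Callable, Tuple

def block_coordinate_ascent(
    init: Tuple[object, object, object],
    solve_beamforming: Callable,   # SCA subproblem: STAR-RIS phase shifts + amplification gains
    solve_power: Callable,         # FP subproblem: SN power allocation
    solve_trajectory: Callable,    # SCA subproblem: UAV trajectory
    energy_efficiency: Callable,   # objective EE(beam, power, traj)
    tol: float = 1e-3,
    max_iter: int = 50,
):
    """Alternate over the three blocks, keeping the other two fixed in each subproblem."""
    beam, power, traj = init
    prev = energy_efficiency(beam, power, traj)
    for _ in range(max_iter):
        beam = solve_beamforming(power, traj)
        power = solve_power(beam, traj)
        traj = solve_trajectory(beam, power)
        cur = energy_efficiency(beam, power, traj)
        if cur - prev < tol:       # the objective is non-decreasing, so this signals convergence
            break
        prev = cur
    return beam, power, traj
```
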