Search Results (82)

Search Parameters:
Keywords = federated reinforcement learning

37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Viewed by 260
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The literature therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users.
Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
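The review's headline result concerns reward-driven controllers, so a minimal sketch may help fix ideas: a tabular Q-learning thermostat on toy single-zone dynamics. This is far simpler than the deep agents surveyed; the state space, reward shape, and every constant below are illustrative assumptions, not values from any cited study.

```python
import random

TEMPS = list(range(18, 27))   # discretized indoor temperatures (assumed toy state space)
ACTIONS = [-1, 0, 1]          # cool, hold, heat
COMFORT = 22                  # assumed comfort setpoint

def reward(temp, action):
    # penalize discomfort plus a small energy cost for actuating
    return -abs(temp - COMFORT) - 0.2 * abs(action)

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}
    for _ in range(episodes):
        temp = rng.choice(TEMPS)
        for _ in range(20):                       # one short control episode
            if rng.random() < eps:                # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(temp, x)])
            nxt = min(max(temp + a, TEMPS[0]), TEMPS[-1])
            r = reward(nxt, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(temp, a)] += alpha * (r + gamma * best_next - q[(temp, a)])
            temp = nxt
    return q

q = train()
# The greedy policy should push temperature toward the setpoint from either side.
policy = {t: max(ACTIONS, key=lambda a: q[(t, a)]) for t in TEMPS}
print(policy[18], policy[26])   # heat when cold, cool when hot
```

The surveyed deep agents replace the Q-table with a neural network and the toy dynamics with a building simulator or live telemetry, but the update rule is the same.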

22 pages, 3950 KiB  
Article
A Deep Reinforcement Learning-Based Concurrency Control of Federated Digital Twin for Software-Defined Manufacturing Systems
by Rubab Anwar, Jin-Woo Kwon and Won-Tae Kim
Appl. Sci. 2025, 15(15), 8245; https://doi.org/10.3390/app15158245 - 24 Jul 2025
Viewed by 131
Abstract
Modern manufacturing demands real-time, scalable coordination that legacy manufacturing management systems cannot provide. Digital transformation encompasses the entire manufacturing infrastructure, which can be represented by digital twins for facilitating efficient monitoring, prediction, and optimization of factory operations. A Federated Digital Twin (FDT) emerges by combining heterogeneous digital twins, enabling real-time collaboration, data sharing, and collective decision-making. However, deploying FDTs introduces new concurrency control challenges, such as priority inversion and synchronization failures, which can potentially cause process delays, missed deadlines, and reduced customer satisfaction. Traditional concurrency control approaches in the computing domain, due to their reliance on static priority assignments and centralized control, are inadequate for managing dynamic, real-time conflicts effectively in real production lines. To address these challenges, this study proposes a novel concurrency control framework combining Deep Reinforcement Learning with the Priority Ceiling Protocol. Using SimPy-based discrete-event simulations, which accurately model the asynchronous nature of FDT interactions, the proposed approach adaptively optimizes resource allocation and effectively mitigates priority inversion. The results demonstrate that, compared with the rule-based PCP controller, the hybrid DRLCC improves completion time by between 1.51% and 24.27% and urgent-job delay by between 2.18% and 6.65%, while keeping priority inversions low. Full article
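The Priority Ceiling Protocol that the hybrid DRLCC builds on can be sketched in a few lines: a task holding a shared resource runs at that resource's ceiling priority, so medium-priority work cannot preempt it while a high-priority task waits on the lock. The task names, priorities, and `twin_lock` resource below are illustrative assumptions, not artifacts of the paper's SimPy model.

```python
def effective_priority(task, held_resources, ceilings):
    # PCP rule: a task's effective priority is the max of its own priority
    # and the ceilings of all resources it currently holds.
    boosted = [ceilings[r] for r in held_resources.get(task, [])]
    return max([task_priority[task]] + boosted)

task_priority = {"low": 1, "medium": 2, "high": 3}
ceilings = {"twin_lock": 3}        # ceiling = highest priority that may ever lock it
held = {"low": ["twin_lock"]}      # low-priority task currently holds the lock

# Without PCP, "medium" would preempt "low" while "high" waits on the lock
# (priority inversion). With PCP, "low" is boosted to the ceiling:
print(effective_priority("low", held, ceilings))     # prints 3
print(effective_priority("medium", held, ceilings))  # prints 2
```

The paper's contribution is to let a DRL agent adapt the otherwise static priority assignments; the ceiling rule itself stays as above.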

27 pages, 2260 KiB  
Article
Machine Learning for Industrial Optimization and Predictive Control: A Patent-Based Perspective with a Focus on Taiwan’s High-Tech Manufacturing
by Chien-Chih Wang and Chun-Hua Chien
Processes 2025, 13(7), 2256; https://doi.org/10.3390/pr13072256 - 15 Jul 2025
Viewed by 568
Abstract
The global trend toward Industry 4.0 has intensified the demand for intelligent, adaptive, and energy-efficient manufacturing systems. Machine learning (ML) has emerged as a crucial enabler of this transformation, particularly in high-mix, high-precision environments. This review examines the integration of machine learning techniques, such as convolutional neural networks (CNNs), reinforcement learning (RL), and federated learning (FL), within Taiwan’s advanced manufacturing sectors, including semiconductor fabrication, smart assembly, and industrial energy optimization. The present study draws on patent data and industrial case studies from leading firms, such as TSMC, Foxconn, and Delta Electronics, to trace the evolution from classical optimization to hybrid, data-driven frameworks. A critical analysis of key challenges is provided, including data heterogeneity, limited model interpretability, and integration with legacy systems. A comprehensive framework is proposed to address these issues, incorporating data-centric learning, explainable artificial intelligence (XAI), and cyber–physical architectures. These components align with industrial standards, including the Reference Architecture Model Industrie 4.0 (RAMI 4.0) and the Industrial Internet Reference Architecture (IIRA). The paper concludes by outlining prospective research directions, with a focus on cross-factory learning, causal inference, and scalable industrial AI deployment. This work provides an in-depth examination of the potential of machine learning to transform manufacturing into a more transparent, resilient, and responsive ecosystem. Additionally, this review highlights Taiwan’s distinctive position in the global high-tech manufacturing landscape and provides an in-depth analysis of patent trends from 2015 to 2025. 
Notably, this study adopts a patent-centered perspective to capture practical innovation trends and technological maturity specific to Taiwan’s globally competitive high-tech sector. Full article
(This article belongs to the Special Issue Machine Learning for Industrial Optimization and Predictive Control)

32 pages, 1750 KiB  
Article
Latency Analysis of UAV-Assisted Vehicular Communications Using Personalized Federated Learning with Attention Mechanism
by Abhishek Gupta and Xavier Fernando
Drones 2025, 9(7), 497; https://doi.org/10.3390/drones9070497 - 15 Jul 2025
Viewed by 362
Abstract
In this paper, unmanned aerial vehicle (UAV)-assisted vehicular communications are investigated to minimize latency and maximize the utilization of available UAV battery power. As communication and cooperation between the UAV and vehicles are frequently required, a viable approach is to reduce the transmission of redundant messages. However, when the sensor data captured by a varying number of vehicles is not independent and identically distributed (non-i.i.d.), this becomes challenging. Hence, in order to group the vehicles with similar data distributions in a cluster, we utilize federated learning (FL) based on an attention mechanism. We jointly maximize the UAV’s available battery power in each transmission window and minimize communication latency. The simulation experiments reveal that the proposed personalized FL approach achieves performance improvement compared with baseline FL approaches. Our model, trained on the V2X-Sim dataset, outperforms existing methods on key performance indicators. The proposed FL approach with an attention mechanism offers a reduction in communication latency by up to 35% and a significant reduction in computational complexity without degradation in performance. Specifically, we achieve an improvement of approximately 40% in UAV energy efficiency, a 20% reduction in communication overhead, and a 15% reduction in sojourn time. Full article
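The abstract does not specify the attention mechanism, but attention-weighted aggregation in federated learning generally means scoring each client update and softmax-normalizing the scores into aggregation weights. A minimal sketch under that assumption — distance-to-global scoring over toy two-parameter models, none of it taken from the paper:

```python
import math

def attention_weights(global_model, client_models):
    # Score each client by negative distance to the global model,
    # then softmax-normalize the scores into aggregation weights.
    scores = []
    for m in client_models:
        dist = math.sqrt(sum((g - c) ** 2 for g, c in zip(global_model, m)))
        scores.append(-dist)
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]    # shift for numerical stability
    z = sum(exps)
    return [e / z for e in exps]

def aggregate(global_model, client_models):
    # Attention-weighted federated averaging of client parameters.
    w = attention_weights(global_model, client_models)
    return [sum(wi * m[j] for wi, m in zip(w, client_models))
            for j in range(len(global_model))]

g = [0.0, 0.0]
clients = [[0.1, 0.1], [0.2, -0.1], [5.0, 5.0]]  # last client is a non-i.i.d. outlier
new_g = aggregate(g, clients)
print(new_g)   # the outlier contributes the least weight
```

Down-weighting divergent updates is one plausible way such a scheme tolerates non-i.i.d. vehicle data; the paper's actual clustering criterion may differ.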

23 pages, 1678 KiB  
Article
Development of Digital Training Twins in the Aircraft Maintenance Ecosystem
by Igor Kabashkin
Algorithms 2025, 18(7), 411; https://doi.org/10.3390/a18070411 - 3 Jul 2025
Viewed by 319
Abstract
This paper presents an integrated digital training twin framework for adaptive aircraft maintenance education, combining real-time competence modeling, algorithmic orchestration, and cloud–edge deployment architectures. The proposed system dynamically evaluates learner skill gaps and assigns individualized training resources through a multi-objective optimization function that balances skill alignment, Bloom’s cognitive level, fidelity tier, and time efficiency. A modular orchestration engine incorporates reinforcement learning agents for policy refinement, federated learning for privacy-preserving skill analytics, and knowledge graph-based curriculum models for dependency management. Simulation experiments were conducted on the Pneumatic Systems training module. The system’s validation matrix provides full-cycle traceability of instructional decisions, supporting regulatory audit-readiness and institutional reporting. The digital training twin ecosystem offers a scalable, regulation-compliant, and data-driven solution for next-generation aviation maintenance training, with demonstrated operational efficiency, instructional precision, and extensibility for future expansion. Full article
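The multi-objective assignment described — balancing skill alignment, Bloom's cognitive level, fidelity tier, and time efficiency — is naturally expressed as a weighted score over candidate training resources. The sketch below is a hypothetical rendering of that idea only; the weights, field names, and scoring form are all assumptions, not the paper's function.

```python
def score(resource, learner, w=(0.4, 0.2, 0.2, 0.2)):
    # Weighted sum of four normalized objectives (weights are assumed).
    gaps = learner["gaps"]
    skill_cover = len(set(resource["skills"]) & set(gaps)) / max(len(gaps), 1)
    bloom_match = 1.0 - abs(resource["bloom"] - learner["target_bloom"]) / 5.0
    fidelity = resource["fidelity"] / 3.0        # 1=desktop, 2=VR, 3=live rig (assumed tiers)
    time_eff = 1.0 / (1.0 + resource["hours"])   # shorter resources score higher
    return w[0] * skill_cover + w[1] * bloom_match + w[2] * fidelity + w[3] * time_eff

learner = {"gaps": ["pneumatics", "valves"], "target_bloom": 3}
resources = [
    {"name": "video",  "skills": ["pneumatics"],           "bloom": 2, "fidelity": 1, "hours": 1},
    {"name": "vr-lab", "skills": ["pneumatics", "valves"], "bloom": 3, "fidelity": 2, "hours": 2},
]
best = max(resources, key=lambda r: score(r, learner))
print(best["name"])   # prints vr-lab
```

A real orchestration engine would optimize such a score over curricula subject to prerequisite constraints from the knowledge graph.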

16 pages, 3186 KiB  
Article
AI-Driven Framework for Secure and Efficient Load Management in Multi-Station EV Charging Networks
by Md Sabbir Hossen, Md Tanjil Sarker, Marran Al Qwaid, Gobbi Ramasamy and Ngu Eng Eng
World Electr. Veh. J. 2025, 16(7), 370; https://doi.org/10.3390/wevj16070370 - 2 Jul 2025
Viewed by 413
Abstract
This research introduces a comprehensive AI-driven framework for secure and efficient load management in multi-station electric vehicle (EV) charging networks, responding to the increasing demand and operational difficulties associated with widespread EV adoption. The suggested architecture has three main parts: a Smart Load Balancer (SLB), an AI-driven intrusion detection system (AIDS), and a Real-Time Analytics Engine (RAE). These parts use advanced machine learning methods like Support Vector Machines (SVMs), autoencoders, and reinforcement learning (RL) to make the system more flexible, secure, and efficient. The framework uses federated learning (FL) to protect data privacy and make decisions in a decentralized way, which lowers the risks that come with centralizing data. The framework makes load distribution 23.5% more efficient, cuts average wait time by 17.8%, and predicts station-level demand with 94.2% accuracy, according to simulation results. The AI-based intrusion detection component has precision, recall, and F1-scores that are all over 97%, which is better than standard methods. The study also finds important gaps in the current literature and suggests new areas for research, such as using graph neural networks (GNNs) and quantum machine learning to make EV charging infrastructures even more scalable, resilient, and intelligent. Full article

27 pages, 6130 KiB  
Article
AI-Assisted Real-Time Monitoring of Infectious Diseases in Urban Areas
by Mohammed M. Alwakeel
Mathematics 2025, 13(12), 1911; https://doi.org/10.3390/math13121911 - 7 Jun 2025
Viewed by 986
Abstract
The rapid expansion of infectious diseases in urban environments presents a significant public health challenge, as traditional surveillance methods rely on delayed case reporting, limiting proactive response capabilities. With the increasing availability of real-time health data, artificial intelligence (AI) has emerged as a powerful tool for disease monitoring, anomaly detection, and outbreak prediction. This study proposes SmartHealth-Track, an AI-powered real-time infectious disease monitoring framework that integrates machine learning models with IoT-enabled surveillance, smart pharmacy analytics, wearable health tracking, and wastewater surveillance to enhance early outbreak detection and predictive forecasting. The system leverages time series forecasting with long short-term memory (LSTM) networks, logistic regression for outbreak probability estimation, anomaly detection with isolation forests, and natural language processing (NLP) for extracting epidemiological insights from public health reports and social media trends. Experimental validation using real-world datasets demonstrated that SmartHealth-Track achieves high accuracy, with an outbreak detection accuracy of 92.4%, wearable-based fever detection accuracy of 93.5%, AI-driven contact tracing precision of 91.2%, and AI-enhanced wastewater pathogen classification accuracy of 94.1%. The findings confirm that AI-driven real-time surveillance significantly improves outbreak detection and forecasting, enabling timely public health interventions. Future research should focus on federated learning for secure data collaboration and reinforcement learning for adaptive decision making. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
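The study's anomaly detector is an isolation forest; as a deliberately simplified stand-in, a rolling z-score over daily case counts conveys the same idea of flagging deviations from a recent baseline. The window, threshold, and toy data below are illustrative assumptions, not the study's settings.

```python
import statistics

def detect_anomalies(counts, window=7, z_threshold=3.0):
    # Flag any day whose count sits more than z_threshold standard
    # deviations above the mean of the preceding `window` days.
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline) or 1.0   # guard against zero variance
        if (counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

daily_cases = [12, 10, 11, 13, 12, 11, 10, 12, 11, 48, 13, 12]
print(detect_anomalies(daily_cases))   # the spike at index 9 is flagged
```

An isolation forest generalizes this to multivariate signals (pharmacy sales, wearable readings, wastewater titers) without assuming a Gaussian baseline.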

18 pages, 970 KiB  
Article
Deep Reinforcement Learning-Based Multi-Objective Optimization for Virtual Power Plants and Smart Grids: Maximizing Renewable Energy Integration and Grid Efficiency
by Xinfa Tang and Jingjing Wang
Processes 2025, 13(6), 1809; https://doi.org/10.3390/pr13061809 - 6 Jun 2025
Cited by 1 | Viewed by 719
Abstract
The rapid development of renewable energy necessitates advanced solutions that address the volatility and complexity of modern power systems. This study proposes an AI-driven integrated optimization framework for a Virtual Power Plant (VPP) and Smart Grid, aiming to enhance renewable energy utilization, reduce grid losses, and improve economic dispatch efficiency. Leveraging deep reinforcement learning (DRL), this framework dynamically adapts to real-time grid conditions, optimizing multi-objective functions such as power loss minimization and renewable energy maximization. This research incorporates data-driven decision-making, blockchain for secure transactions, and transformer architectures for predictive analytics, ensuring its scalability and adaptability. Experimental validation using real-world data from the Shenzhen VPP demonstrates a 15% reduction in grid losses and a 22% increase in renewable energy utilization compared to traditional methods. This study addresses critical limitations in existing research, such as data rigidity and privacy risks, by introducing federated learning and anonymization techniques. By bridging theoretical innovation with practical application, this work contributes to the United Nations’ Sustainable Development Goals (SDGs) 7 and 13, offering a robust pathway toward a sustainable and intelligent energy future. The findings highlight the transformative potential of AI in power systems, providing actionable insights for policymakers and industry stakeholders. Full article
(This article belongs to the Section Energy Systems)

48 pages, 556 KiB  
Review
Machine Learning-Based Security Solutions for IoT Networks: A Comprehensive Survey
by Abdullah Alfahaid, Easa Alalwany, Abdulqader M. Almars, Fatemah Alharbi, Elsayed Atlam and Imad Mahgoub
Sensors 2025, 25(11), 3341; https://doi.org/10.3390/s25113341 - 26 May 2025
Viewed by 2500
Abstract
The Internet of Things (IoT) is revolutionizing industries by enabling seamless interconnectivity across domains such as healthcare, smart cities, the Industrial Internet of Things (IIoT), and the Internet of Vehicles (IoV). However, IoT security remains a significant challenge due to vulnerabilities related to data breaches, privacy concerns, cyber threats, and trust management issues. Addressing these risks requires advanced security mechanisms, with machine learning (ML) emerging as a powerful tool for anomaly detection, intrusion detection, and threat mitigation. This survey provides a comprehensive review of ML-driven IoT security solutions from 2020 to 2024, examining the effectiveness of supervised, unsupervised, and reinforcement learning approaches, as well as advanced techniques such as deep learning (DL), ensemble learning (EL), federated learning (FL), and transfer learning (TL). A systematic classification of ML techniques is presented based on their IoT security applications, along with a taxonomy of security threats and a critical evaluation of existing solutions in terms of scalability, computational efficiency, and privacy preservation. Additionally, this study identifies key limitations of current ML approaches, including high computational costs, adversarial vulnerabilities, and interpretability challenges, while outlining future research opportunities such as privacy-preserving ML, explainable AI, and edge-based security frameworks. By synthesizing insights from recent advancements, this paper provides a structured framework for developing robust, intelligent, and adaptive IoT security solutions. The findings aim to guide researchers and practitioners in designing next-generation cybersecurity models capable of effectively countering emerging threats in IoT ecosystems. Full article

36 pages, 2990 KiB  
Review
Advances in Multi-Source Navigation Data Fusion Processing Methods
by Xiaping Ma, Peimin Zhou and Xiaoxing He
Mathematics 2025, 13(9), 1485; https://doi.org/10.3390/math13091485 - 30 Apr 2025
Cited by 1 | Viewed by 685
Abstract
In recent years, the field of multi-source navigation data fusion has witnessed substantial advancements, propelled by the rapid development of multi-sensor technologies, Artificial Intelligence (AI) algorithms and enhanced computational capabilities. On one hand, fusion methods based on filtering theory, such as Kalman Filtering (KF), Particle Filtering (PF), and Federated Filtering (FF), have been continuously optimized, enabling effective handling of non-linear and non-Gaussian noise issues. On the other hand, the introduction of AI technologies like deep learning and reinforcement learning has provided new solutions for multi-source data fusion, particularly enhancing adaptive capabilities in complex and dynamic environments. Additionally, methods based on Factor Graph Optimization (FGO) have also demonstrated advantages in multi-source data fusion, offering better handling of global consistency problems. In the future, with the widespread adoption of technologies such as 5G, the Internet of Things, and edge computing, multi-source navigation data fusion is expected to evolve towards real-time processing, intelligence, and distributed systems. So far, fusion methods mainly include optimal estimation methods, filtering methods, uncertain reasoning methods, Multiple Model Estimation (MME), AI, and so on. To analyze the performance of these methods and provide a reliable theoretical reference and basis for the design and development of a multi-source data fusion system, this paper summarizes the characteristics of these fusion methods and their corresponding application scenarios. These results can provide references for theoretical research, system development, and application in the fields of autonomous driving, unmanned vehicle navigation, and intelligent navigation. Full article

38 pages, 7485 KiB  
Article
Privacy-Preserving Federated Learning for Space–Air–Ground Integrated Networks: A Bi-Level Reinforcement Learning and Adaptive Transfer Learning Optimization Framework
by Ling Li, Lidong Zhu and Weibang Li
Sensors 2025, 25(9), 2828; https://doi.org/10.3390/s25092828 - 30 Apr 2025
Viewed by 545
Abstract
The Space-Air-Ground Integrated Network (SAGIN) has emerged as a core architecture for future intelligent communication due to its wide-area coverage and dynamic heterogeneous characteristics. However, its high latency, dynamic topology, and privacy–security challenges severely constrain the application of Federated Learning (FL). This paper proposes a Privacy-Preserving Federated Learning framework for SAGIN (PPFL-SAGIN), which for the first time integrates differential privacy, adaptive transfer learning, and bi-level reinforcement learning to systematically address data heterogeneity, device dynamics, and privacy leakage in SAGINs. Specifically, (1) an adaptive knowledge-sharing mechanism based on transfer learning is designed to balance device heterogeneity and data distribution divergence through dynamic weighting factors; (2) a bi-level reinforcement learning device selection strategy is proposed, combining meta-learning and hierarchical attention mechanisms to optimize global–local decision-making and enhance model convergence efficiency; (3) dynamic privacy budget allocation and robust aggregation algorithms are introduced to reduce communication overhead while ensuring privacy. Finally, experimental evaluations validate the proposed method. Results demonstrate that PPFL-SAGIN significantly outperforms baseline solutions such as FedAvg, FedAsync, and FedAsyncISL in terms of model accuracy, convergence speed, and privacy protection strength, verifying its effectiveness in addressing privacy preservation, device selection, and global aggregation in SAGINs. Full article
(This article belongs to the Section Communications)
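Of the framework's ingredients, the differential-privacy layer is the most self-contained: clip each client update to bound its sensitivity, then add Laplace noise scaled by the per-round privacy budget ε. The sketch below shows that mechanism alone; the clipping bound, budget value, and function names are assumptions, not the paper's settings.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sample from Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize_update(update, epsilon, clip=1.0, seed=0):
    rng = random.Random(seed)
    # Clip so the L1 norm (hence the L1 sensitivity) is at most `clip`.
    norm = sum(abs(x) for x in update)
    factor = min(1.0, clip / norm) if norm else 1.0
    clipped = [x * factor for x in update]
    scale = clip / epsilon   # Laplace scale for epsilon-DP under L1 sensitivity
    return [x + laplace_noise(scale, rng) for x in clipped]

noisy = privatize_update([0.4, -0.3, 0.2], epsilon=0.8)
print(len(noisy))   # prints 3
```

Dynamic budget allocation, as the paper proposes, would vary `epsilon` per round or per device rather than fixing it as here.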

48 pages, 10120 KiB  
Review
Machine Learning in Maritime Safety for Autonomous Shipping: A Bibliometric Review and Future Trends
by Jie Xue, Peijie Yang, Qianbing Li, Yuanming Song, P. H. A. J. M. van Gelder, Eleonora Papadimitriou and Hao Hu
J. Mar. Sci. Eng. 2025, 13(4), 746; https://doi.org/10.3390/jmse13040746 - 8 Apr 2025
Viewed by 2002
Abstract
Autonomous vessels are becoming paramount to ocean transportation, while they also face complex risks in dynamic marine environments. Machine learning plays a crucial role in enhancing maritime safety by leveraging its data analysis and predictive capabilities. However, there has been no review grounded in bibliometric analysis in this field. To explore the research evolution and knowledge frontier in the field of maritime safety for autonomous shipping, a bibliometric analysis was conducted using 719 publications from the Web of Science database, covering the period from 2000 up to May 2024. This study utilized VOSviewer, alongside traditional literature analysis methods, to construct a knowledge network map and perform cluster analysis, thereby identifying research hotspots, evolution trends, and emerging knowledge frontiers. The findings reveal a robust cooperative network among journals, researchers, research institutions, and countries or regions, underscoring the interdisciplinary nature of this research domain. Through the review, we found that maritime safety machine learning methods are evolving toward a systematic and comprehensive direction, and the integration with AI and human interaction may be the next bellwether. Future research will concentrate on three main areas: evolving safety objectives towards proactive management and autonomous coordination, developing advanced safety technologies, such as bio-inspired sensors, quantum machine learning, and self-healing systems, and enhancing decision-making with machine learning algorithms such as generative adversarial networks (GANs), hierarchical reinforcement learning (HRL), and federated learning. By visualizing collaborative networks, analyzing evolutionary trends, and identifying research hotspots, this study lays a groundwork for pioneering advancements and sets a visionary angle for the future of safety in autonomous shipping. 
Moreover, it facilitates partnerships between industry and academia, fostering concerted efforts in the domain of unmanned surface vessels (USVs). Full article
(This article belongs to the Special Issue Sustainable and Efficient Maritime Operations)

33 pages, 1969 KiB  
Article
Collaborative Adaptive Management in the Greater Yellowstone Ecosystem: A Rangeland Living Laboratory at the US Sheep Experiment Station
by Hailey Wilmer, Jonathan Spiess, Patrick E. Clark, Michelle Anderson, Amira Burns, Arica Crootof, Lily Fanok, Tracy Hruska, Bruce J. Mincher, Ryan S. Miller, William Munger, Christian J. Posbergh, Carrie S. Wilson, Eric Winford, Jessica Windh, Nicole Strong, Marlen Eve and J. Bret Taylor
Sustainability 2025, 17(7), 3086; https://doi.org/10.3390/su17073086 - 31 Mar 2025
Viewed by 1194
Abstract
Social conflict over rangeland-use priorities, especially near protected areas, has long pitted environmental and biodiversity conservation interests against livestock livelihoods. Social–ecological conflict limits management adaptation and creativity while reinforcing social and disciplinary divisions. It can also reduce rancher access to land and negatively affect wildlife conservation. Communities increasingly expect research organizations to address complex social dynamics to improve opportunities for multiple ecosystem service delivery on rangelands. In the Greater Yellowstone Ecosystem (GYE), an area of the western US, long-standing disagreements among actors who argue for the use of the land for livestock and those who prioritize wildlife are limiting conservation and ranching livelihoods. Researchers at the USDA-ARS US Sheep Experiment Station (USSES) along with University and societal partners are responding to these challenges using a collaborative adaptive management (CAM) methodology. The USSES Rangeland Collaboratory is a living laboratory project leveraging the resources of a federal range sheep research ranch operating across sagebrush steppe ecosystems in Clark County, Idaho, and montane/subalpine landscapes in Beaverhead County, Montana. The project places stakeholders, including ranchers, conservation groups, and government land managers, in the decision-making seat for a participatory case study. This involves adaptive management planning related to grazing and livestock–wildlife management decisions for two ranch-scale rangeland management scenarios, one modeled after a traditional range sheep operation and the second, a more intensified operation with no use of summer ranges. We discuss the extent to which the CAM approach creates opportunities for multi-directional learning among participants and evaluate trade-offs among preferred management systems through participatory ranch-scale grazing research. 
In a complex system where the needs and goals of various actors are misaligned across spatiotemporal, disciplinary, and social–ecological scales, CAM creates a structure and methods to focus on social learning and land management knowledge creation. Full article
(This article belongs to the Section Sustainable Management)
15 pages, 1272 KiB  
Article
Design of an Immersive Basketball Tactical Training System Based on Digital Twins and Federated Learning
by Xiongce Lv, Ye Tao, Yifan Zhang and Yang Xue
Appl. Sci. 2025, 15(7), 3831; https://doi.org/10.3390/app15073831 - 31 Mar 2025
Viewed by 745
Abstract
To address the challenges of dynamic adversarial scenario modeling distortion, insufficient cross-institutional data privacy protection, and simplistic evaluation systems in collegiate basketball tactical education, this study proposes and validates an immersive instructional system integrating digital twin and federated learning technologies. The four-tier architecture (sensing layer, digital twin layer, federated layer, and interaction layer) synthesizes multimodal data (motion trajectories and physiological signals) with Multi-Agent Reinforcement Learning (MARL) to enable virtual–physical integrated tactical simulation and real-time error correction. Experimental results demonstrate that the experimental group achieved 35.2% higher tactical execution accuracy (TEA) (p < 0.01), 1.8 s faster decision making (p < 0.05), and 47% improved team coordination efficiency compared to the controls. The hierarchical federated learning framework (trajectory ε = 0.8; physiology ε = 0.3) maintained model precision loss at 2.4% while optimizing communication efficiency by 23%, ensuring privacy preservation. A novel three-dimensional “Skill–Creativity–Load” evaluation system revealed a 22% increase in unconventional tactical applications (p = 0.013) through the Tactical Creativity Index (TCI). By implementing lightweight federated architecture with dynamic cognitive offloading mechanisms, the system enables resource-constrained institutions to achieve 87% of the pedagogical effectiveness observed in elite programs, offering an innovative solution to reconcile educational equity with technological ethics. Future research should focus on long-term skill transfer, multimodal adaptive learning, and ethical framework development to advance intelligent sports education from efficiency-oriented paradigms to competency-based transformation. Full article
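The per-modality privacy budgets quoted above (trajectory ε = 0.8; physiology ε = 0.3) imply differential-privacy noise calibrated separately for each data stream: the smaller budget for physiological signals buys stronger privacy at the cost of more noise. The paper's exact mechanism is not given here; the sketch below illustrates the idea with the standard Laplace mechanism, and all names and values are hypothetical placeholders:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Standard Laplace mechanism: add noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)

# Hypothetical per-modality budgets mirroring the abstract's settings
budgets = {"trajectory": 0.8, "physiology": 0.3}

rng = np.random.default_rng(0)
raw = {"trajectory": 12.5, "physiology": 72.0}  # e.g., court position, heart rate
private = {
    k: laplace_mechanism(v, sensitivity=1.0, epsilon=budgets[k], rng=rng)
    for k, v in raw.items()
}
# Smaller epsilon (physiology) => larger noise scale => stronger privacy
```

With unit sensitivity, the trajectory stream gets noise of scale 1/0.8 = 1.25 while the physiology stream gets scale 1/0.3 ≈ 3.33, which is how a hierarchical budget trades utility between modalities.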
42 pages, 2232 KiB  
Article
Federated Reinforcement Learning-Based Dynamic Resource Allocation and Task Scheduling in Edge for IoT Applications
by Saroj Mali, Feng Zeng, Deepak Adhikari, Inam Ullah, Mahmoud Ahmad Al-Khasawneh, Osama Alfarraj and Fahad Alblehai
Sensors 2025, 25(7), 2197; https://doi.org/10.3390/s25072197 - 30 Mar 2025
Cited by 1 | Viewed by 1834
Abstract
Using Google cluster traces, this research presents a task offloading algorithm and a hybrid forecasting model that combines Bidirectional Long Short-Term Memory (BiLSTM) and Gated Recurrent Unit (GRU) layers with an attention mechanism. The model predicts resource usage for flexible task scheduling in edge-based Internet of Things (IoT) applications. The proposed algorithm improves task distribution to boost performance and reduce energy consumption. The system's design includes collecting data, fusing and preparing it for use, training models, and performing simulations with EdgeSimPy. Experimental outcomes show that the proposed method outperforms the basic best-fit, first-fit, and worst-fit algorithms: it maintains stable power usage among edge servers while surpassing traditional heuristic techniques. Moreover, we also propose a Deep Deterministic Policy Gradient (D4PG)-based Federated Learning algorithm that adjusts the participation of dynamic user equipment (UE) according to resource availability and data distribution. This algorithm is compared to DQN, DDQN, Dueling DQN, and Dueling DDQN models using the Non-IID EMNIST, IID EMNIST, and Crop Prediction datasets. Results indicate that the proposed D4PG method achieves superior performance, with an accuracy of 92.86% on the Crop Prediction dataset, outperforming alternative models. On the Non-IID EMNIST dataset, the proposed approach achieves an F1-score of 0.9192, demonstrating better efficiency and fairness in model updates while preserving privacy. Similarly, on the IID EMNIST dataset, the proposed D4PG model attains an F1-score of 0.82 and an accuracy of 82%, surpassing other Reinforcement Learning-based approaches. Additionally, for edge server power consumption, the hybrid offloading algorithm reduces fluctuations compared to existing methods, ensuring more stable energy usage across edge nodes. This corroborates that the proposed method can preserve privacy while handling fairness in model updates and improving efficiency beyond state-of-the-art alternatives. Full article
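The dynamic-participation idea described above can be illustrated with a minimal federated-averaging round in which clients are filtered by resource availability and weighted by local data size. This is a sketch under assumed interfaces, not the paper's D4PG implementation; all client names and thresholds are hypothetical:

```python
import numpy as np

def federated_round(client_weights, data_sizes, resources, min_resource=0.5):
    """One FedAvg-style aggregation: only clients with enough spare
    resources participate; updates are weighted by local data size."""
    active = [i for i, r in enumerate(resources) if r >= min_resource]
    if not active:
        raise ValueError("no client meets the resource threshold")
    sizes = np.array([data_sizes[i] for i in active], dtype=float)
    coeffs = sizes / sizes.sum()          # weight by share of local data
    stacked = np.stack([client_weights[i] for i in active])
    return coeffs @ stacked               # weighted average of updates

# Three hypothetical clients; client 1 is resource-starved and sits out
weights = [np.array([1.0, 2.0]), np.array([10.0, 10.0]), np.array([3.0, 4.0])]
sizes = [100, 500, 300]
resources = [0.9, 0.2, 0.7]

global_update = federated_round(weights, sizes, resources)
# Aggregates only clients 0 and 2, weighted 100:300 -> [2.5, 3.5]
```

In a D4PG-style variant, the participation decision itself would be learned by the reinforcement-learning agent from resource and data-distribution observations rather than fixed by a threshold, which is the adjustment the abstract describes.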
(This article belongs to the Special Issue Securing E-Health Data Across IoMT and Wearable Sensor Networks)