Search Results (1,771)

Search Parameters:
Keywords = network resource consumption

19 pages, 1221 KB  
Article
Distributed Deep Learning in IoT Sensor Network for the Diagnosis of Plant Diseases
by Athanasios Papanikolaou, Athanasios Tziouvaras, George Floros, Apostolos Xenakis and Fabio Bonsignorio
Sensors 2025, 25(24), 7646; https://doi.org/10.3390/s25247646 - 17 Dec 2025
Abstract
The early detection of plant diseases is critical to improving agricultural productivity and ensuring food security. However, conventional centralized deep learning approaches are often unsuitable for large-scale agricultural deployments, as they rely on continuous data transmission to cloud servers and require high computational resources that are impractical for Internet of Things (IoT)-based field environments. In this article, we present a distributed deep learning framework based on Federated Learning (FL) for the diagnosis of plant diseases in IoT sensor networks. The proposed architecture integrates multiple IoT nodes and an edge computing node that collaboratively train an EfficientNet B0 model using the Federated Averaging (FedAvg) algorithm without transferring local data. Two training pipelines are evaluated: a standard single-model pipeline and a hierarchical pipeline that combines a crop classifier with crop-specific disease models. Experimental results on a multicrop leaf image dataset under realistic augmentation scenarios demonstrate that the hierarchical FL approach improves per-crop classification accuracy and robustness to environmental variations, while the standard pipeline offers lower latency and energy consumption.
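
For readers skimming this entry: the FedAvg aggregation at the heart of the pipeline is simple to sketch. Below is a minimal NumPy illustration of how the edge node would combine client model weights without seeing any local data; the toy one-layer model and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging (McMahan et al., 2017): the server combines
    client model parameters as a weighted mean, where each client's
    weight is proportional to its local dataset size."""
    total = sum(client_sizes)
    # Each element of client_weights is a list of parameter arrays
    # (one per layer), with identical shapes across clients.
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy round with three IoT nodes holding different amounts of leaf images.
clients = [[np.full((2, 2), v)] for v in (1.0, 2.0, 4.0)]  # one "layer" each
sizes = [100, 300, 600]
global_model = fedavg(clients, sizes)
print(global_model[0])  # weighted mean: 0.1*1 + 0.3*2 + 0.6*4 = 3.1
```
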

22 pages, 6090 KB  
Article
GLBAD: Online BGP Anomaly Detection Under Partial Observation
by Zheng Wu, Yaoyu Zhou and Junda Wu
Electronics 2025, 14(24), 4940; https://doi.org/10.3390/electronics14244940 - 16 Dec 2025
Abstract
The Border Gateway Protocol (BGP) is the core protocol for inter-domain routing on the Internet. However, due to its lack of built-in security authentication mechanisms, BGP is highly vulnerable to misconfigurations or malicious route announcements, which can lead to severe incidents such as route hijacking and information leakage. Existing detection methods face two major bottlenecks: first, as the scale of Autonomous System (AS)-level topology continues to grow, conventional graph neural networks struggle to meet the demands of computational resources and latency; second, the observational data provided by current monitoring systems are inherently localized. To address these challenges, this paper proposes a Graph Learning-driven framework for BGP Anomaly Detection, named GLBAD. The core design of GLBAD comprises three components. First, to handle BGP’s large-scale network topology, we propose a graph partition method to perform dedicated topological partitioning on the BGP network. Second, to overcome the limitation of localized observational data, we design a graph autoencoder-based approach for adaptive graph learning, enabling topology inference. Finally, integrating the above components, we develop a comprehensive BGP anomaly detection system to achieve real-time and accurate anomaly detection. We evaluate our approach on 20 real-world BGP anomaly events. Experimental results demonstrate that GLBAD effectively detects anomalies with lower time consumption while achieving a lower false positive rate.
(This article belongs to the Section Networks)
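
The graph-autoencoder component used here for topology inference has a standard minimal form: embed nodes with a GCN-style layer, then reconstruct the adjacency matrix as an inner product of embeddings. A PyTorch sketch of that core idea (in the style of Kipf and Welling's GAE, not GLBAD's actual implementation):

```python
import torch
import torch.nn as nn

class GraphAutoencoder(nn.Module):
    """Minimal graph autoencoder: a one-layer GCN-style encoder produces
    node embeddings Z, and the decoder reconstructs the adjacency matrix
    as sigmoid(Z Z^T)."""
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.weight = nn.Linear(n_features, n_hidden, bias=False)

    def forward(self, x, adj_norm):
        # adj_norm is the (normalized) observed adjacency; x holds node features.
        z = torch.relu(adj_norm @ self.weight(x))  # GCN propagation step
        return torch.sigmoid(z @ z.t())            # predicted link probabilities

# Trained on partially observed links, the reconstruction scores unseen AS
# pairs: high-probability unobserved entries are candidate hidden links,
# and large reconstruction errors can flag anomalous routing structure.
```
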
22 pages, 1380 KB  
Article
Selection of Optimal Cluster Head Using MOPSO and Decision Tree for Cluster-Oriented Wireless Sensor Networks
by Rahul Mishra, Sudhanshu Kumar Jha, Shiv Prakash and Rajkumar Singh Rathore
Future Internet 2025, 17(12), 577; https://doi.org/10.3390/fi17120577 - 15 Dec 2025
Abstract
Wireless sensor networks (WSNs) consist of distributed nodes that monitor various physical and environmental parameters. Sensor nodes (SNs) are usually resource-constrained in terms of power source, communication, and computation capacity. In a WSN, energy consumption varies with the distance between sender and receiver SNs; communication between distant SNs requires significantly more energy, which negatively affects network longevity. To address these issues, WSNs are deployed with multi-hop routing, which reduces communication distance and cost, but finding an optimal cluster head (CH) and route remains an open problem. An optimal CH reduces energy consumption and maintains reliable data transmission throughout the network. To improve the performance of multi-hop routing in WSNs, we propose a model that combines Multi-Objective Particle Swarm Optimization (MOPSO) and a Decision Tree for dynamic CH selection. The proposed model consists of two phases: an offline phase and an online phase. In the offline phase, network scenarios with varying node densities, initial energy levels, and base station (BS) positions are simulated, the required features are collected, and MOPSO is applied to generate a Pareto front of optimal CH nodes that balance energy efficiency, coverage, and load balancing. Each node is labeled by MOPSO as CH or non-CH, and the labeled dataset is then used to train a Decision Tree classifier, yielding a lightweight and interpretable model for CH prediction. In the online phase, the trained model is used in the deployed network to quickly and adaptively select CHs by classifying each node, from its current features, as CH or non-CH. The predicted CHs broadcast this information and manage intra-cluster communication, data aggregation, and routing to the base station. CH selection is re-initiated when residual energy drops below a threshold, load saturates, or coverage degrades. Simulation results demonstrate that the proposed model outperforms protocols such as LEACH, HEED, and standard PSO in energy efficiency and network lifetime, making it highly suitable for applications in green computing, environmental monitoring, precision agriculture, healthcare, and industrial IoT.
(This article belongs to the Special Issue Clustered Federated Learning for Networks)
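
The online phase reduces to ordinary supervised classification once MOPSO has labeled the nodes. A minimal scikit-learn sketch (the feature names and toy data are illustrative assumptions, not the paper's exact feature set):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Offline phase: rows are nodes from simulated scenarios; columns are
# illustrative features such as residual energy, distance to BS, node
# degree, and local density. Labels come from MOPSO's Pareto front
# (1 = selected as CH, 0 = not).
X_train = np.array([[0.9,  40.0, 6, 0.8],
                    [0.2, 120.0, 2, 0.3],
                    [0.7,  55.0, 5, 0.6],
                    [0.1, 150.0, 1, 0.2]])
y_train = np.array([1, 0, 1, 0])

# A shallow tree keeps the online model lightweight and interpretable.
clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

# Online phase: each node evaluates the tree on its own current features.
node_features = np.array([[0.8, 50.0, 5, 0.7]])
print("CH" if clf.predict(node_features)[0] == 1 else "non-CH")
```
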

25 pages, 2590 KB  
Article
Enhancing Distribution Network Flexibility via Adjustable Carbon Emission Factors and Negative-Carbon Incentive Mechanism
by Hualei Zou, Qiang Xing, Hao Fu, Tengfei Zhang, Yu Chen and Jian Zhu
Processes 2025, 13(12), 4023; https://doi.org/10.3390/pr13124023 - 12 Dec 2025
Abstract
With increasing penetration of distributed renewable energy sources (RES) in distribution networks, spatiotemporal mismatches arise between static time-of-use (TOU) pricing and real-time carbon emission factors. This misalignment hinders demand-side flexibility deployment, potentially increasing high-carbon-period consumption and impeding low-carbon operations. To address this, the paper proposes an adjustable carbon emission factor (ADCEF) which decouples electricity from carbon liability using storage. The strategy leverages energy storage for carbon responsibility time-shifting to build a dynamic ADCEF model, introducing a negative-carbon incentive mechanism which quantifies the value of surplus renewables. A revenue feedback mechanism couples ADCEF with electricity prices, forming dynamic price troughs during high-RES periods to guide flexible resources toward coordinated peak shaving, valley filling, and low-carbon responses. Validated on a modified IEEE 33-bus system across multiple scenarios, the strategy shifts resources to carbon-negative periods, achieving 100% on-site excess RES utilization in high-penetration scenarios and, compared to traditional TOU approaches, a 27.9% emission reduction and 8.3% revenue increase.
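
As a toy illustration only (not the paper's ADCEF formulation), the core bookkeeping of carbon-responsibility time-shifting can be sketched as follows: energy charged into storage during a low-carbon hour carries that hour's emission factor with it when discharged later, lowering the effective factor of the discharge hour. All numbers and the blending rule below are assumptions.

```python
# Toy carbon-responsibility time-shifting: storage charged in a low-carbon
# hour discharges later, carrying its embedded emission factor with it.
grid_factor = {10: 0.20, 19: 0.70}   # tCO2/MWh by hour (renewable-rich vs. peak)
charge = {"hour": 10, "mwh": 5.0}    # store 5 MWh at the 0.20 factor

load_mwh, discharge_mwh = 8.0, 5.0
hour = 19
# Effective (adjustable) factor: blend of grid supply and storage discharge.
adcef = (grid_factor[hour] * (load_mwh - discharge_mwh)
         + grid_factor[charge["hour"]] * discharge_mwh) / load_mwh
print(f"hour {hour}: static factor {grid_factor[hour]}, ADCEF {adcef:.4f}")
# -> static 0.70 vs. (0.70*3 + 0.20*5)/8 = 0.3875
```
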

29 pages, 1892 KB  
Article
Resolving Spatial Asymmetry in China’s Data Center Layout: A Tripartite Evolutionary Game Analysis
by Chenfeng Gao, Donglin Chen, Xiaochao Wei and Ying Chen
Symmetry 2025, 17(12), 2136; https://doi.org/10.3390/sym17122136 - 11 Dec 2025
Abstract
The rapid advancement of artificial intelligence has driven a surge in demand for computing power. As the core computing infrastructure, data centers have expanded in scale, escalating electricity consumption and magnifying a regional mismatch between computing capacity and energy resources: facilities are concentrated in the energy-constrained East, while the renewable-rich West possesses vast, untapped hosting capacity. Focusing on cross-regional data-center migration under the “Eastern Data, Western Computing” initiative, this study constructs a tripartite evolutionary game model comprising the Eastern Local Government, the Western Local Government, and data-center enterprises. The central government is modeled as an external regulator that indirectly shapes players’ strategies through policies such as energy-efficiency constraints and carbon-quota mechanisms. First, we introduce key parameters—including energy efficiency, carbon costs, green revenues, coordination subsidies, and migration losses—and analyze the system’s evolutionary stability using replicator-dynamics equations. Second, we conduct numerical simulations in MATLAB 2024a and perform sensitivity analyses with respect to energy and green constraints, central rewards and penalties, regional coordination incentives, and migration losses. The results show the following: (1) Multiple equilibria can arise, including coordinated optima, policy-failure states, and coordination-impeded outcomes. These coordinated optima do not emerge spontaneously but rather depend on a precise alignment of payoff structures across central government, local governments, and enterprises. (2) The eastern regulatory push—centered on energy efficiency and carbon emissions—is generally more effective than western fiscal subsidies or stand-alone energy advantages at reshaping firm payoffs and inducing relocation. Central penalties and coordination subsidies serve complementary and constraining roles. (3) Commercial risks associated with full migration, such as service interruption and customer attrition, remain among the key barriers to shifting from partial to full migration. These risks are closely linked to practical relocation and connectivity constraints—such as logistics and commissioning effort, and cross-regional network latency/bandwidth—thereby potentially trapping firms in a suboptimal partial-migration equilibrium. This study provides theoretical support for refining the “Eastern Data, Western Computing” policy mix and offers generalized insights for other economies facing similar spatial energy–demand asymmetries.
(This article belongs to the Section Mathematics)
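
The replicator dynamics behind the stability analysis have a standard form: each player's strategy share grows in proportion to how much its payoff exceeds the population average. A schematic three-player version in Python (the linear payoff advantages are placeholders, not the paper's calibrated MATLAB model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, s):
    """Tripartite replicator dynamics: s = (x, y, z) are the probabilities
    that East regulates, West subsidizes, and firms migrate. Each payoff
    advantage below is a placeholder linear form for illustration."""
    x, y, z = s
    dx = x * (1 - x) * (0.6 * z - 0.2)            # East's gain from regulating
    dy = y * (1 - y) * (0.5 * z - 0.3)            # West's gain from subsidizing
    dz = z * (1 - z) * (0.4 * x + 0.4 * y - 0.3)  # firms' gain from migrating
    return [dx, dy, dz]

sol = solve_ivp(replicator, (0, 200), [0.5, 0.5, 0.5])
print(sol.y[:, -1])  # long-run strategy shares from this initial condition
```

Varying the placeholder coefficients reproduces the qualitative story: equilibria shift between coordinated optima and policy-failure corners depending on how the payoffs align.
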

25 pages, 7707 KB  
Article
A Multi-Tier Vehicular Edge–Fog Framework for Real-Time Traffic Management in Smart Cities
by Syed Rizwan Hassan and Asif Mehmood
Mathematics 2025, 13(24), 3947; https://doi.org/10.3390/math13243947 - 11 Dec 2025
Abstract
The factors restricting the large-scale deployment of smart vehicular networks include application service placement/migration, mobility management, network congestion, and latency. Current vehicular networks are striving to optimize network performance through decentralized framework deployments. Specifically, the urban-level execution of current network deployments often fails to achieve the quality of service required by smart cities. To address these issues, we have proposed a vehicular edge–fog computing (VEFC)-enabled adaptive area-based traffic management (AABTM) architecture. Our design divides the urban area into multiple microzones for distributed control. These microzones are equipped with roadside units for real-time collection of vehicular information. We also propose (1) a vehicle mobility management (VMM) scheme to facilitate seamless service migration during vehicular movement; (2) a dynamic vehicular clustering (DVC) approach for the dynamic clustering of distributed network nodes to enhance service delivery; and (3) a dynamic microservice assignment (DMA) algorithm to ensure efficient resource-aware microservice placement/migration. We have evaluated the proposed schemes on different scales. The proposed schemes provide a significant improvement in vital network parameters. AABTM achieves reductions of 86.4% in latency, 53.3% in network consumption, 6.2% in energy usage, and 48.3% in execution cost, while DMA-clustering reduces network consumption by 59.2%, energy usage by 5%, and execution cost by 38.4% compared to traditional cloud-based urban traffic management frameworks. This research highlights the potential of utilizing distributed frameworks for real-time traffic management in next-generation smart vehicular networks.
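
The resource-aware placement problem that DMA addresses can be conveyed as a greedy assignment: place each microservice on the closest feasible tier (edge before fog before cloud). A hypothetical Python stand-in, not the authors' DMA algorithm; node names, capacities, and costs are made up:

```python
# Schematic resource-aware microservice placement: prefer the nearest tier
# that still has capacity. Illustrative only.
nodes = [
    {"name": "rsu-edge-1", "tier": 0, "free_cpu": 2,  "latency_ms": 5},
    {"name": "fog-zone-a", "tier": 1, "free_cpu": 8,  "latency_ms": 20},
    {"name": "cloud",      "tier": 2, "free_cpu": 64, "latency_ms": 90},
]

def place(service_cpu):
    # Feasible nodes sorted by tier, then latency: edge beats fog beats cloud.
    for node in sorted(nodes, key=lambda n: (n["tier"], n["latency_ms"])):
        if node["free_cpu"] >= service_cpu:
            node["free_cpu"] -= service_cpu
            return node["name"]
    return None  # reject or queue when nothing fits

print([place(cpu) for cpu in (1, 1, 1, 4)])
# -> ['rsu-edge-1', 'rsu-edge-1', 'fog-zone-a', 'fog-zone-a']
```
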

26 pages, 2212 KB  
Article
Adaptive Reinforcement Learning-Based Framework for Energy-Efficient Task Offloading in a Fog–Cloud Environment
by Branka Mikavica and Aleksandra Kostic-Ljubisavljevic
Sensors 2025, 25(24), 7516; https://doi.org/10.3390/s25247516 - 10 Dec 2025
Abstract
Ever-increasing computational demand introduced by the expanding scale of Internet of Things (IoT) devices poses significant concerns in terms of energy consumption in a fog–cloud environment. Due to the limited resources of IoT devices, energy-efficient task offloading becomes even more challenging for time-sensitive tasks. In this paper, we propose a reinforcement learning-based framework, namely Adaptive Q-learning-based Energy-aware Task Offloading (AQETO), that dynamically manages the energy consumption of fog nodes in a fog–cloud network. Concurrently, it considers IoT task delay tolerance and allocates computational resources while satisfying deadline requirements. The proposed approach dynamically determines energy states of each fog node using Q-learning depending on workload fluctuations. Moreover, AQETO prioritizes allocation of the most urgent tasks to minimize delays. Extensive experiments demonstrate the effectiveness of AQETO in terms of the minimization of fog node energy consumption and delay and the maximization of system efficiency.
(This article belongs to the Section Intelligent Sensors)
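
The tabular Q-learning update behind this kind of energy-state management is compact. A minimal sketch (the state/action sets and the reward function are illustrative assumptions, not AQETO's design):

```python
import numpy as np

# Tabular Q-learning for a fog node choosing an energy state per step.
# States: workload level (low/medium/high); actions: sleep/balanced/boost.
# The reward trades energy cost against deadline misses; numbers are toy.
rng = np.random.default_rng(0)
Q = np.zeros((3, 3))
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(state, action):
    energy_cost = [0.1, 0.5, 1.0][action]
    deadline_miss = max(0, state - action)  # boost covers high workload
    return -energy_cost - 2.0 * deadline_miss

state = 0
for step in range(10_000):
    # Epsilon-greedy exploration, then the standard Bellman update.
    action = rng.integers(3) if rng.random() < eps else int(Q[state].argmax())
    r = reward(state, action)
    next_state = rng.integers(3)  # workload fluctuates exogenously here
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q.argmax(axis=1))  # learned energy state per workload level -> [0 1 2]
```
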

37 pages, 12674 KB  
Article
Efficient Neural Modeling of Wind Power Density for National-Scale Energy Planning: Toward Sustainable AI Applications in Industry 5.0
by Mario Molina-Almaraz, Luis Octavio Solís-Sánchez, Luis E. Bañuelos-García, Celina L. Castañeda-Miranda, Héctor A. Guerrero-Osuna and Eduardo García-Sánchez
Appl. Sci. 2025, 15(24), 13000; https://doi.org/10.3390/app152413000 - 10 Dec 2025
Abstract
This study presents an efficient and reproducible framework for estimating wind power density (WPD) across Mexico using a Dense Neural Network (DNN) trained exclusively on ERA5 and ERA5-Land reanalysis data. The model is designed as a computationally efficient surrogate that reproduces the statistical behavior of the ERA5 benchmark while enabling national-scale WPD mapping and short-term projections at minimal computational cost. Meteorological variables—including wind components at 10 m and 100 m, surface temperature, pressure, and terrain elevation—were harmonized on a 0.25° grid for the 1971–2024 period. A chronological dataset split (70-20-10%) was applied to realistically evaluate forecasting capability. The optimized DNN architecture (512-256-128 neurons) achieved high predictive performance (R² ≈ 0.91, RMSE ≈ 6.2 W/m²) and accurately reproduced spatial patterns and seasonal variability, particularly in high-resource regions such as Oaxaca and Baja California. Compared with deeper neural architectures, the proposed model reduced training time by more than 60% and energy consumption by approximately 40%, supporting principles of sustainable computing and Industry 5.0. The resulting WPD fields, delivered in interoperable NetCDF formats, can be directly integrated into decision-support tools for wind-farm planning, smart-grid management, and long-term renewable-energy strategies in data-scarce environments.
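
The reported 512-256-128 dense topology is straightforward to reproduce in outline. A minimal Keras sketch; the input feature count and training settings beyond the layer widths are plausible assumptions, not the authors' published configuration:

```python
import tensorflow as tf

# Surrogate WPD regressor with the reported 512-256-128 dense topology.
# Assumed inputs: wind components at 10 m/100 m, surface temperature,
# pressure, elevation, plus grid coordinates.
n_features = 9
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),  # wind power density in W/m^2
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
# Train with a chronological 70-20-10 split rather than a random shuffle,
# as the paper does, so the test period genuinely post-dates training data.
```
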

22 pages, 1044 KB  
Article
Explaining Food Waste Dissimilarities in the European Union: An Analysis of Economic, Demographic, and Educational Dimensions
by Claudiu George Bocean
Foods 2025, 14(24), 4244; https://doi.org/10.3390/foods14244244 - 10 Dec 2025
Abstract
Food waste remains a persistent sustainability challenge for the European Union, revealing how economic development, demographic structures, and educational attainment intersect to shape consumption behavior. Although rising prosperity can enhance efficiency, it often encourages overproduction and habits of abundance that increase food waste. This study investigates the structural drivers behind the variation in per capita food waste across EU member states by examining the combined influences of economic growth, human capital, and population density. Using a cross-country dataset, the analysis integrates factorial methods to identify latent relationships among socioeconomic indicators, a multilayer perceptron to capture nonlinear dependencies, and cluster analysis to classify countries according to shared development and education patterns. The results show that higher income and consumption levels tend to elevate food waste. Nevertheless, this effect is moderated when educational attainment and public awareness are stronger, highlighting the role of knowledge in shaping responsible consumption. The neural network further demonstrates that the relationship between prosperity and waste is not linear but mediated by the cognitive and social capacities of each society. Cluster patterns reveal regional models where sustainability policies and cultural norms contribute to more efficient food management. Overall, the study emphasizes that food waste arises from structural disparities rather than isolated behaviors, offering an evidence-based foundation for integrated EU policies that support more sustainable and equitable resource use.
(This article belongs to the Special Issue Recent Advances in Sustainable Food Manufacturing)
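
The three-stage pipeline (latent factors, a multilayer perceptron for nonlinear dependencies, clustering of countries) maps onto standard tooling. A schematic scikit-learn version with synthetic data; the indicator set, shapes, and coefficients are assumptions, not the study's dataset:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 27 countries x 5 socioeconomic indicators
# (e.g., GDP per capita, consumption, tertiary education, density, awareness).
rng = np.random.default_rng(42)
X = rng.normal(size=(27, 5))
food_waste_kg = 120 + 10 * X[:, 0] - 8 * X[:, 2] + rng.normal(scale=5, size=27)

Xs = StandardScaler().fit_transform(X)
# Stage 1: latent factors; Stage 2: nonlinear regression; Stage 3: clustering.
factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(Xs)
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(Xs, food_waste_kg)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)
print(mlp.score(Xs, food_waste_kg), np.bincount(clusters))
```
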

20 pages, 2501 KB  
Article
Field-Deployable Kubernetes Cluster for Enhanced Computing Capabilities in Remote Environments
by Teodor-Mihail Giurgică, Annamaria Sârbu, Bernd Klauer and Liviu Găină
Appl. Sci. 2025, 15(24), 12991; https://doi.org/10.3390/app152412991 - 10 Dec 2025
Abstract
This paper presents a portable cluster architecture based on a lightweight Kubernetes distribution designed to provide enhanced computing capabilities in isolated environments. The architecture is validated in two operational scenarios: (1) machine learning operations (MLOps) for on-site learning, fine-tuning and retraining of models and (2) web hosting for isolated or resource-constrained networks, providing resilient service delivery through failover and load balancing. The cluster leverages low-cost Raspberry Pi 4B units and virtualized nodes, integrated with Docker containerization, Kubernetes orchestration, and Kubeflow-based workflow optimization. System monitoring with Prometheus and Grafana offers continuous visibility into node health, workload distribution, and resource usage, supporting early detection of operational issues within the cluster. The results show that the proposed dual-mode cluster can function as a compact, field-deployable micro-datacenter, enabling both real-time Artificial Intelligence (AI) operations and resilient web service delivery in field environments where autonomy and reliability are critical. In addition to performance and availability measurements, power consumption, scalability bottlenecks, and basic security aspects were analyzed to assess the feasibility of such a platform under constrained conditions. Limitations are discussed, and future work includes scaling the cluster, evaluating GPU/TPU-enabled nodes, and conducting field tests in realistic tactical environments.
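
To give a flavor of the monitoring side: with the official Kubernetes Python client, a watchdog script on the cluster can poll node health the same way the dashboards do. A sketch assuming a reachable kubeconfig; this is generic tooling, not the authors' monitoring stack:

```python
from kubernetes import client, config

# Poll node health on a lightweight-Kubernetes cluster. Assumes a valid
# kubeconfig is available to the script (on k3s-style distributions it is
# typically copied from the master node).
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node reports a list of conditions; "Ready" is the liveness signal.
    ready = next((c.status for c in node.status.conditions
                  if c.type == "Ready"), "Unknown")
    alloc = node.status.allocatable
    print(f"{node.metadata.name}: Ready={ready}, "
          f"cpu={alloc['cpu']}, memory={alloc['memory']}")
```
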

17 pages, 1021 KB  
Article
A Lightweight CNN-Based Method for Micro-Doppler Feature-Based UAV Detection and Classification
by Luyan Zhang, Gangyi Tu, Yike Xu and Xujia Zhou
Electronics 2025, 14(24), 4831; https://doi.org/10.3390/electronics14244831 - 8 Dec 2025
Abstract
To address the high computational cost and significant resource consumption of radar Doppler-based target recognition, which limit its application in real-time embedded systems, this paper proposes a lightweight CNN (Convolutional Neural Network) approach for radar target identification. The proposed approach builds a deep convolutional neural network using range-Doppler maps and leverages data collected by frequency-modulated continuous wave (FMCW) radar from targets such as drones, vehicles, and pedestrians. This method enables efficient object detection and classification across a wide range of scenarios. To improve the performance of the proposed model, this study incorporates a coordinate attention mechanism within the convolutional neural network. This mechanism fine-tunes the network’s focus by dynamically adjusting the weights of different feature channels and spatial regions, allowing it to concentrate on the most informative areas. Experimental results show that the foundational architecture of the proposed deep learning model, RangeDopplerNet Type-1, effectively captures micro-Doppler features from range-Doppler maps across diverse targets. This capability enables precise detection and classification, with the model achieving an average recognition accuracy of 96.71%. The enhanced network architecture, RangeDopplerNet Type-2, reached an average accuracy of 98.08% while retaining a compact footprint of only 403 KB. Compared with standard lightweight models such as MobileNetV2, the proposed architecture reduces model size by 97.04%. This demonstrates that, while improving accuracy, the proposed architecture also significantly reduces both computational and storage overhead. The deep learning model introduced in this study is specifically tailored for deployment on resource-constrained platforms, including mobile and embedded systems, and provides an efficient and practical approach for the development of miniaturized, low-power devices.
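
Coordinate attention itself is compact enough to sketch. A simplified PyTorch version of the mechanism (following Hou et al.'s CVPR 2021 formulation in outline, not the paper's exact module):

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Simplified coordinate attention: global pooling is factorized into
    per-height and per-width pooling so the attention weights retain
    positional information along each spatial axis."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                   # height weights
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # width weights
        return x * a_h * a_w

print(CoordinateAttention(64)(torch.randn(2, 64, 32, 32)).shape)
```
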

27 pages, 36940 KB  
Article
An Energy-Efficient Fault Diagnosis Method for Subsea Main Shaft Bearings
by Jiawen Hu, Jingbao Hou, Tenglong Yang, Yixi Zhang and Zhenghua Chen
J. Mar. Sci. Eng. 2025, 13(12), 2329; https://doi.org/10.3390/jmse13122329 - 8 Dec 2025
Abstract
Main shaft bearings are among the critical rotating components of subsea drilling rigs, and their health status directly affects the efficiency and reliability of the drilling system. However, in the high-pressure, intensely noisy liquid environment of the deep sea, bearing vibration signals attenuate rapidly. As a result, fault-related features have a low signal-to-noise ratio (SNR), which poses a challenge for bearing health monitoring. In recent years, Deep Neural Network (DNN)-based fault diagnosis methods for subsea drilling rig bearings have become a research hotspot due to their strong potential for deep fault feature mining. Nevertheless, their reliance on high-power-consumption computational resources restricts their widespread application in subsea monitoring scenarios. To address these issues, this paper proposes a fault diagnosis method for the main shaft bearings of subsea drilling rigs that combines population coding with an adaptive-threshold k-winner-take-all (k-WTA) mechanism. The method exploits the noise robustness of population coding and the sparse activation induced by the adaptive k-WTA mechanism, achieving a noise-robust and energy-efficient fault diagnosis scheme. The experimental results confirm the effectiveness of the proposed method. In accuracy and generalization experiments on the CWRU benchmark dataset, the proposed method achieves diagnostic accuracy that is not inferior to other SOTA methods, indicating relatively strong generalization and robustness. On the Paderborn real-bearing benchmark dataset, the results highlight the importance of selecting features adapted to specific operating conditions. Additionally, in the noise robustness and energy efficiency experiments, the proposed method shows advantages in both noise resistance and energy efficiency.
(This article belongs to the Special Issue Deep-Sea Mineral Resource Development Technology and Equipment)
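
The two ingredients combine naturally: population coding spreads a noisy scalar across many tuned units, and k-WTA keeps only the top-k activations. A NumPy sketch of the encoding and sparsification steps (the Gaussian tuning curves and the fixed k are illustrative assumptions; the paper's threshold is adaptive):

```python
import numpy as np

def population_encode(value, n_neurons=32, sigma=0.08):
    """Gaussian population coding: each neuron has a preferred value on a
    grid in [0, 1]; activation falls off with distance from the input.
    Spreading one noisy scalar over many tuned units buys noise robustness."""
    centers = np.linspace(0.0, 1.0, n_neurons)
    return np.exp(-0.5 * ((value - centers) / sigma) ** 2)

def k_wta(activations, k=4):
    """k-winner-take-all: keep the k largest activations, zero the rest.
    The induced sparsity reduces active units and hence energy; an adaptive
    variant would choose the threshold from the activation statistics."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]
    out[winners] = activations[winners]
    return out

noisy_feature = 0.62 + np.random.default_rng(1).normal(scale=0.05)
sparse_code = k_wta(population_encode(noisy_feature))
print(np.count_nonzero(sparse_code), "of", sparse_code.size, "units active")
```
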

28 pages, 5016 KB  
Article
A Lightweight Improved YOLOv8-Based Method for Rebar Intersection Detection
by Rui Wang, Fangjun Shi, Yini She, Li Zhang, Kaifeng Lin, Longshun Fu and Jingkun Shi
Appl. Sci. 2025, 15(24), 12898; https://doi.org/10.3390/app152412898 - 7 Dec 2025
Abstract
As industrialized construction and smart building continue to advance, rebar-tying robots place higher demands on the real-time and accurate recognition of rebar intersections and their tying status. Existing deep learning-based detection methods generally rely on heavy backbone networks and complex feature-fusion structures, making it difficult to deploy them efficiently on resource-constrained mobile robots and edge devices, and there is also a lack of dedicated datasets for rebar intersections. In this study, 12,000 rebar mesh images were collected and annotated from two indoor scenes and one outdoor scene to construct a rebar-intersection dataset that supports both object detection and instance segmentation, enabling simultaneous learning of intersection locations and tying status. On this basis, a lightweight improved YOLOv8-based method for rebar intersection detection and segmentation is proposed. The original backbone is replaced with ShuffleNetV2, and a C2f_Dual residual module is introduced in the neck; the same improvements are further transferred to YOLOv8-seg to form a unified lightweight detection–segmentation framework for joint prediction of intersection locations and tying status. Experimental results show that, compared with the original YOLOv8L and several mainstream detectors, the proposed model achieves comparable or superior performance in terms of mAP@50, precision and recall, while reducing model size and computational cost by 51.2% and 58.1%, respectively, and significantly improving inference speed. The improved YOLOv8-seg also achieves satisfactory contour alignment and regional consistency for rebar regions and intersection masks. Owing to its combination of high accuracy and low resource consumption, the proposed method is well suited for deployment on edge-computing devices used in rebar-tying robots and construction quality inspection, providing an effective visual perception solution for intelligent construction.
(This article belongs to the Special Issue Advances in Smart Construction and Intelligent Buildings)
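
The ShuffleNetV2 backbone that replaces YOLOv8's original one owes its efficiency largely to cheap split convolutions plus channel shuffle. The shuffle operation itself is a few lines of PyTorch (the standard formulation, not the paper's code):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """ShuffleNetV2 channel shuffle: interleave channels across groups so
    information mixes between the branches of each unit, which the cheap
    split/grouped convolutions alone cannot do."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)   # (n, g, c/g, h, w)
    x = x.transpose(1, 2).contiguous()         # swap group and channel dims
    return x.view(n, c, h, w)

x = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
print(channel_shuffle(x, groups=2).flatten().tolist())
# channels 0..7 become [0, 4, 1, 5, 2, 6, 3, 7]
```
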

20 pages, 377 KB  
Article
An Enhanced Tuna Swarm Algorithm for Link Scheduling Strategies in Wireless Sensor Networks
by Sunyan Hong, Zhe Yang, Yang Shen and Yujian Wang
Mathematics 2025, 13(24), 3905; https://doi.org/10.3390/math13243905 - 6 Dec 2025
Abstract
In resource-constrained wireless sensor networks, efficient link scheduling is a well-studied challenge. The problem is NP-hard (NP, Nondeterministic Polynomial Time, denotes problems whose solutions can be verified in polynomial time but are computationally difficult to find), so traditional methods seldom yield optimal solutions within practical time limits. This research introduces a novel link scheduling strategy based on the Tuna Swarm Optimization (TSO-LS) algorithm to optimize the link scheduling performance of wireless sensor networks. This work enhances the tuna swarm algorithm’s search process by incorporating characteristics of the link scheduling problem, resulting in specialized algorithmic improvements for this scenario. Three principal improvements are presented: first, optimizing the individual update mechanism to expedite scheduling solutions; second, refining the leading-individual selection strategy to elevate global scheduling quality; and third, maintaining population diversity to prevent convergence on suboptimal scheduling schemes. In the experimental section, TSO-LS is compared with the Genetic Algorithm, Particle Swarm Optimization, Enhanced Particle Swarm Optimization, and Ant Colony Optimization. The results show that TSO-LS achieves a 13.3% improvement in energy efficiency and a 12.5% decrease in average latency. Under different experimental conditions, the TSO-LS strategy shortens the average latency to 10.5 ms, demonstrating outstanding overall performance. Furthermore, this strategy reduces node consumption from 0.41 mJ to 0.32 mJ, significantly extending the overall lifespan of the network.
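
The flavor of the per-individual update is easy to convey in outline. Below is a deliberately simplified spiral-style swarm step toward the current best solution, with random reinitialization for diversity; this is a schematic of the general tuna-swarm idea on a toy objective, not the paper's improved TSO-LS update equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):  # stand-in objective; TSO-LS would instead score a link schedule
    return float(np.sum(x ** 2))

dim, pop_size = 8, 20
pop = rng.uniform(-5, 5, size=(pop_size, dim))
for it in range(200):
    best = pop[np.argmin([sphere(x) for x in pop])].copy()
    for i in range(pop_size):
        if rng.random() < 0.05:                     # diversity reinjection
            pop[i] = rng.uniform(-5, 5, size=dim)
            continue
        l = rng.uniform(-1, 1)
        spiral = np.exp(l) * np.cos(2 * np.pi * l)  # spiral-style foraging step
        pop[i] = best + spiral * rng.random(dim) * (best - pop[i])

print(min(sphere(x) for x in pop))  # approaches 0 on this toy objective
```
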

22 pages, 698 KB  
Article
Model Predictive Load Frequency Control for Virtual Power Plants: A Mixed Time- and Event-Triggered Approach Dependent on Performance Standard
by Liangyi Pu, Jianhua Hou, Song Wang, Haijun Wei, Yanghaoran Zhu, Xiong Xu and Xiongbo Wan
Technologies 2025, 13(12), 571; https://doi.org/10.3390/technologies13120571 - 5 Dec 2025
Abstract
To improve the load frequency control (LFC) performance of power systems incorporating virtual power plants (VPPs) while reducing network resource consumption, a model predictive control (MPC) method based on a mixed time/event-triggered mechanism (MTETM) is proposed. This mechanism integrates an event-triggered mechanism (ETM) with a time-triggered mechanism (TTM), where the ETM avoids unnecessary signal transmission and the TTM ensures fundamental control performance. Subsequently, for the LFC system incorporating VPPs, an MPC problem with hard state constraints is formulated and transformed into a “min-max” optimisation problem. Through linear matrix inequalities, the original optimisation problem is equivalently transformed into an auxiliary optimisation problem, with the optimal control law solved via rolling optimisation. Theoretical analysis demonstrates that the proposed auxiliary optimisation problem possesses recursive feasibility, whilst the closed-loop system satisfies input-to-state stability. Finally, validation through case studies of two regional power systems demonstrates that the MTETM-based MPC approach outperforms the ETM-based MPC approach in control performance while maintaining a triggering rate of 33.3%. Compared with the TTM-based MPC algorithm, the MTETM-based MPC method reduces the triggering rate by 66.7% while maintaining nearly equivalent control performance. Consequently, the results validate the effectiveness of the MTETM-based MPC approach in conserving network resources while maintaining control performance.
(This article belongs to the Special Issue Next-Generation Distribution System Planning, Operation, and Control)
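
The mixed trigger itself is the easy part to sketch: transmit when the state has drifted far enough from the last transmitted value (event trigger), but never let more than a fixed interval pass without a transmission (time trigger). Schematic Python, with placeholder threshold and period rather than the paper's design:

```python
import numpy as np

class MixedTrigger:
    """Mixed time/event-triggered transmission: send when the state error
    exceeds a relative threshold (ETM) or when max_period steps have
    elapsed since the last send (TTM). Parameter values are placeholders."""
    def __init__(self, sigma=0.1, max_period=10):
        self.sigma, self.max_period = sigma, max_period
        self.last_sent, self.steps_since = None, 0

    def step(self, x):
        self.steps_since += 1
        event = (self.last_sent is None or
                 np.linalg.norm(x - self.last_sent) > self.sigma * np.linalg.norm(x))
        timeout = self.steps_since >= self.max_period  # TTM performance floor
        if event or timeout:
            self.last_sent, self.steps_since = x.copy(), 0
            return True   # transmit; the MPC recomputes the control law
        return False      # hold; saves network resources

trig = MixedTrigger()
x0 = np.array([1.0, 0.0])
sent = [trig.step(x0 + 0.01 * k * np.ones(2)) for k in range(30)]
print(f"transmissions: {sum(sent)}/30")  # well below one per step
```
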
