Search Results (2,089)

Search Parameters:
Keywords = optimal time node

27 pages, 6303 KB  
Article
An Efficient Remote Sensing Index for Soybean Identification: Enhanced Chlorophyll Index (NRLI)
by Dongmei Lyu, Chenlan Lai, Bingxue Zhu, Zhijun Zhen and Kaishan Song
Remote Sens. 2026, 18(2), 278; https://doi.org/10.3390/rs18020278 - 14 Jan 2026
Abstract
Soybean is a key global crop for food and oil production, playing a vital role in ensuring food security and supplying plant-based proteins and oils. Accurate information on soybean distribution is essential for yield forecasting, agricultural management, and policymaking. In this study, we developed an Enhanced Chlorophyll Index (NRLI) to improve the separability between soybean and maize—two spectrally similar crops that often confound traditional vegetation indices. The proposed NRLI integrates red-edge, near-infrared, and green spectral information, effectively capturing variations in chlorophyll and canopy water content during key phenological stages, particularly from flowering to pod setting and maturity. Building upon this foundation, we further introduce a pixel-wise compositing strategy based on the peak phase of NRLI to enhance the temporal adaptability and spectral discriminability in crop classification. Unlike conventional approaches that rely on imagery from fixed dates, this strategy dynamically analyzes annual time-series data, enabling phenology-adaptive alignment at the pixel level. Comparative analysis reveals that NRLI consistently outperforms existing vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Greenness and Water Content Composite Index (GWCCI), across representative soybean-producing regions in multiple countries. It improves overall accuracy (OA) by approximately 10–20 percentage points, achieving accuracy rates exceeding 90% in large, contiguous cultivation areas. To further validate the robustness of the proposed index, benchmark comparisons were conducted against the Random Forest (RF) machine learning algorithm. The results demonstrated that the single-index NRLI approach achieved competitive performance, comparable to the multi-feature RF model, with accuracy differences generally within 1–2%. In some regions, NRLI even outperformed RF. This finding highlights NRLI as a computationally efficient alternative to complex machine learning models without compromising mapping precision. This study provides a robust, scalable, and transferable single-index approach for large-scale soybean mapping and monitoring using remote sensing.
(This article belongs to the Special Issue Advances in Remote Sensing for Smart Agriculture and Digital Twins)
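The peak-phase compositing step lends itself to a compact illustration. Below is a minimal NumPy sketch of pixel-wise peak-phase compositing as the abstract describes it; the NRLI formula itself is not given in the abstract, so the index stack is arbitrary placeholder data, and peak_phase_composite is a hypothetical helper name.

```python
import numpy as np

def peak_phase_composite(index_stack, band_stack):
    """Pixel-wise compositing: for each pixel, keep the observation
    from the date at which the index time series peaks.

    index_stack: (T, H, W) per-date index values (e.g., NRLI)
    band_stack:  (T, H, W, B) per-date reflectance bands
    returns:     (H, W, B) composite aligned to each pixel's peak phase
    """
    peak_t = np.nanargmax(index_stack, axis=0)    # (H, W) date of peak
    h_idx, w_idx = np.indices(peak_t.shape)
    return band_stack[peak_t, h_idx, w_idx, :]    # gather per-pixel

# Toy demonstration with random data (T=12 monthly scenes, 4 bands).
rng = np.random.default_rng(0)
idx = rng.random((12, 64, 64))
bands = rng.random((12, 64, 64, 4))
composite = peak_phase_composite(idx, bands)
print(composite.shape)  # (64, 64, 4)
```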
38 pages, 7657 KB  
Article
Optimizing Energy Storage Systems with PSO: Improving Economics and Operations of PMGD—A Chilean Case Study
by Juan Tapia-Aguilera, Luis Fernando Grisales-Noreña, Roberto Eduardo Quintal-Palomo, Oscar Danilo Montoya and Daniel Sanin-Villa
Appl. Syst. Innov. 2026, 9(1), 22; https://doi.org/10.3390/asi9010022 - 14 Jan 2026
Abstract
This work develops a methodology for operating Battery Energy Storage Systems (BESSs) in distribution networks, connected in parallel with a medium- and small-scale photovoltaic Distributed Generator (PMGD), focusing on a real project located in the O’Higgins region of Chile. The objective is to increase energy sales by the PMGD while ensuring compliance with operational constraints related to the grid, PMGD, and BESSs, and optimizing renewable energy use. A real distribution network from Compañía General de Electricidad (CGE) comprising 627 nodes was simplified into a validated three-node, two-line equivalent model to reduce computational complexity while maintaining accuracy. A mathematical model was designed to maximize economic benefits through optimal energy dispatch, considering solar generation variability, demand curves, and seasonal energy sales and purchasing prices. An energy management system was proposed based on a master–slave methodology composed of Particle Swarm Optimization (PSO) and an hourly power flow using the successive approximation method. Advanced optimization techniques such as Monte Carlo (MC) and the Genetic Algorithm (GAP) were employed as comparison methods, supported by a statistical analysis evaluating the best and average solutions, repeatability, and processing times to select the most effective optimization approach. Results demonstrate that BESS integration efficiently manages solar generation surpluses, injecting energy during peak demand and high-price periods to maximize revenue, alleviate grid congestion, and improve operational stability, with PSO proving particularly efficient. This work underscores the potential of BESS in PMGD to support a more sustainable and efficient energy matrix in Chile, despite regulatory and technical challenges that warrant further investigation.
(This article belongs to the Section Applied Mathematics)
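The master–slave scheme is straightforward to sketch. The snippet below shows a bare-bones PSO over a 24-hour BESS schedule; the paper's slave stage (hourly power flow by successive approximation) is replaced with a hypothetical stand-in cost, and all limits, gains, and prices are illustrative assumptions.

```python
import numpy as np

# Minimal PSO sketch for a 24-hour BESS dispatch schedule. Each particle is
# one candidate schedule; a real implementation would evaluate it with the
# hourly power flow instead of the placeholder cost below.
rng = np.random.default_rng(1)
T, n_particles, iters = 24, 30, 200
p_max = 1.0                                                  # assumed BESS power limit (p.u.)
price = 1.0 + 0.5 * np.sin(np.arange(T) / 24 * 2 * np.pi)    # toy hourly price

def cost(schedule):
    # Placeholder objective: negative revenue from discharging at high prices,
    # plus a soft penalty keeping the daily energy balance near zero.
    revenue = np.sum(price * schedule)       # discharge > 0 earns revenue
    balance_penalty = 10.0 * np.sum(schedule) ** 2
    return -revenue + balance_penalty

x = rng.uniform(-p_max, p_max, (n_particles, T))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, -p_max, p_max)        # enforce power limits
    f = np.array([cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best cost:", pbest_f.min())
```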
16 pages, 1736 KB  
Article
Multi-Factor Cost Function-Based Interference-Aware Clustering with Voronoi Cell Partitioning for Dense WSNs
by Soundrarajan Sam Peter, Parimanam Jayarajan, Rajagopal Maheswar and Shanmugam Maheswaran
Sensors 2026, 26(2), 546; https://doi.org/10.3390/s26020546 - 13 Jan 2026
Abstract
Efficient clustering and cluster head (CH) selection are critical to prolonging the lifetime of wireless sensor networks (WSNs). However, traditional clustering algorithms such as LEACH and HEED perform poorly on dense WSNs because of unbalanced load distribution and high contention. In traditional methods, cluster heads are selected on residual energy alone, producing roughly circular cluster boundaries under an assumed uniform node distribution. As a result, cluster heads become overloaded in high-density regions while underutilized cluster heads accumulate in sparse regions, causing frequent cluster head changes that are unsuitable for real-time dynamic environments. To avoid these issues, this work develops a density-aware adaptive clustering (DAAC) protocol that optimizes CH selection and cluster formation in dense WSNs. Residual energy is combined with local node density and link quality into a single CH selection metric. The local density information lets the protocol identify the sparse and dense areas of the network that would otherwise cause CH congestion. DAAC also enforces a minimum inter-CH distance constraint to prevent CH crowding, and forms clusters with a multi-factor cost function that assigns nodes according to their distance and expected transmission energy. Re-clustering is triggered dynamically whenever CH energy depletion or a significant change in load density is detected. Unlike traditional circular cluster boundaries, DAAC uses dynamic Voronoi cells (VCs) to provide interference-aware coverage, and a hierarchical extension creates secondary CHs in extremely dense scenarios. The proposed model is implemented in MATLAB and compared against the traditional LEACH and HEED algorithms, showing network lifetime improvements of 20.53% and 32.51%, average packet delivery ratio increases of 8.14% and 25.68%, and total throughput enhancements of 140.15% and 883.51%, respectively.
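A rough sketch of the kind of multi-factor CH selection the abstract describes, assuming a weighted score over residual energy, local density, and link quality plus a greedy minimum inter-CH distance check; the weights, radii, and cost terms below are invented for illustration and are not the paper's.

```python
import numpy as np

# Hypothetical DAAC-style CH selection: combined score, then a greedy pass
# that enforces a minimum inter-CH distance.
rng = np.random.default_rng(2)
n, area, comm_r, min_ch_dist = 200, 100.0, 15.0, 20.0
pos = rng.uniform(0, area, (n, 2))
energy = rng.uniform(0.2, 1.0, n)          # residual energy (normalized)
link_q = rng.uniform(0.5, 1.0, n)          # e.g., smoothed link-quality estimate

dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
density = (dist < comm_r).sum(axis=1) / n  # local node density

score = 0.5 * energy + 0.3 * density + 0.2 * link_q   # assumed weights
chs = []
for i in np.argsort(-score):               # greedy, best score first
    if all(dist[i, j] >= min_ch_dist for j in chs):
        chs.append(i)

# Cluster formation: each node joins the CH minimizing a cost mixing distance
# with a toy expected transmission energy (here, distance squared).
cost = dist[:, chs] + 0.01 * dist[:, chs] ** 2
membership = np.array(chs)[np.argmin(cost, axis=1)]
print(f"{len(chs)} cluster heads selected")
```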
18 pages, 15405 KB  
Article
Electric Vehicle Route Optimization: An End-to-End Learning Approach with Multi-Objective Planning
by Rodrigo Gutiérrez-Moreno, Ángel Llamazares, Pedro Revenga, Manuel Ocaña and Miguel Antunes-García
World Electr. Veh. J. 2026, 17(1), 41; https://doi.org/10.3390/wevj17010041 - 13 Jan 2026
Abstract
Traditional routing algorithms optimizing for distance or travel time are inadequate for electric vehicles (EVs), which require energy-aware planning considering battery constraints and charging infrastructure. This work presents an energy-optimal routing system for EVs that integrates personalized consumption modeling with real-time environmental data. The system employs a Long Short-Term Memory (LSTM) neural network to predict State-of-Charge (SoC) consumption from real-world driving data, learning directly from spatiotemporal features including velocity, temperature, road inclination, and traveled distance. Unlike physics-based models requiring difficult-to-obtain parameters, this approach captures nonlinear dependencies and temporal patterns in energy consumption. The routing framework integrates static map data, dynamic traffic conditions, weather information, and charging station locations into a weighted graph representation. Edge costs reflect predicted SoC drops, while node penalties account for traffic congestion and charging opportunities. An enhanced A* algorithm finds optimal routes minimizing energy consumption. Experimental validation on a Nissan Leaf shows that the proposed end-to-end SoC estimator significantly outperforms traditional approaches. The model achieves an RMSE of 36.83 and an R2 of 0.9374, corresponding to a 59.91% reduction in error compared to physics-based formulas. Real-world testing on various routes further confirms its accuracy, with a Mean Absolute Error in the total route SoC estimation of 2%, improving upon the 3.5% observed for commercial solutions.
(This article belongs to the Section Propulsion Systems and Components)
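The graph-search core can be illustrated briefly. The sketch below runs A* with edge costs standing in for predicted SoC drops and node penalties for congestion, as the abstract outlines; the tiny graph, penalty values, and heuristic are hypothetical, and the real system would query the LSTM for each edge cost.

```python
import heapq

# Energy-aware A* sketch: edge weights are (assumed) predicted SoC drops in
# percent; node penalties model congestion/charging opportunities.
graph = {                       # node -> [(neighbor, predicted SoC drop %)]
    "A": [("B", 1.2), ("C", 2.0)],
    "B": [("D", 1.5)],
    "C": [("D", 0.4)],
    "D": [],
}
node_penalty = {"A": 0.0, "B": 0.3, "C": 0.0, "D": 0.0}   # e.g., congestion
heuristic = {"A": 1.0, "B": 0.5, "C": 0.4, "D": 0.0}       # admissible estimate

def a_star(start, goal):
    frontier = [(heuristic[start], 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, soc_drop in graph[node]:
            g2 = g + soc_drop + node_penalty[nxt]
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic[nxt], g2, nxt, path + [nxt]))
    return None

print(a_star("A", "D"))   # -> (2.4, ['A', 'C', 'D'])
```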
17 pages, 702 KB  
Article
Machine Learning the Decoherence Property of Superconducting and Semiconductor Quantum Devices from Graph Connectivity
by Quan Fu, Jie Liu, Xin Wang and Rui Xiong
Entropy 2026, 28(1), 89; https://doi.org/10.3390/e28010089 - 12 Jan 2026
Viewed by 120
Abstract
Quantum computing faces significant challenges from decoherence and noise, which limit the practical implementation of quantum algorithms. While substantial progress has been made in improving individual qubit coherence times, the collective behavior of interconnected qubit systems remains incompletely understood. The connectivity architecture plays a crucial role in determining overall system susceptibility to environmental noise, yet systematic characterization of this relationship has been hindered by computational complexity. We develop a machine learning framework that bridges graph features with quantum device characterization to predict decoherence lifetime directly from connectivity patterns. By representing quantum architectures as connected graphs and using 14 topological features as input to supervised learning models, we achieve accurate lifetime predictions with R2 > 0.96 for both superconducting and semiconductor platforms. Our analysis reveals fundamentally distinct decoherence mechanisms: superconducting qubits show high sensitivity to global connectivity measures (betweenness centrality δ1 = 0.484, spectral entropy δ1 = 0.480), while semiconductor quantum dots exhibit exceptional sensitivity to system scale (node count δ2 = 0.919, importance = 1.860). The complete failure of cross-platform model transfer (R2 scores of −0.39 and −433.60) emphasizes the platform-specific nature of optimal connectivity design. Our approach enables rapid assessment of quantum architectures without expensive simulations, providing practical guidance for noise-optimized quantum processor design.
(This article belongs to the Section Quantum Information)
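The pipeline shape (graph features in, lifetime out) is easy to mock up. The sketch below computes a few networkx topological features and fits a random forest; the synthetic lifetime target is an invented stand-in, not the paper's simulated decoherence data, and the feature set is only a small subset of the 14 the paper uses.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestRegressor

# Topological features of a (toy) connectivity graph feed a supervised
# regressor that predicts a lifetime-like quantity.
def graph_features(g):
    bc = nx.betweenness_centrality(g)
    deg = [d for _, d in g.degree()]
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        np.mean(deg),
        np.max(list(bc.values())),
        nx.average_clustering(g),
    ]

rng = np.random.default_rng(3)
X, y = [], []
for _ in range(200):
    n = int(rng.integers(5, 20))
    g = nx.gnp_random_graph(n, 0.3, seed=int(rng.integers(1_000_000)))
    f = graph_features(g)
    X.append(f)
    y.append(100.0 / (1.0 + f[0] * f[2]))   # toy target: larger/denser decays faster

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("feature importances:", model.feature_importances_.round(3))
```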
25 pages, 2617 KB  
Article
RF-Driven Adaptive Surrogate Models for LoRaDisC Network Performance Prediction in Smart Agriculture and Field Sensing Environments
by Showkat Ahmad Bhat, Ishfaq Bashir Sofi, Ming-Che Chen and Nen-Fu Huang
AgriEngineering 2026, 8(1), 27; https://doi.org/10.3390/agriengineering8010027 - 11 Jan 2026
Viewed by 75
Abstract
LoRa-based IoT systems are increasingly used in smart farming, greenhouse monitoring, and large-scale agricultural sensing, where long-range, energy-efficient communication is essential. However, estimating link quality metrics such as PRR, RSSI, and SNR typically requires continuous packet transmission and sequence logging, an impractical approach for power-constrained field nodes. This study proposes a deep learning-driven framework for real-time prediction of link- and network-level performance in multihop LoRa networks, targeting the LoRaDisC protocol commonly deployed in agricultural environments. By integrating Bayesian surrogate modeling with Random Forest-guided hyperparameter optimization, the system accurately predicts PRR, RSSI, and SNR using multivariate time series features. Experiments on a large-scale outdoor LoRa testbed (ChirpBox) show that aggregated link layer metrics strongly correlate with PRR, with performance influenced by environmental variables such as humidity, temperature, and field topology. The optimized model achieves a mean absolute error (MAE) of 8.83 and adapts effectively to dynamic environmental conditions. This work enables energy-efficient, autonomous communication in agricultural IoT deployments, supporting reliable field sensing, crop monitoring, livestock tracking, and other smart farming applications that depend on resilient low-power wireless connectivity.
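A minimal stand-in for the prediction task: regress PRR on aggregated link and environmental features with a random forest. The feature names and the synthetic PRR relationship below are assumptions for illustration; the paper's Bayesian surrogate and RF-guided hyperparameter search are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic link features and an invented PRR relationship (for shape only).
rng = np.random.default_rng(4)
n = 1000
rssi = rng.uniform(-120, -60, n)
snr = rng.uniform(-20, 10, n)
humidity = rng.uniform(20, 95, n)
temperature = rng.uniform(-5, 40, n)
prr = np.clip(1 / (1 + np.exp(-(snr + 0.05 * (rssi + 90)) / 3))
              - 0.001 * (humidity - 50) + rng.normal(0, 0.02, n), 0, 1)

X = np.column_stack([rssi, snr, humidity, temperature])
Xtr, Xte, ytr, yte = train_test_split(X, prr, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("MAE:", round(mean_absolute_error(yte, model.predict(Xte)), 4))
```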
20 pages, 3512 KB  
Article
Adaptive Edge–Cloud Framework for Real-Time Smart Grid Optimization with IIoT Analytics
by Omar Alharbi
Electronics 2026, 15(2), 300; https://doi.org/10.3390/electronics15020300 - 9 Jan 2026
Viewed by 118
Abstract
The large-scale integration of Distributed Energy Resources (DERs) in smart grids creates challenges related to real-time optimization, system scalability, and operational security. This paper presents GridOpt, a hybrid edge–cloud framework designed to address these challenges through distributed intelligence and coordinated control. In GridOpt, edge nodes handle latency-sensitive tasks, while cloud resources support the processing of large-scale grid data. Security is addressed through the integration of homomorphic encryption and blockchain-based consensus, together with an interoperability layer that enables coordination among heterogeneous grid components. Simulation results show that GridOpt achieves an average latency of 76 ms and an energy consumption of 25 Joules under high-throughput conditions. The framework further maintains scalability beyond 10 requests per second with a resource utilization of 54% in dense deployment scenarios. Comparative analysis indicates that GridOpt outperforms ECCGrid, JOintCS, and EdgeApp across key performance metrics.
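The edge/cloud split can be caricatured in a few lines: latency-sensitive, small tasks stay on edge nodes, everything else goes to the cloud. The thresholds and task fields below are illustrative assumptions, not GridOpt's actual placement policy.

```python
from dataclasses import dataclass

# Hypothetical task-placement rule for an edge-cloud grid framework.
@dataclass
class Task:
    name: str
    deadline_ms: float     # latency budget
    data_mb: float         # payload size

LATENCY_THRESHOLD_MS = 100.0   # assumed edge eligibility bound
EDGE_DATA_LIMIT_MB = 50.0      # assumed edge payload bound

def place(task: Task) -> str:
    if task.deadline_ms <= LATENCY_THRESHOLD_MS and task.data_mb <= EDGE_DATA_LIMIT_MB:
        return "edge"
    return "cloud"

tasks = [Task("voltage-control", 20, 0.5), Task("day-ahead-forecast", 60000, 800)]
for t in tasks:
    print(t.name, "->", place(t))
```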
25 pages, 3717 KB  
Article
Partition-Based Cooperative Decision-Making for High-Frequency Generator Tripping Control
by Wanli Shao, Haiyun Wang, Zhaowei Li, Hongli Zhang and Xuelian Wu
Processes 2026, 14(2), 237; https://doi.org/10.3390/pr14020237 - 9 Jan 2026
Viewed by 151
Abstract
To address the decline in system inertia and the risk of frequency instability resulting from high penetration of renewable energy, and to overcome the limitations of centralized control—such as high computational burden and slow response—as well as the lack of global coordination in decentralized control, this study proposes a cooperative decision-making strategy, based on system partitioning, for high-frequency generator tripping control. The method first combines spectral clustering with nodal frequency response correlation analysis to achieve dynamic system partitioning and the selection of characteristic monitoring nodes. A two-layer cooperative architecture consisting of zone controllers and a central controller is then established, in which the zone controllers are responsible for aggregating local information, while the central controller dynamically generates a zonal generator tripping priority sequence based on four indicators: regional power surplus ratio, equivalent inertia, frequency deviation, and Rate of Change of Frequency. A simulation study conducted on an actual provincial power grid in the northwest region with a renewable energy penetration rate of 31.1% showed that, compared with the grid's existing decentralized control strategy, this method not only achieves better frequency recovery but also reduces the generator tripping capacity by 6.29%. Compared with advanced centralized strategies, it reduces recovery time by 10% while maintaining good frequency recovery. By aggregating regional information to reduce the central computing load and coordinating control between regions through a global optimization mechanism, this strategy provides an effective method for high-frequency safety control in power grids with high renewable energy penetration.
(This article belongs to the Section Energy Systems)
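The partitioning step maps naturally to code. Below is a sketch that builds an affinity matrix from correlations of (synthetic) nodal frequency-response signals, applies spectral clustering, and picks one characteristic monitoring node per zone; the representative-node rule shown is an assumption, not necessarily the paper's criterion.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Zone partitioning from nodal frequency-response correlation (toy signals).
rng = np.random.default_rng(5)
n_nodes, n_samples, n_zones = 30, 500, 3

# Synthetic responses: three latent zone signals plus node-level noise.
zone_sig = rng.normal(size=(n_zones, n_samples))
true_zone = rng.integers(0, n_zones, n_nodes)
responses = zone_sig[true_zone] + 0.3 * rng.normal(size=(n_nodes, n_samples))

corr = np.corrcoef(responses)
affinity = np.abs(corr)                   # nonnegative affinity for clustering
labels = SpectralClustering(
    n_clusters=n_zones, affinity="precomputed", random_state=0
).fit_predict(affinity)

# One simple choice of characteristic monitoring node per zone: the node
# most correlated, on average, with the rest of its zone.
for z in range(n_zones):
    members = np.where(labels == z)[0]
    rep = members[np.argmax(affinity[np.ix_(members, members)].mean(axis=1))]
    print(f"zone {z}: {len(members)} nodes, monitor node {rep}")
```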
25 pages, 6136 KB  
Article
Design and Implementation of a Decentralized Node-Level Battery Management System Chip Based on Deep Neural Network Algorithms
by Muh-Tian Shiue, Yang-Chieh Ou, Chih-Feng Wu, Yi-Fong Wang and Bing-Jun Liu
Electronics 2026, 15(2), 296; https://doi.org/10.3390/electronics15020296 - 9 Jan 2026
Viewed by 136
Abstract
As Battery Management Systems (BMSs) continue to expand in both scale and capacity, conventional state-of-charge (SOC) estimation methods—such as Coulomb counting and model-based observers—face increasing challenges in meeting the requirements for cell-level precision, scalability, and adaptability under aging and operating variability. To address these limitations, this study integrates a Deep Neural Network (DNN)–based estimation framework into a node-level BMS architecture, enabling edge-side computation at each individual battery cell. The proposed architecture adopts a decentralized node-level structure with distributed parameter synchronization, in which each BMS node independently performs SOC estimation using shared model parameters. Global battery characteristics are learned through offline training and subsequently synchronized to all nodes, ensuring estimation consistency across large battery arrays while avoiding centralized online computation. This design enhances system scalability and deployment flexibility, particularly in high-voltage battery strings with isolated measurement requirements. The proposed DNN framework consists of two identical functional modules: an offline training module and a real-time estimation module. The training module operates on high-performance computing platforms—such as in-vehicle microcontrollers during idle periods or charging-station servers—using historical charge–discharge data to extract and update battery characteristic parameters. These parameters are then transferred to the real-time estimation chip for adaptive SOC inference. The decentralized BMS node chip integrates preprocessing circuits, a momentum-based optimizer, a first-derivative sigmoid unit, and a weight update module. The design is implemented using the TSMC 40 nm CMOS process and verified on a Xilinx Virtex-5 FPGA. Experimental results using real BMW i3 battery data demonstrate a Root Mean Square Error (RMSE) of 1.853%, with an estimation error range of [−4.346%, 4.324%].
(This article belongs to the Special Issue New Insights in Power Electronics: Prospects and Challenges)
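The chip's estimation core (sigmoid units, a first-derivative sigmoid path for backpropagation, and a momentum-based optimizer) corresponds to a small network like the NumPy sketch below; the layer sizes, features, and synthetic SOC data are assumptions, and the fixed-point hardware details are ignored.

```python
import numpy as np

# Tiny sigmoid MLP trained with momentum SGD; the "first-derivative sigmoid
# unit" corresponds to the s * (1 - s) terms in the backward pass.
rng = np.random.default_rng(6)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Toy data: SOC as an unknown function of voltage, current, temperature.
X = rng.uniform(-1, 1, (512, 3))
y = sigmoid(X @ np.array([1.5, -0.8, 0.3]) + 0.1).reshape(-1, 1)

W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
vel = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, mu = 0.5, 0.9

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # hidden layer
    out = sigmoid(h @ W2 + b2)               # SOC estimate in [0, 1]
    d_out = (out - y) * out * (1 - out)      # sigmoid first derivative
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = [X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0)]
    for p, g, v in zip([W1, b1, W2, b2], grads, vel):
        v *= mu
        v -= lr * g / len(X)                 # momentum update
        p += v

rmse = np.sqrt(np.mean((out - y) ** 2))
print(f"train RMSE: {rmse:.4f}")
```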
16 pages, 1371 KB  
Article
Enhancing Resilience in China’s Refined Oil Product Distribution Network: A Complex Network Theory Approach with Optimization Strategies
by Qingning Shen, Lin Lin, Tongtong Hou and Cen Song
Systems 2026, 14(1), 69; https://doi.org/10.3390/systems14010069 - 8 Jan 2026
Viewed by 158
Abstract
Amid escalating international geopolitical tensions and the ensuing great-power maneuvering, China's oil supply faces unprecedented threats. To safeguard against these risks and harness domestic resources more effectively, securing the stability of refined oil supply has become an urgent imperative. Complex network theory is integrated into oil product delivery logistics, accounting for transportation volumes, distances, and node importance. Through simulation, we evaluated the efficacy of each optimization scheme using a case study of a province in northwest China. The results demonstrate notable improvements in network robustness across all four strategies. Key-node protection emerged as the most effective, followed by oil depot resource optimization and network topology optimization, in descending order. By mitigating the risks stemming from international uncertainties, these strategies ensure the timely supply of refined oil products, thereby upholding the stable functioning of the national economy.
(This article belongs to the Section Complex Systems and Cybernetics)
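A standard complex-network robustness test consistent with the abstract: remove nodes (randomly vs. targeted by importance) and track the size of the largest connected component. The toy graph below stands in for the weighted distribution network.

```python
import networkx as nx
import numpy as np

# Robustness under node removal: largest-connected-component size after
# each removal, for targeted (high-degree first) vs. random attack orders.
def robustness_curve(g, order):
    g = g.copy()
    sizes = []
    for node in order:
        g.remove_node(node)
        if g.number_of_nodes() == 0:
            sizes.append(0)
            continue
        sizes.append(len(max(nx.connected_components(g), key=len)))
    return sizes

g = nx.barabasi_albert_graph(100, 2, seed=0)
by_degree = [n for n, _ in sorted(g.degree, key=lambda x: -x[1])]
rng = np.random.default_rng(7)
random_order = list(rng.permutation(list(g.nodes())))

targeted = robustness_curve(g, by_degree[:30])
random_rm = robustness_curve(g, random_order[:30])
print("LCC after 30 targeted removals:", targeted[-1])
print("LCC after 30 random removals:  ", random_rm[-1])
```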
20 pages, 3259 KB  
Article
Green Transportation Planning for Smart Cities: Digital Twins and Real-Time Traffic Optimization in Urban Mobility Networks
by Marek Lis and Maksymilian Mądziel
Appl. Sci. 2026, 16(2), 678; https://doi.org/10.3390/app16020678 - 8 Jan 2026
Viewed by 222
Abstract
This paper proposes a comprehensive framework for integrating Digital Twins (DT) with real-time traffic optimization systems to enhance urban mobility management in Smart Cities. Using the Pobitno Roundabout in Rzeszów as a case study, we established a calibrated microsimulation model (validated via the GEH statistic) that serves as the core of the proposed Digital Twin. The study goes beyond static scenario analysis by introducing an Adaptive Inflow Metering (AIM) logic designed to interact with IoT sensor data. While traditional geometrical upgrades (e.g., turbo-roundabouts) were analyzed, simulation results revealed that geometrical changes alone—without dynamic control—may fail under peak load conditions (resulting in LOS F). Consequently, the research demonstrates how the DT framework allows for the testing of “Software-in-the-Loop” (SiL) solutions where Python-based algorithms dynamically adjust inflow parameters to prevent gridlock. The findings confirm that combining physical infrastructure changes with digital, real-time optimization algorithms is essential for achieving sustainable “green transport” goals and reducing emissions in congested urban nodes.
(This article belongs to the Special Issue Green Transportation and Pollution Control)
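The AIM idea can be sketched as a feedback controller. Below is an ALINEA-style integral update of the metered inflow rate toward a target occupancy; the gain, bounds, target, and toy plant model are assumptions, not the paper's calibrated SiL logic.

```python
# Adaptive inflow metering sketch: an integral controller drives measured
# occupancy toward a target by adjusting the metered inflow rate.
K_I = 70.0                      # assumed integral gain (veh/h per occupancy error)
TARGET_OCC = 0.25               # desired circulating-flow occupancy
R_MIN, R_MAX = 200.0, 1800.0    # metering-rate bounds (veh/h)

rate = 900.0
occupancy = 0.40                # start congested
for step in range(20):
    # Toy plant: occupancy drifts toward a value proportional to inflow.
    occupancy += 0.1 * (rate / 3000.0 - occupancy)
    # ALINEA-style update: raise the rate below target occupancy, cut it above.
    rate = min(R_MAX, max(R_MIN, rate + K_I * (TARGET_OCC - occupancy)))
    if step % 5 == 0:
        print(f"step {step:2d}: occupancy={occupancy:.3f} rate={rate:6.1f}")
```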
27 pages, 3772 KB  
Article
Research on Three-Dimensional Simulation Technology Based on an Improved RRT Algorithm
by Nan Zhang, Yang Luan, Chengkun Li, Weizhou Xu, Fengju Zhu, Chao Ye and Nianxia Han
Electronics 2026, 15(2), 286; https://doi.org/10.3390/electronics15020286 - 8 Jan 2026
Viewed by 112
Abstract
As urban power grids grow increasingly complex and underground space resources become increasingly scarce, traditional two-dimensional cable design methods face significant challenges in spatial representation accuracy and design efficiency. This study proposes an automated cable path planning method based on an improved Rapidly exploring Random Tree (RRT) algorithm. This framework first introduces an enhanced RRT algorithm (referred to as ABS-RRT) that integrates adaptive stride, target-biased sampling, and Soft Actor-Critic reinforcement learning. This algorithm automates the planning of serpentine cable laying paths in confined environments such as cable tunnels and manholes. Subsequently, through trajectory simplification and smoothing optimization, it generates final paths that are safe, smooth, and compliant with engineering specifications. Simulation validation on a typical cable tunnel project in a city’s core area demonstrates that compared to the traditional RRT algorithm, this approach reduces path planning time by over 57%, decreases path length by 8.1%, and lowers the number of nodes by 52%. These results validate the algorithm’s broad application potential in complex urban power grid projects.
(This article belongs to the Special Issue Planning, Scheduling and Control of Grids with Renewables)
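Two of the named ABS-RRT ingredients, target-biased sampling and an adaptive stride, are easy to show in 2D; the Soft Actor-Critic learning component is omitted, and the obstacle layout and stride rule below are illustrative assumptions.

```python
import numpy as np

# Goal-biased RRT with a simple adaptive stride: steps shrink near the obstacle.
rng = np.random.default_rng(8)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
obstacle_c, obstacle_r = np.array([5.0, 5.0]), 2.0
GOAL_BIAS, BASE_STEP = 0.2, 0.8

def collision(p):
    return np.linalg.norm(p - obstacle_c) < obstacle_r

nodes, parents = [start], {0: None}   # parents lets a path be traced back
for _ in range(5000):
    sample = goal if rng.random() < GOAL_BIAS else rng.uniform(0, 10, 2)
    i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - sample))
    direction = sample - nodes[i]
    if np.linalg.norm(direction) < 1e-9:
        continue
    # Adaptive stride: shorter steps when the nearest node is close to the obstacle.
    clearance = np.linalg.norm(nodes[i] - obstacle_c) - obstacle_r
    step = BASE_STEP * min(1.0, max(0.25, clearance / obstacle_r))
    new = nodes[i] + step * direction / np.linalg.norm(direction)
    if collision(new):
        continue
    nodes.append(new)
    parents[len(nodes) - 1] = i
    if np.linalg.norm(new - goal) < BASE_STEP:
        print(f"path found with {len(nodes)} tree nodes")
        break
```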
23 pages, 2112 KB  
Article
An Adaptive Compression Method for Lightweight AI Models of Edge Nodes in Customized Production
by Chun Jiang, Mingxin Hou and Hongxuan Wang
Sensors 2026, 26(2), 383; https://doi.org/10.3390/s26020383 - 7 Jan 2026
Viewed by 189
Abstract
In customized production environments featuring multi-task parallelism, the efficient adaptability of edge intelligent models is essential for ensuring the stable operation of production lines. However, rapidly generating deployable lightweight models under conditions of frequent task changes and constrained hardware resources remains a major challenge for current edge intelligence applications. This paper proposes an adaptive lightweight artificial intelligence (AI) model compression method for edge nodes in customized production lines to overcome the limited transferability and insufficient flexibility of traditional static compression approaches. First, a task requirement analysis model is constructed based on accuracy, latency, and power-consumption demands associated with different production tasks. Then, the hardware information of edge nodes is structurally characterized. Subsequently, a compression-strategy candidate pool is established, and an adaptive decision engine integrating ensemble reinforcement learning (RL) and Bayesian optimization (BO) is introduced. Finally, through an iterative optimization mechanism, compression ratios are dynamically adjusted using real-time feedback of inference latency, memory usage, and recognition accuracy, thereby continuously enhancing model performance in edge environments. Experimental results demonstrate that, in typical object-recognition tasks, the lightweight models generated by the proposed method significantly improve inference efficiency while maintaining high accuracy, outperforming conventional fixed compression strategies and validating the effectiveness of the proposed approach in adaptive capability and edge-deployment performance.
(This article belongs to the Special Issue Artificial Intelligence and Edge Computing in IoT-Based Applications)
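The iterative mechanism reduces to a measure-and-adjust loop. In the sketch below a simple rule stands in for the RL/BO decision engine, and deploy_and_measure is a hypothetical stand-in for real edge-node feedback; task thresholds and the measurement model are invented.

```python
# Feedback-driven compression-ratio adjustment against task requirements.
TASK = {"max_latency_ms": 50.0, "min_accuracy": 0.90}

def deploy_and_measure(ratio):
    # Hypothetical feedback: more compression -> faster but less accurate.
    latency = 120.0 * (1.0 - ratio) + 10.0
    accuracy = 0.97 - 0.1 * ratio ** 2
    return latency, accuracy

ratio = 0.3
for it in range(10):
    latency, accuracy = deploy_and_measure(ratio)
    print(f"iter {it}: ratio={ratio:.2f} latency={latency:.1f}ms acc={accuracy:.3f}")
    if latency > TASK["max_latency_ms"]:
        ratio = min(0.95, ratio + 0.1)     # compress harder to meet latency
    elif accuracy < TASK["min_accuracy"]:
        ratio = max(0.05, ratio - 0.05)    # back off to recover accuracy
    else:
        break                              # both requirements satisfied
```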
29 pages, 1215 KB  
Article
Cost-Optimal Coordination of PV Generation and D-STATCOM Control in Active Distribution Networks
by Luis Fernando Grisales-Noreña, Daniel Sanin-Villa, Oscar Danilo Montoya, Rubén Iván Bolaños and Kathya Ximena Bonilla Rojas
Sci 2026, 8(1), 8; https://doi.org/10.3390/sci8010008 - 7 Jan 2026
Viewed by 95
Abstract
This paper presents an intelligent operational strategy that performs the coordinated dispatch of active and reactive power from PV distributed generators (PV DGs) and Distributed Static Compensators (D-STATCOMs) to support secure and economical operation of active distribution networks. The problem is formulated as a nonlinear optimization problem that explicitly represents the P and Q control capabilities of Distributed Energy Resources (DER), encompassing small-scale generation and compensation units connected at the distribution level, such as PV generators and D-STATCOM devices, adjusting their reference power setpoints to minimize daily operating costs, including energy purchasing and DER maintenance, while satisfying device power limits and the voltage and current constraints of the grid. To solve this problem efficiently, a parallel version of the Population Continuous Genetic Algorithm (CGA) is implemented, enabling simultaneous evaluation of candidate solutions and significantly reducing computational time. The strategy is assessed on the 33- and 69-node benchmark systems under deterministic and uncertainty scenarios derived from real demand and solar-generation profiles from a Colombian region. In all cases, the proposed approach achieved the lowest operating cost, outperforming state-of-the-art metaheuristics such as Particle Swarm Optimization (PSO), Sine Cosine Algorithm (SCA), and Crow Search Algorithm (CSA), while maintaining power limits, voltages and line currents within secure ranges, exhibiting excellent repeatability with standard deviations close to 0.0090%, and reducing execution time by more than 68% compared with its sequential counterpart. The main contributions of this work are: a unified optimization model for joint PQ control in PV and D–STATCOM units, a robust codification mechanism that ensures stable convergence under variability, and a parallel evolutionary framework that delivers optimal, repeatable, and computationally efficient energy management in distribution networks subject to realistic operating uncertainty.
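The parallel evaluation idea is simple to sketch: score a population of candidate P/Q setpoint schedules concurrently, as the parallel CGA does with its power-flow-based fitness. The quadratic toy objective below replaces the real daily-cost evaluation, and the population size is arbitrary.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

T = 24  # hourly setpoints per candidate

def fitness(candidate):
    # Hypothetical cost: deviation from a smooth reference dispatch; a real
    # implementation would run the power flow and compute daily cost here.
    ref = np.sin(np.linspace(0, np.pi, T))
    return float(np.sum((candidate - ref) ** 2))

def main():
    rng = np.random.default_rng(9)
    population = [rng.uniform(0, 1, T) for _ in range(64)]
    with ProcessPoolExecutor() as pool:      # evaluate candidates in parallel
        scores = list(pool.map(fitness, population))
    best = int(np.argmin(scores))
    print(f"best candidate {best}, cost {scores[best]:.3f}")

if __name__ == "__main__":
    main()
```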
20 pages, 1096 KB  
Article
A New Ant Colony Optimization-Based Dynamic Path Planning and Energy Optimization Model in Wireless Sensor Networks for Mobile Sink by Using Mixed-Integer Linear Programming
by Fangyan Chen, Xiangcheng Wu, Zhiming Wang, Weimin Qi and Peng Li
Biomimetics 2026, 11(1), 44; https://doi.org/10.3390/biomimetics11010044 - 6 Jan 2026
Viewed by 163
Abstract
Wireless sensor networks (WSNs) are now widely applied to environmental monitoring and industrial control thanks to their low-cost, low-energy sensor nodes. However, WSNs comprise large numbers of energy-limited sensor nodes, which requires simultaneously balancing energy consumption, transmission delay, and network lifetime to avoid the formation of energy holes. In nature, gregarious herbivores such as the white-bearded wildebeest of the African savanna employ a "fast-transit and selective-dwell" strategy when searching for water: they cross low-value regions quickly and prolong their stay in nutrient-rich pastures, minimizing energy cost while maximizing nutrient gain. Ants, meanwhile, dynamically evaluate the "energy-to-reward" ratio of a path through pheromone concentration and its evaporation rate, achieving globally optimal foraging. Inspired by these two complementary biological mechanisms, this study proposes a novel ACO-inspired optimization model formulated via mixed-integer linear programming (MILP). By mapping pheromone intensity and evaporation rate into the MILP energy constraints and cost functions, the model couples discrete decisions (path selection) with continuous variables (dwell time) for dynamic path planning and energy optimization of a mobile sink, constituting a multi-objective optimization. First, adjustable weight coefficients in the MILP model allow flexible trade-offs between objectives such as data transmission delay and energy-consumption balance. Second, the method transforms complex path planning and scheduling problems into deterministic optimization models with theoretical guarantees of global optimality. Finally, experimental results show that the model effectively optimizes network performance and significantly improves energy efficiency while ensuring real-time performance and an extended network lifetime.
(This article belongs to the Section Biomimetic Design, Constructions and Devices)
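The ACO side of the mapping can be illustrated directly: pheromone-and-heuristic next-hop selection with evaporation and deposit. The MILP embedding is not reproduced here; the topology and parameters below are toy values.

```python
import numpy as np

# Classic ACO transition rule for a sink tour over n waypoints:
# p(i -> j) proportional to tau[i,j]^alpha * (1/dist[i,j])^beta.
rng = np.random.default_rng(10)
n = 8
dist = rng.uniform(1, 10, (n, n))
np.fill_diagonal(dist, np.inf)
tau = np.ones((n, n))             # pheromone levels
ALPHA, BETA, RHO = 1.0, 2.0, 0.1  # pheromone weight, heuristic weight, evaporation

def tour(start=0):
    path, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = path[-1]
        cand = np.array(sorted(unvisited))
        w = tau[i, cand] ** ALPHA * (1.0 / dist[i, cand]) ** BETA
        path.append(rng.choice(cand, p=w / w.sum()))
        unvisited.discard(path[-1])
    return path

best_len = np.inf
for _ in range(100):
    p = tour()
    length = sum(dist[a, b] for a, b in zip(p, p[1:]))
    tau *= 1 - RHO                       # evaporation
    for a, b in zip(p, p[1:]):
        tau[a, b] += 1.0 / length        # deposit: shorter tours reinforce more
    best_len = min(best_len, length)
print("best tour length:", round(best_len, 2))
```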