Search Results (14,136)

Search Parameters:
Keywords = computational cost

28 pages, 2702 KB  
Article
Adaptive and Sustainable Smart Environments Using Predictive Reasoning and Context-Aware Reinforcement Learning
by Abderrahim Lakehal, Boubakeur Annane, Adel Alti, Philippe Roose and Soliman Aljarboa
Future Internet 2026, 18(1), 40; https://doi.org/10.3390/fi18010040 (registering DOI) - 8 Jan 2026
Abstract
Smart environments play a key role in improving user comfort, energy efficiency, and sustainability through intelligent automation. Nevertheless, real-world deployments still face major challenges, including network instability, delayed responsiveness, inconsistent AI decisions, and limited adaptability under dynamic conditions. Many existing approaches lack advanced context-awareness, effective multi-agent coordination, and scalable learning, leading to high computational cost and reduced reliability. To address these limitations, this paper proposes MACxRL, a lightweight Multi-Agent Context-Aware Reinforcement Learning framework for autonomous smart-environment control. The system adopts a three-tier architecture consisting of real-time context acquisition, lightweight prediction, and centralized RL-based decision learning. Local agents act quickly at the edge using rule-based reasoning, while a shared CxRL engine refines actions for global coordination, combining fast responsiveness with continuous adaptive learning. Experiments show that MACxRL reduces energy consumption by 45–60%, converges faster, and achieves more stable performance than standard and deep RL baselines. Future work will explore self-adaptive reward tuning and extend deployment to multi-room environments toward practical real-world realization. Full article
22 pages, 1710 KB  
Article
Shape Parameterization and Efficient Optimization Design Method for the Ray-like Underwater Gliders
by Daiyu Zhang, Daxing Zeng, Heng Zhou, Chaoming Bao and Qian Liu
Biomimetics 2026, 11(1), 58; https://doi.org/10.3390/biomimetics11010058 (registering DOI) - 8 Jan 2026
Abstract
To address the challenges of high computational cost and lengthy design cycles in the high-precision optimization of ray-like underwater gliders, this study proposes a high-accuracy, low-cost parametric modeling and optimization method. The proposed framework begins by extracting the characteristic contours of the manta ray and reconstructing the airfoil sections using the Class-Shape Transformation (CST) method, resulting in a flexible parametric geometry capable of smooth deformation. High-fidelity Computational Fluid Dynamics (CFD) simulations are employed to evaluate the hydrodynamic characteristics, and detailed flow field analyses are conducted to identify the most influential geometric features affecting lift and drag performance. On this basis, a Kriging-based sequential optimization framework is developed. The surrogate model is adaptively refined through dynamic infilling of sample points based on combined Mean Squared Prediction (MSP) and Expected Improvement (EI) criteria, thus improving optimization efficiency while maintaining predictive accuracy. Comparative case studies demonstrate that the proposed method achieves a 116% improvement in lift-to-drag ratio and a more uniform flow distribution, confirming its effectiveness in enhancing both design accuracy and computational efficiency. The results indicate that this approach provides a practical and efficient tool for the parametric design and hydrodynamic optimization of bio-inspired underwater vehicles. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Biomechanics and Biomimetics)
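The Expected Improvement (EI) infill criterion named in this abstract has a standard closed form under the usual Gaussian-surrogate assumption; a minimal sketch (the function name and test values below are ours, not the authors' code):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement for a minimization problem:
    EI = (f_best - mu) * Phi(z) + sigma * phi(z),  z = (f_best - mu) / sigma,
    where mu, sigma are the surrogate's (e.g. Kriging) predictive mean and std."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_best - mu) * cdf + sigma * pdf

# A candidate predicted below the current best, with some uncertainty,
# scores higher than a certain but unpromising one.
print(expected_improvement(mu=0.8, sigma=0.2, f_best=1.0) >
      expected_improvement(mu=1.1, sigma=0.01, f_best=1.0))
```

EI trades off exploitation (low predicted mean) against exploration (high predictive uncertainty), which is why it is commonly paired with adaptive infilling of surrogate sample points.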
27 pages, 3490 KB  
Article
Multimodal Minimal-Angular-Geometry Representation for Real-Time Dynamic Mexican Sign Language Recognition
by Gerardo Garcia-Gil, Gabriela del Carmen López-Armas and Yahir Emmanuel Ramirez-Pulido
Technologies 2026, 14(1), 48; https://doi.org/10.3390/technologies14010048 - 8 Jan 2026
Abstract
Current approaches to dynamic sign language recognition commonly rely on dense landmark representations, which impose high computational cost and hinder real-time deployment on resource-constrained devices. To address this limitation, this work proposes a computationally efficient framework for real-time dynamic Mexican Sign Language (MSL) recognition based on a multimodal minimal angular-geometry representation. Instead of processing complete landmark sets (e.g., MediaPipe Holistic with up to 468 keypoints), the proposed method encodes the relational geometry of the hands, face, and upper body into a compact set of 28 invariant internal angular descriptors. This representation substantially reduces feature dimensionality and computational complexity while preserving linguistically relevant manual and non-manual information required for grammatical and semantic discrimination in MSL. A real-time end-to-end pipeline is developed, comprising multimodal landmark extraction, angular feature computation, and temporal modeling using a Bidirectional Long Short-Term Memory (BiLSTM) network. The system is evaluated on a custom dataset of dynamic MSL gestures acquired under controlled real-time conditions. Experimental results demonstrate that the proposed approach achieves 99% accuracy and 99% macro F1-score, matching state-of-the-art performance while using dramatically fewer features. The compactness, interpretability, and efficiency of the minimal angular descriptor make the proposed system suitable for real-time deployment on low-cost devices, contributing toward more accessible and inclusive sign language recognition technologies. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
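Internal angular descriptors of the kind this abstract describes can be computed from triples of 2-D landmarks; an illustrative sketch (the function name and landmark values are assumptions, not the authors' 28-descriptor set):

```python
import math

def joint_angle(a, b, c):
    """Internal angle at landmark b (degrees) formed by segments b->a and b->c.
    Unlike raw (x, y) coordinates, such angles are invariant to translation
    and scale, which is what makes them compact pose descriptors."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding error
    return math.degrees(math.acos(cos_t))

# A right angle at the middle landmark:
print(joint_angle((1, 0), (0, 0), (0, 1)))  # 90.0
```

A frame's descriptor vector would then be the list of such angles at chosen landmark triples, fed as the per-timestep input to a sequence model such as a BiLSTM.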
34 pages, 4803 KB  
Review
Toward Integrated Computational Design: A Systematic Mapping of AAD–FEM Practices in Conceptual Structural Engineering
by Lars Olav Toppe, Villem Vaktskjold, Marcin Luczkowski, Francesco Mirko Massaro and Anders Rønnquist
Buildings 2026, 16(2), 271; https://doi.org/10.3390/buildings16020271 - 8 Jan 2026
Abstract
The early stages of structural design increasingly make use of computational tools that support rapid exploration, performance-informed decision-making, and closer interaction between design and engineering. This systematic mapping study examines how Algorithm-Aided Design (AAD) and the Finite Element Method (FEM) are applied and combined in conceptual design workflows. Based on a structured search across three academic databases and a coding scheme applied to 87 publications, the literature is mapped according to algorithmic strategies, FEM applications, element types, disciplinary domains, and levels of integration. The results show that algorithmic and predictive approaches are reported with increasing frequency after 2020, alongside growing use of surrogate models and optimisation routines. Linear-elastic analyses and shell- or beam-based models are frequently reported, particularly in civil engineering contexts, while nonlinear, dynamic, and solid-element analyses appear more prominently in mechanical domains. More tightly coupled AAD–FEM workflows become increasingly visible after 2021, reflecting a growing interest in real-time or near-real-time simulation feedback during early design exploration. At the same time, the literature highlights persistent challenges related to computational cost, fragmented toolchains, limited interoperability, and the relatively limited use of multiscale or advanced material models in conceptual design. Taken together, the findings suggest that continued progress toward more integrated AAD–FEM workflows is closely tied to advances in computational efficiency, improved data exchange and interoperability, and the development of more accessible design–analysis environments across disciplinary boundaries. Full article
28 pages, 6292 KB  
Article
RSICDNet: A Novel Regional Scribble-Based Interactive Change Detection Network for Remote Sensing Images
by Daifeng Peng, Chen He and Haiyan Guan
Remote Sens. 2026, 18(2), 204; https://doi.org/10.3390/rs18020204 - 8 Jan 2026
Abstract
To address the issues of inadequate performance and excessive interaction costs when handling large-scale and complex-shaped change areas with existing interaction forms, this paper proposes RSICDNet, an interactive change detection (ICD) model with regional scribble interaction. In this framework, regional scribble interaction is introduced for the first time to provide rich spatial prior information for accurate ICD. Specifically, RSICDNet first employs an interaction processing network to extract interactive features, and subsequently utilizes the High-Resolution Network (HRNet) backbone to extract features from bi-temporal remote sensing images concatenated along the channel dimension. To effectively integrate these two information streams, an Interaction Fusion and Refinement Module (IFRM) is proposed, which injects the spatial priors from the interactive features into the high-level semantic features. Finally, an Object Contextual Representation (OCR) module is applied to further refine feature representations, and a lightweight segmentation head is used to generate the final change map. Furthermore, a human–computer ICD application has been developed based on RSICDNet, significantly enhancing its potential for practical deployment. To validate the effectiveness of the proposed RSICDNet, extensive experiments are conducted against mainstream interactive deep learning models on the WHU-CD, LEVIR-CD, and CLCD datasets. The quantitative results demonstrate that RSICDNet achieves optimal Number of Interactions (NoI) metrics across all three datasets. Specifically, its NoI80 values reach 1.15, 1.45, and 3.42 on the WHU-CD, LEVIR-CD, and CLCD datasets, respectively. The qualitative results confirm a clear advantage for RSICDNet, which consistently delivers visually superior outcomes using the same or often fewer interactions. Full article
18 pages, 1326 KB  
Review
MR-Guided Radiotherapy in Oesophageal Cancer: From Principles to Practice—A Narrative Review
by Su Chen Fong, Eddie Lau, David S. Liu, Niall C. Tebbutt, Richard Khor, Trevor Leong, David Williams, Sergio Uribe and Sweet Ping Ng
Curr. Oncol. 2026, 33(1), 34; https://doi.org/10.3390/curroncol33010034 - 8 Jan 2026
Abstract
Oesophageal cancer remains a significant global health burden with poor survival outcomes despite multimodal treatment. Recent advances in magnetic resonance imaging (MRI) have opened opportunities to improve radiotherapy delivery. This review examines the role of MRI and MR-guided radiotherapy (MRgRT) in oesophageal cancer, focusing on applications in staging, treatment planning, and response assessment, with particular emphasis on magnetic resonance linear accelerator (MR-Linac)-based delivery. Compared to computed tomography (CT), MRI offers superior soft-tissue contrast, enabling more accurate tumour delineation and the potential for reduced treatment margins. Real-time MR imaging during treatment can facilitate motion management, while daily adaptive planning can accommodate anatomical changes throughout the treatment course. Functional MRI sequences, including diffusion-weighted and dynamic contrast-enhanced imaging, offer quantitative data for treatment response monitoring. Early clinical and dosimetric studies demonstrate that MRgRT can significantly reduce radiation dose to critical organs while maintaining target coverage. However, clinical evidence for MRgRT in oesophageal cancer is limited to small early-phase studies, with no phase II/III trials demonstrating improvements in survival, toxicity, or patient-reported outcomes. Long-term clinical benefits and cost-effectiveness remain unproven, highlighting the need for prospective outcome-focused studies to define the role for MRgRT within multimodality treatment pathways. Full article
(This article belongs to the Special Issue Adaptive Radiotherapy: Advanced Imaging for Personalised Treatment)
22 pages, 2918 KB  
Article
Multi-Attribute Physical-Layer Authentication Against Jamming and Battery-Depletion Attacks in LoRaWAN
by Azita Pourghasem, Raimund Kirner, Athanasios Tsokanos, Iosif Mporas and Alexios Mylonas
Future Internet 2026, 18(1), 38; https://doi.org/10.3390/fi18010038 - 8 Jan 2026
Abstract
LoRaWAN is widely used for IoT environmental monitoring, but its lightweight security mechanisms leave the physical layer vulnerable to availability attacks such as jamming and battery-depletion. These risks are particularly critical in mission-critical environmental monitoring systems. This paper proposes a multi-attribute physical-layer authentication (PLA) framework that supports uplink legitimacy assessment by jointly exploiting radio, energy, and temporal attributes, specifically RSSI, altitude, battery_level, battery_drop_speed, event_step, and time_rank. Using publicly available Brno LoRaWAN traces, we construct a device-aware semi-synthetic dataset comprising 230,296 records from 1921 devices over 13.68 days, augmented with energy, spatial, and temporal attributes and injected with controlled jamming and battery-depletion anomalies. Five classifiers (Random Forest, Multi-Layer Perceptron, XGBoost, Logistic Regression, and K-Nearest Neighbours) are evaluated using accuracy, precision, recall, F1-score, and AUC-ROC. The Multi-Layer Perceptron achieves the strongest detection performance (F1-score = 0.8260, AUC-ROC = 0.8953), with Random Forest performing comparably. Deployment-oriented computational profiling shows that lightweight models such as Logistic Regression and the MLP achieve near-instantaneous prediction latency (below 2 µs per sample) with minimal CPU overhead, while tree-based models incur higher training and storage costs but remain feasible for Network Server-side deployment. Full article
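The precision, recall, and F1-score this abstract reports follow from confusion-matrix counts; a minimal sketch for a binary attack-detection labelling (toy labels, not the paper's dataset):

```python
def prf1(y_true, y_pred):
    """Precision, recall and F1 for binary detection
    (1 = anomalous/attack uplink, 0 = legitimate)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 3 attack frames, 3 legitimate; one missed attack, one false alarm.
p, r, f = prf1([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```

F1 is the harmonic mean of precision and recall, which is why it is preferred over raw accuracy when attack and legitimate classes are imbalanced, as they typically are in such traces.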
35 pages, 1656 KB  
Review
Microgrid Optimization with Metaheuristic Algorithms—A Review of Technologies and Trends for Sustainable Energy Systems
by Ghassan Zubi and Sofoklis Makridis
Sustainability 2026, 18(2), 647; https://doi.org/10.3390/su18020647 - 8 Jan 2026
Abstract
Microgrids are evolving from simple hybrid systems into complex, multi-energy platforms with high-dimensional optimization challenges due to technological diversification, sector coupling, and increased data granularity. This review systematically examines the intersection of microgrid optimization and metaheuristic algorithms, focusing on the period from 2015 to 2025. We first trace the technological evolution of microgrids and identify the drivers of increased optimization complexity. We then provide a structured overview of metaheuristic algorithms—including evolutionary, swarm intelligence, physics-based, and human-inspired approaches—and discuss their suitability for high-dimensional search spaces. Through a comparative analysis of case studies, we demonstrate that metaheuristics such as genetic algorithms, particle swarm optimization, and the gray wolf optimizer can reduce the computation time to under 10% of that required by an exhaustive search while effectively handling multimodal, constrained objectives. The review further highlights the growing role of hybrid algorithms and the need to incorporate uncertainty into optimization models. We conclude that future microgrid design will increasingly rely on adaptive and hybrid metaheuristics, supported by standardized benchmark problems, to navigate the growing dimensionality and ensure resilient, cost-effective, and sustainable systems. This work provides a roadmap for researchers and practitioners in selecting and developing optimization frameworks for the next generation of microgrids. Full article
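Of the metaheuristics this review compares, particle swarm optimization is compact enough to sketch; a minimal box-constrained PSO on a toy objective (all hyperparameters, names, and the test function are illustrative, not drawn from any surveyed study):

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # pull toward own best
                           + c2 * r2 * (G[d] - X[i][d]))     # pull toward swarm best
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Sphere function: global minimum 0 at the origin.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
print(best_f < 1e-2)
```

In a microgrid sizing problem, `x` would instead encode decision variables such as PV capacity, battery size, and dispatch thresholds, with `f` the levelized cost plus constraint penalties.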
20 pages, 11036 KB  
Article
GMF-Net: A Gaussian-Matched Fusion Network for Weak Small Object Detection in Satellite Laser Ranging Imagery
by Wei Zhu, Weiming Gong, Yong Wang, Yi Zhang and Jinlong Hu
Sensors 2026, 26(2), 407; https://doi.org/10.3390/s26020407 - 8 Jan 2026
Abstract
Detecting small objects in Satellite Laser Ranging (SLR) CCD images is critical yet challenging due to low signal-to-noise ratios and complex backgrounds. Existing frameworks often suffer from high computational costs and insufficient feature extraction capabilities for such tiny targets. To address these issues, we propose the Gaussian-Matched Fusion Network (GMF-Net), a lightweight and high-precision detector tailored for SLR scenarios. The core scientific innovation lies in the Gaussian-Matched Convolution (GMConv) module. Unlike standard convolutions, GMConv is theoretically grounded in the physical Gaussian energy distribution of SLR targets. It employs multi-directional heterogeneous sampling to precisely match target energy decay, enhancing central feature response while suppressing background noise. Additionally, we incorporate a Cross-Stage Partial Pyramidal Convolution (CSPPC) to reduce parameter redundancy and a Cross-Feature Attention (CFA) module to bridge multi-scale features. To validate the method, we constructed the first dedicated SLR-CCD dataset. Experimental results show that GMF-Net achieves an mAP@50 of 93.1% and mAP@50–95 of 52.4%. Compared to baseline models, parameters are reduced by 26.6% (to 2.2 M) with a 27.4% reduction in computational load, demonstrating a superior balance between accuracy and efficiency for automated SLR systems. Full article
(This article belongs to the Section Remote Sensors)
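The Gaussian energy profile that motivates GMConv can be illustrated with a plain normalized 2-D Gaussian kernel; this is generic image-processing code, not the authors' multi-directional heterogeneous sampling module:

```python
import math

def gaussian_kernel_2d(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel: weights peak at the centre and decay
    with distance, mirroring the energy profile of a point-like laser return."""
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

k = gaussian_kernel_2d()
# Weights sum to 1 and peak at the kernel centre.
print(abs(sum(map(sum, k)) - 1.0) < 1e-9, k[2][2] == max(v for row in k for v in row))
```

Convolving with such a kernel boosts the response at Gaussian-shaped targets relative to uncorrelated background noise, which is the matched-filtering intuition behind the module.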
22 pages, 2930 KB  
Article
Developing and Assessing the Performance of a Machine Learning Model for Analyzing Drinking Behaviors in Minipigs for Experimental Research
by Frederik Deutch, Lars Schmidt Hansen, Firas Omar Saleh, Marc Gjern Weiss, Constanca Figueiredo, Cyril Moers, Anna Krarup Keller and Stefan Rahr Wagner
Sensors 2026, 26(2), 402; https://doi.org/10.3390/s26020402 - 8 Jan 2026
Abstract
Monitoring experimental animals is essential for ethical, scientific, and financial reasons. Conventional observation methods are limited by subjectivity and time constraints. Camera-based monitoring combined with machine learning offers a promising solution for automating the monitoring process. This study aimed to validate and assess the performance of a machine learning model for analyzing drinking behavior in minipigs. A novel, vision-based monitoring system was developed and tested to detect drinking behavior in minipigs. The system, based on low-cost Raspberry Pi units, enabled on-site video analysis. A dataset of 5297 images was used to train a YOLOv11n object detection model to identify key features such as pig heads and water faucets. Drinking events were defined by the spatial proximity of these features within video frames. The multi-class object detection model achieved an accuracy of above 97%. Manual validation using human-annotated ground truth on 72 h of video yielded an overall accuracy of 99.7%, with a precision of 99.7%, recall of 99.2%, and F1-score of 99.5%. Drinking patterns for three pigs were analyzed using 216 h of video. The results revealed a bimodal drinking pattern and substantial inter-pig variability. A limitation of the study was that the chosen methods could not distinguish between multiple pigs and did not quantify water intake. This study demonstrates the feasibility of a low-cost, computer vision-based system for monitoring drinking behavior in individually housed experimental pigs, supporting earlier detection of illness. Full article
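The abstract's definition of a drinking event by spatial proximity of detected features can be sketched as a centre-distance test between bounding boxes; the box format, function names, and pixel threshold below are illustrative assumptions, not the authors' implementation:

```python
import math

def center(box):
    """Centre of an (x1, y1, x2, y2) bounding box."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def is_drinking(head_box, faucet_box, max_dist=50.0):
    """Flag a candidate drinking event when the detected head and faucet
    centres lie within max_dist pixels of each other."""
    (hx, hy), (fx, fy) = center(head_box), center(faucet_box)
    return math.hypot(hx - fx, hy - fy) <= max_dist

print(is_drinking((100, 100, 140, 140), (150, 110, 170, 130)))  # centres 40 px apart: True
print(is_drinking((0, 0, 40, 40), (300, 300, 320, 320)))        # far apart: False
```

In practice a per-frame flag like this would be smoothed over time (e.g. requiring several consecutive positive frames) before being counted as one drinking event.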
15 pages, 16716 KB  
Article
MCAH-ACO: A Multi-Criteria Adaptive Hybrid Ant Colony Optimization for Last-Mile Delivery Vehicle Routing
by De-Tian Chu, Xin-Yu Cheng, Lin-Yuan Bai and Hai-Feng Ling
Sensors 2026, 26(2), 401; https://doi.org/10.3390/s26020401 - 8 Jan 2026
Abstract
The growing demand for efficient last-mile delivery has made routing optimization a critical challenge for logistics providers. Traditional vehicle routing models typically minimize a single criterion, such as travel distance or time, without considering broader social and environmental impacts. This paper proposes a novel Multi-Criteria Adaptive Hybrid Ant Colony Optimization (MCAH-ACO) algorithm for solving the delivery vehicle routing problem formulated as a Multiple Traveling Salesman Problem (MTSP). The proposed MCAH-ACO introduces three key innovations: a multi-criteria pheromone decomposition strategy that maintains separate pheromone matrices for each optimization objective, an adaptive weight balancing mechanism that dynamically adjusts criterion weights to prevent dominance by any single objective, and a 2-opt local search enhancement integrated with elite archive diversity preservation. A comprehensive cost function is designed to integrate four categories of factors: distance, time, social-environmental impact, and safety. Extensive experiments on real-world data from the Greater Toronto Area demonstrate that MCAH-ACO significantly outperforms existing approaches including Genetic Algorithm (GA), Adaptive GA, and standard Max–Min Ant System (MMAS), achieving 12.3% lower total cost and 18.7% fewer safety-critical events compared with the best baseline while maintaining computational efficiency. Full article
(This article belongs to the Section Vehicular Sensing)
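The 2-opt local search that MCAH-ACO integrates is a standard route-improvement routine; a minimal sketch on a toy instance (the distance matrix and tour below are ours):

```python
import math

def route_length(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def two_opt(route, dist):
    """2-opt local search: repeatedly reverse a segment of the tour whenever
    doing so shortens it, until no improving move remains."""
    best = route[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 2):
            for j in range(i + 1, len(best) - 1):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if route_length(cand, dist) < route_length(best, dist):
                    best, improved = cand, True
    return best

# Four points on a unit square; the self-crossing tour 0-2-1-3-0
# untangles into the square's perimeter of length 4.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour = two_opt([0, 2, 1, 3, 0], dist)
print(route_length(tour, dist))  # 4.0
```

In an ACO hybrid, 2-opt is typically applied to each ant's constructed tour before pheromone update, trading a little extra computation for markedly better solutions.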
28 pages, 2746 KB  
Systematic Review
A Review of the Transition from Industry 4.0 to Industry 5.0: Unlocking the Potential of TinyML in Industrial IoT Systems
by Margarita Terziyska, Iliana Ilieva, Zhelyazko Terziyski and Nikolay Komitov
Sci 2026, 8(1), 10; https://doi.org/10.3390/sci8010010 - 7 Jan 2026
Abstract
The integration of artificial intelligence into the Industrial Internet of Things (IIoT), supported by edge computing architectures, marks a new paradigm of intelligent automation. Tiny Machine Learning (TinyML) is emerging as a key technology that enables the deployment of machine learning models on ultra-low-power devices. This study presents a systematic review of 110 peer-reviewed publications (2020–2025) identified from Scopus, Web of Science, and IEEE Xplore following the PRISMA protocol. Bibliometric and thematic analyses were conducted using Biblioshiny and VOSviewer to identify major trends, architectural approaches, and industrial applications of TinyML. The results reveal four principal research clusters: edge intelligence and energy efficiency, federated and explainable learning, human-centric systems, and sustainable resource management. Importantly, the surveyed industrial implementations report measurable gains—typically reducing inference latency to the millisecond range, lowering on-device energy cost to the sub-milliwatt regime, and sustaining high task accuracy, thereby substantiating the practical feasibility of TinyML in real IIoT settings. The analysis indicates a conceptual shift from engineering- and energy-focused studies toward cognitive, ethical, and security-oriented perspectives aligned with the principles of Industry 5.0. TinyML is positioned as a catalyst for the transition from automation to cognitive autonomy and as a technological foundation for building energy-efficient, ethical, and sustainable industrial ecosystems. Full article
(This article belongs to the Section Computer Sciences, Mathematics and AI)
23 pages, 3153 KB  
Article
SSCW-YOLO: A Lightweight and High-Precision Model for Small Object Detection in UAV Scenarios
by Zhuolun He, Rui She, Bo Tan, Jiajian Li and Xiaolong Lei
Drones 2026, 10(1), 41; https://doi.org/10.3390/drones10010041 - 7 Jan 2026
Abstract
To address the problems of missed and false detections caused by insufficient feature quality in small object detection from UAV perspectives, this paper proposes a UAV small object detection algorithm based on YOLOv8 feature optimization. A spatial cosine convolution module is introduced into the backbone network to optimize spatial features, thereby alleviating the problem of small object feature loss and improving the detection accuracy and speed of the model. An improved C2f_SCConv feature fusion module is employed for feature integration, which effectively reduces feature redundancy in spatial and channel dimensions, thereby lowering model complexity and computational cost. Meanwhile, the WIoU loss function is used to replace the original CIoU loss function, reducing the interference of geometric factors in anchor box regression, enabling the model to focus more on low-quality anchor boxes, and enhancing its small object detection capability. Ablation and comparative experiments on the VisDrone dataset validate the effectiveness of the proposed algorithm for small object detection from UAV perspectives, while generalization experiments on the DOTA and SSDD datasets demonstrate that the algorithm possesses strong generalization performance. Full article
23 pages, 1096 KB  
Article
A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference
by Chi Zhang, Fenhua Bai, Jinhua Wan and Yu Chen
Electronics 2026, 15(2), 275; https://doi.org/10.3390/electronics15020275 - 7 Jan 2026
Abstract
Homomorphic encryption provides a powerful cryptographic solution for privacy-preserving deep neural network inference, enabling computation directly on encrypted data. However, its practical application is fundamentally constrained by the noise budget, a core component of homomorphic encryption schemes. The substantial multiplicative depth of modern deep neural networks rapidly consumes this budget, necessitating frequent, computationally expensive bootstrapping operations to refresh the noise; this bootstrapping has emerged as the primary performance bottleneck. Current noise management strategies are predominantly static, triggering bootstrapping at pre-defined, fixed intervals. This approach is sub-optimal for deep, complex architectures, leading to excessive computational overhead and potential accuracy degradation due to cumulative precision loss. To address this challenge, we propose a Deep Network-aware Adaptive Noise-budget Management mechanism, which formulates noise budget allocation as a sequential decision problem optimized via reinforcement learning. The mechanism comprises two components. First, we construct a layer-aware noise consumption prediction model that estimates the heterogeneous computational costs and noise accumulation across different network layers. Second, we design a Deep Q-Network-driven optimization algorithm: the agent is trained to derive a globally optimal policy, dynamically determining when and where in the network to execute bootstrapping, based on the real-time output of the noise predictor and the current network state. This shifts noise management from a static, pre-defined strategy to an adaptive, globally optimized one.
Experimental validation on several typical deep neural network architectures demonstrates that the proposed mechanism significantly outperforms state-of-the-art fixed strategies, markedly reducing redundant bootstrapping overhead while maintaining model performance.
(This article belongs to the Special Issue Security and Privacy in Artificial Intelligence Systems)
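The gain from adaptive over fixed-interval bootstrapping can be sketched in a few lines. This is a toy illustration only, not the paper's mechanism: the per-layer noise costs, the budget, and the simple threshold rule (standing in for the learned Deep Q-Network policy and the layer-aware noise predictor) are all invented here.

```python
# Toy comparison: fixed-interval vs. adaptive noise-budget management.
# Each layer consumes some amount of the noise budget; a "bootstrap"
# refreshes the budget at a high computational cost, so fewer is better.

def fixed_interval_bootstraps(layer_costs, budget, interval):
    """Bootstrap after every `interval` layers, regardless of need
    (the static strategy the abstract argues against)."""
    count, remaining = 0, budget
    for i, cost in enumerate(layer_costs, start=1):
        if cost > remaining:          # forced refresh to stay correct
            remaining = budget
            count += 1
        remaining -= cost
        if i % interval == 0:         # scheduled refresh, needed or not
            remaining = budget
            count += 1
    return count

def adaptive_bootstraps(layer_costs, budget):
    """Refresh only when the predicted next-layer cost would exhaust
    the remaining budget (a crude stand-in for the learned policy)."""
    count, remaining = 0, budget
    for cost in layer_costs:
        if cost > remaining:
            remaining = budget
            count += 1
        remaining -= cost
    return count

if __name__ == "__main__":
    costs = [3, 1, 4, 1, 5, 9, 2, 6]   # invented heterogeneous layer costs
    print(fixed_interval_bootstraps(costs, budget=10, interval=2))  # 5
    print(adaptive_bootstraps(costs, budget=10))                    # 3
```

On this invented depth profile the threshold policy refreshes 3 times versus 5 for the fixed schedule; the redundant-bootstrapping savings the abstract reports come from replacing the threshold rule with a globally optimized DQN policy.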
25 pages, 6216 KB  
Article
Three-Dimensional Surface High-Precision Modeling and Loss Mechanism Analysis of Motor Efficiency Map Based on Driving Cycles
by Jiayue He, Yan Sui, Qiao Liu, Zehui Cai and Nan Xu
Energies 2026, 19(2), 302; https://doi.org/10.3390/en19020302 - 7 Jan 2026
Abstract
Amid fossil-fuel depletion and worsening environmental impacts, battery electric vehicles (BEVs) are pivotal to the energy transition. Energy management in BEVs relies on accurate motor efficiency maps, yet real-time onboard control demands models that balance fidelity with computational cost. To address map inaccuracy under real driving and the high runtime cost of 2-D interpolation, we propose a driving-cycle-aware, physically interpretable quadratic polynomial-surface framework. We extract priority operating regions on the speed–torque plane from typical driving cycles and model electrical power Pe as a function of motor speed n and mechanical power Pm. A nested model family (M3–M6) and three fitting strategies (global, local, and region-weighted) are assessed using R², RMSE, a computational complexity index (CCI), and an integrated criterion for accuracy–complexity and stability (ICS). Simulations on the Worldwide Harmonized Light Vehicles Test Cycle, the China Light-Duty Vehicle Test Cycle, and the Urban Dynamometer Driving Schedule show that region-weighted fitting consistently achieves the best or near-best ICS; relative to global fitting, mean ICS decreases by 49.0%, 46.4%, and 90.6%, with the smallest variance. Regarding model order, the four-term M4 + Pm² model offers the best accuracy–complexity trade-off. Finally, the region-weighted M4 + Pm² polynomial model was integrated into a vehicle-level economic speed planning model based on the dynamic programming algorithm. In simulations covering a 27 km driving distance, this model reduced computational time by approximately 87% compared to a linear interpolation method based on a two-dimensional lookup table, while achieving an energy consumption deviation of about 0.01% relative to the lookup-table approach.
Results demonstrate that the proposed model significantly alleviates computational burden while maintaining high energy-consumption prediction accuracy, thereby providing robust support for real-time in-vehicle applications in whole-vehicle energy management.
(This article belongs to the Special Issue Challenges and Research Trends of Energy Management)
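The region-weighted fit of the four-term model M4, Pe ≈ a0 + a1·n + a2·Pm + a3·Pm², reduces to a weighted least-squares problem. The sketch below is an illustrative stand-in with invented operating points, weights, and units; in the paper the weights would come from the driving-cycle-derived priority regions on the speed–torque plane.

```python
# Weighted least-squares fit of the four-term surface
# M4: Pe ~= a0 + a1*n + a2*Pm + a3*Pm^2
# solved via the normal equations (X^T W X) a = X^T W y.

def weighted_lstsq(X, y, w):
    """Gaussian elimination with partial pivoting on the normal equations."""
    m, k = len(X), len(X[0])
    A = [[sum(w[i] * X[i][r] * X[i][c] for i in range(m)) for c in range(k)]
         for r in range(k)]
    b = [sum(w[i] * X[i][r] * y[i] for i in range(m)) for r in range(k)]
    for col in range(k):                          # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * k
    for r in range(k - 1, -1, -1):                # back substitution
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, k))) / A[r][r]
    return a

def fit_m4(samples, weights):
    """samples: list of (n, Pm, Pe) operating points; weights: per-point
    priorities (e.g. how often a region is visited in a driving cycle)."""
    X = [[1.0, n, pm, pm * pm] for n, pm, _ in samples]
    y = [pe for _, _, pe in samples]
    return weighted_lstsq(X, y, weights)
```

With exact synthetic data the fit recovers the generating coefficients; with real dyno data, up-weighting cycle-priority regions trades a little global accuracy for better accuracy where the motor actually operates, which is what the ICS criterion rewards.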