Search Results (18,131)

Search Parameters:
Keywords = learning state

31 pages, 2863 KB  
Article
A Physics-Informed Hybrid Ensemble for Robust and High-Fidelity Temperature Forecasting in PMSMs
by Rifath Bin Hossain, Md Maruf Al Hasan, Md Imran Khan, Monzur Ahmed, Yuting Lin and Xuchao Pan
World Electr. Veh. J. 2026, 17(3), 133; https://doi.org/10.3390/wevj17030133 - 5 Mar 2026
Abstract
The deployment of artificial intelligence in safety-critical industrial systems is hindered by a core trust deficit, as models trained via empirical risk minimization often fail catastrophically in out-of-distribution (OOD) scenarios. We address this challenge by developing a physics-informed hybrid ensemble that achieves state-of-the-art accuracy and robustness for Permanent Magnet Synchronous Motor (PMSM) temperature forecasting. Our methodology first calibrates a Lumped-Parameter Thermal Network (LPTN) to serve as a physics engine for generating physically consistent data augmentations, which then pre-trains a Temporal Convolutional Network (TCN) encoder via self-supervision, with the final prediction assembled from the physics model’s baseline guess and a correction learned by an ensemble of gradient boosting models on a rich, multi-modal feature set. Evaluated against a suite of strong baselines, our hybrid ensemble achieves a state-of-the-art Root Mean Squared Error of 5.24 °C on a challenging OOD stress test composed of the most chaotic operational profiles. Most compellingly, our model’s error decreased by an unprecedented 10.68% under these extreme stress conditions where standard, purely data-driven models collapsed. This demonstrated robustness, combined with a statistically valid Coverage Under Shift (CUS) Gap of only 1.43%, provides a complete blueprint for building high-performance, trustworthy AI, enabling safer and more efficient control of critical cyber-physical systems and motivating future research into physics-guided pre-training for other industrial assets. Full article

22 pages, 1687 KB  
Article
Data-Driven Offline Compensation of Robotic Welding Trajectories Using 3D Optical Metrology in Industrial Manufacturing
by Alexandru Costinel Filip, Dorian Cojocaru and Ionel Cristian Vladu
Appl. Sci. 2026, 16(5), 2510; https://doi.org/10.3390/app16052510 - 5 Mar 2026
Abstract
The geometric variability of industrial components represents a persistent challenge in robotic arc welding, particularly in high-volume manufacturing environments where parts are positioned in fixtures based on nominal CAD assumptions. Even moderate deviations in dimensions or seating conditions can lead to weld defects, rework, and reduced process capability when conventional offline programming is employed. This paper presents an applied industrial workflow for adaptive robotic welding trajectory correction that integrates full-field 3D optical metrology with a data-driven deep reinforcement learning (DRL) model. Prior to welding, each component is scanned using a structured-light 3D system, and critical geometric deviations are extracted relative to the nominal CAD model. These deviations define a compact state representation that is mapped, via a trained DRL agent, to corrective translational and rotational adjustments of the welding trajectory. Importantly, all trajectory corrections are computed offline, ensuring compatibility with standard industrial robot controllers and avoiding real-time computational overheads. The proposed approach is validated using real production data from an industrial batch of 5000 components characterized by significant dimensional variability and limited process capability. Experimental results demonstrate a reduction in welding defects exceeding 90%, elimination of rework associated with improper part positioning, and an improvement of the overall process performance to a sigma level of 5.219. The results show that combining 3D optical metrology with learning-based trajectory adaptation enables robust compensation of part-level geometric deviations without mechanical fixture modifications. The proposed method provides a practical and scalable solution for improving welding quality in manufacturing environments affected by upstream variability and imperfect part positioning. Full article

34 pages, 2208 KB  
Article
Small Language Models for Phishing Website Detection: Cost, Performance, and Privacy Trade-Offs
by Georg Goldenits, Philip König, Sebastian Raubitzek and Andreas Ekelhart
J. Cybersecur. Priv. 2026, 6(2), 48; https://doi.org/10.3390/jcp6020048 - 5 Mar 2026
Abstract
Phishing websites pose a major cybersecurity threat, exploiting unsuspecting users and causing significant financial and organisational harm. Traditional machine learning approaches for phishing detection often require extensive feature engineering, continuous retraining, and costly infrastructure maintenance. At the same time, proprietary large language models (LLMs) have demonstrated strong performance in phishing-related classification tasks, but their operational costs and reliance on external providers limit their practical adoption in many business environments. This paper presents a detection pipeline for malicious websites and investigates the feasibility of Small Language Models (SLMs) using raw HTML code and URLs. A key advantage of these models is that they can be deployed on local infrastructure, providing organisations with greater control over data and operations. We systematically evaluate 15 commonly used SLMs, ranging from 1 billion to 70 billion parameters, benchmarking their classification accuracy, computational requirements, and cost-efficiency. Our results highlight the trade-offs between detection performance and resource consumption. While SLMs underperform compared to state-of-the-art proprietary LLMs, the gap is moderate: the best SLM achieves an F1-score of 0.893 (Llama3.3:70B), compared to 0.929 for GPT-5.2, indicating that open-source models can provide a viable and scalable alternative to external LLM services. Full article
(This article belongs to the Section Privacy)

23 pages, 12225 KB  
Article
Stain-Standardized Deep Learning Framework for Robust Leukocyte Segmentation Across Heterogeneous Cytological Datasets
by Leila Ryma Lazouni, Mourtada Benazzouz, Fethallah Hadjila, Mohammed El Amine Lazouni and Mostafa El Habib Daho
Information 2026, 17(3), 262; https://doi.org/10.3390/info17030262 - 5 Mar 2026
Abstract
Accurate leukocyte segmentation remains challenging in automated hematological analysis due to staining variability, heterogeneous imaging conditions, and morphological diversity across cytological datasets, severely limiting deep learning model generalization. This work proposes a dual-module framework designed to achieve stain-invariant and robust leukocyte segmentation. The first module performs explicit stain standardization by combining a VGG-based encoder, a transformer bottleneck, and a convolutional decoder to harmonize diverse inputs toward a Wright–Giemsa reference appearance. The second module introduces a multi-encoder segmentation architecture integrating complementary spatial, leukocyte-specific, and nucleus-focused representations extracted from multiple color spaces. The framework is evaluated on six public and clinical datasets covering multiple staining protocols, magnifications, and imaging scenarios. Experimental results demonstrate consistent high performance, with Dice coefficients exceeding 96% on most datasets and systematic improvements over state-of-the-art methods. Extensive ablation studies confirm the synergistic contributions of stain-standardization and multi-encoder fusion to model robustness and cross-dataset generalization. This framework overcomes stain variability and domain shift, offering a practical tool for automated leukocyte analysis in clinical settings. Full article

17 pages, 4113 KB  
Article
PHER: A Method for Solving the Sparse Reward Problem of a Manipulator Grasping Task
by Dianfan Zhang, Mutian Yang, Yuxuan Wang, Yameng Dong, Shuhong Cheng and Kunpeng Zhao
Technologies 2026, 14(3), 164; https://doi.org/10.3390/technologies14030164 - 5 Mar 2026
Abstract
Off-policy reinforcement learning is usually used to train manipulator grasping models. During training, however, it is difficult to collect enough successful experience and reward signals; that is, rewards are sparse. Hindsight experience replay (HER) allows the agent to relabel completed states, but not all failed experiences contribute equally to learning. Given the many transitions the environment generates during operation, uniformly random sampling from the experience replay buffer results in low data utilization and slow convergence. This paper proposes a prioritized sampling method for the relabelled transitions, allowing the agent to access more important transitions earlier and accelerating convergence, and combines it with various off-policy reinforcement learning algorithms for training in simulated environments. The results demonstrate that hindsight experience replay with prioritization (PHER) converges significantly faster than other methods. Full article

26 pages, 513 KB  
Article
Consolidated Bioprocessing of Lignocellulosic Biomass: A Review of Experimental Advances and Modeling Approaches
by Mark Korang Yeboah and Dirk Söffker
Bioresour. Bioprod. 2026, 2(1), 4; https://doi.org/10.3390/bioresourbioprod2010004 - 5 Mar 2026
Abstract
Growing global energy demand and concerns over climate change and fossil fuel depletion have increased interest in sustainable bioproducts such as ethanol. Unlike first-generation (1G) ethanol derived from food crops (e.g., corn), second-generation (2G) ethanol is produced from lignocellulosic biomass, an abundant non-food resource that addresses key sustainability concerns. Consolidated bioprocessing (CBP) integrates enzyme production, hydrolysis, and fermentation into a single step, using either microbial consortia or engineered microorganisms, thereby simplifying the process and potentially reducing costs compared with separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). However, CBP systems are complex due to dynamic interactions among microbial communities, metabolic pathways, and process conditions. Addressing this complexity requires modeling approaches that capture nonlinear relationships and support robust process optimization. Machine learning (ML)-based models offer data-driven tools to represent complex bioprocess dynamics, improve predictive accuracy, and optimize bioproduct formation, thereby supporting progress toward commercial viability. Although CBP can be applied to a range of bioproducts, this review primarily focuses on lignocellulosic ethanol and closely related biofuels. The review provides a comprehensive overview of key CBP processes, the current state of CBP modeling, major limitations, and the emerging role of ML in addressing modeling challenges. It summarizes recent modeling techniques for CBP, including polynomial models and response surface methodologies, and discusses regression and neural network approaches in detail. Both first-principles and data-driven modeling strategies are considered, highlighting advances that can improve the scalability and efficiency of CBP for bioproduction. 
Overall, this review offers perspectives on modeling-enabled pathways for utilizing low-cost lignocellulosic biomass in sustainable bioprocessing. Full article

29 pages, 3905 KB  
Article
CS-MLAkNN: A Cost-Sensitive Adaptive k-Nearest Neighbors Algorithm for Imbalanced Multi-Label Learning
by Zhengyao Shen, Jicong Duan, Ying Wang and Hualong Yu
Symmetry 2026, 18(3), 448; https://doi.org/10.3390/sym18030448 - 5 Mar 2026
Abstract
Multi-label data usually carries a complex structural class imbalance, which significantly affects the overall predictive performance of multi-label learning models. Although many studies have investigated this problem, most existing methods rely on resampling, static cost weighting, or ensemble learning. Few studies simultaneously consider cost information and neighborhood size within the local statistical model of ML-kNN. To address this issue, this paper proposes a cost-sensitive adaptive k-nearest neighbors algorithm, named CS-MLAkNN, for imbalanced multi-label learning. The algorithm implements a dual cost-sensitive strategy at both the feature and label levels within the ML-kNN framework. Specifically, feature-level cost sensitivity is achieved through distance weighting during the training phase. In the prediction phase, label distribution information is incorporated into the posterior probability calculation to achieve label-level cost sensitivity. Moreover, the optimal number of neighbors (k) is determined adaptively through cross-validation. CS-MLAkNN maintains the simplicity and interpretability of the original ML-kNN, and meanwhile it explicitly introduces cost sensitivity and adaptiveness into three key steps: distance metric, posterior decision, and neighbor determination. Experimental results on 14 benchmark datasets demonstrate that the proposed method achieves optimal or near-optimal performance across various evaluation metrics. It also shows significant advantages over other state-of-the-art imbalanced multi-label learning algorithms. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Symmetry/Asymmetry)

19 pages, 1083 KB  
Review
Circulating RNA as a Functional Component of Liquid Biopsy in Cancer: Concepts, Classification, and Clinical Applications
by Kyung-Hee Kim and Byong Chul Yoo
Int. J. Mol. Sci. 2026, 27(5), 2403; https://doi.org/10.3390/ijms27052403 - 5 Mar 2026
Abstract
Liquid biopsy has become an integral component of precision oncology, with circulating tumor DNA serving as the dominant analyte for genomic profiling and disease monitoring. However, DNA-based approaches are intrinsically limited in their ability to capture dynamic cellular states, functional adaptation, and tumor–host interactions. Circulating RNA has emerged as a complementary class of liquid biopsy biomarkers that reflects active transcriptional programs and systemic biological responses. In this review, we conceptualize circulating RNA as a liquid transcriptome and propose a structured classification framework based on physical carriers, RNA biotypes, and layers of biological interpretation. We describe how circulating RNA signals encode tissue-of-origin information, cell-state dynamics, and host immune responses, thereby enabling system-level insight into cancer biology beyond mutation-centric analyses. Recent large-scale profiling efforts and advances in extracellular RNA characterization further support the biological relevance and analytical feasibility of circulating RNA across diverse biofluids. We discuss emerging applications of circulating RNA across the cancer continuum, including early cancer detection and multi-cancer screening, tissue-of-origin inference, longitudinal monitoring of treatment response, detection of adaptive resistance, and immunotherapy stratification. In parallel, we critically examine key technical, analytical, and computational challenges that currently limit reproducibility and clinical translation, emphasizing the importance of standardized workflows, transparent reporting, and multi-center validation. Finally, we outline future directions for integrating circulating RNA with genomic and proteomic biomarkers, supported by advances in artificial intelligence and machine learning. 
Collectively, this review positions circulating RNA as a functionally informative and clinically promising component of next-generation liquid biopsy strategies in oncology. Full article
(This article belongs to the Special Issue Advances in the Translational Preclinical Research)

17 pages, 33308 KB  
Article
Mapping of Threatened Vereda Wetlands in the Brazilian Midwest Using a Domain-Specific U-Net
by Jeaneth Machicao, Alexandre Augusto Barbosa, Leandro O. Salles, Peter Mann Toledo, Pedro Luiz P. Corrêa, Luiz Flamarion B. Oliveira, Rosane Garcia Collevatti, Eduardo Barroso de Souza and Jean Pierre H. B. Ometto
Remote Sens. 2026, 18(5), 791; https://doi.org/10.3390/rs18050791 - 5 Mar 2026
Abstract
The palm swamp landscapes, particularly the Vereda wetlands and their associated swamp gallery forests (VED.SGF), comprise essential yet threatened ecosystems within the Brazilian Cerrado. In addition to supporting significant portions of biodiversity, they provide critical ecosystem services such as storing and filtering excess rainwater and serving as major carbon reservoirs in organic soils. These wetlands are directly linked to the drainage systems of the headwaters of the main Cerrado river basins, which together account for about two-thirds of Brazil’s hydrographic basins. Mapping and managing VED.SGF ecosystems through remote sensing present major challenges addressed in this first study. Their narrow, dendritic, and complex tabular spatial pattern, often elongated along watersheds on scales of hundreds of kilometers, suffering distortions due to human impact, and the limited amount of annotated data make segmentation particularly challenging. Existing deep learning (DL) methods, typically pre-trained on natural images, struggle to capture the spectral and spatial intricacies of these ecosystems. This study introduces a trained-from-scratch U-Net model supported by field-based experimental procedures to ensure high-quality wetland annotations. The resulting dataset covers approximately 7300 km2 in western Bahia and provides domain-specific weights tailored to remote sensing applications. Using high-resolution (4.6 m) RGB mosaics, the model was trained, validated, and tested to establish a reproducible and scalable pipeline. The proposed method achieved robust results in an independent test area of 8040 km2, with a mean IoU of 0.728, F1-score of 0.843, and Cohen’s Kappa of 0.837. These results demonstrate consistent performance and strong generalization to new areas, establishing a scientifically reliable baseline that situates the model competitively within the current state of the art. 
By releasing both the model weights and annotated dataset, this study provides valuable resources to advance future research on mapping and monitoring these unique and strategic wetland ecosystems. Full article
(This article belongs to the Special Issue Intelligent Remote Sensing for Wetland Mapping and Monitoring)

23 pages, 2454 KB  
Article
Sustainable Maritime Applications with Lightweight Classifier Using Modified MobileNet
by Gandeva Bayu Satrya, Febrian Kurniawan, Gelar Budiman, Adelia Octora Pristisahida, Bledug Kusuma Prasaja Moesdradjad, I Nyoman Apraz Ramatryana and Salah Eddine Choutri
Technologies 2026, 14(3), 161; https://doi.org/10.3390/technologies14030161 - 5 Mar 2026
Abstract
The rapidly growing demand for seafood has resulted in the over-exploitation of marine resources, pushing certain species to the brink of extinction. Overfishing is one of the main issues in sustainable marine development. To support marine resource protection and sustainable fishing, this study proposes advanced fish classification techniques using state-of-the-art machine learning (ML). Specifically, the proposed method enables the precise identification of protected fish species, among other features. In this paper, we present a system-level optimization of the MobileNet architecture, termed M-MobileNet, designed to operate efficiently in resource-limited hardware environments. Our classifier is a refined modification of the well-known MobileNet neural network, resulting in a reduced parameter count. We have also collected, organized, and compiled an original and comprehensive labeled dataset of 37,462 images of fish native to the Indonesian archipelago. The proposed model is trained on this dataset to classify images of captured fish and accurately identify their species; the system additionally provides recommendations on the consumability of the catch. Compared to the original MobileNet structure, our model uses only 50% of the top-layer parameters, with approximately 42% GPU utilization on a GTX 860M, and achieves up to 97% classification accuracy. Considering the constrained computing capacity prevalent on many fishing vessels, our proposed model offers a practical solution for on-site fish classification. Moreover, synchronized deployment of the model across multiple vessels can provide valuable insights into the movement and location of various fish species. Full article

28 pages, 3180 KB  
Article
A Dual-Stream State-Space Fusion Network with Implicit Neural Representation for Hyperspectral–Multispectral Image Fusion
by Baisen Liu, Shuaiwei Wang, Hongxia Chu, Weiming Zheng and Weili Kong
Remote Sens. 2026, 18(5), 789; https://doi.org/10.3390/rs18050789 - 4 Mar 2026
Abstract
Hyperspectral–multispectral (HSI–MSI) image fusion aims to reconstruct high-spatial-resolution hyperspectral images (HR-HSIs) by combining the spectral fidelity of low-resolution HSIs (LR-HSIs) with the spatial details of high-resolution MSIs (HR-MSIs). A key challenge is preserving spectral–spatial consistency under cross-modal resolution mismatch, where inadequate long-range dependency modeling and unstable inter-modality interaction may induce spectral distortion and structural discontinuities. This paper proposes DSIR-Net (DSIR), a dual-stream state-space fusion architecture equipped with an implicit neural representation (INR) module. DSIR decouples spectral and spatial representation learning into two coordinated streams and leverages state-space modeling to aggregate global context efficiently during progressive fusion. Moreover, INR-based coordinate-conditioned refinement provides continuous sub-pixel compensation, enhancing high-frequency detail recovery while suppressing fusion-induced artifacts. Across four commonly used benchmark datasets, DSIR shows consistent advantages over the competing methods in both numerical metrics and visual reconstruction quality. In addition to sharper structural details, DSIR preserves spectral information more faithfully. Using the best result among the baselines on each dataset as reference, the PSNR improvements are 0.040 dB (Houston), 0.204 dB (PaviaU), 0.093 dB (Botswana), and 0.163 dB (Chikusei). Full article

25 pages, 1526 KB  
Review
An Evolution of Our Understanding of Decomplexification Estimation for Early Detection, Monitoring and Modeling of Human Physiology
by Milena Čukić Radenković, Camillo Porcaro and Victoria Lopez
Fractal Fract. 2026, 10(3), 169; https://doi.org/10.3390/fractalfract10030169 - 4 Mar 2026
Abstract
Human physiology is among the most complex systems in nature, characterized by intricate structural and functional networks and rich temporal dynamics. Electrophysiological signals produced by different tissues and organs reflect physiological activity and are inherently non-stationary, non-linear, and noisy. This work focuses on fractal analysis, a framework that captures the self-similar and scale-free properties of electrophysiological signals, which are considered the output of complex physiological structures generating complex processes. Central to this approach is the principle of ‘decomplexification’, whereby aging and disease are associated with a loss of physiological complexity. We discuss key algorithms, particularly Higuchi’s fractal dimension, which is often combined with other nonlinear measures and machine-learning models for real-time analysis of electrophysiological signals. Evidence shows that fractal metrics enable the early detection and monitoring of neurological and psychiatric disorders, outperforming traditional spectral measures. In movement and mood disorders, fractal and nonlinear features show high diagnostic accuracy. Beyond diagnostics, we discuss therapeutic applications, including the prediction of responsiveness to non-invasive brain stimulation. Here, we trace an evolution: from the use of a single fractal or nonlinear measure, to the application of several measures, to their use as features for machine learning, and finally to the recognition that a whole cluster of biomarkers is needed to reflect the state of the autonomic profile, which can then feed machine-actionable, ontology-based application profiles.
In addition, we discuss the fractal and fractional description of transport processes, which offer innovative improvement for a much more accurate description of physiological reality as a prerequisite for further modeling: for example, this is needed for digital twins to support the clinical translation of fractal analysis for personalized medicine. In essence, if one is trying to mathematically describe or quantify structures or processes in human physiology, fractal and fractional are the supreme and adequate approach to accurately model that reality. Full article

19 pages, 1677 KB  
Article
Optimization of Accuracy-Sensitive Task Offloading and Model Update in Vehicular Edge Computing
by Yuanjie Bai and Junbin Liang
Electronics 2026, 15(5), 1072; https://doi.org/10.3390/electronics15051072 - 4 Mar 2026
Abstract
Vehicular edge computing (VEC) enables vehicles to offload computation-intensive tasks to roadside units (RSUs) equipped with deep learning (DL) models, thereby supporting low-latency and accuracy-sensitive intelligent vehicular tasks. To adapt DL models to evolving task requirements and time-varying vehicular environments, the RSUs must consume limited computing and memory resources to retrieve optimized parameters from the cloud to update local models. While a DL model is being updated it cannot serve tasks, and while it is serving tasks it cannot be updated. Moreover, the limited computational and memory resources of RSUs make it challenging to determine which tasks to offload and which DL models to update in order to maximize task acceptance rates and quality of service. In this paper, we investigate the joint optimization of accuracy-sensitive task offloading and DL model updating in VEC systems. We formulate the problem as a mixed-integer nonlinear programming (MINLP) problem that aims to maximize a weighted utility function of task acceptance rate (AR) and quality of service (QoS), subject to latency, accuracy, and resource constraints. The formulated problem is shown to be NP-hard. To enable efficient decision making, we propose a heuristic algorithm termed the Load-Accuracy-Sensitive Joint Task Offloading and Model Update algorithm. The proposed algorithm leverages real-time system state information and jointly considers transmission feasibility, RSU workload, model accuracy matching, and queue-aware load balancing when making task offloading and model update decisions. Extensive simulation results demonstrate that the proposed algorithm outperforms benchmark algorithms. Full article
(This article belongs to the Section Computer Science & Engineering)
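To make the kind of load- and accuracy-aware offloading rule described above concrete, here is a minimal greedy sketch. All class fields, the `latency` callback, and the urgency-first ordering are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class RSU:
    capacity: float        # compute available this slot (assumed units)
    model_accuracy: float  # accuracy of the currently deployed DL model
    load: float = 0.0      # compute already committed this slot

@dataclass
class Task:
    demand: float          # compute required
    min_accuracy: float    # accuracy requirement
    deadline: float        # latency budget (s)

def greedy_offload(tasks, rsus, latency):
    """Greedy accuracy- and load-aware offloading (illustrative only).

    For each task, most urgent first, pick the feasible RSU (accuracy
    met, deadline met, capacity left) with the lightest current load,
    a stand-in for the paper's queue-aware load balancing.
    """
    accepted = []
    for t in sorted(tasks, key=lambda t: t.deadline):
        feasible = [r for r in rsus
                    if r.model_accuracy >= t.min_accuracy
                    and r.load + t.demand <= r.capacity
                    and latency(t, r) <= t.deadline]
        if feasible:
            best = min(feasible, key=lambda r: r.load)
            best.load += t.demand
            accepted.append((t, best))
    return accepted
```

A task whose demand exceeds every RSU's remaining capacity, or whose accuracy requirement no deployed model meets, is simply rejected, which is what drives the acceptance-rate term of the utility.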

20 pages, 7825 KB  
Article
STAG-Net: A Lightweight Spatial–Temporal Attention GCN for Real-Time 6D Human Pose Estimation in Human–Robot Collaboration Scenarios
by Chunxin Yang, Ruoyu Jia, Qitong Guo, Xiaohang Shi, Masahiro Hirano and Yuji Yamakawa
Robotics 2026, 15(3), 54; https://doi.org/10.3390/robotics15030054 - 4 Mar 2026
Abstract
Most existing research in human pose estimation focuses on predicting joint positions, paying limited attention to recovering the full 6D human pose, which comprises both 3D joint positions and bone orientations. Position-only methods treat joints as independent points, often resulting in structurally implausible poses and increased sensitivity to depth ambiguities—cases where poses share nearly identical joint positions but differ significantly in limb orientations. Incorporating bone orientation information helps enforce geometric consistency, yielding more anatomically plausible skeletal structures. Additionally, many state-of-the-art methods rely on large, computationally expensive models, which limit their applicability in real-time scenarios, such as human–robot collaboration. In this work, we propose STAG-Net, a novel 2D-to-6D lifting network that integrates Graph Convolutional Networks (GCNs), attention mechanisms, and Temporal Convolutional Networks (TCNs). By simultaneously learning joint positions and bone orientations, STAG-Net promotes geometrically consistent skeletal structures while remaining lightweight and computationally efficient. On the Human3.6M benchmark, STAG-Net achieves an MPJPE of 41.8 mm using 243 input frames. In addition, we introduce a lightweight single-frame variant, STG-Net, which achieves 50.8 mm MPJPE while operating in real time at 60 FPS using a single RGB camera. Extensive experiments on multiple large-scale datasets demonstrate the effectiveness and efficiency of the proposed approach. Full article
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)
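The MPJPE figures quoted above (41.8 mm and 50.8 mm) refer to the standard Human3.6M metric: per-joint Euclidean error averaged over joints and frames. A minimal sketch of that metric, plus an illustrative bone-direction error capturing the orientation component that position-only evaluation misses, might look like this; the function names and bone list are assumptions, not the paper's evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error, in the unit of the inputs.

    pred, gt: arrays of shape (frames, joints, 3). Euclidean distance
    per joint, averaged over joints and frames.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def bone_direction_error(pred, gt, bones):
    """Mean angular error (radians) between predicted and ground-truth
    bone directions; `bones` lists (parent, child) joint index pairs.
    """
    errs = []
    for p, c in bones:
        u = pred[:, c] - pred[:, p]
        v = gt[:, c] - gt[:, p]
        cos = np.einsum('fi,fi->f', u, v) / (
            np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1))
        errs.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return float(np.mean(errs))
```

Note that a uniform translation of the whole skeleton inflates MPJPE but leaves every bone direction untouched, which is exactly why two poses with near-identical MPJPE can differ badly in limb orientation.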

19 pages, 1359 KB  
Article
ESO-Enhanced Actor–Critic Reinforcement Learning-Optimised Trajectory Tracking Control for 3-DOF Marine Vessels
by Xiaoling Liang and Jiajian Li
Mathematics 2026, 14(5), 867; https://doi.org/10.3390/math14050867 - 4 Mar 2026
Abstract
This paper develops an extended-state-observer (ESO)-enhanced actor–critic reinforcement learning (RL) scheme for the trajectory tracking control of 3-DOF marine vessels subject to uncertain hydrodynamics and environmental disturbances. A coordinate-consistent error construction is provided to obtain an exact strict-feedback second-order uncertain template. On this basis, a Hamilton–Jacobi–Bellman (HJB)-inspired optimised control structure is implemented: the critic approximates the optimal value gradient and the actor generates the optimised control law. A key simplification is employed: rather than minimising the squared Bellman residual via complex gradients, we introduce an HJB-inspired actor–critic consistency regularisation through a weight-matching coupling. This yields computationally light online update laws and enables transparent Lyapunov-based stability analysis while not claiming exact HJB satisfaction or policy optimality. The ESO estimates the lumped uncertainty and provides feedforward compensation, so the RL module learns only the observer residual. A composite Lyapunov analysis establishes the semi-global uniform ultimate boundedness of tracking errors and the boundedness of all observer signals. A practical implementation with thruster allocation, explicit wind–wave–current disturbance shaping filters, and a theory-aligned ablation protocol is provided for reproducibility. Full article
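The ESO idea in the abstract, estimating a lumped uncertainty so that it can be compensated by feedforward, can be sketched for a generic second-order plant. Below is the textbook linear ESO with the usual bandwidth parameterisation of the gains, not the paper's exact observer design; the bandwidth `omega`, input gain `b`, and Euler discretisation are assumptions.

```python
import numpy as np

def eso_step(z, y, u, dt, omega=20.0, b=1.0):
    """One Euler step of a linear extended state observer (ESO).

    Assumed plant: x1' = x2, x2' = f(t) + b*u, where f is the lumped
    uncertainty/disturbance. z = [z1, z2, z3] estimates [x1, x2, f];
    the gains [3w, 3w^2, w^3] place all observer poles at -omega.
    """
    z1, z2, z3 = z
    e = y - z1                              # output estimation error
    dz1 = z2 + 3 * omega * e
    dz2 = z3 + b * u + 3 * omega**2 * e
    dz3 = omega**3 * e
    return np.array([z1 + dt * dz1, z2 + dt * dz2, z3 + dt * dz3])
```

With a constant unknown disturbance the third observer state converges to it, so a controller can subtract the estimate as feedforward and leave only the residual, which is the role the abstract assigns to the RL module.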
