Search Results (766)

Search Parameters:
Keywords = data errors propagation modelling

17 pages, 1998 KB  
Article
Analysis of the Measurement Uncertainties in the Characterization Tests of Lithium-Ion Cells
by Thomas Hußenether, Carlos Antônio Rufino Júnior, Tomás Selaibe Pires, Tarani Mishra, Jinesh Nahar, Akash Vaghani, Richard Polzer, Sergej Diel and Hans-Georg Schweiger
Energies 2026, 19(3), 825; https://doi.org/10.3390/en19030825 - 4 Feb 2026
Abstract
The transition to renewable energy systems and electric mobility depends on the effectiveness, reliability, and durability of lithium-ion battery technology. Accurate modeling and control of battery systems are essential to ensure safety, efficiency, and cost-effectiveness in electric vehicles and grid storage. In engineering and materials science, battery models depend on physical parameters such as capacity, energy, state of charge (SOC), internal resistance, power, and self-discharge rate. These parameters are affected by measurement uncertainty. Despite the widespread use of lithium-ion cells, few studies quantify how measurement uncertainty propagates to derived battery parameters and affects predictive modeling. This study quantifies how uncertainty in voltage, current, and temperature measurements reduces the accuracy of derived parameters used for simulation and control. This work presents a comprehensive uncertainty analysis of 18650 format lithium-ion cells with nickel cobalt aluminum oxide (NCA), nickel manganese cobalt oxide (NMC), and lithium iron phosphate (LFP) cathodes. It applies the law of error propagation to quantify uncertainty in key battery parameters. The main result shows that small variations in voltage, current, and temperature measurements can produce measurable deviations in internal resistance and SOC. These findings challenge the common assumption that such uncertainties are negligible in practice. The results also highlight a risk for battery management systems that rely on these parameters for control and diagnostics. The results show that propagated uncertainty depends on chemistry because of differences in voltage profiles, kinetic limitations, and temperature sensitivity. This observation informs cell selection and testing for specific applications. Improved quantification and control of measurement uncertainty can improve model calibration and reduce lifetime and cost risks in battery systems. These results support more robust diagnostic strategies and more defensible warranty thresholds. This study shows that battery testing and modeling should report and propagate measurement uncertainty explicitly. This is important for data-driven and physics-informed models used in industry and research. Full article
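As a concrete illustration of the propagation step (not the authors' implementation), here is a minimal Python sketch of first-order Gaussian error propagation for a pulse-test internal resistance R = ΔV/ΔI; the voltage, current, and uncertainty values are hypothetical and not taken from the study.

```python
import numpy as np

# Minimal sketch of Gaussian error propagation for DC internal resistance,
# R = dV / dI (pulse test). Values below are hypothetical, not from the paper.
dV, u_dV = 0.085, 0.002      # voltage step [V] and its standard uncertainty
dI, u_dI = 10.0, 0.05        # current step [A] and its standard uncertainty

R = dV / dI                  # internal resistance [Ohm]

# First-order propagation: u_R^2 = (dR/dV)^2 * u_dV^2 + (dR/dI)^2 * u_dI^2
dR_dV = 1.0 / dI
dR_dI = -dV / dI**2
u_R = np.sqrt((dR_dV * u_dV) ** 2 + (dR_dI * u_dI) ** 2)

print(f"R = {R*1e3:.2f} mOhm +/- {u_R*1e3:.2f} mOhm (k=1)")
```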
18 pages, 1357 KB  
Article
Zero-Inflated Data Analysis Using Graph Neural Networks with Convolution
by Sunghae Jun
Computers 2026, 15(2), 104; https://doi.org/10.3390/computers15020104 - 2 Feb 2026
Abstract
Zero-inflated count data are characterized by an excessive frequency of zeros that cannot be adequately analyzed by a single distribution, such as Poisson or negative binomial. This problem is pervasive in many practical applications, including document–keyword matrices derived from text corpora, where most keyword frequencies are zero. Conventional statistical approaches, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, explicitly separate a structural zero component from a count component, but they typically assume independent observations and can be unstable when covariates are high-dimensional and sparse. To address these limitations, this paper proposes a graph-based zero-inflated learning framework that combines simple graph convolution (SGC) with zero-inflated count regression heads such as ZIP and ZINB. We first construct an observation graph by connecting similar samples, and then apply SGC to propagate and smooth features over the graph, producing convolutional representations that incorporate neighborhood information while remaining computationally lightweight. The resulting representations are used as covariates in ZIP and ZINB heads, which preserve probabilistic interpretability through maximum likelihood learning. Our experiments on simulated zero-inflated datasets with controlled zero ratios demonstrate that the proposed ZIP+SGC and ZINB+SGC consistently reduce prediction errors compared with their non-graph baselines, as measured by mean absolute error and root mean squared error. Overall, the proposed approach provides an efficient and interpretable way to integrate graph neural computation with zero-inflated modeling for sparse count prediction problems. Full article
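For readers unfamiliar with the two ingredients named above, a hedged toy sketch of SGC-style feature smoothing and a zero-inflated Poisson log-likelihood follows; the graph, data, and parameters are invented and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, zero-inflated Poisson counts.
n, d = 200, 5
X = rng.normal(size=(n, d))
pi_true, lam_true = 0.4, 3.0
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, size=n))

# SGC-style smoothing: X_smooth = (D^-1/2 (A + I) D^-1/2)^K X over a toy graph.
A = (rng.random((n, n)) < 0.02).astype(float)        # random similarity graph
A = np.maximum(A, A.T); np.fill_diagonal(A, 1.0)     # symmetric, self-loops
deg = A.sum(axis=1)
S = A / np.sqrt(np.outer(deg, deg))                  # normalized adjacency
K = 2
X_smooth = np.linalg.matrix_power(S, K) @ X          # propagated covariates

# ZIP log-likelihood for a single (pi, lambda), ignoring covariates.
def zip_loglik(pi, lam, y):
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    log_fact = np.array([np.sum(np.log(np.arange(1, k + 1))) for k in y])
    ll_pos = np.log(1 - pi) - lam + y * np.log(lam) - log_fact
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

print("smoothed feature shape:", X_smooth.shape)
print("ZIP log-likelihood at the true parameters:", round(zip_loglik(0.4, 3.0, y), 1))
```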
19 pages, 772 KB  
Article
EVformer: A Spatio-Temporal Decoupled Transformer for Citywide EV Charging Load Forecasting
by Mengxin Jia and Bo Yang
World Electr. Veh. J. 2026, 17(2), 71; https://doi.org/10.3390/wevj17020071 - 31 Jan 2026
Abstract
Accurate forecasting of citywide electric vehicle (EV) charging load is critical for alleviating station-level congestion, improving energy dispatching, and supporting the stability of intelligent transportation systems. However, large-scale EV charging networks exhibit complex and heterogeneous spatio-temporal dependencies, and existing approaches often struggle to scale with increasing station density or long forecasting horizons. To address these challenges, we develop a modular spatio-temporal prediction framework that decouples temporal sequence modeling from spatial dependency learning under an encoder–decoder paradigm. For temporal representation, we introduce a global aggregation mechanism that compresses multi-station time-series signals into a shared latent context, enabling efficient modeling of long-range interactions while mitigating the computational burden of cross-channel correlation learning. For spatial representation, we design a dynamic multi-scale attention module that integrates graph topology with data-driven neighbor selection, allowing the model to adaptively capture both localized charging dynamics and broader regional propagation patterns. In addition, a cross-step transition bridge and a gated fusion unit are incorporated to improve stability in multi-horizon forecasting. The cross-step transition bridge maps historical information to future time steps, reducing error propagation. The gated fusion unit adaptively merges the temporal and spatial features, dynamically adjusting their contributions based on the forecast horizon, ensuring effective balance between the two and enhancing prediction accuracy across multiple time steps. Extensive experiments on a real-world dataset of 18,061 charging piles in Shenzhen demonstrate that the proposed framework achieves superior performance over state-of-the-art baselines in terms of MAE, RMSE, and MAPE. Ablation and sensitivity analyses verify the effectiveness of each module, while efficiency evaluations indicate significantly reduced computational overhead compared with existing attention-based spatio-temporal models. Full article
(This article belongs to the Section Vehicle Management)
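A minimal sketch of a gated fusion of temporal and spatial embeddings in the spirit described above; the dimensions and weights are hypothetical and do not reflect the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-station features for one forecast step.
h_temporal = rng.normal(size=(32,))   # temporal-branch embedding
h_spatial = rng.normal(size=(32,))    # spatial-branch embedding

# Gated fusion: g = sigmoid(W [h_t; h_s] + b), out = g*h_t + (1-g)*h_s.
W = rng.normal(scale=0.1, size=(32, 64))
b = np.zeros(32)
g = sigmoid(W @ np.concatenate([h_temporal, h_spatial]) + b)
h_fused = g * h_temporal + (1.0 - g) * h_spatial

print("gate mean:", round(float(g.mean()), 3), "fused shape:", h_fused.shape)
```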
27 pages, 16299 KB  
Article
Numerical Simulation of Mechanical Parameters of Oil Shale Rock in Minfeng Subsag
by Yuhao Huo, Qing You and Xiaoqiang Liu
Processes 2026, 14(3), 476; https://doi.org/10.3390/pr14030476 - 29 Jan 2026
Abstract
Rock mechanical parameters can provide fundamental data for the numerical simulation of hydraulic fracturing, aiding in the construction of hydraulic fracturing models. Due to the laminated nature of shale, constructing a hydraulic fracturing model requires obtaining the rock mechanical parameters of each lamina and the bedding planes. However, acquiring the mechanical parameters of individual shale laminas through physical experiments demands that, after rock mechanics testing, cracks propagate along the centre of the laminae without connecting additional bedding planes, which imposes extremely high requirements on shale samples. Current research on the rock mechanics of the Minfeng subsag shale is relatively limited. Therefore, to obtain the rock mechanical parameters of each lamina and the bedding planes in the Minfeng subsag shale, a numerical simulation approach can be employed. The model, built using PFC2D, is based on prior X-ray diffraction (XRD) analysis, conventional thin-section observation, scanning electron microscopy (SEM), Brazilian splitting tests, and triaxial compression tests. It replicates the processes of the Brazilian splitting and triaxial compression experiments, assigning initial parameters to different bedding planes based on lithology. A trial-and-error method is then used to adjust the parameters until the simulated curves match the physical experimental curves, with errors within 10%. The model parameters for each lamina at this stage are then applied to single-lithology Brazilian splitting, biaxial compression, and three-point bending models for simulation, ultimately obtaining the tensile strength, uniaxial compressive strength, Poisson’s ratio, Young’s modulus, brittleness index, and Mode I fracture toughness for each lamina. Simulation results show that the Minfeng subsag shale exhibits strong heterogeneity, with all obtained rock mechanical parameters spanning a wide range. Calculated brittleness indices for each lamina mostly fall within the “good” and “medium” ranges, with carbonate laminae generally demonstrating better brittleness than felsic laminae. Fracture toughness also clearly divides into two ranges: mixed carbonate shale laminae have overall higher fracture toughness than mixed felsic laminae. Full article
(This article belongs to the Special Issue Advances in Reservoir Simulation and Multiphase Flow in Porous Media)
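To illustrate the trial-and-error calibration idea in isolation, here is a toy sketch that tunes one parameter of a placeholder "simulator" until the simulated curve matches a reference curve within 10% relative error; it stands in for, and is not, a PFC2D workflow.

```python
import numpy as np

# Toy trial-and-error calibration: adjust one stiffness-like parameter until a
# simulated stress-strain curve matches a reference curve. The quadratic
# "simulator" below is a placeholder, not a particle flow code model.
strain = np.linspace(0, 0.01, 50)
reference = 32e9 * strain - 1.1e12 * strain**2        # "experimental" curve [Pa]

def simulate(E_eff):
    return E_eff * strain - 1.1e12 * strain**2

best, best_err = None, np.inf
for E_trial in np.linspace(20e9, 45e9, 26):            # candidate parameters
    sim = simulate(E_trial)
    err = np.max(np.abs(sim[1:] - reference[1:]) / np.abs(reference[1:]))
    if err < best_err:
        best, best_err = E_trial, err

print(f"selected parameter: {best/1e9:.1f} GPa, max relative error: {best_err:.1%}",
      "(within 10%)" if best_err <= 0.10 else "(needs more trials)")
```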
19 pages, 8143 KB  
Article
300-GHz Photonics-Aided Wireless 2 × 2 MIMO Transmission over 200 m Using GMM-Enhanced Duobinary Unsupervised Adaptive CNN
by Luhan Jiang, Jianjun Yu, Qiutong Zhang, Wen Zhou and Min Zhu
Sensors 2026, 26(3), 842; https://doi.org/10.3390/s26030842 - 27 Jan 2026
Abstract
Terahertz wireless communication offers ultra-high bandwidth, enabling an extremely high data rate for next-generation networks. However, it faces challenges including severe propagation loss and atmospheric absorption, which limit the transmission rate and transmission distance. To address these problems, polarization division multiplexing (PDM) and antenna diversity techniques are utilized in this work to increase system capacity without changing the bandwidth of transmitted signals. Meanwhile, duobinary shaping is used to solve the problem of bandwidth limitation of components in the system, and the final duobinary signals are recovered by maximum likelihood sequence detection (MLSD). A Gaussian mixture model (GMM)-enhanced duobinary unsupervised adaptive convolutional neural network (DB-UACNN) is proposed to further deal with channel noise. Based on the technologies above, a 2 × 2 multiple-input multiple-output (MIMO) photonic-aided terahertz wireless transmission system at 300 GHz is demonstrated. Experimental results show that the signal-to-noise ratio (SNR) gain of duobinary shaping is up to 1.87 dB and 1.70 dB in X-polarization and Y-polarization. The proposed GMM-enhanced DB-UACNN also shows extra SNR gain of up to 2.59 dB and 2.63 dB in X-polarization and Y-polarization, compared to the conventional duobinary filter. The high transmission rate of 100 Gbit/s over the distance of 200 m is finally realized under a 7% hard-decision forward error correction (HD-FEC) threshold. Full article
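For context on the duobinary shaping mentioned above, a small sketch of the textbook duobinary precoder and three-level mapping with symbol-by-symbol recovery (the paper instead recovers symbols with MLSD); the bit stream is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=20)

# Duobinary precoding: p[n] = a[n] XOR p[n-1], then three-level shaping
# d[n] = p[n] + p[n-1] in {0, 1, 2} (textbook form; the paper uses MLSD).
p = np.zeros(bits.size, dtype=int)
prev = 0
for n, a in enumerate(bits):
    p[n] = a ^ prev
    prev = p[n]
d = p + np.concatenate(([0], p[:-1]))

# Symbol-by-symbol recovery: the middle level maps back to bit 1.
recovered = (d == 1).astype(int)
print("bits     :", bits)
print("duobinary:", d)
print("recovered:", recovered, "| errors:", int(np.sum(recovered != bits)))
```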
38 pages, 6181 KB  
Article
An AIoT-Based Framework for Automated English-Speaking Assessment: Architecture, Benchmarking, and Reliability Analysis of Open-Source ASR
by Paniti Netinant, Rerkchai Fooprateepsiri, Ajjima Rukhiran and Meennapa Rukhiran
Informatics 2026, 13(2), 19; https://doi.org/10.3390/informatics13020019 - 26 Jan 2026
Abstract
The emergence of low-cost edge devices has enabled the integration of automatic speech recognition (ASR) into IoT environments, creating new opportunities for real-time language assessment. However, achieving reliable performance on resource-constrained hardware remains a significant challenge, especially on the Artificial Internet of Things (AIoT). This study presents an AIoT-based framework for automated English-speaking assessment that integrates architecture and system design, ASR benchmarking, and reliability analysis on edge devices. The proposed AIoT-oriented architecture incorporates a lightweight scoring framework capable of analyzing pronunciation, fluency, prosody, and CEFR-aligned speaking proficiency within an automated assessment system. Seven open-source ASR models—four Whisper variants (tiny, base, small, and medium) and three Vosk models—were systematically benchmarked in terms of recognition accuracy, inference latency, and computational efficiency. Experimental results indicate that Whisper-medium deployed on the Raspberry Pi 5 achieved the strongest overall performance, reducing inference latency by 42–48% compared with the Raspberry Pi 4 and attaining the lowest Word Error Rate (WER) of 6.8%. In contrast, smaller models such as Whisper-tiny, with a WER of 26.7%, exhibited two- to threefold higher scoring variability, demonstrating how recognition errors propagate into automated assessment reliability. System-level testing revealed that the Raspberry Pi 5 can sustain near real-time processing with approximately 58% CPU utilization and around 1.2 GB of memory, whereas the Raspberry Pi 4 frequently approaches practical operational limits under comparable workloads. Validation using real learner speech data (approximately 100 sessions) confirmed that the proposed system delivers accurate, portable, and privacy-preserving speaking assessment using low-power edge hardware. Overall, this work introduces a practical AIoT-based assessment framework, provides a comprehensive benchmark of open-source ASR models on edge platforms, and offers empirical insights into the trade-offs among recognition accuracy, inference latency, and scoring stability in edge-based ASR deployments. Full article
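Since the benchmark is reported in Word Error Rate, here is the standard WER computation via word-level edit distance, shown on made-up transcripts rather than the study's data.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate = (substitutions + deletions + insertions) / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[-1][-1] / max(len(ref), 1)

# Hypothetical transcripts, not data from the study.
print(wer("the quick brown fox jumps", "the quick brown box jumps over"))  # 0.4
```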
24 pages, 5159 KB  
Article
Forest Age Estimation by Integrating Tree Species Identity and Multi-Source Remote Sensing: Validating Heterogeneous Growth Patterns Through the Plant Economic Spectrum Theory
by Xiyu Zhang, Chao Zhang, Li Zhou, Huan Liu, Lianjin Fu and Wenlong Yang
Remote Sens. 2026, 18(3), 407; https://doi.org/10.3390/rs18030407 - 26 Jan 2026
Abstract
Current mainstream remote sensing approaches to forest age estimation frequently neglect interspecific differences in functional traits, which may limit the accurate representation of species-specific tree growth strategies. This study develops and validates a technical framework that incorporates multi-source remote sensing and tree species functional trait heterogeneity to systematically improve the accuracy of plantation age mapping. We constructed a processing chain—“multi-source feature fusion–species identification–heterogeneity modeling”—for a typical karst plantation landscape in southeastern Yunnan. Using the Google Earth Engine (GEE) platform, we integrated Sentinel-1/2 and Landsat time-series data, implemented a Gradient Boosting Decision Tree (GBDT) algorithm for species classification, and built age estimation models that incorporate species identity as a proxy for the growth strategy heterogeneity delineated by the Plant Economic Spectrum (PES) theory. Key results indicate: (1) Species classification reached an overall accuracy of 89.34% under spatial block cross-validation, establishing a reliable basis for subsequent modeling. (2) The operational model incorporating species information achieved an R2 (coefficient of determination) of 0.84 (RMSE (Root Mean Square Error) = 6.52 years) on the test set, demonstrating a substantial improvement over the baseline model that ignored species heterogeneity (R2 = 0.62). This demonstrates that species identity serves as an effective proxy for capturing the growth strategy heterogeneity described by the Plant Economic Spectrum (PES) theory, which is both distinguishable and valuable for modeling within the remote sensing feature space. (3) Error propagation analysis demonstrated strong robustness to classification uncertainties (γ = 0.23). (4) Plantation structure in the region was predominantly young-aged, with forests aged 0–20 years covering over 70% of the area. Despite inherent uncertainties in ground-reference age data, the integrated framework exhibited clear relative superiority, improving R2 from 0.62 to 0.84. Both error propagation analysis (γ = 0.23) and Monte Carlo simulations affirmed the robustness of the tandem workflow and the stability of the findings, providing a reliable methodology for improved-accuracy plantation carbon sink quantification. Full article
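A hedged toy version of the Monte Carlo propagation idea: flip species labels at an assumed misclassification rate and watch how the age-estimation error spreads. All numbers and the "age model" below are synthetic stand-ins, not the study's data or model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand ages and a hypothetical species-dependent "age model":
# predictions are generated directly so the sketch stays self-contained.
n = 500
species = rng.integers(0, 2, size=n)                # 0/1 species label
age_true = rng.uniform(1, 40, size=n)

def predict_age(sp):
    factor = np.where(sp == 0, 0.9, 1.1)            # species-specific growth factor
    return factor * age_true + rng.normal(0, 3, n)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

baseline_rmse = rmse(predict_age(species), age_true)

# Monte Carlo propagation of classification uncertainty: flip species labels
# with a misclassification rate (~1 - 0.8934 overall accuracy, illustrative)
# and track how the age-estimation RMSE spreads.
p_misclass = 0.11
rmses = []
for _ in range(200):
    flip = rng.random(n) < p_misclass
    rmses.append(rmse(predict_age(np.where(flip, 1 - species, species)), age_true))

print(f"RMSE with correct labels : {baseline_rmse:.2f} yr")
print(f"RMSE with noisy labels   : {np.mean(rmses):.2f} +/- {np.std(rmses):.2f} yr")
```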
48 pages, 1973 KB  
Review
A Review on Reverse Engineering for Sustainable Metal Manufacturing: From 3D Scans to Simulation-Ready Models
by Elnaeem Abdalla, Simone Panfiglio, Mariasofia Parisi and Guido Di Bella
Appl. Sci. 2026, 16(3), 1229; https://doi.org/10.3390/app16031229 - 25 Jan 2026
Abstract
Reverse engineering (RE) has been increasingly adopted in metal manufacturing to digitize legacy parts, connect “as-is” geometry to mechanical performance, and enable agile repair and remanufacturing. This review consolidates scan-to-simulation workflows that transform 3D measurement data (optical/laser scanning and X-ray computed tomography) into simulation-ready models for structural assessment and manufacturing decisions, with an explicit focus on sustainability. Key steps are reviewed, from acquisition planning and metrological error sources to point-cloud/mesh processing, CAD/feature reconstruction, and geometry preparation for finite-element analysis (watertightness, defeaturing, meshing strategies, and boundary condition transfer). Special attention is given to uncertainty quantification and the propagation of geometric deviations into stress, stiffness, and fatigue predictions, enabling robust accept/reject and repair/replace choices. Sustainability is addressed through a lightweight reporting framework covering material losses, energy use, rework, and lead time across the scan–model–simulate–manufacture chain, clarifying when digitalization reduces scrap and over-processing. Industrial use cases are discussed for high-value metal components (e.g., molds, turbine blades, and marine/energy parts) where scan-informed simulation supports faster and more reliable decision making. Open challenges are summarized, including benchmark datasets, standardized reporting, automation of feature recognition, and integration with repair process simulation (DED/WAAM) and life-cycle metrics. A checklist is proposed to improve reproducibility and comparability across RE studies. Full article
(This article belongs to the Section Mechanical Engineering)
23 pages, 6538 KB  
Article
Multi-Scale Graph-Decoupling Spatial–Temporal Network for Traffic Flow Forecasting in Complex Urban Environments
by Hongtao Li, Wenzheng Liu and Huaixian Chen
Electronics 2026, 15(3), 495; https://doi.org/10.3390/electronics15030495 - 23 Jan 2026
Abstract
Accurate traffic flow forecasting is a fundamental component of Intelligent Transportation Systems and proactive urban mobility management. However, the inherent complexity of urban traffic flow, characterized by non-stationary dynamics and multi-scale temporal dependencies, poses significant modeling challenges. Existing spatio-temporal models often struggle to reconcile the discrepancy between static physical road constraints and highly dynamic, state-dependent spatial correlations, while their reliance on fixed temporal receptive fields limits the capacity to disentangle overlapping periodicities and stochastic fluctuations. To bridge these gaps, this study proposes a novel Multi-scale Graph-Decoupling Spatial–temporal Network (MS-GSTN). MS-GSTN leverages a Hierarchical Moving Average decomposition module to recursively partition raw traffic flow signals into constituent patterns across diverse temporal resolutions, ranging from systemic daily trends to high-frequency transients. Subsequently, a Tri-graph Spatio-temporal Fusion module synergistically models scale-specific dependencies by integrating an adaptive temporal graph, a static spatial graph, and a data-driven dynamic spatial graph within a unified architecture. Extensive experiments on four large-scale real-world benchmark datasets demonstrate that MS-GSTN consistently achieves superior forecasting accuracy compared to representative state-of-the-art models. Quantitatively, the proposed framework yields an overall reduction in Mean Absolute Error of up to 6.2% and maintains enhanced stability across multiple forecasting horizons. Visualization analysis further confirms that MS-GSTN effectively identifies scale-dependent spatial couplings, revealing that long-term traffic flow trends propagate through global network connectivity while short-term variations are governed by localized interactions. Full article
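To make the decomposition step concrete, a small sketch of a hierarchical moving-average split of a synthetic flow series into coarse-to-fine components; window sizes and data are assumptions, not the paper's configuration.

```python
import numpy as np

def moving_average(x, w):
    """Centered moving average with edge padding."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")[: x.size]

def hierarchical_decompose(x, windows=(288, 36, 6)):
    """Recursively peel off smoother-to-finer components; returns the trend
    components plus the final high-frequency residual. Window sizes are
    illustrative (daily / hourly / sub-hourly scales for 5-minute data)."""
    components, residual = [], x.copy()
    for w in windows:
        trend = moving_average(residual, w)
        components.append(trend)
        residual = residual - trend
    components.append(residual)
    return components

t = np.arange(2000)
noise = np.random.default_rng(4).normal(0, 2, t.size)
flow = 50 + 20 * np.sin(2 * np.pi * t / 288) + 5 * np.sin(2 * np.pi * t / 36) + noise
parts = hierarchical_decompose(flow)
print([p.shape for p in parts], "components sum back to the input:", np.allclose(sum(parts), flow))
```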
21 pages, 1683 KB  
Article
Method of Estimating Wave Height from Radar Images Based on Genetic Algorithm Back-Propagation (GABP) Neural Network
by Yang Meng, Jinda Wang, Zhanjun Tian, Fei Niu and Yanbo Wei
Information 2026, 17(1), 109; https://doi.org/10.3390/info17010109 - 22 Jan 2026
Abstract
In the domain of marine remote sensing, the real-time monitoring of ocean waves is an active research topic in which acquired X-band radar images are used to retrieve wave information. To enhance the accuracy of the classical spectrum method, which uses the signal-to-noise ratio (SNR) extracted from an image sequence, data from the preferred analysis area around the upwind direction are required. Additionally, the accuracy requires further improvement in cases of low wind speed and swell. For shore-based radar, access to the preferred analysis area cannot be guaranteed in practice, which limits the measurement accuracy of the spectrum method. In this paper, a method using extracted SNRs and an optimized genetic algorithm back-propagation (GABP) neural network model is proposed to enhance the inversion accuracy of significant wave height. The SNRs extracted from multiple selected analysis regions, the included angles, and the wind speed are employed to construct a feature vector as the input to the GABP neural network. Because the relationship between wave height and the SNR derived from radar images is not strictly linear, the GABP network model is used to fit this relationship. Compared with the classical SNR-based method, the correlation coefficient using the GABP neural network is improved by 0.14, and the root mean square error is reduced by 0.20 m. Full article
(This article belongs to the Section Information Processes)
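A compact, self-contained sketch of the GABP idea on toy data: a genetic algorithm searches for good initial weights of a small network, which are then fine-tuned by back-propagation. The features, network size, and GA settings are illustrative only, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the inversion task: map a feature vector (e.g., SNRs from
# several analysis regions, included angles, wind speed) to wave height.
n, d, H = 300, 4, 8
X = rng.normal(size=(n, d))
y = 1.5 + 0.8 * X[:, 0] + 0.3 * np.tanh(X[:, 1]) + rng.normal(0, 0.1, n)

def unpack(theta):
    i = 0
    W1 = theta[i:i + d * H].reshape(d, H); i += d * H
    b1 = theta[i:i + H]; i += H
    w2 = theta[i:i + H]; i += H
    return W1, b1, w2, theta[i]

def mse(theta):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return float(np.mean((h @ w2 + b2 - y) ** 2))

n_params = d * H + H + H + 1

# Genetic algorithm: evolve candidate weight vectors by selection, uniform
# crossover, and Gaussian mutation to find good initial weights.
pop = rng.normal(scale=0.5, size=(30, n_params))
for _ in range(40):
    order = np.argsort([mse(p) for p in pop])
    parents = pop[order[:10]]
    children = []
    for _ in range(20):
        pa, pb = parents[rng.integers(10, size=2)]
        mask = rng.random(n_params) < 0.5
        children.append(np.where(mask, pa, pb) + rng.normal(0, 0.05, n_params))
    pop = np.vstack([parents, children])
theta = pop[np.argmin([mse(p) for p in pop])]
mse_ga = mse(theta)

# Back-propagation fine-tuning starting from the GA-selected weights.
lr = 0.05
for _ in range(500):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    g_out = 2 * (h @ w2 + b2 - y) / n          # dLoss / dOutput
    g_w2, g_b2 = h.T @ g_out, g_out.sum()
    g_h = np.outer(g_out, w2) * (1 - h ** 2)   # back through tanh
    g_W1, g_b1 = X.T @ g_h, g_h.sum(axis=0)
    theta = theta - lr * np.concatenate([g_W1.ravel(), g_b1, g_w2, [g_b2]])

print(f"MSE after GA initialization: {mse_ga:.4f} | after BP fine-tuning: {mse(theta):.4f}")
```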
14 pages, 1255 KB  
Article
Real-Time Control of Six-DOF Serial Manipulators via Learned Spherical Kinematics
by Meher Madhu Dharmana and Pramod Sreedharan
Robotics 2026, 15(1), 27; https://doi.org/10.3390/robotics15010027 - 21 Jan 2026
Abstract
Achieving reliable and real-time inverse kinematics (IK) for 6-degree-of-freedom (6-DoF) spherical-wrist manipulators remains a significant challenge. Analytical formulations often struggle with complex geometries and modeling errors, and standard numerical solvers (e.g., Levenberg–Marquardt) can stall near singularities or converge slowly. Purely data-driven approaches may require large networks and struggle with extrapolation. In this paper, we propose a low-latency, polynomial-based IK solution for spherical-wrist robots. The method leverages spherical coordinates and low-degree polynomial fits for the first three (positional) joints, coupled with a closed-form analytical solver for the final three (wrist) joints. An iterative partial-derivative refinement adjusts the polynomial-based angle estimates using spherical-coordinate errors, ensuring near-zero final error without requiring a full Jacobian matrix. The method systematically enumerates up to eight valid IK solutions per target pose. Our experiments against Levenberg–Marquardt, damped least-squares, and an fmincon baseline show an approximate 8.1× speedup over fmincon while retaining higher accuracy and multi-branch coverage. Future extensions include enhancing robustness through uncertainty propagation, adapting the approach to non-spherical wrists, and developing criteria-based automatic solution-branch selection. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
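To show the flavor of the approach on a case where everything is checkable, a 2-link planar-arm sketch: fit low-degree polynomials in polar (spherical-like) coordinates for the positional joints, then refine with finite-difference corrective steps. This is an analogue, not the paper's 6-DoF solver.

```python
import numpy as np

rng = np.random.default_rng(6)
l1, l2 = 0.4, 0.3      # link lengths [m] of a toy 2-link planar arm

def fk(q1, q2):
    """Forward kinematics of the toy arm."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

# Generate training samples and express targets in polar coordinates
# (the 2-D analogue of the spherical coordinates used for the 6-DoF arm).
q1s = rng.uniform(-1.5, 1.5, 2000)
q2s = rng.uniform(0.2, np.pi - 0.2, 2000)        # single elbow branch
xs, ys = fk(q1s, q2s)
r, phi = np.hypot(xs, ys), np.arctan2(ys, xs)

def features(r, phi):
    """Low-degree polynomial features in the polar coordinates."""
    return np.column_stack([np.ones_like(r), r, phi, r * phi, r ** 2, phi ** 2])

A = features(r, phi)
coef1 = np.linalg.lstsq(A, q1s, rcond=None)[0]
coef2 = np.linalg.lstsq(A, q2s, rcond=None)[0]

# Evaluate on one target and refine the polynomial estimate with a few
# finite-difference Newton steps that drive the Cartesian error toward zero.
xt, yt = fk(0.7, 1.1)
ft = features(np.array([np.hypot(xt, yt)]), np.array([np.arctan2(yt, xt)]))
q = np.array([(ft @ coef1).item(), (ft @ coef2).item()])
for _ in range(5):
    err = np.array([xt, yt]) - np.array(fk(*q))
    J = np.zeros((2, 2))
    for j in range(2):
        dq = np.zeros(2); dq[j] = 1e-5
        J[:, j] = (np.array(fk(*(q + dq))) - np.array(fk(*q))) / 1e-5
    q = q + np.linalg.solve(J, err)

print("target :", tuple(round(float(v), 4) for v in (xt, yt)))
print("reached:", tuple(round(float(v), 4) for v in fk(*q)))
```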
23 pages, 40386 KB  
Article
Attention-Based TCN for LOS/NLOS Identification Using UWB Ranging and Angle Data
by Yuhao Zeng, Guangqiang Yin, Yuhong Zhang, Li Zhan, Di Zhang, Dewen Wen, Zhan Li and Shuaishuai Zhai
Electronics 2026, 15(2), 448; https://doi.org/10.3390/electronics15020448 - 20 Jan 2026
Abstract
In the Internet of Things (IoT), ultra-wideband (UWB) plays an essential role in localization and navigation. However, in indoor environments, UWB signals are often blocked by obstacles, leading to non-line-of-sight (NLOS) propagation. Thus, reliable line-of-sight (LOS)/NLOS identification is essential for reducing errors and enhancing the robustness of localization. This paper focuses on a single-anchor UWB configuration and proposes a temporal deep learning framework that jointly exploits two-way ranging (TWR) and angle-of-arrival (AOA) measurements for LOS/NLOS identification. At the core of the model is a temporal convolutional network (TCN) augmented with a self-attentive pooling mechanism, which enables the extraction of dynamic propagation patterns and temporal contextual information. Experimental evaluations on real-world measurement data show that the proposed method achieves an accuracy of 96.65% on the collected dataset and yields accuracies ranging from 88.72% to 93.56% across the three scenes, outperforming representative deep learning baselines. These results indicate that jointly exploiting geometric and temporal information in a single-anchor configuration is an effective approach for robust UWB indoor positioning. Full article
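A minimal sketch of self-attentive pooling over per-timestep features, the mechanism named above; the backbone features here are random placeholders rather than TCN outputs.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-timestep features, standing in for the output of a temporal
# convolutional backbone over a window of TWR/AOA measurements.
T, D = 50, 16
H = rng.normal(size=(T, D))

# Self-attentive pooling: score each timestep, softmax-normalize, and take the
# weighted sum as the sequence embedding used for LOS/NLOS classification.
W = rng.normal(scale=0.1, size=(D, D))
v = rng.normal(scale=0.1, size=(D,))
scores = np.tanh(H @ W) @ v                 # one scalar score per timestep
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax over time
pooled = weights @ H                        # (D,) sequence embedding

print("attention weights sum:", round(float(weights.sum()), 3), "embedding shape:", pooled.shape)
```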
23 pages, 1109 KB  
Review
A Review of End-to-End Decision Optimization Research: An Architectural Perspective
by Wenya Zhang and Gendao Li
Algorithms 2026, 19(1), 86; https://doi.org/10.3390/a19010086 - 20 Jan 2026
Abstract
Traditional decision optimization methods primarily focus on model construction and solution, leaving parameter estimation and inter-variable relationships to statistical research. The traditional approach divides problem-solving into two independent stages: predict first and then optimize. This decoupling leads to the propagation of prediction errors: even minor inaccuracies in predictions can be amplified into significant decision biases during the optimization phase. To tackle this issue, scholars have proposed end-to-end decision optimization methods, which integrate the prediction and decision-making stages into a unified framework. By doing so, these approaches effectively mitigate error propagation and enhance overall decision performance. From an architectural design perspective, this review focuses on categorizing end-to-end decision optimization methods based on how the prediction and decision modules are integrated. It classifies mainstream approaches into three typical paradigms: constructing closed-loop loss functions, building differentiable optimization layers, and parameterizing the representation of optimization problems. It also examines their implementation pathways leveraging deep learning technologies. The strengths and limitations of these paradigms essentially stem from the inherent trade-offs in their architectural designs. Through a systematic analysis of existing research, this paper identifies key challenges in three core areas: data, variable relationships, and gradient propagation. Among these, handling non-convexity and complex constraints is critical for model generalization, while quantifying decision-dependent endogenous uncertainty remains an indispensable challenge for practical deployment. Full article
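The contrast between the two paradigms can be shown on a toy newsvendor problem: a predictor fitted by least squares versus one tuned directly on the downstream decision cost. The data and cost parameters below are invented for illustration and are not tied to any method surveyed in the review.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy newsvendor: predict demand from one feature, then order that quantity.
n = 1000
x = rng.uniform(0, 1, n)
demand = 10 * x + rng.normal(0, 2, n) + 3 * (x > 0.7)    # mildly misspecified
c_under, c_over = 4.0, 1.0                               # asymmetric unit costs

def decision_cost(w):
    q = w * x                                            # predict, then order q
    return np.mean(c_under * np.maximum(demand - q, 0)
                   + c_over * np.maximum(q - demand, 0))

# "Predict then optimize": fit w by least squares, then use it for ordering.
w_mse = float(np.sum(x * demand) / np.sum(x * x))

# End-to-end: choose w by directly minimizing the downstream decision cost.
grid = np.linspace(5, 20, 301)
w_e2e = float(grid[np.argmin([decision_cost(w) for w in grid])])

print(f"least-squares w = {w_mse:.2f}, decision cost = {decision_cost(w_mse):.2f}")
print(f"end-to-end    w = {w_e2e:.2f}, decision cost = {decision_cost(w_e2e):.2f}")
```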
30 pages, 10980 KB  
Article
Fatigue Assessment of Weathering Steel Welded Joints Based on Fracture Mechanics and Machine Learning
by Jianxing Du, Han Su and Jinsheng Du
Buildings 2026, 16(2), 399; https://doi.org/10.3390/buildings16020399 - 18 Jan 2026
Abstract
To improve the computational efficiency of complex fatigue assessments, this study proposes a framework that integrates high-fidelity finite element analysis (FEA) with ensemble learning for evaluating the fatigue performance of weathering steel welded joints. First, a three-dimensional crack propagation model for cruciform fillet welds was developed on the ABAQUS-FRANC3D platform, with a validation error of less than 20%. Subsequently, a large-scale parametric analysis was conducted. The results indicate that as the stress amplitude increases from 67.5 MPa to 99 MPa, the fatigue life decreases to 40.29% of the baseline value. When the stress amplitude reaches 180 MPa, the fatigue life drops sharply to 14.28% of the baseline. Within the stress ratio range of 0.1 to 0.7, increasing the initial crack size from 0.075 mm to 0.5 mm reduces the fatigue life to between 85.78% and 86.48% of the baseline. Edge cracks, influenced by stress concentration, exhibit approximately 15.2% shorter fatigue life compared to central cracks, while the maximum variation in fatigue life due to crack geometry is only 10.25%. Second, an Extremely Randomized Trees surrogate model constructed from the simulation data demonstrates excellent performance. Finally, by integrating this model with Paris's law, the developed prediction framework achieves high consistency with numerical simulation results, with all predicted values falling within the two-standard-deviation interval. This data-driven approach can effectively replace computationally intensive finite element analysis, enabling efficient structural safety assessments. Full article
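As a reference point for the Paris-law step, a minimal crack-growth life integration under constant stress range; the constants, geometry factor, and crack sizes are illustrative placeholders, not the study's calibrated values.

```python
import numpy as np

# Minimal Paris-law life integration for a crack growing under a constant
# stress range. C, m, Y, and the crack sizes below are illustrative only.
C, m = 3.0e-12, 3.0            # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m))
Y = 1.12                       # geometry correction factor (assumed constant)
a0, ac = 0.5e-3, 10.0e-3       # initial and critical crack depths [m]

def cycles_to_failure(delta_sigma, a0, ac, steps=20000):
    a = np.linspace(a0, ac, steps)
    dK = Y * delta_sigma * np.sqrt(np.pi * a)      # stress intensity factor range
    dN_da = 1.0 / (C * dK ** m)                    # invert da/dN = C * dK^m
    # trapezoidal integration of dN/da over the crack length
    return float(np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a)))

for ds in (67.5, 99.0, 180.0):
    print(f"delta_sigma = {ds:5.1f} MPa -> N = {cycles_to_failure(ds, a0, ac):.3e} cycles")
```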
15 pages, 1784 KB  
Article
Deep Neural Network-Based Inversion Method for Electron Density Profiles in Ionograms
by Longlong Niu, Chen Zhou, Na Wei, Guosheng Han, ZhongXin Deng and Wen Liu
Atmosphere 2026, 17(1), 88; https://doi.org/10.3390/atmos17010088 - 15 Jan 2026
Abstract
Accurate inversion of ionosonde ionograms is of great significance for studying ionospheric structure and radio wave propagation. Traditional inversion methods usually describe the electron density profile with preset polynomial functions, but such functions struggle to fully match the complex dynamic distribution characteristics of the ionosphere, especially in accurately representing special positions such as the F2 layer peak. To this end, this paper proposes an inversion model based on a Variational Autoencoder, named VSII-VAE, which realizes the mapping from ionograms to electron density profiles through an encoder–decoder structure. To enable the model to learn inversion patterns with physical significance, we introduced physical constraints into the latent variable space and the decoder, constructing a neural network inversion model that integrates data-driven approaches with physical mechanisms. Using multi-class ionograms as input and the electron density measured by Incoherent Scatter Radar (ISR) as the training target, experimental results show that the electron density profiles retrieved by VSII-VAE are highly consistent with ISR observations, with errors between synthetic virtual heights and measured virtual heights generally below 5 km. On the independent test set, the model evaluation metrics reached R2 = 0.82, RMSE = 0.14 MHz, rp = 0.94, outperforming the ARTIST method and verifying the effectiveness and superiority of the model inversion. Full article
(This article belongs to the Special Issue Research and Space-Based Exploration on Space Plasma)
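Two building blocks mentioned above, sketched in isolation: the VAE reparameterization step and a physics-constrained decoding into a non-negative Chapman-like electron density profile. The mapping, parameter ranges, and latent dimensions are invented and are not VSII-VAE.

```python
import numpy as np

rng = np.random.default_rng(9)

def softplus(x):
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)   # numerically stable

# Reparameterization trick: sample a latent code z from the encoder's
# (mu, log_var) so gradients could flow through the sampling step.
mu, log_var = rng.normal(size=8), rng.normal(scale=0.1, size=8)
z = mu + np.exp(0.5 * log_var) * rng.normal(size=8)

# Toy physics-constrained "decoder": map z to a Chapman-layer profile that is
# non-negative by construction (softplus-scaled peak parameters).
h = np.linspace(100, 600, 200)                    # altitude grid [km]
NmF2 = 1e12 * softplus(z[0] + 2)                  # peak density [el/m^3]
hmF2 = 250 + 50 * np.tanh(z[1])                   # peak height [km]
H_scale = 40 + 10 * softplus(z[2])                # scale height [km]
x = (h - hmF2) / H_scale
Ne = NmF2 * np.exp(0.5 * (1 - x - np.exp(-x)))    # Chapman layer profile

print("hmF2 ~", round(float(hmF2), 1), "km, NmF2 ~", f"{float(NmF2):.2e}",
      "el/m^3, profile non-negative:", bool(Ne.min() >= 0))
```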