Search Results (4,754)

Search Parameters:
Keywords = initial errors

28 pages, 9229 KB  
Article
A Hybrid Offline–Online Kalman–RBF Framework for Accurate Relative Humidity Forecasting
by Athanasios Donas, George Galanis, Ioannis Pytharoulis and Ioannis Th. Famelis
Atmosphere 2026, 17(2), 162; https://doi.org/10.3390/atmos17020162 (registering DOI) - 31 Jan 2026
Abstract
Accurate humidity forecasts are crucial for environmental and operational applications, yet Numerical Weather Prediction systems frequently exhibit systematic and random errors. To address this problem, this study introduces a modified hybrid post-processing approach that extends a previously developed methodology, enabling a direct comparison of computational efficiency and predictive capacity. The proposed framework integrates a quadratic Kalman Filter with a Radial Basis Function Neural Network trained via the Orthogonal Least Squares algorithm and updated online through Recursive Least Squares. This modified method was evaluated via a time-window process, using forecasts from the Weather Research and Forecasting model and recorded observations from stations in northern Greece. The results show substantial improvements in forecast accuracy, as the Bias was reduced by over 85%, and the MAE and RMSE decreased by approximately 65% and 58%, respectively, compared with the baseline model. Furthermore, the proposed framework also demonstrates enhanced computational efficiency, reducing processing time by more than 95% relative to the initial methodology. Full article
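A minimal sketch of the online stage described in this abstract, where an RBF correction of the raw forecast is updated by recursive least squares as observations arrive; the Gaussian centres, forgetting factor, and synthetic forecast/observation series below are assumptions, not the authors' configuration, and the quadratic Kalman filter stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "forecast" and "observed" relative humidity series (assumed data).
forecast = 60 + 20 * np.sin(np.linspace(0, 6 * np.pi, 300)) + rng.normal(0, 2, 300)
observed = forecast - 5 - 0.005 * (forecast - 60) ** 2 + rng.normal(0, 1, 300)

# Fixed Gaussian RBF basis over the forecast range (centres/width are assumptions).
centres = np.linspace(30, 90, 10)
width = 10.0

def phi(x):
    """RBF feature vector (with bias term) for a scalar forecast value."""
    return np.concatenate(([1.0], np.exp(-((x - centres) ** 2) / (2 * width ** 2))))

# Recursive least squares: update output weights w online as each observation arrives.
n = centres.size + 1
w = np.zeros(n)
P = np.eye(n) * 1e3      # inverse-covariance-like matrix
lam = 0.98               # forgetting factor (assumed)

errors = []
for f, y in zip(forecast, observed):
    h = phi(f)
    y_hat = h @ w                     # corrected forecast before the update
    errors.append(y - y_hat)
    k = P @ h / (lam + h @ P @ h)     # RLS gain
    w = w + k * (y - y_hat)
    P = (P - np.outer(k, h @ P)) / lam

print("MAE, first 50 steps:", np.mean(np.abs(errors[:50])))
print("MAE, last 50 steps: ", np.mean(np.abs(errors[-50:])))
```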
22 pages, 3678 KB  
Article
Neuro-Adaptive Finite-Time Command-Filter Backstepping Control of Full State Feedback Nonlinear System
by Jiaxun Che, Mengxuan Zhang and Lin Sun
Symmetry 2026, 18(2), 274; https://doi.org/10.3390/sym18020274 (registering DOI) - 31 Jan 2026
Abstract
This work develops a neuro-adaptive finite-time command-filtered backstepping (CFB) control framework for full-state feedback systems. The design methodology initiates with error transformation techniques to embed finite-time prescribed performance (FT-PP) specifications into the control architecture. Building upon this foundation, a dynamic error compensation system is formulated to neutralize filtering artifacts induced by the finite-time command filter (FT-CF), thereby achieving precise finite-time convergence. To address state estimation requirements, we construct a neural network-based state estimation framework utilizing radial basis function neural networks (RBFNNs) for simultaneous uncertainty approximation and unmeasurable state reconstruction. The synthesis of FT-PP constraints and neural state estimation culminates in the derivation of an adaptive control law with Lyapunov-stable update rules, theoretically ensuring that tracking errors enter and remain within small neighborhoods of target compact sets within predefined finite time horizons. The experiments cover both numerical simulations and real-world case studies, verifying the feasibility and effectiveness of the proposed control scheme. Full article
(This article belongs to the Special Issue Symmetry in Control Systems: Theory, Design, and Application)
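One common way to realise the finite-time prescribed-performance idea mentioned above is a performance funnel that shrinks to its final width by a set time, plus an error transformation that grows without bound near the funnel boundary; the funnel shape, parameters, and decaying error below are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def funnel(t, rho0=2.0, rho_T=0.1, T=5.0, p=2.0):
    """Finite-time performance bound: decays from rho0 to rho_T by time T,
    then stays constant (one common funnel shape; parameters are assumptions)."""
    s = np.clip(1.0 - t / T, 0.0, 1.0)
    return rho_T + (rho0 - rho_T) * s ** p

def transform(e, rho):
    """Map a tracking error constrained to |e| < rho onto an unconstrained variable;
    keeping the transformed variable bounded keeps e inside the funnel."""
    r = np.clip(e / rho, -0.999, 0.999)   # guard against leaving the funnel numerically
    return np.log((1 + r) / (1 - r))

t = np.linspace(0.0, 8.0, 9)
e = 1.5 * np.exp(-0.8 * t)                # an assumed decaying tracking error
for ti, ei in zip(t, e):
    ri = funnel(ti)
    print(f"t={ti:4.1f}  error={ei:+.3f}  bound=±{ri:.3f}  transformed={transform(ei, ri):+.3f}")
```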
19 pages, 1845 KB  
Article
Don’t Tell Us How Strong It Feels! Converging and Discriminant Validity of an Indirect Measure of Emotional Evidence Accumulation Efficiency
by Rotem Berkovich, Deanna M. Barch, Nachshon Meiran and Erin K. Moran
J. Intell. 2026, 14(2), 19; https://doi.org/10.3390/jintelligence14020019 (registering DOI) - 31 Jan 2026
Abstract
The prevalent method for measuring emotional experiences is self-report scales. However, this method is prone to bias, affected by retrospective errors, and limited in studying individual differences due to variability in how individuals interpret scale values. In the present study, we tested the convergent validity of an alternative approach, which infers emotional components from computational modeling as applied to binary pleasant/unpleasant reports about affective images. Reaction times and choices were modeled to estimate the drift rate (efficiency of emotional evidence accumulation) and the boundary (decision caution). Participants (N = 191) also completed five self-report questionnaires assessing affect, anhedonia, depressive symptoms, and pleasure. Only one correlation reached evidence level (Bayes Factor > 10): Higher consummatory pleasure was negatively associated with drift rate for unpleasant emotions (r(178) = −0.258). This suggests that individuals who typically experience greater in-the-moment pleasure accumulate evidence less efficiently toward unpleasant judgments. Other correlations were absent or inconclusive, potentially reflecting differences in temporal focus and in the specific facets of emotion for each measure. Overall, these results provide some initial support for the convergent and discriminant validity of the drift rate as an indirect measure of online emotional experience. Full article
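The drift rate and boundary estimated in this study come from a sequential-sampling (diffusion) model of the binary pleasant/unpleasant decision; the sketch below simulates such a process to show how those two parameters shape choices and reaction times. All parameter values are illustrative assumptions, and the paper's model fitting is not reproduced.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=500, dt=0.001, noise=1.0,
                 start=0.0, ndt=0.3, rng=None):
    """Simulate a two-boundary drift-diffusion process with Euler steps.
    Returns choices (+1 upper / -1 lower boundary) and reaction times in seconds."""
    rng = rng or np.random.default_rng(0)
    choices = np.empty(n_trials)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = start, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = 1.0 if x >= boundary else -1.0
        rts[i] = t + ndt     # non-decision time added to the accumulation time
    return choices, rts

# Higher drift -> faster, more consistent "pleasant" judgements (illustrative values).
for drift in (0.5, 1.5):
    c, rt = simulate_ddm(drift=drift, boundary=1.0)
    print(f"drift={drift}: P(upper)={np.mean(c > 0):.2f}, mean RT={rt.mean():.2f}s")
```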
18 pages, 938 KB  
Article
Changes in Richness, Abundance, and Occurrence of Beetles in South Korea over Ten Years: Identifier Bias and Selection of Climate Change Indicators
by Tae-Sung Kwon, Sung-Soo Kim, Go-Eun Park and Youngwoo Nam
Insects 2026, 17(2), 156; https://doi.org/10.3390/insects17020156 - 30 Jan 2026
Abstract
Climate change is rapidly altering the distribution and abundance of species, with significant impacts on regional ecosystems, including reduced ecosystem services and the loss of biodiversity. Accurately predicting changes in the distribution and abundance of taxa under future climate scenarios is, therefore, crucial. In South Korea, beetle data collected via pitfall traps from approximately 300 forest sites between 2007 and 2009 (30 families, 4 genera, and 150 species) were used to forecast changes in their abundance and distribution under climate change scenarios RCP 4.5 and 8.5. This study evaluated the accuracy of those predictions using data from a subsequent survey conducted between 2017 and 2019. We compared species richness, abundance, changes in abundance (i.e., number of individuals), and occurrence (i.e., number of occupied sites) using data from 273 sites that were surveyed in both the initial (2007–2009) and follow-up (2017–2019) periods. All four parameters were found to be significantly influenced by the identifiers. This identifier bias was attributed to the omission of morphologically similar species in the initial survey or the loss of individuals during the preparation process of dry specimens. As a result, increases in abundance and distribution appear to have been affected by identification errors, whereas decreases more closely reflect actual ecological changes. When the comparison between predicted and observed results was restricted to taxa with reduced abundance and distribution, the number of taxa that matched the predictions was significantly higher than that of those that did not. Based on ease of identification, abundance, and sensitivity to climate change, we selected a set of indicator taxa (four families, two genera, and seven species) for climate change monitoring. Full article
(This article belongs to the Section Insect Ecology, Diversity and Conservation)
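A minimal sketch of the paired two-period comparison described above, using a Wilcoxon signed-rank test on per-site abundance; the data-frame layout, column names, and Poisson counts are assumptions, and the identifier-bias analysis itself is not reproduced.

```python
import numpy as np
import pandas as pd
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Assumed layout: one row per (site, species, period) with an abundance count.
sites = [f"site_{i:03d}" for i in range(273)]
records = []
for period in ("2007-2009", "2017-2019"):
    for site in sites:
        records.append({"site": site, "species": "Carabus sp.", "period": period,
                        "abundance": rng.poisson(3 if period == "2007-2009" else 2)})
df = pd.DataFrame(records)

# Paired comparison across the 273 sites surveyed in both periods.
wide = df.pivot_table(index="site", columns="period", values="abundance")
stat, p = wilcoxon(wide["2007-2009"], wide["2017-2019"])
print("median change:", (wide["2017-2019"] - wide["2007-2009"]).median(), " p =", round(p, 4))
```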
25 pages, 3415 KB  
Article
Quantifying the Performance of Distributed Large-Volume Metrology Systems for Dynamic Measurements: Methodology Development
by David Gorman, Claire Pottier, Marta Cibrian and Samual Johnston
Metrology 2026, 6(1), 7; https://doi.org/10.3390/metrology6010007 - 30 Jan 2026
Abstract
Limitations associated with traditional automation approaches within manufacturing have driven the pursuit of more flexible and intelligent robot guidance methods. One promising development in this area is the integration of external multitarget six degrees of freedom (6 DoF) distributed large-volume metrology (DLVM) into the control loop. Although multiple standards exist across dimensional metrology, motion tracking, indoor positioning, robot guidance, and machine tool accuracy, there is no harmonised, technology-agnostic standard that fully encompasses the unique challenges of 6 DoF DLVM systems for dynamic applications. This work identifies key gaps in the current standards’ landscape and presents a technology-agnostic candidate test methodology intended to support future standardisation of dynamic DLVM performance evaluation. The method provides a metrologically grounded spatial reference path and a temporal alignment strategy so that position and orientation errors can be reported in the intrinsic coordinates of the path. The paper covers the basic principle of the test, artefact construction, synchronisation strategies, preliminary error modelling, and a baseline uncertainty approach, and reports representative results from initial prototype trials on a multi-nodal distance-camera DLVM system. The prototype results demonstrate feasibility and highlight temporal sampling and traceable timing as current limiting factors for fully deconvolving latency and pose error; these aspects are therefore positioned as instrumentation requirements and the focus of ongoing work. Full article
(This article belongs to the Special Issue Advances in Optical 3D Metrology)
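Reporting errors "in the intrinsic coordinates of the path", as above, amounts to splitting the residual between time-matched reference and measured positions into along-track and cross-track components; the sketch below does this for a synthetic 2D trajectory, where a small latency shows up mainly as along-track error. The circular path, lag, and noise level are assumptions.

```python
import numpy as np

def intrinsic_errors(ref_xy, meas_xy):
    """Split the error between time-matched reference and measured 2D positions
    into along-track (direction of travel) and cross-track components."""
    # Unit tangent of the reference path at each sample (finite differences).
    tangent = np.gradient(ref_xy, axis=0)
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    normal = np.stack([-tangent[:, 1], tangent[:, 0]], axis=1)
    err = meas_xy - ref_xy
    along = np.einsum("ij,ij->i", err, tangent)
    cross = np.einsum("ij,ij->i", err, normal)
    return along, cross

# Synthetic circular reference trajectory and a "measured" track with a small
# constant latency (appears mostly as along-track error) plus noise.
t = np.linspace(0, 2 * np.pi, 500)
ref = np.stack([np.cos(t), np.sin(t)], axis=1)
lag = 0.02
meas = np.stack([np.cos(t - lag), np.sin(t - lag)], axis=1) + \
       np.random.default_rng(0).normal(0, 0.001, (500, 2))

along, cross = intrinsic_errors(ref, meas)
print(f"mean along-track error: {along.mean():+.4f}  (≈ -radius·lag = {-lag:+.4f})")
print(f"RMS cross-track error:  {np.sqrt((cross ** 2).mean()):.4f}")
```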
27 pages, 16299 KB  
Article
Numerical Simulation of Mechanical Parameters of Oil Shale Rock in Minfeng Subsag
by Yuhao Huo, Qing You and Xiaoqiang Liu
Processes 2026, 14(3), 476; https://doi.org/10.3390/pr14030476 - 29 Jan 2026
Abstract
Rock mechanical parameters can provide fundamental data for the numerical simulation of hydraulic fracturing, aiding in the construction of hydraulic fracturing models. Due to the laminated nature of shale, constructing a hydraulic fracturing model requires obtaining the rock mechanical parameters of each lamina and the bedding planes. However, acquiring the mechanical parameters of individual shale laminas through physical experiments demands that, after rock mechanics testing, cracks propagate along the centre of the laminae without connecting additional bedding planes, which imposes extremely high requirements on shale samples. Current research on the rock mechanics of the Minfeng subsag shale is relatively limited. Therefore, to obtain the rock mechanical parameters of each lamina and the bedding planes in the Minfeng subsag shale, a numerical simulation approach can be employed. The model, built using PFC2D, is based on prior X-ray diffraction (XRD) analysis, conventional thin-section observation, scanning electron microscopy (SEM), Brazilian splitting tests, and triaxial compression tests. It replicates the processes of the Brazilian splitting and triaxial compression experiments, assigning initial parameters to different bedding planes based on lithology. A trial-and-error method is then used to adjust the parameters until the simulated curves match the physical experimental curves, with errors within 10%. The model parameters for each lamina at this stage are then applied to single-lithology Brazilian splitting, biaxial compression, and three-point bending models for simulation, ultimately obtaining the tensile strength, uniaxial compressive strength, Poisson’s ratio, Young’s modulus, brittleness index, and Mode I fracture toughness for each lamina. Simulation results show that the Minfeng subsag shale exhibits strong heterogeneity, with all obtained rock mechanical parameters spanning a wide range. Calculated brittleness indices for each lamina mostly fall within the “good” and “medium” ranges, with carbonate laminae generally demonstrating better brittleness than felsic laminae. Fracture toughness also clearly divides into two ranges: mixed carbonate shale laminae have overall higher fracture toughness than mixed felsic laminae. Full article
(This article belongs to the Special Issue Advances in Reservoir Simulation and Multiphase Flow in Porous Media)
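A sketch of the trial-and-error calibration loop described above: adjust parameters, re-run the simulation, and stop once the simulated curve matches the experimental one within the 10% tolerance. `run_simulation` is a hypothetical placeholder for a PFC2D run, the "experimental" target curve is synthetic, and the update heuristics are assumptions.

```python
import numpy as np

def run_simulation(params):
    """Placeholder for a PFC2D run returning a simulated stress-strain curve.
    Hypothetical: in practice this would drive PFC2D and read back its output."""
    strain = np.linspace(0, 0.01, 50)
    return params["modulus"] * strain * np.exp(-strain / params["softening"])

def relative_error(sim, exp):
    """Maximum pointwise relative deviation between simulated and experimental curves."""
    return np.max(np.abs(sim - exp) / np.maximum(np.abs(exp), 1e-9))

# Synthetic "experimental" curve (assumed) and an initial parameter guess.
target = run_simulation({"modulus": 30e9, "softening": 0.008})
params = {"modulus": 20e9, "softening": 0.010}

for iteration in range(50):
    sim = run_simulation(params)
    err = relative_error(sim, target)
    if err <= 0.10:                      # stop once within the 10% tolerance
        print(f"converged after {iteration} iterations, max error {err:.1%}")
        break
    # Crude trial-and-error style updates: scale stiffness toward the target peak
    # value, and shift the softening parameter toward the target peak position.
    params["modulus"] *= np.max(target) / np.max(sim)
    params["softening"] *= 0.5 + 0.5 * (np.argmax(target) + 1) / (np.argmax(sim) + 1)
```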
12 pages, 1025 KB  
Article
Enhancing Whisper Fine-Tuning with Discrete Wavelet Transform-Based LoRA Initialization
by Liang Lan, Molin Fang, Yuxuan Chen, Daliang Wang and Wenyong Wang
Electronics 2026, 15(3), 586; https://doi.org/10.3390/electronics15030586 - 29 Jan 2026
Viewed by 33
Abstract
In low-resource automatic speech recognition (ASR) scenarios, parameter-efficient fine-tuning (PEFT) has become a crucial approach for adapting large pre-trained speech models. Although low-rank adaptation (LoRA) offers clear advantages in efficiency, stability, and deployment friendliness, its performance remains constrained because random initialization fails to capture the time–frequency structural characteristics of speech signals. To address this limitation, this work proposes a structured initialization mechanism that integrates LoRA with the discrete wavelet transform (DWT). By combining wavelet-based initialization, a multi-scale fusion mechanism, and a residual strategy, the proposed method constructs a low-rank adaptation subspace that better aligns with the local time–frequency properties of speech signals. Discrete Wavelet Transform-Based LoRA Initialization (DWTLoRA) enables LoRA modules to incorporate prior modeling of speech dynamics at the start of fine-tuning, substantially reducing the search space of ineffective directions during early training and improving convergence speed, training stability, and recognition accuracy under low-resource conditions. Experimental results on Sichuan dialect speech recognition based on the Whisper architecture demonstrate that the proposed DWTLoRA initialization outperforms standard LoRA and several PEFT baseline methods in terms of character error rate (CER) and training efficiency, confirming the critical role of signal-structure-aware initialization in low-resource ASR. Full article
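A minimal sketch of the general idea of seeding a LoRA factor from discrete wavelet transform rows instead of random noise, assuming PyWavelets and PyTorch; the layer sizes, wavelet, and rank are assumptions, and DWTLoRA's multi-scale fusion and residual strategy are not reproduced.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_matrix(dim, wavelet="db4", level=2):
    """Materialise the DWT as a matrix by transforming each standard basis vector;
    multiplying by this matrix maps an input vector to its wavelet coefficients."""
    rows = []
    for i in range(dim):
        e = np.zeros(dim)
        e[i] = 1.0
        coeffs = pywt.wavedec(e, wavelet, level=level, mode="periodization")
        rows.append(np.concatenate(coeffs))
    return np.stack(rows).T   # shape (coeff_dim, dim)

class DWTLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update whose A factor is seeded from
    coarse DWT rows instead of random noise (sketch of the initialization idea)."""
    def __init__(self, base: nn.Linear, rank=8, alpha=16, wavelet="db4"):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        W = dwt_matrix(base.in_features, wavelet)
        A0 = W[:rank, :]                                 # coarsest coefficient rows as the seed
        self.A = nn.Parameter(torch.tensor(A0, dtype=torch.float32))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # B=0 keeps the initial output unchanged
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = DWTLoRALinear(nn.Linear(256, 256), rank=8)
print(layer(torch.randn(2, 256)).shape)   # torch.Size([2, 256])
```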
12 pages, 1819 KB  
Article
Single-Cell Comparison of Small Intestinal Neuroendocrine Tumors and Enterochromaffin Cells from Two Patients
by Fredrik Axling, Elham Barazeghi, Per Hellman, Olov Norlén, Samuel Backman and Peter Stålberg
Cancers 2026, 18(3), 435; https://doi.org/10.3390/cancers18030435 - 29 Jan 2026
Viewed by 32
Abstract
Background: Several studies have attempted to identify the initiating drivers of small intestinal neuroendocrine tumor (SI-NET) development and the molecular mechanisms underlying their progression and metastatic spread. Previous gene expression studies have used bulk microarrays or RNA sequencing to compare tumor tissue with normal intestinal mucosa. However, the intestine comprises multiple distinct cell types, and bulk analyses are limited by this cellular heterogeneity, which can confound tumor-specific signals. Methods: We performed single-cell RNA sequencing on primary SI-NETs and paired normal mucosa from two patients to directly compare tumor cells with their cells of origin, the enterochromaffin (EC) cells. To minimize type I errors, we applied a two-step validation strategy by overlapping differentially expressed genes with an external single-cell dataset and cross-referencing candidate genes for enteroendocrine expression in the Human Protein Atlas. Results: For further distinction and characterization, ECs were subdivided into serotonergic and non-serotonergic clusters. This analysis revealed that the SI-NET cells are transcriptionally more similar to serotonergic ECs, consistent with serum metabolite profiles derived from clinical parameters. Our analyses uncovered a loss-of-expression program characterized by regulators of epithelial differentiation and in parallel, a gain-of-expression program displayed neuronal signaling gene induction, implicating functional reprogramming toward neuronal-like properties. Together, these specific losses and gains suggest that our patient-derived SI-NETs undergo adaptation through both loss of enteroendocrine functions and acquisition of neurobiological-promoting signaling pathways. Conclusions: These findings nominate candidate drivers for further functional validation and highlight potential therapeutic strategies in our patient cohort, including restoring suppressed Notch signaling and targeting aberrant neuronal signaling networks. However, even with a two-step validation procedure, the modest cohort size limits statistical power and generalizability, particularly for the proposed association to a serotonergic phenotype. Larger, multi-patient single-cell studies are required to confirm these mechanisms and establish their clinical relevance. Full article
(This article belongs to the Section Cancer Pathophysiology)
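The two-step validation described above (overlap with an external single-cell dataset, then cross-reference against the Human Protein Atlas) reduces to set operations over gene lists; the tables, gene names, and cutoffs below are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical inputs: DE results from this study, DE results from an external
# single-cell dataset, and an HPA-derived list of enteroendocrine-expressed genes.
own_de = pd.DataFrame({"gene": ["TPH1", "CHGA", "NOTCH1", "GENEX"],
                       "padj": [1e-8, 1e-5, 0.003, 0.04]})
external_de = pd.DataFrame({"gene": ["TPH1", "CHGA", "NOTCH1", "GENEY"],
                            "padj": [1e-6, 1e-4, 0.01, 0.02]})
hpa_enteroendocrine = {"TPH1", "CHGA", "NOTCH1"}

# Step 1: keep genes significant in both datasets (guards against type I errors).
sig_own = set(own_de.loc[own_de["padj"] < 0.05, "gene"])
sig_ext = set(external_de.loc[external_de["padj"] < 0.05, "gene"])
replicated = sig_own & sig_ext

# Step 2: keep only genes with documented enteroendocrine expression in the HPA.
validated = replicated & hpa_enteroendocrine
print("replicated:", sorted(replicated))
print("validated: ", sorted(validated))
```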
25 pages, 9037 KB  
Article
The Development and Performance Validation of a Real-Time Stress Extraction Device for Deep Mining-Induced Stress
by Bojia Xi, Pengfei Shan, Biao Jiao, Huicong Xu, Zheng Meng, Ke Yang, Zhongming Yan and Long Zhang
Sensors 2026, 26(3), 875; https://doi.org/10.3390/s26030875 - 29 Jan 2026
Viewed by 50
Abstract
Under deep mining conditions, coal and rock masses are subjected to high in situ stress and strong mining-induced disturbances, leading to intensified stress unloading, concentration, and redistribution processes. The stability of surrounding rock is therefore closely related to mine safety. Direct, real-time, and continuous monitoring of in situ stress magnitude, orientation, and evolution is a critical requirement for deep underground engineering. To overcome the limitations of conventional stress monitoring methods under high-stress and strong-disturbance conditions, a novel in situ stress monitoring device was developed, and its performance was systematically verified through laboratory experiments. Typical unloading–reloading and biaxial unequal stress paths of deep surrounding rock were adopted. Tests were conducted on intact specimens and specimens with initial damage levels of 30%, 50%, and 70% to evaluate monitoring performance under different degradation conditions. The results show that the device can stably acquire strain signals throughout the entire loading–unloading process. The inverted monitoring stress exhibits high consistency with the loading system in terms of evolution trends and peak stress positions, with peak stress errors below 5% and correlation coefficients (R2) exceeding 0.95. Although more serious initial damage increases high-frequency fluctuations in the monitoring curves, the overall evolution pattern and unloading response remain stable. Combined acoustic emission results further confirm the reliability of the monitoring outcomes. These findings demonstrate that the proposed device enables accurate and dynamic in situ stress monitoring under deep mining conditions, providing a practical technical approach for surrounding rock stability analysis and disaster prevention. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
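The headline agreement figures above (peak-stress error below 5%, R² above 0.95) follow from a direct comparison of the inverted monitoring stress with the loading-system reference; the sketch below computes both metrics on synthetic loading–unloading data (the series and noise level are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reference stress path (loading-unloading-reloading) and a "monitored"
# series with a small gain error plus noise (assumed data, not device output).
t = np.linspace(0, 1, 400)
reference = 30 * np.sin(np.pi * t) ** 2 + 10 * np.sin(4 * np.pi * t) ** 2   # MPa
monitored = 0.98 * reference + rng.normal(0, 0.4, t.size)

# Peak-stress error and coefficient of determination against the loading system.
peak_error = abs(monitored.max() - reference.max()) / reference.max()
ss_res = np.sum((reference - monitored) ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"peak stress error: {peak_error:.1%}")   # expected to stay below ~5%
print(f"R^2 vs loading system: {r2:.3f}")       # expected above ~0.95
```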
18 pages, 4545 KB  
Article
3D Medical Image Segmentation with 3D Modelling
by Mária Ždímalová, Kristína Boratková, Viliam Sitár, Ľudovít Sebö, Viera Lehotská and Michal Trnka
Bioengineering 2026, 13(2), 160; https://doi.org/10.3390/bioengineering13020160 - 29 Jan 2026
Viewed by 58
Abstract
Background/Objectives: The segmentation of three-dimensional radiological images constitutes a fundamental task in medical image processing for isolating tumors from complex datasets in computed tomography or magnetic resonance imaging. Precise visualization, volumetry, and treatment monitoring are enabled, which are critical for oncology diagnostics and planning. Volumetric analysis surpasses standard criteria by detecting subtle tumor changes, thereby aiding adaptive therapies. The objective of this study was to develop an enhanced, interactive Graphcut algorithm for 3D DICOM segmentation, specifically designed to improve boundary accuracy and 3D modeling of breast and brain tumors in datasets with heterogeneous tissue intensities. Methods: The standard Graphcut algorithm was augmented with a clustering mechanism (utilizing k = 2–5 clusters) to refine boundary detection in tissues with varying intensities. DICOM datasets were processed into 3D volumes using pixel spacing and slice thickness metadata. User-defined seeds were utilized for tumor and background initialization, constrained by bounding boxes. The method was implemented in Python 3.13 using the PyMaxflow library for graph optimization and pydicom for data transformation. Results: The proposed segmentation method outperformed standard thresholding and region growing techniques, demonstrating reduced noise sensitivity and improved boundary definition. An average Dice Similarity Coefficient (DSC) of 0.92 ± 0.07 was achieved for brain tumors and 0.90 ± 0.05 for breast tumors. These results were found to be comparable to state-of-the-art deep learning benchmarks (typically ranging from 0.84 to 0.95), achieved without the need for extensive pre-training. Boundary edge errors were reduced by a mean of 7.5% through the integration of clustering. Therapeutic changes were quantified accurately (e.g., a reduction from 22,106 mm3 to 14,270 mm3 post-treatment) with an average processing time of 12–15 s per stack. Conclusions: An efficient, precise 3D tumor segmentation tool suitable for diagnostics and planning is presented. This approach is demonstrated to be a robust, data-efficient alternative to deep learning, particularly advantageous in clinical settings where the large annotated datasets required for training neural networks are unavailable. Full article
(This article belongs to the Section Biosignal Processing)
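The abstract names PyMaxflow for the graph optimization; the sketch below shows a minimal grid graph cut in that library, with k-means intensity clusters standing in for the seed-derived foreground/background models. The synthetic image, smoothness weight, and the k=2 clustering shortcut are assumptions, not the paper's exact scheme.

```python
import numpy as np
import maxflow
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic image: a bright "tumor" blob on a darker background (assumed data).
img = rng.normal(0.2, 0.05, (64, 64))
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] += 0.6

# Cluster intensities (k=2 here) to estimate foreground/background models,
# standing in for the user-seeded models described in the paper.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
means = [img.reshape(-1)[labels == k].mean() for k in range(2)]
fg_mean, bg_mean = max(means), min(means)

# Unary terms: cost of assigning each pixel to foreground/background.
fg_cost = (img - fg_mean) ** 2
bg_cost = (img - bg_mean) ** 2

# Graph cut: grid nodes, smoothness between neighbouring pixels, data terms to terminals.
g = maxflow.Graph[float]()
nodeids = g.add_grid_nodes(img.shape)
g.add_grid_edges(nodeids, 0.05)            # pairwise smoothness weight (assumed)
g.add_grid_tedges(nodeids, fg_cost, bg_cost)
g.maxflow()
segments = g.get_grid_segments(nodeids)    # boolean mask from the minimum cut

print("segmented foreground pixels:", int(segments.sum()))
```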
26 pages, 6698 KB  
Article
A Novel Decomposition-Prediction Framework for Predicting InSAR-Derived Ground Displacement: A Case Study of the XMLC Landslide in China
by Mimi Peng, Jing Xue, Zhuge Xia, Jiantao Du and Yinghui Quan
Remote Sens. 2026, 18(3), 425; https://doi.org/10.3390/rs18030425 - 28 Jan 2026
Viewed by 107
Abstract
Interferometric Synthetic Aperture Radar (InSAR) is an advanced imaging geodesy technique for detecting and characterizing surface deformation with high spatial resolution and broad spatial coverage. However, as an inherently post-event observation method, InSAR suffers from limited capability for near-real-time and short-term updates of deformation time series. In this paper, we proposed a data-driven adaptive framework for deformation prediction based on a hybrid deep learning method to accurately predict the InSAR-derived deformation time series and take the Xi’erguazi−Mawo landslide complex (XMLC) as a case study. The InSAR-derived time series was initially decomposed into trend and periodic components with a two-step decomposition process, which were thereafter modeled separately to enhance the characterization of motion kinematics and prediction accuracy. After retrieving the observations from the multi-temporal InSAR method, two-step signal decomposition was then performed using the Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Variational Mode Decomposition (VMD). The decomposed trend and periodic components were further evaluated using statistical hypothesis testing to verify their significance and reliability. Compared with the single-decomposition model, the further decomposition leads to an overall improvement in prediction accuracy, i.e., the Mean Absolute Errors (MAEs) and the Root Mean Square Errors (RMSEs) are reduced by 40–49% and 36–42%, respectively. Subsequently, the Radial Basis Function (RBF) neural network and the proposed CNN-BiLSTM-SelfAttention (CBS) models were constructed to predict the trend and periodic variations, respectively. The CNN and self-attention help to extract local features in time series and strengthen the ability to capture global dependencies and key fluctuation patterns. Compared with the single network model in prediction, the MAEs and RMSEs are reduced by 22–57% and 4–33%, respectively. Finally, the two predicted components were integrated to generate the fused deformation prediction results. Ablation experiments and comparative experiments show that the proposed method has superior ability. Through rapid and accurate prediction of InSAR-derived deformation time series, this research could contribute to the early-warning systems of slope instabilities. Full article
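A minimal sketch of the two-step decomposition described above, assuming the PyEMD (EMD-signal) and vmdpy packages; the synthetic displacement series, the grouping of IMFs into trend and periodic parts, and the VMD settings are assumptions, and the downstream RBF and CNN-BiLSTM-SelfAttention predictors are not shown.

```python
import numpy as np
from PyEMD import CEEMDAN     # pip install EMD-signal
from vmdpy import VMD         # pip install vmdpy

rng = np.random.default_rng(0)

# Synthetic InSAR-like displacement series: slow trend + seasonal term + noise (assumed).
t = np.arange(200)
series = -0.05 * t + 3.0 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.5, t.size)

# Step 1: CEEMDAN splits the series into IMFs plus a residue.
ceemdan = CEEMDAN(trials=50)
imfs = ceemdan.ceemdan(series)
residue = series - imfs.sum(axis=0)
trend = imfs[-1] + residue               # low-frequency part (one plausible grouping)
periodic = imfs[:-1].sum(axis=0)         # remaining oscillatory content

# Step 2: VMD refines the oscillatory part into K narrow-band modes.
modes, _, _ = VMD(periodic, alpha=2000, tau=0.0, K=4, DC=0, init=1, tol=1e-7)

print("IMFs from CEEMDAN:", imfs.shape[0])
print("VMD mode energies:", np.round((modes ** 2).mean(axis=1), 3))
```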
16 pages, 10849 KB  
Article
LLM4ATS: Applying Large Language Models for Auto-Testing Scripts in Automobiles
by Zeyuan Li, Wei Li, Yuezhao Liu, Wenhao Li and Min Chen
Big Data Cogn. Comput. 2026, 10(2), 41; https://doi.org/10.3390/bdcc10020041 - 28 Jan 2026
Viewed by 64
Abstract
This paper introduces LLM4ATS, a framework integrating large language models, RAG, and closed-loop verification to automatically generate highly reliable automotive automated test scripts from natural language descriptions. Addressing the complex linguistic structure, strict rules, and strong dependency on the in-vehicle communication database inherent in ATS scripts, LLM4ATS innovatively employs fine-grained line-level generation and a rule-guided iterative refinement mechanism. The framework first enhances prompt context by retrieving relevant information from constructed syntax and case knowledge bases via RAG. Subsequently, each generated script line undergoes rigorous verification through a two-stage validator: initial syntax validation followed by semantic compliance checks against the communication database for signal paths and value domains. Any errors trigger structured feedback, driving iterative refinement by the large language model until fully compliant scripts are produced. This paper evaluated the framework’s effectiveness on real ATS datasets, testing models including GPT-3.5, GPT-4, Qwen2.5-7B, and Qwen2.5-72B-Instruct. Experimental results demonstrate that compared to zero-shot and few-shot baseline methods, the LLM4ATS framework significantly improves generation quality and pass rates across all models. Notably, the strongest GPT-4 model achieved a script pass rate of 91% with LLM4ATS, up from 42% in zero-shot mode, and validated functional effectiveness on a specified in-vehicle hardware platform (Chery Fengyun T28 dashboard). At the same time, expert manual evaluations confirmed the superior performance of the generated scripts in correctness, readability, and compliance with industry standards. Full article
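The line-level generate → validate → refine loop with structured feedback can be sketched as plain control flow; everything below (the toy communication database, the clamping "refinement" step, the signal names) is a hypothetical placeholder, not LLM4ATS code.

```python
import re

class CommDB:
    """Toy stand-in for the in-vehicle communication database (assumed content)."""
    signals = {"Body.DoorFL.Lock": (0, 1), "Powertrain.Speed": (0, 255)}

    def check(self, line):
        """Two-stage validation: syntax first, then signal path and value domain."""
        m = re.match(r"set\s+(\S+)\s+(-?\d+)$", line)
        if not m:
            return False, "syntax error: expected 'set <signal_path> <value>'"
        path, value = m.group(1), int(m.group(2))
        if path not in self.signals:
            return False, f"unknown signal path: {path}"
        lo, hi = self.signals[path]
        if not lo <= value <= hi:
            return False, f"value {value} outside domain [{lo}, {hi}]"
        return True, ""

def refine(line, feedback):
    """Toy stand-in for the LLM refinement call: here it simply clamps the value."""
    if "outside domain" in feedback:
        path = line.split()[1]
        lo, hi = CommDB.signals[path]
        return f"set {path} {hi}"
    return line

def generate(draft_lines, db, max_retries=3):
    """Line-level generate -> validate -> refine loop driven by structured feedback."""
    script = []
    for line in draft_lines:
        for _ in range(max_retries):
            ok, feedback = db.check(line)
            if ok:
                break
            line = refine(line, feedback)
        script.append(line)
    return script

print(generate(["set Body.DoorFL.Lock 1", "set Powertrain.Speed 300"], CommDB()))
```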
14 pages, 10199 KB  
Article
Relaxing Accurate Initialization for Monocular Dynamic Scene Reconstruction with Gaussian Splatting
by Xinyu Wang, Jiafu Chen, Wei Xing, Huaizhong Lin and Lei Zhao
Appl. Sci. 2026, 16(3), 1321; https://doi.org/10.3390/app16031321 - 28 Jan 2026
Viewed by 76
Abstract
Monocular dynamic scene reconstruction is a challenging task due to the inherent limitation of observing the scene from a single viewpoint at each timestamp, particularly in the presence of object motion and illumination changes. Recent methods combine Gaussian Splatting with deformation modeling to enable fast training and rendering; however, their performance in real-world scenarios strongly depends on accurate point cloud initialization. When such initialization is unavailable and random point clouds are used instead, reconstruction quality degrades significantly. To address this limitation, we propose an optimization strategy that relaxes the requirement for accurate initialization in Gaussian-Splatting-based monocular dynamic scene reconstruction. The scene is first reconstructed under a static assumption using all monocular frames, allowing stable convergence of background regions. Based on reconstruction errors, a subset of Gaussians is then activated as dynamic to model motion and deformation. In addition, an annealing jitter regularization term is introduced to improve robustness to camera pose inaccuracies commonly observed in real-world datasets. Extensive experiments on established benchmarks demonstrate that the proposed method enables stable training from randomly initialized point clouds and achieves reconstruction performance comparable to approaches relying on accurate point cloud initialization. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
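The step of activating a subset of Gaussians as dynamic based on reconstruction errors can be sketched as a thresholding pass over per-Gaussian accumulated error; the error distributions and the robust threshold rule below are assumptions, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed per-Gaussian accumulated photometric error after the static-assumption
# stage: most Gaussians fit the background well, a minority (moving objects) do not.
n_gaussians = 10000
error = rng.gamma(shape=2.0, scale=0.01, size=n_gaussians)
error[rng.choice(n_gaussians, 500, replace=False)] += rng.gamma(2.0, 0.1, 500)

# Activate as "dynamic" the Gaussians whose error sits far above the typical level
# (robust threshold rule is an assumption; the paper's criterion may differ).
threshold = np.median(error) + 5 * (np.quantile(error, 0.75) - np.quantile(error, 0.25))
dynamic_mask = error > threshold

print(f"activated {dynamic_mask.sum()} of {n_gaussians} Gaussians as dynamic "
      f"({dynamic_mask.mean():.1%}); only these receive a deformation model.")
```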
19 pages, 2683 KB  
Article
Development and Validation of an Optical Sensor-Based Automated Urine Flow Meter for Real-Time Patient Monitoring
by Piyush Hota, Adithya Shyamala Pandian, Rodrigo E. Domínguez, Manni Mo, Bo Fu, Sandra Miranda, Pinar Cay-Durgun, Dheeraj Sirganagari, Michael Serhan, Peter Serhan, Kevin Abi Karam, Naomi M. Gades, Peter Wiktor, Leslie Thomas, Mary Laura Lind and Erica Forzani
Sensors 2026, 26(3), 849; https://doi.org/10.3390/s26030849 - 28 Jan 2026
Viewed by 193
Abstract
Acute kidney injury (AKI) affects thousands of hospitalized patients annually, yet early detection remains challenging as serum creatinine elevation lags behind clinical deterioration. Decreased urine output (UO) represents a key diagnostic criterion of AKI, sometimes manifesting hours before biochemical changes; however, current manual monitoring methods are labor-intensive and prone to error. Here, we developed and validated a simple, cost-effective automated urine flow meter using non-contact optical sensors, a peristaltic pump, and microcontroller-based automation for precise, real-time monitoring of urine output in clinical settings, named P-meter. Three successive prototypes (V1, V2, V3) were validated against gold-standard gravimetric measurements over 285 h of testing during animal experiments that required bladder catheterization. Iterative refinement addressed miniaturization challenges, fluid dynamics optimization, and sensor positioning to achieve progressively improved accuracy. The optimized V3 prototype demonstrated further enhanced volumetric precision, stability, and flow accuracy with near-unity linearity vs. reference method (R2 = 0.9889), minimal bias (mean error −0.1 mL), and 94.18% agreement within confidence limits (n = 86), outperforming the initial V1 prototype (R2 = 0.9971, mean error −1.69 mL, n = 207) and intermediate V2 design (R2 = 0.9941, mean error 3.63 mL, n = 390), primarily in terms of reduced bias and improved agreement. The P-meter offers accurate urine output monitoring at a lower cost than commercial systems, facilitating its use in early AKI detection and thereby improving patient outcomes. Full article
(This article belongs to the Special Issue Novel Optical Sensors for Biomedical Applications—2nd Edition)
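The agreement statistics quoted above (R², mean bias, percentage within limits of agreement) follow standard Bland–Altman-style calculations against the gravimetric reference; the sketch below computes them for synthetic paired measurements (the data are assumptions; only the sample size of 86 mirrors the V3 validation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired measurements: gravimetric reference vs. device reading (mL).
reference = rng.uniform(5, 120, 86)
device = reference + rng.normal(-0.1, 1.5, reference.size)   # small bias + noise

diff = device - reference
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
within = np.mean((diff >= loa[0]) & (diff <= loa[1]))

# Coefficient of determination of the device readings against the reference.
ss_res = np.sum((reference - device) ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"mean error (bias): {bias:+.2f} mL")
print(f"95% limits of agreement: {loa[0]:+.2f} to {loa[1]:+.2f} mL")
print(f"agreement within limits: {within:.2%}")
print(f"R^2 vs gravimetric reference: {r2:.4f}")
```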
27 pages, 909 KB  
Article
Job Demands and Resources During Digital Transformation in Public Administration: A Qualitative Study
by Victoria Sump, Tanja Wirth, Volker Harth and Stefanie Mache
Behav. Sci. 2026, 16(2), 187; https://doi.org/10.3390/bs16020187 - 27 Jan 2026
Viewed by 125
Abstract
Digital transformation poses significant challenges to employee well-being, particularly in public administration, where hierarchical structures, increasing digitalization pressures, and high mental health-related absenteeism underscore the need to understand individual and job demands and resources. This study explores these aspects from the perspectives of employees and supervisors in public administration. Between September 2023 and February 2024, semi-structured interviews were conducted with eight employees and eleven supervisors from public administration organizations in Northern Germany and analyzed using deductive–inductive qualitative content analysis based on the Job Demands-Resources model. Identified individual resources included technical affinity, error tolerance, and willingness to learn, while key job resources involved early and transparent communication, attentive leadership, technical support, and counseling services, with most job resources linked to leadership behavior and work organization. Reported job demands comprised insufficient participation, inadequate planning, and lengthy procedures, whereas personal demands included fears and concerns about upcoming changes and negative attitudes toward transformation. The variation in perceived demands and resources highlights the individuality of the employees’ experiences. The findings provide initial insights into factors influencing psychological well-being at work during digital transformation, emphasizing the importance of participatory communication, employee involvement, leadership awareness of stressors, and competence development. Future research should employ longitudinal and interventional designs to improve causal understanding and generalizability. Full article