
Search Results (163)

Search Parameters:
Keywords = on-line error correction

12 pages, 234 KB  
Article
Identifying “Ina Jane Doe”: The Forensic Anthropologists’ Role in Revising and Correcting Narratives in a Cold Case
by Amy R. Michael, Samantha H. Blatt, Jennifer D. Bengtson, Ashanti Maronie, Samantha Unwin and Jose Sanchez
Humans 2026, 6(1), 1; https://doi.org/10.3390/humans6010001 (registering DOI) - 30 Dec 2025
Abstract
The 1992 cold case homicide of “Ina Jane Doe” illustrates how an interdisciplinary team worked to identify the decedent using a combined approach of skeletal re-analysis, updated forensic art informed by anthropologists’ input, archival research, and forensic investigative genetic genealogy. The original forensic art for “Ina Jane Doe” showed an over-pathologization of skeletal features and an inaccurate hairstyle; however, the case gained notoriety on internet true crime forums, leading to speculation about the decedent’s intellectual capacity and physical appearance. The “Ina Jane Doe” case demonstrates the importance of advocating for skeletal re-analysis as more robust methods and technologies emerge in forensic science, as well as the impact of sustained public interest in cold cases. In this case, continuous public interest and online speculation led to anthropologists constructing a team of experts to correct and revise narratives about the decedent. Forensic anthropologists’ role in cold cases may include offering skeletal re-analysis, recognizing and correcting errors in the original estimations of the biological profile, searching for missing person matches, and/or working collaboratively with subject matter experts in forensic art, odontology, and forensic investigative genetic genealogy. Full article
32 pages, 8738 KB  
Article
Fuzzy Adaptive Impedance Control Method for Underwater Manipulators Based on Bayesian Recursive Least Squares and Displacement Correction
by Baoju Wu, Xinyu Liu, Nanmu Hui, Yan Huo, Jiaxiang Zheng and Changjin Dong
Machines 2026, 14(1), 39; https://doi.org/10.3390/machines14010039 - 28 Dec 2025
Viewed by 31
Abstract
During constant-force operations in complex marine environments, underwater manipulators are affected by hydrodynamic disturbances and unknown, time-varying environment stiffness. Under classical impedance control (IC), this often leads to large transient contact forces and steady-state force errors, making high-precision compliant control difficult to achieve. To address this issue, this study proposes a Bayesian recursive least-squares-based fuzzy adaptive impedance control (BRLS-FAIC) strategy with displacement correction for underwater manipulators. Within a position-based impedance-control framework, a Bayesian Recursive Least Squares (BRLS) stiffness identifier is constructed by incorporating process and measurement noise into a stochastic regression model, enabling online estimation of the environment stiffness and its covariance under noisy, time-varying conditions. The identified stiffness is used in a displacement-correction law derived from the contact model to update the reference position, thereby removing dependence on the unknown environment location and reducing steady-state force bias. On this basis, a three-input/two-output fuzzy adaptive impedance tuner, driven by the force error, its rate of change, and a stiffness-perception index, adjusts the desired damping and stiffness online under amplitude limitation and first-order filtering. Using an underwater manipulator dynamic model that includes buoyancy and hydrodynamic effects, MATLAB simulations are carried out for step, ramp, and sinusoidal stiffness variations and for planar, inclined, and curved contact scenarios. The results show that, compared with classical IC and fuzzy adaptive impedance control (FAIC), the proposed BRLS-FAIC strategy reduces steady-state force errors, shortens force and position settling times, and suppresses peak contact forces in variable-stiffness underwater environments. Full article
(This article belongs to the Section Automation and Control Systems)
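The online stiffness identification this abstract describes can be illustrated with a plain scalar recursive least-squares estimator for the contact model f = k·x, with a process-noise term that lets the stiffness drift and a measurement-noise weight, in the spirit of (but not identical to) the authors' BRLS formulation. All numeric values below are hypothetical:

```python
import random

def rls_stiffness(samples, q=1e-2, r=1.0):
    """Scalar recursive least-squares estimate of k in f = k*x.
    q models process noise (stiffness may drift between samples),
    r the force-measurement noise; returns estimate and variance."""
    k_hat, p = 0.0, 1e6                   # vague prior on the stiffness
    for x, f in samples:
        p += q                            # predict: allow k to drift
        gain = p * x / (r + x * p * x)
        k_hat += gain * (f - k_hat * x)   # correct with the innovation
        p *= 1.0 - gain * x
    return k_hat, p

random.seed(0)
k_true = 1500.0                           # N/m, hypothetical environment
samples = [(0.001 * i, k_true * 0.001 * i + random.gauss(0.0, 1.0))
           for i in range(1, 201)]
k_hat, p_post = rls_stiffness(samples)
```

The returned variance is what a displacement-correction law could use to decide how much to trust the current stiffness estimate.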

17 pages, 3318 KB  
Article
Collaborative Control for a Robot Manipulator via Interaction-Force-Based Impedance Method and Extremum Seeking Optimization
by Ming Pi
Sensors 2025, 25(24), 7648; https://doi.org/10.3390/s25247648 - 17 Dec 2025
Viewed by 261
Abstract
This paper introduces an adaptive impedance control strategy for robotic manipulators, developed through the extremum seeking technique. A model-based disturbance observer (DOB) is employed to estimate contact forces, removing the dependency on torque sensors. An impedance vector is constructed to correct the errors arising from motor uncertainties and unknown couplings, without considering the threshold value of the control parameters. Joint tracking errors and fluctuations in contact force are incorporated into the cost function. For various tasks, suitable control parameters are adaptively optimized in real time using an extremum seeking approach, which continuously evaluates the cost function. A rigorous analysis is conducted on the stability of the proposed controller. Compared to conventional approaches, the proposed adaptive impedance control offers a more streamlined design for adjusting the manipulator’s contact impedance. Experimental results confirm that the extremum seeking strategy successfully tuned the controller parameters online according to variations in the cost function. Full article
(This article belongs to the Section Intelligent Sensors)
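Extremum seeking as invoked in this abstract can be sketched as the classic perturbation scheme: dither the parameter, high-pass and demodulate the measured cost to estimate the local gradient, then descend it. This is a generic sketch with invented gains and cost, not the paper's controller:

```python
import math

def extremum_seek(cost, theta0, a=0.5, omega=20.0, k=0.1,
                  tau=0.5, dt=0.005, steps=40000):
    """Perturbation-based extremum seeking: add a*sin(w*t) dither,
    subtract a low-pass (DC) estimate of the cost, demodulate with
    the dither to recover the gradient, and take descent steps."""
    theta_hat = theta0
    j_lp = cost(theta0)                       # low-pass state (DC estimate)
    for i in range(steps):
        t = i * dt
        j = cost(theta_hat + a * math.sin(omega * t))
        j_lp += (dt / tau) * (j - j_lp)
        grad_est = (2.0 / a) * (j - j_lp) * math.sin(omega * t)
        theta_hat -= k * grad_est * dt        # gradient descent on the cost
    return theta_hat

# hypothetical scalar tuning cost with its minimum at theta* = 3.0
theta = extremum_seek(lambda th: (th - 3.0) ** 2, theta0=0.0)
```

In the paper's setting the cost would combine joint tracking errors and contact-force fluctuations rather than this toy quadratic.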

23 pages, 2160 KB  
Article
Human–Robot Interaction for a Manipulator Based on a Neural Adaptive RISE Controller Using Admittance Model
by Shengli Chen, Lin Jiang, Keqiang Bai, Yuming Chen, Xiaoang Xu, Guanwu Jiang and Yueyue Liu
Electronics 2025, 14(24), 4862; https://doi.org/10.3390/electronics14244862 - 10 Dec 2025
Viewed by 291
Abstract
Human–robot cooperative tasks require physical human–robot interaction (pHRI) systems that can adapt to individual human behaviors while ensuring robustness and stability. This paper presents a dual-loop control framework combining an admittance outer loop and a neural adaptive inner loop based on the Robust Integral of the Sign of the Error (RISE) approach. The outer loop reshapes the manipulator trajectory according to interaction forces, ensuring compliant motion and user safety. The inner-loop Adaptive RISE–RBFNN controller compensates for unknown nonlinear dynamics and bounded disturbances through online neural learning and robust sign-based correction, guaranteeing semi-global asymptotic convergence. Quantitative results demonstrate that the proposed adaptive RISE controller with neural-network error compensation (ARINNSE) achieves superior performance in the Joint-1 tracking task, reducing the root-mean-square tracking error by approximately 51.7% and 42.3% compared to conventional sliding mode control and standard RISE methods, respectively, while attaining the smallest maximum absolute error and maintaining control energy consumption comparable to that of RISE. Under human–robot interaction scenarios, the controller preserves stable, bounded control inputs and rapid error convergence even under time-varying disturbances. These results confirm that the proposed admittance-based RISE–RBFNN framework provides enhanced robustness, adaptability, and compliance, making it a promising approach for safe and efficient human–robot collaboration. Full article
(This article belongs to the Section Industrial Electronics)
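The admittance outer loop described above is commonly realized as a discrete second-order model, M·a + D·v + K·(x − x_des) = f_ext, that deflects the commanded trajectory in response to interaction force. A minimal sketch with illustrative parameters (not the authors' controller):

```python
def admittance_step(x, v, x_des, f_ext, m=2.0, d=20.0, k=50.0, dt=0.002):
    """One semi-implicit Euler step of an admittance model: the commanded
    position x yields to the interaction force around the nominal x_des."""
    a = (f_ext - d * v - k * (x - x_des)) / m
    v += a * dt
    x += v * dt
    return x, v

# constant 10 N push against a fixed nominal point: the commanded point
# should settle at a deflection of f/k = 0.2 m
x, v = 0.0, 0.0
for _ in range(20000):            # 40 s of simulated interaction
    x, v = admittance_step(x, v, x_des=0.0, f_ext=10.0)
```

The inner-loop controller (here, the RISE–RBFNN scheme) would then track this reshaped reference.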

38 pages, 2752 KB  
Article
Dual-Layer Optimization Control for Furnace Temperature Setting and Tracking in Municipal Solid Waste Incineration Process
by Yicong Wu, Wei Wang, Jian Tang, Zenan Li and Jian Rong
Sustainability 2025, 17(23), 10577; https://doi.org/10.3390/su172310577 - 25 Nov 2025
Viewed by 281
Abstract
In the global trend towards a sustainable circular economy, incineration technology is widely used for the treatment of municipal solid waste (MSW), as it effectively achieves waste harmlessness, reduction, and energy recovery. During the MSW incineration (MSWI) process, the furnace temperature (FT) is closely linked to pollutant emission concentrations. Therefore, precise control and stable monitoring of the FT are essential for minimizing pollution emissions. However, existing studies generally treat the optimization of FT setpoint value and tracking control as separate issues, lacking a unified optimization framework that can link environmental objectives with control parameters in an online, automatic, and closed-loop manner. To address these issues, a dual-layer optimization control method for FT setting and tracking, aimed at minimizing pollutant concentrations, is proposed. In the first layer, the optimization targets the lowest possible NOx and CO2 emission concentrations, using a genetic algorithm (GA) to determine optimal FT setpoints. In the second layer, the optimization minimizes the Integral of Time-weighted Absolute Error (ITAE) as the performance index, optimizing the parameters of multi-loop PID controllers via an improved GA. Additionally, an innovative shared-memory judgment mechanism is proposed to transmit process data in real time. Based on residual dynamic correction of the optimization function, an effective double closed-loop control architecture is established. Experimental validation shows that, compared to traditional methods, the optimized control system exhibits faster setpoint tracking, smaller steady-state errors, and stronger anti-interference capabilities, leading to a significant reduction in pollutant emissions. This study provides a new approach for intelligent optimization control in MSWI with substantial application prospects. Full article
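The ITAE index used as the second-layer performance measure is straightforward to compute from a sampled error signal; a small sketch with two hypothetical setpoint-tracking responses:

```python
import math

def itae(errors, dt):
    """Integral of Time-weighted Absolute Error: sum of t * |e(t)| * dt.
    Errors late in the response are penalized more than early ones."""
    return sum(i * dt * abs(e) * dt for i, e in enumerate(errors))

dt = 0.01
fast = [math.exp(-2.0 * i * dt) for i in range(1000)]   # settles quickly
slow = [math.exp(-0.5 * i * dt) for i in range(1000)]   # settles slowly
```

A GA tuning the multi-loop PID gains would simply rank each candidate gain set by this value, preferring responses that settle early.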

24 pages, 11339 KB  
Article
A Simulation Modeling of Temporal Multimodality in Online Streams
by Abdurrahman Alshareef
Information 2025, 16(11), 999; https://doi.org/10.3390/info16110999 - 18 Nov 2025
Viewed by 350
Abstract
Temporal variability in online streams arises in information systems where heterogeneous modalities exhibit varying latencies and delay distributions. Efficient synchronization strategies help to establish a reliable flow and ensure a correct delivery. This work establishes a formal modeling foundation for addressing temporal dynamics in streams with multimodality using a discrete-event system specification framework. This specification captures different latencies and interarrival dynamics inherent in multimodal flows. The framework also incorporates a Markov variant to account for variations in delay processes, thereby capturing timing uncertainty in a single modality. The proposed models are modular, with built-in mechanisms for diverse temporal integration, thereby facilitating heterogeneity in information flows and communication. Various structural and behavioral forms can be flexibly represented and readily simulated. The devised experiments demonstrate, across several model permutations, the time-series behavior of individual stream components and the overall composed system, highlighting performance metrics in both, quantifying composability and modular effects, and incorporating learnability into the simulation of multimodal streams. The primary motivation of this work is to enhance the degree of fitting within formal simulation frameworks and to enable adaptive, learnable distribution modeling in multimodal settings that combine synthetic and real input data. We demonstrate the resulting errors and degradation when replacing real sensor data with synthetic inputs at different dropping probabilities. Full article
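A toy discrete-event sketch of the synchronization problem the abstract formalizes: two modalities emit the same message sequence with different latency distributions, and a barrier synchronizer releases message i only once both copies have arrived. The rates and distributions below are invented, not taken from the paper:

```python
import heapq
import random

def simulate(n_msgs=200, seed=1):
    """Event-driven simulation of two heterogeneous modalities feeding a
    barrier synchronizer; returns the mean lag between emission and the
    moment both modalities have delivered a message."""
    random.seed(seed)
    events = []                                 # (arrival_time, modality, seq)
    for seq in range(n_msgs):
        t_emit = seq * 0.1                      # 10 Hz source
        heapq.heappush(events, (t_emit + random.expovariate(50.0), "video", seq))
        heapq.heappush(events, (t_emit + random.expovariate(10.0), "audio", seq))
    arrived, sync_lag = {}, []
    while events:
        t, mod, seq = heapq.heappop(events)
        arrived.setdefault(seq, {})[mod] = t
        if len(arrived[seq]) == 2:              # both modalities present
            sync_lag.append(t - seq * 0.1)      # lag behind emission time
    return sum(sync_lag) / len(sync_lag)

mean_lag = simulate()
```

Swapping the exponential delays for Markov-modulated ones would mirror the paper's variant for timing uncertainty within a single modality.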

14 pages, 1192 KB  
Article
Global Variations in Surgical Techniques and Postoperative Care for Radial Forearm Free Flap (RFFF) in Head & Neck Surgery: A Cross-Sectional International Survey
by Elena Russo, Andrea Costantino, Giannicola Iannella, Filippo Marchi, Antonio Greco, Luca Calabrese, Antonella Polimeni, Remo Accorona, Armando De Virgilio and RFFFSurv Collaborative
J. Clin. Med. 2025, 14(22), 8023; https://doi.org/10.3390/jcm14228023 - 12 Nov 2025
Viewed by 458
Abstract
Objective: This cross-sectional survey aimed to comprehensively gather data on radial forearm free flap (RFFF) utilization and practices in head and neck reconstructive surgery. Methods: An online questionnaire was organized into seven sections: demographics, surgeon experience, harvesting techniques, microsurgical considerations, postoperative care, flap monitoring, and outcomes. It was distributed by email to 216 head and neck reconstructive surgeons who attended the International Federation of Head and Neck Oncologic Societies (IFHNOS) congress in Rome (21–25 June 2023) using the congress mailing list. Responses were collected between 5 February and 25 March 2024 from 54 surgeons (25% response rate), representing 15 countries across Europe, Asia, the Americas, and Oceania, underscoring the international scope of the survey. The questionnaire was not formally piloted or validated. Missing data were managed on a per-question basis. Descriptive statistics were used, and 95% confidence intervals (CIs) were calculated for key surgical outcomes to indicate estimate precision. Associations between categorical variables were analyzed using Pearson’s χ2 test with Cramér’s V as an effect size, and relationships between continuous variables were examined using Spearman’s rank correlation (ρ) with 95% confidence intervals (CIs). Given the exploratory design and limited sample size, no correction for multiple comparisons was applied, and the risk of both Type I and Type II errors was acknowledged. Results: Variations were observed in harvesting techniques, microsurgical preferences, and postoperative care protocols. Most surgeons initiated flap harvesting concurrently with tumor resection, primarily preserving superficial sensory nerves. Regarding venous outflow, 50% of respondents preferred the cephalic vein, 19% used comitant veins, and 29% utilized both systems when possible.
Perioperative antibiotic use was standard practice, though anticoagulant preferences and flap monitoring methods varied. The study achieved a high success rate for RFFF procedures, exceeding 95%, with venous thrombosis identified as the main cause of flap failure. No significant correlations were found between flap failure rate and training method (p = 0.21), specialty (p = 0.37), annual number of RFFF procedures (p = 0.89), surgeon age (p = 0.42), or hospital type (p = 0.48). Effect sizes were small to moderate, indicating weak or negligible associations. Similarly, perioperative factors such as anticoagulant use (p = 0.84), preoperative antibiotics (p = 0.42), surgical instruments (p = 0.61), suture techniques (p = 0.51), and donor vein selection (p = 0.20) showed no statistically significant associations with flap loss. Patient satisfaction assessments were inconsistent, with only 39% of surgeons routinely performing them. Conclusions: The study provides valuable insights into current RFFF practices and outcomes across an international cohort of head and neck surgeons, highlighting patterns and variability in techniques, perioperative care, and monitoring strategies. Full article
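The χ² test with Cramér's V used for the categorical associations can be computed directly from a contingency table; a small sketch on a made-up 2×2 table (the counts are hypothetical, not survey data):

```python
def cramers_v(table):
    """Pearson chi-square statistic and Cramer's V effect size for an
    r x c contingency table given as a list of lists of counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = sum((obs - row_tot[i] * col_tot[j] / n) ** 2
               / (row_tot[i] * col_tot[j] / n)
               for i, row in enumerate(table) for j, obs in enumerate(row))
    k = min(len(table), len(table[0])) - 1
    return chi2, (chi2 / (n * k)) ** 0.5

# hypothetical 2x2 table: flap survival vs. donor-vein choice
chi2, v = cramers_v([[20, 5], [18, 6]])
```

A V this far below 0.1 would be read as a negligible association, consistent with the null findings the abstract reports.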

23 pages, 3754 KB  
Article
Target Tracking with Adaptive Morphological Correlation and Neural Predictive Modeling
by Victor H. Diaz-Ramirez and Leopoldo N. Gaxiola-Sanchez
Appl. Sci. 2025, 15(21), 11406; https://doi.org/10.3390/app152111406 - 24 Oct 2025
Viewed by 376
Abstract
A tracking method based on adaptive morphological correlation and neural predictive models is presented. The morphological correlation filters are optimized according to the aggregated binary dissimilarity-to-matching ratio criterion and are adapted online to appearance variations of the target across frames. Morphological correlation filtering enables reliable detection and accurate localization of the target in the scene. Furthermore, trained neural models predict the target’s expected location in subsequent frames and estimate its bounding box from the correlation response. Effective stages for drift correction and tracker reinitialization are also proposed. Performance evaluation results for the proposed tracking method on four image datasets are presented and discussed using objective measures of detection rate (DR), location accuracy in terms of normalized location error (NLE), and region-of-support estimation in terms of intersection over union (IoU). The results indicate a maximum average performance of 90.1% in DR, 0.754 in IoU, and 0.004 in NLE on a single dataset, and 83.9%, 0.694, and 0.015, respectively, across all four datasets. In addition, the results obtained with the proposed tracking method are compared with those of five widely used correlation filter-based trackers. The results show that the suggested morphological-correlation filtering, combined with trained neural models, generalizes well across diverse tracking conditions. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
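The two localization measures reported, intersection over union and normalized location error, are easy to state concretely. A minimal sketch assuming (x, y, w, h) boxes and normalization by the frame diagonal (the paper's exact normalization may differ):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

def nle(pred, truth, diag):
    """Location error between predicted and true centers, normalized by
    the frame diagonal (an assumed choice of normalizer)."""
    dx, dy = pred[0] - truth[0], pred[1] - truth[1]
    return (dx * dx + dy * dy) ** 0.5 / diag
```

Averaging these per frame over a sequence yields tracker-level scores like the 0.754 IoU and 0.004 NLE quoted above.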

26 pages, 1386 KB  
Review
A Review of Large Language Models for Automated Test Case Generation
by Arda Celik and Qusay H. Mahmoud
Mach. Learn. Knowl. Extr. 2025, 7(3), 97; https://doi.org/10.3390/make7030097 - 9 Sep 2025
Cited by 1 | Viewed by 8693
Abstract
Automated test case generation aims to improve software testing by reducing the manual effort required to create test cases. Recent advancements in large language models (LLMs), with their ability to understand natural language and generate code, have opened new opportunities to enhance this process. In this review, the focus is on the use of LLMs in test case generation to identify the effectiveness of the proposed methods compared with existing tools and potential directions for future research. A literature search was conducted using online resources, filtering the studies based on the defined inclusion and exclusion criteria. This paper presents the findings from the selected studies according to the three research questions and further categorizes the findings based on the common themes. These findings highlight the opportunities and challenges associated with the use of LLMs in this domain. Although improvements were observed in metrics such as test coverage, usability, and correctness, limitations such as inconsistent performance and compilation errors were highlighted. This review provides a state-of-the-art overview of LLM-based test case generation, emphasizing the potential of LLMs to improve automated testing while identifying areas for further advancements. Full article

20 pages, 5495 KB  
Article
An Online Correction Method for System Errors in the Pipe Jacking Inertial Guidance System
by Yutong Zu, Lu Wang, Zheng Zhou, Da Gong, Yuanbiao Hu and Gansheng Yang
Mathematics 2025, 13(17), 2764; https://doi.org/10.3390/math13172764 - 28 Aug 2025
Viewed by 630
Abstract
The pipe-jacking inertial guidance method is a key technology for solving the guidance problems of complex pipe-jacking projects, such as long distances and curves. However, because its guidance information is obtained by gyroscope integration, system errors accumulate over time and degrade guidance accuracy. To address this issue, this study proposes an intelligent online system error correction scheme based on single-axis rotation and data backtracking. The method enhances system observability by actively exciting the sensor states and introducing data reuse technology. A Bayesian optimization algorithm is then incorporated to construct a multi-objective function. The algorithm autonomously searches for the optimal values of three key control parameters, thereby constructing an optimal correction strategy. The results show that inclination accuracy improves by 99.36%, tool face accuracy by 94.05%, and azimuth accuracy by 94.42%. Comparison of different correction schemes shows that the proposed method performs better in estimating gyro bias. In summary, the proposed method uses single-axis rotation and data backtracking and can effectively correct system errors in inertial navigation. It offers practical engineering value and provides a technical foundation for high-accuracy navigation in tunnels, pipe jacking, and other complex tasks with low-cost inertial systems. Full article
(This article belongs to the Section E: Applied Mathematics)
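A toy illustration of why gyroscope integration accumulates system error, and why re-integrating logged data (the backtracking idea) with a bias estimate removes it. The bias value and sample rate here are invented, and the bias is estimated from a stationary interval rather than by the paper's single-axis-rotation scheme:

```python
def integrate(rates, dt, bias=0.0):
    """Integrate gyro angular-rate samples into a heading angle,
    subtracting a bias estimate from every sample."""
    heading = 0.0
    for w in rates:
        heading += (w - bias) * dt
    return heading

# hypothetical stationary log: true rate 0, constant bias 0.01 deg/s
dt, n = 0.1, 6000                     # 10 minutes of data at 10 Hz
rates = [0.01] * n
drift = integrate(rates, dt)          # uncorrected: ~6 deg of drift
bias_hat = sum(rates) / len(rates)    # calibrate while known stationary
corrected = integrate(rates, dt, bias=bias_hat)   # backtracked re-integration
```

The linear growth of the uncorrected heading is exactly the time-accumulating error the abstract refers to.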

33 pages, 3525 KB  
Article
Investigation into the Performance Enhancement and Configuration Paradigm of Partially Integrated RL-MPC System
by Wanqi Guo and Shigeyuki Tateno
Mathematics 2025, 13(15), 2341; https://doi.org/10.3390/math13152341 - 22 Jul 2025
Viewed by 895
Abstract
This paper develops an improved partially integrated reinforcement learning-model predictive control (RL-MPC) system by introducing the Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithms. The framework differs from traditional ones, which completely substitute the MPC prediction model; instead, an RL agent refines predictions through feedback correction and thus maintains interpretability while improving robustness. Most importantly, the study details two configuration paradigms, decoupled (offline policy application) and coupled (online policy update), and tests their effectiveness in trajectory tracking tasks in both simulation and real-life experiments. A decoupled framework based on TD3 showed significant improvements in control performance over the other implemented paradigms, especially in terms of the Integral of Time-weighted Absolute Error (ITAE) and mean absolute error (MAE). This work also illustrates the advantages of partial integration in balancing adaptability and stability, making it suitable for real-time applications in robotics. Full article
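The partial-integration idea, keeping the nominal prediction model and letting a learned agent supply only a residual correction, can be sketched minimally. Here a fixed linear correction stands in for the offline-trained (decoupled) RL policy, and the plant/model mismatch is invented:

```python
def mae(pred, truth):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

# hypothetical plant x' = 0.9*x + u, while the nominal model believes 0.8*x + u
def plant(x, u):     return 0.9 * x + u
def nominal(x, u):   return 0.8 * x + u

# stand-in for an offline-learned residual policy (decoupled paradigm):
# it corrects exactly the model-mismatch term 0.1*x
def corrected(x, u): return nominal(x, u) + 0.1 * x

xs, us = [1.0], [0.5] * 20
for u in us:
    xs.append(plant(xs[-1], u))
raw  = [nominal(x, u) for x, u in zip(xs[:-1], us)]   # model alone
fix  = [corrected(x, u) for x, u in zip(xs[:-1], us)] # model + residual
true = xs[1:]
```

The prediction model stays interpretable; the learned term only closes the residual gap, which is the property the abstract emphasizes.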

15 pages, 2325 KB  
Article
Research on Quantitative Analysis Method of Infrared Spectroscopy for Coal Mine Gases
by Feng Zhang, Yuchen Zhu, Lin Li, Suping Zhao, Xiaoyan Zhang and Chaobo Chen
Molecules 2025, 30(14), 3040; https://doi.org/10.3390/molecules30143040 - 20 Jul 2025
Viewed by 976
Abstract
Accurate and reliable detection of coal mine gases is key to ensuring safety in coal mine production. Fourier Transform Infrared (FTIR) spectroscopy, due to its high sensitivity, non-destructive nature, and potential for online monitoring, has emerged as a key technique in gas detection. However, the complex underground environment often causes baseline drift in IR spectra. Furthermore, the variety of gas species and uneven distribution of concentrations make it difficult to achieve precise and reliable online analysis using existing quantitative methods. This paper aims to perform a quantitative analysis of coal mine gases by FTIR. It utilized the adaptive smoothness parameter penalized least squares method to correct the drifted spectra. Subsequently, based on the infrared spectral distribution characteristics of coal mine gases, they could be classified into gases with mutually distinct absorption peaks and gases with overlapping absorption peaks. For gases with distinct absorption peaks, three spectral lines, including the absorption peak and its adjacent troughs, were selected for quantitative analysis. Spline fitting, polynomial fitting, and other curve fitting methods are used to establish a functional relationship between characteristic parameters and gas concentration. For gases with overlapping absorption peaks, a wavelength selection method based on the impact values of variables and population analysis was applied to select variables from the spectral data. The selected variables were then used as input features for building a model with a backpropagation (BP) neural network. Finally, the proposed method was validated using standard gases. Experimental results show detection limits of 0.5 ppm for CH4, 1 ppm for C2H6, 0.5 ppm for C3H8, 0.5 ppm for n-C4H10, 0.5 ppm for i-C4H10, 0.5 ppm for C2H4, 0.2 ppm for C2H2, 0.5 ppm for C3H6, 1 ppm for CO, 0.5 ppm for CO2, and 0.1 ppm for SF6, with quantification limits below 10 ppm for all gases.
Experimental results show that the absolute error is less than 0.3% of the full scale (F.S.) and the relative error is within 10%. These results demonstrate that the proposed infrared spectral quantitative analysis method can effectively analyze mine gases and achieve good predictive performance. Full article
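For the gases with distinct absorption peaks, curve fitting maps a peak-based characteristic parameter to concentration. A minimal least-squares sketch with invented CH4 calibration points (a real calibration would use measured spectra and possibly spline or higher-order fits, as the abstract notes):

```python
def linfit(xs, ys):
    """Least-squares line y = a*x + b relating peak absorbance to gas
    concentration; Beer-Lambert is near-linear at low concentration."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# hypothetical CH4 calibration: peak absorbance vs. concentration (ppm)
absorb = [0.00, 0.02, 0.04, 0.06, 0.08]
conc = [0.0, 10.0, 20.0, 30.0, 40.0]
a, b = linfit(absorb, conc)
```

Once fitted, an unknown sample's peak absorbance is converted to concentration as a * absorbance + b.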

16 pages, 1919 KB  
Review
Review of Utilisation Methods of Multi-Source Precipitation Products for Flood Forecasting in Areas with Insufficient Rainfall Gauges
by Yanhong Dou, Ke Shi, Hongwei Cai, Min Xie and Ronghua Liu
Atmosphere 2025, 16(7), 835; https://doi.org/10.3390/atmos16070835 - 9 Jul 2025
Viewed by 936
Abstract
The continuous release of global precipitation products offers a stable data source for flood forecasting in areas without rainfall gauges. However, due to constraints of forecast timeliness, only no/short-lag precipitation products can be utilised for flood forecasting, but these products are prone to significant errors. Therefore, the keys to flood forecasting in areas lacking rainfall gauges are selecting appropriate precipitation products, improving the accuracy of precipitation products, and reducing the errors of precipitation products by combination with hydrology models. This paper first presents the current no/short-lag precipitation products that are continuously updated online and for which the download of long-series historical data is supported. On this basis, it reviews the utilisation methods of multi-source precipitation products for flood forecasting in areas with insufficient rainfall gauges from three perspectives: methods for precipitation product performance evaluation, multi-source precipitation fusion methods, and methods for coupling precipitation products with hydrological models. Finally, future research priorities are summarized: (i) to construct a quantitative evaluation system that can take into account both the accuracy and complementarity of precipitation products; (ii) to focus on the improvement of the areal precipitation fields interpolated by gauge-based precipitation in multi-source precipitation fusion; (iii) to couple real-time correction of flood forecasts and multi-source precipitation; and (iv) to enhance global sharing and utilization of rain gauge–radar data for improving the accuracy of satellite-based precipitation products. Full article
(This article belongs to the Section Meteorology)

22 pages, 8780 KB  
Article
PCA Weight Determination-Based InSAR Baseline Optimization Method: A Case Study of the HaiKou Phosphate Mining Area in Kunming, Yunnan Province, China
by Weimeng Xu, Jingchun Zhou, Jinliang Wang, Huihui Mei, Xianjun Ou and Baixuan Li
Remote Sens. 2025, 17(13), 2163; https://doi.org/10.3390/rs17132163 - 24 Jun 2025
Cited by 1 | Viewed by 929
Abstract
In InSAR processing, optimizing baselines by selecting appropriate interferometric pairs is crucial for ensuring interferogram quality and improving monitoring accuracy. However, in multi-temporal InSAR processing, the quality of interferometric pairs is constrained by spatiotemporal baseline parameters and surface scattering characteristics. Traditional selection methods, such as average-coherence thresholding, consider only a single factor and do not account for interactions among multiple factors. This study introduces a principal component analysis (PCA) method that comprehensively analyzes four factors (temporal baseline, spatial baseline, NDVI difference, and coherence) and sets their weights objectively to achieve precise selection of interferometric pairs. In addition, the GACOS (Generic Atmospheric Correction Online Service) atmospheric correction product is applied to further enhance data quality. Taking the Haikou phosphate mine area in Kunming, Yunnan, as the study area, surface deformation was extracted using the SBAS-InSAR technique and the spatiotemporal characteristics of subsidence were analyzed. The results show the following: (1) compared with other methods, the PCA-based interferometric pair optimization significantly improves selection performance: the minimum value decreases to 0.248 rad, while the mean and standard deviation are reduced to 1.589 rad and 0.797 rad, respectively, effectively suppressing error fluctuations and enhancing the stability of the inversion; (2) comparative analysis of the effective pixel ratio and the standard deviation of deformation rates, together with an evaluation of the deformation-rate probability density function (PDF), shows that the PCA method maintains a high effective pixel ratio while enhancing sensitivity to surface deformation changes, indicating its advantage for deformation monitoring in complex terrain; (3) combined analysis of spatial autocorrelation (Moran’s I) and spatial correlation coefficients (Pearson and Spearman) verified the method’s advantages in preserving spatial structure and result consistency, supporting higher accuracy and stability in monitoring complex surface deformation. In summary, the PCA-based baseline optimization method significantly improves the accuracy of SBAS-InSAR surface subsidence monitoring, demonstrating its reliability and stability in complex terrain and providing solid technical support for dynamic monitoring of surface subsidence in mining areas. Full article
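One common way to turn PCA output into factor weights for ranking candidate pairs is to combine the absolute loadings, scaled by each component's explained variance. The sketch below shows that recipe on a random factor matrix; the paper's exact weighting formula may differ, and all names are illustrative.

```python
import numpy as np

def pca_pair_scores(factors):
    """Score candidate interferometric pairs from a factor matrix via PCA.

    factors: (n_pairs, 4) array of [temporal baseline, spatial baseline,
             NDVI difference, 1 - coherence], with columns oriented so
             that smaller values indicate a better pair.
    """
    # Standardize each factor so scales are comparable
    X = (factors - factors.mean(axis=0)) / factors.std(axis=0)
    # PCA via SVD of the standardized matrix
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    explained = S**2 / np.sum(S**2)          # variance explained per component
    # Absolute loadings weighted by explained variance, normalised to sum to 1
    weights = np.abs(Vt.T) @ explained
    weights /= weights.sum()
    # Lower composite score = better pair
    return X @ weights, weights

# Illustrative run on random factor data for 50 candidate pairs
rng = np.random.default_rng(1)
factors = rng.random((50, 4))
scores, weights = pca_pair_scores(factors)
best = np.argsort(scores)[:10]               # keep the 10 best-ranked pairs
```

The point of the PCA step is that the four factors are correlated (e.g., long temporal baselines tend to coincide with low coherence), so weights derived from the components avoid double-counting a single underlying effect.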
20 pages, 8696 KB  
Article
Dynamic Error Modeling and Predictive Compensation for Direct-Drive Turntables Based on CEEMDAN-TPE-LightGBM-APC Algorithm
by Manzhi Yang, Hao Ren, Shijia Liu, Bin Feng, Juan Wei, Hongyu Ge and Bin Zhang
Micromachines 2025, 16(7), 731; https://doi.org/10.3390/mi16070731 - 22 Jun 2025
Viewed by 829
Abstract
The direct-drive turntable serves as the core actuator in high-precision macro-micro drive systems, where its positioning accuracy fundamentally determines overall system performance. Accurate error prediction and compensation technology represents a critical prerequisite for continuous error compensation and predictive control in direct-drive turntables, making research on positioning-error modeling, prediction, and compensation vitally important. This study presents a dynamic continuous error compensation model for direct-drive turntables, based on an analysis of positioning-error mechanisms and a “decomposition-modeling-integration-correction” strategy, which offers high flexibility, adaptability, and online prediction-correction capability. The methodology comprises four stages: Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) of historical error data; component-specific prediction models built with Tree-structured Parzen Estimator (TPE)-optimized Light Gradient Boosting Machine (LightGBM) algorithms for each Intrinsic Mode Function (IMF); integration of the component predictions into initial values; and an Adaptive Prediction Correction (APC) module that produces the final predictions. Validation results demonstrate substantial performance improvements, with compensated positioning-error ranges reduced from [−31.83″, 41.59″] to [−15.09″, 12.07″] (test set) and from [−22.50″, 9.15″] to [−8.15″, 8.56″] (extrapolation test set), corresponding to standard-deviation reductions of 71.2% and 61.6%, respectively. These findings establish the method’s effectiveness in significantly enhancing accuracy while maintaining prediction stability and operational efficiency, underscoring its theoretical and practical value for error compensation in precision mechanical systems. Full article
(This article belongs to the Special Issue Advanced Manufacturing Technology and Systems, 3rd Edition)
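The overall “decompose, model each component, integrate” shape of the pipeline can be sketched compactly. In this minimal sketch a two-band split stands in for CEEMDAN and an ordinary least-squares lag model stands in for the TPE-tuned LightGBM learners; the APC correction stage is omitted, and the data are synthetic.

```python
import numpy as np

def decompose(series, window=10):
    """Two-component split (smoothed trend + residual). A crude stand-in
    for CEEMDAN, which would yield several IMFs; the pipeline shape is
    the same either way."""
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    return [series - trend, trend]

def fit_component(component, n_lags=5):
    """Fit a linear autoregressive model to one component and return its
    in-sample predictions (each value predicted from its n_lags
    predecessors). OLS stands in for a tuned LightGBM learner here."""
    X = np.array([component[i:i + n_lags]
                  for i in range(len(component) - n_lags)])
    Xa = np.column_stack([X, np.ones(len(X))])   # add an intercept column
    beta, *_ = np.linalg.lstsq(Xa, component[n_lags:], rcond=None)
    return Xa @ beta

def predict_error(error_series, n_lags=5):
    """Decomposition-modeling-integration: one model per component, then
    the component predictions are summed."""
    return sum(fit_component(c, n_lags) for c in decompose(error_series))

# Synthetic positioning-error series: periodic term plus noise
rng = np.random.default_rng(2)
t = np.linspace(0, 8 * np.pi, 300)
error = np.sin(t) + 0.2 * rng.normal(size=300)
pred = predict_error(error)
residual = error[5:] - pred          # error remaining after compensation
```

Modeling each band separately lets the smooth, predictable part of the error be fitted tightly while the noisy band contributes little, which is the motivation for decomposing before modeling.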