Search Results (1,797)

Search Parameters:
Keywords = time interpolator

31 pages, 2898 KB  
Article
TCN-LSTM-AM Short-Term Photovoltaic Power Forecasting Model Based on Improved Feature Selection and APO
by Ning Ye, Chaoyang Zhi, Yongchao Yu, Sen Lin and Fengxian Liu
Sensors 2025, 25(24), 7607; https://doi.org/10.3390/s25247607 - 15 Dec 2025
Abstract
The inherent volatility and intermittency of solar power generation pose significant challenges to the stability of power systems. Consequently, high-precision power forecasting is critical for mitigating these impacts and ensuring reliable operation. This paper proposes a framework for photovoltaic (PV) power forecasting that integrates refined feature engineering with deep learning models in a two-stage approach. In the feature engineering stage, a KNN-PCC-SHAP method is constructed. This method is initiated with the KNN algorithm, which is used to identify anomalous samples and perform data interpolation. PCC is then used to screen linearly correlated features. Finally, the SHAP value is used to quantitatively analyze the nonlinear contributions and interaction effects of each feature, thereby forming an optimal feature subset with higher information density. In the modeling stage, a TCN-LSTM-AM combined forecasting model is constructed to collaboratively capture the local details, long-term dependencies, and key timing features of the PV power sequence. The APO algorithm is utilized for the adaptive optimization of the crucial configuration parameters within the model. Experiments based on real PV power plants and public data show that the framework outperforms multiple comparison models in terms of key indicators such as RMSE (2.1098 kW), MAE (1.1073 kW), and R² (0.9775), verifying that the deep integration of refined feature engineering and deep learning models is an effective way to improve the accuracy of PV power prediction.
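
As a rough illustration of the KNN-PCC-SHAP stage described in this abstract, the sketch below flags anomalous samples with a nearest-neighbour distance score, fills them by interpolation, and screens features by Pearson correlation. It is not the authors' code: the thresholds, the neighbour count, and the 'power' column name are assumptions, and the SHAP ranking is only indicated in a comment.

```python
# Illustrative sketch (not the authors' code) of the first two KNN-PCC steps.
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

def knn_clean(df: pd.DataFrame, k: int = 5, z_thresh: float = 3.0) -> pd.DataFrame:
    """Flag samples whose mean distance to their k nearest neighbours is unusually
    large, blank them out, and fill the gaps by linear interpolation."""
    X = df.to_numpy(dtype=float)
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    score = dist[:, 1:].mean(axis=1)              # column 0 is the self-distance
    outlier = np.abs(score - score.mean()) > z_thresh * score.std()
    cleaned = df.copy()
    cleaned.loc[outlier] = np.nan
    return cleaned.interpolate(limit_direction="both")

def pcc_screen(df: pd.DataFrame, target: str, min_abs_r: float = 0.2) -> list:
    """Keep features whose absolute Pearson correlation with the target passes a threshold."""
    r = df.corr()[target].drop(target)            # assumes all-numeric columns
    return r[r.abs() >= min_abs_r].index.tolist()

# cleaned = knn_clean(raw_df)                        # raw_df: weather features + 'power' (assumed name)
# candidates = pcc_screen(cleaned, target="power")
# A SHAP analysis (e.g. shap.TreeExplainer on a tree model) would then rank the
# remaining candidates before training the TCN-LSTM-AM forecaster.
```
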
20 pages, 8033 KB  
Article
Laser Pulse-Driven Multi-Sensor Time Synchronization Method for LiDAR Systems
by Jiazhi Yang, Xingguo Han, Wenzhong Deng, Hong Jin and Biao Zhang
Sensors 2025, 25(24), 7555; https://doi.org/10.3390/s25247555 - 12 Dec 2025
Abstract
Multi-sensor systems require precise time synchronization for accurate data fusion. However, currently prevalent software time synchronization methods often rely on clocks provided by the Global Navigation Satellite System (GNSS), which may not offer high accuracy and can be easily affected by issues with GNSS signals. To address this limitation, this study introduces a novel laser pulse-driven time synchronization (LPTS) method in our custom-developed Light Detection and Ranging (LiDAR) system. The LPTS method uses electrical pulses, synchronized with laser beams, as the time synchronization source, driving the Micro-Controller Unit (MCU) timer within the control system to count with a timing accuracy of 0.1 μs and to timestamp the data from the Positioning and Orientation System (POS) unit or laser scanner unit. By employing interpolation techniques, the POS and laser scanner data are precisely synchronized with laser pulses, ensuring strict correlation through their timestamps. In this article, the working principles and experimental methods of both traditional time synchronization (TRTS) and LPTS methods are discussed. We have implemented both methods on experimental platforms, and the results demonstrate that the LPTS method circumvents the dependency on external time references for inter-sensor alignment and minimizes the impact of laser jitter stemming from third-party time references, without requiring additional hardware. Moreover, it elevates the internal time synchronization resolution to 0.1 μs and significantly improves relative timing precision.
(This article belongs to the Section Radar Sensors)
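
The interpolation step this abstract refers to, resampling POS samples onto laser-pulse timestamps, can be sketched as follows; the variable names, rates, and channel layout are assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the interpolation step only: POS (position/attitude)
# samples are resampled onto the laser-pulse timestamps so that every return
# can be paired with a pose. Variable names and units are assumptions.
import numpy as np

def align_pos_to_pulses(pulse_t, pos_t, pos_values):
    """Linearly interpolate each POS channel onto the pulse timestamps.

    pulse_t    : (M,) pulse timestamps in seconds
    pos_t      : (N,) POS timestamps in seconds
    pos_values : (N, C) POS channels (e.g. lat, lon, alt, roll, pitch, yaw)
    returns    : (M, C) POS values at the pulse times
    """
    pos_values = np.asarray(pos_values, dtype=float)
    return np.column_stack(
        [np.interp(pulse_t, pos_t, pos_values[:, c]) for c in range(pos_values.shape[1])]
    )

# Example with synthetic data (assumed rates):
# pulse_t = np.arange(0, 1, 1e-4)            # 10 kHz pulse rate
# pos_t   = np.arange(0, 1, 5e-3)            # 200 Hz POS output
# pose    = align_pos_to_pulses(pulse_t, pos_t, np.random.rand(pos_t.size, 6))
```
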
19 pages, 2349 KB  
Article
Enhancing Extrapolation of Buckley–Leverett Solutions with Physics-Informed and Transfer-Learned Fourier Neural Operators
by Yangnan Shangguan, Junhong Jia, Ke Wu, Xianlin Ma, Rong Zhong and Zhenzihao Zhang
Appl. Sci. 2025, 15(24), 13005; https://doi.org/10.3390/app152413005 - 10 Dec 2025
Abstract
Accurate modeling of multiphase flow in porous media remains challenging due to the nonlinear transport and sharp displacement fronts described by the Buckley–Leverett (B-L) equation. Although Fourier Neural Operators (FNOs) have recently emerged as powerful surrogates for parametric partial differential equations, they exhibit limited robustness when extrapolating beyond the training regime, particularly for shock-dominated fractional flows. This study aims to enhance the extrapolative performance of FNOs for one-dimensional B-L displacement. Analytical solutions were generated using Welge’s graphical method, and datasets were constructed across a range of mobility ratios. A baseline FNO was trained to predict water saturation profiles and evaluated under both interpolation and extrapolation conditions. While the standard FNO accurately reconstructs saturation profiles within the training window, it misestimates shock positions and saturation jumps when extended to longer times or higher mobility ratios. To address these limitations, we develop Physics-Informed FNOs (PI-FNOs), which embed PDE residuals and boundary constraints, and Transfer-Learned FNOs (TL-FNOs), which adapt pretrained operators to new regimes using limited data. Comparative analyses show that both approaches markedly improve extrapolation accuracy, with PI-FNOs achieving the most consistent and physically reliable performance. These findings demonstrate the potential of combining physics constraints and knowledge transfer for robust operator learning in multiphase flow systems.
(This article belongs to the Special Issue Artificial Intelligence (AI) for Energy Systems)
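
For context on the B-L setup, the sketch below shows a Corey-type fractional-flow curve and the Welge tangent construction used to locate the shock-front saturation; the exponents, viscosities, and saturation endpoints are assumed example values, and the paper's datasets and FNO code are not reproduced.

```python
# Illustrative Buckley-Leverett building blocks (assumed Corey parameters).
import numpy as np

def fractional_flow(Sw, mu_w=1.0, mu_o=5.0, Swc=0.1, Sor=0.1, n=2.0):
    """Corey-type water fractional flow f_w(Sw)."""
    Se = np.clip((Sw - Swc) / (1.0 - Swc - Sor), 1e-9, 1.0)
    krw, kro = Se**n, (1.0 - Se)**n
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

def shock_saturation(Swc=0.1, **corey):
    """Welge tangent: the shock saturation maximizes the secant slope
    f_w(Sw) / (Sw - Swc) drawn from the connate-water point."""
    Sw = np.linspace(Swc + 1e-4, 1.0 - 1e-4, 20001)
    f = fractional_flow(Sw, Swc=Swc, **corey)
    return Sw[np.argmax(f / (Sw - Swc))]

# print(shock_saturation())   # front saturation for the assumed mobility ratio
```
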

31 pages, 9303 KB  
Article
Automatic Quadrotor Dispatch Missions Based on Air-Writing Gesture Recognition
by Pu-Sheng Tsai, Ter-Feng Wu and Yen-Chun Wang
Processes 2025, 13(12), 3984; https://doi.org/10.3390/pr13123984 - 9 Dec 2025
Abstract
This study develops an automatic dispatch system for quadrotor UAVs that integrates air-writing gesture recognition with a graphical user interface (GUI). The DJI RoboMaster quadrotor UAV (DJI, Shenzhen, China) was employed as the experimental platform, combined with an ESP32 microcontroller (Espressif Systems, Shanghai, China) and the RoboMaster SDK (version 3.0). On the Python (version 3.12.7) platform, a GUI was implemented using Tkinter (version 8.6), allowing users to input addresses or landmarks, which were then automatically converted into geographic coordinates and imported into Google Maps for route planning. The generated flight commands were transmitted to the UAV via a UDP socket, enabling remote autonomous flight. For gesture recognition, a Raspberry Pi integrated with the MediaPipe Hands module was used to capture 16 types of air-written flight commands in real time through a camera. The training samples were categorized into one-dimensional coordinates and two-dimensional images. In the one-dimensional case, X/Y axis coordinates were concatenated after data augmentation, interpolation, and normalization. In the two-dimensional case, three types of images were generated, namely font trajectory plots (T-plots), coordinate-axis plots (XY-plots), and composite plots combining the two (XYT-plots). To evaluate classification performance, several machine learning and deep learning architectures were employed, including a multi-layer perceptron (MLP), support vector machine (SVM), one-dimensional convolutional neural network (1D-CNN), and two-dimensional convolutional neural network (2D-CNN). The results demonstrated effective recognition accuracy across different models and sample formats, verifying the feasibility of the proposed air-writing trajectory framework for non-contact gesture-based UAV control. Furthermore, by combining gesture recognition with a GUI-based map planning interface, the system enhances the intuitiveness and convenience of UAV operation. Future extensions, such as incorporating aerial image object recognition, could extend the framework’s applications to scenarios including forest disaster management, vehicle license plate recognition, and air pollution monitoring.
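
The one-dimensional sample preparation described above can be sketched as resampling a variable-length trajectory to a fixed number of points, normalizing each axis, and concatenating the X and Y sequences; the fixed length of 64 points is an assumption for illustration.

```python
# Illustrative 1-D sample preparation for an air-written trajectory.
import numpy as np

def trajectory_to_feature(xs, ys, n_points: int = 64) -> np.ndarray:
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    t_old = np.linspace(0.0, 1.0, xs.size)
    t_new = np.linspace(0.0, 1.0, n_points)
    x_r = np.interp(t_new, t_old, xs)          # resample (interpolate) to a fixed length
    y_r = np.interp(t_new, t_old, ys)
    # min-max normalize each axis to [0, 1]
    x_r = (x_r - x_r.min()) / (np.ptp(x_r) + 1e-9)
    y_r = (y_r - y_r.min()) / (np.ptp(y_r) + 1e-9)
    return np.concatenate([x_r, y_r])          # 1-D feature vector of length 2 * n_points

# feature = trajectory_to_feature(landmark_x_list, landmark_y_list)
# A vector of this kind is the sort of input an MLP/SVM/1D-CNN classifier could take.
```
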

20 pages, 5173 KB  
Article
LSTM-Based Interpolation of Single-Differential Ionospheric Delays for PPP-RTK Positioning
by Minghui Lyu, Genyou Liu, Run Wang, Shengjun Hu, Gongwei Xiao and Dong Lyu
Aerospace 2025, 12(12), 1094; https://doi.org/10.3390/aerospace12121094 - 9 Dec 2025
Abstract
The accurate and rapid estimation of ionospheric delays is essential for PPP-RTK positioning. While traditional spatial interpolation methods like Kriging rely solely on geographic correlations, they often fail to capture rapid temporal variations in the ionosphere. To overcome this limitation, this paper proposes a long short-term memory (LSTM)-based method for interpolating single-differenced ionospheric delays between satellites. The method leverages both spatial and short-term temporal correlations to generate accurate ionospheric corrections at user locations. The model uses a sliding window approach, taking the most recent 10 min of historical data as input to predict ionospheric delays at the current epoch. Experimental validation using data from a reference network in Australia—with average and maximum baseline lengths of 280 km and 650 km, respectively—demonstrates that the proposed LSTM method achieves a centimeter-level interpolation accuracy, with RMS errors between 0.06 m and 0.07 m under both quiet and geomagnetic storm conditions, significantly outperforming the Kriging method (0.27–0.44 m). In PPP-RTK, the LSTM model achieved a 3D positioning accuracy of 8.99 cm RMS during quiet periods, representing improvements of 51.9% and 28.8% over the No Constraint and Kriging methods, respectively. Under geomagnetic storm conditions, it maintained a 3D RMS of 24.54 cm—over 44% more accurate than other methods—and reduced the average time-to-first-fix (TTFF) to just 7.0 min, a 39.1% improvement. This study provides a novel approach for ionospheric spatial interpolation, demonstrating particular robustness even during geomagnetic storms.
(This article belongs to the Topic GNSS Measurement Technique in Aerial Navigation)
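
A minimal sketch of the sliding-window idea, written in PyTorch as an assumed stand-in for the authors' model: an LSTM consumes the most recent window of samples and outputs the delay at the current epoch. The window length, feature count, and hidden size are assumptions.

```python
# Assumed architecture, not the authors' model: LSTM over a sliding window.
import torch
import torch.nn as nn

class DelayLSTM(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the delay at the current epoch

# Sliding window of 20 epochs (~10 min at 30 s sampling) with 4 assumed input features:
model = DelayLSTM()
window = torch.randn(8, 20, 4)            # batch of 8 windows
pred = model(window)                      # (8, 1) interpolated delays
```
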

27 pages, 9422 KB  
Article
A 3D GeoHash-Based Geocoding Algorithm for Urban Three-Dimensional Objects
by Woochul Choi, Hongki Sung, Youngjae Jeon and Kyusoo Chong
Remote Sens. 2025, 17(24), 3964; https://doi.org/10.3390/rs17243964 - 8 Dec 2025
Abstract
The growing frequency of extreme weather, earthquakes, fires, and environmental hazards underscores the need for real-time monitoring and predictive management at the urban scale. Conventional three-dimensional spatial information systems, which rely on orthophotos and ground surveys, often suffer from computational inefficiency and data overload when processing large and heterogeneous datasets. To address these limitations, this study introduces a three-dimensional GeoHash-based geocoding algorithm designed for lightweight, real-time, and attribute-driven digital twin operations. The proposed method comprises five integrated steps: generation of 3D GeoHash grids using longitude, latitude, and altitude coordinates; integration with GIS-based urban 3D models; level optimization using the Shape Overlap Ratio (SOR) with a threshold of 0.90; representative object labeling through weighted volume ratios; and altitude correction using DEM interpolation. Validation using a testbed in Sillim-dong, Seoul (10.19 km²), demonstrated that the framework achieved approximately 9.8 times faster 3D modeling performance than conventional orthophoto-based methods, while maintaining complete object recognition accuracy. The results confirm that the 3D GeoHash framework provides a unified spatial key structure that enhances data interoperability across querying, visualization, and simulation. This approach offers a practical foundation for operational digital twins, supporting high-efficiency 3D mapping and predictive disaster management toward resilient and data-driven urban systems.
(This article belongs to the Special Issue Advances in Applications of Remote Sensing GIS and GNSS)
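
The core of a 3D GeoHash key, interleaving binary subdivisions of longitude, latitude, and altitude into a single code, can be sketched as follows; the altitude range and key length are assumptions, and the paper's SOR-based level optimization, labeling, and DEM correction steps are not reproduced.

```python
# Illustrative 3D GeoHash-style encoder (assumed altitude range and key length).
def geohash3d(lon: float, lat: float, alt: float,
              bits_per_axis: int = 15, alt_range=(-100.0, 1000.0)) -> str:
    ranges = [[-180.0, 180.0], [-90.0, 90.0], list(alt_range)]
    values = [lon, lat, alt]
    code = 0
    for _ in range(bits_per_axis):
        for axis in range(3):                      # lon, lat, alt bits interleaved
            lo, hi = ranges[axis]
            mid = (lo + hi) / 2.0
            bit = int(values[axis] >= mid)
            code = (code << 1) | bit
            ranges[axis] = [mid, hi] if bit else [lo, mid]
    width = -(-3 * bits_per_axis // 4)             # hex digits needed for 3 * bits_per_axis bits
    return format(code, f"0{width}x")

# key = geohash3d(126.93, 37.48, 35.0)             # nearby 3D points share key prefixes,
#                                                  # which is what makes the code a spatial index key
```
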

22 pages, 6114 KB  
Article
Remote Sensing Inversion of Full-Profile Topography Data for Coastal Wetlands Using Synergistic Multi-Platform Sensors from Space, Air, and Ground
by Jiabao Zhang, Jin Wang, Yu Dai, Yiyang Miao and Huan Li
Sensors 2025, 25(24), 7405; https://doi.org/10.3390/s25247405 - 5 Dec 2025
Abstract
This study proposes a “zonal inversion–fusion mosaicking” technical framework to address the challenge of acquiring continuous full-profile topography data in coastal wetland intertidal zones. The framework integrates and synergistically analyzes data from multi-platform sensors, including satellite, unmanned aerial vehicle (UAV), and ground-based instruments. Applied to the Min River Estuary wetland, this framework employs zone-specific optimization strategies: in the inundated zone, the topography was inverted using Landsat-9 OLI imagery and a Random Forest algorithm (R² = 0.79, RMSE = 2.08 m); in the bare flat zone, a linear model was developed based on Sentinel-2 time-series imagery using the inundation frequency method, and it achieved an accuracy of R² = 0.86 and RMSE = 0.34 m; and in the vegetated zone, high-precision topography was derived from UAV oblique photography with Kriging interpolation (RMSE = 0.10 m). The key innovation is the successful generation of a seamless full-profile digital elevation model (DEM) with an overall RMSE of 0.54 m through benchmark unification and precision-weighted fusion algorithms from the sensor data fusion perspective. This study demonstrates that the synergistic multi-sensor framework effectively overcomes the limitations of single-sensor observations, providing a reliable and generalizable integrated solution for the full-profile topographic monitoring of tidal flats, which offers crucial support for coastal wetland research and management.
(This article belongs to the Section Environmental Sensing)
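
The inundation-frequency idea used for the bare flat zone can be sketched as follows: a pixel flooded in a fraction f of the scenes lies near the tide level exceeded with probability f, and a linear model maps that proxy to surveyed elevations. The tide series and calibration samples are placeholders, not the study's data.

```python
# Hedged sketch of the inundation-frequency elevation proxy (placeholder inputs).
import numpy as np

def elevation_from_inundation(freq_map, tide_levels, ref_freq, ref_elev):
    """freq_map: per-pixel inundation frequency in [0, 1];
    tide_levels: tide height at each scene's acquisition time;
    ref_freq/ref_elev: calibration samples (e.g. surveyed points)."""
    q = 1.0 - np.clip(freq_map, 0.0, 1.0)
    proxy = np.quantile(tide_levels, q.ravel()).reshape(np.shape(freq_map))
    ref_proxy = np.quantile(tide_levels, 1.0 - np.clip(ref_freq, 0.0, 1.0))
    a, b = np.polyfit(ref_proxy, ref_elev, 1)          # linear calibration
    return a * proxy + b
```
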

17 pages, 1183 KB  
Article
High-Speed Scientific Computing Using Adaptive Spline Interpolation
by Daniel S. Soper
Big Data Cogn. Comput. 2025, 9(12), 308; https://doi.org/10.3390/bdcc9120308 - 2 Dec 2025
Abstract
The increasing scale of modern datasets has created a significant computational bottleneck for traditional scientific and statistical algorithms. To address this problem, the current paper describes and validates a high-performance method based on adaptive spline interpolation that can dramatically accelerate the calculation of foundational scientific and statistical functions. This is accomplished by constructing parsimonious spline models that approximate their target functions within a predefined, highly precise maximum error tolerance. The efficacy of the adaptive spline-based solutions was evaluated through benchmarking experiments that compared spline models against the widely used algorithms in the Python SciPy library for the normal, Student’s t, and chi-squared cumulative distribution functions. Across 30 trials of 10 million computations each, the adaptive spline models consistently achieved a maximum absolute error of no more than 1 × 10⁻⁸ while running between 7.5 and 87.4 times faster than their corresponding SciPy algorithms. All of these improvements in speed were observed to be statistically significant at p < 0.001. The findings establish that adaptive spline interpolation can be both highly accurate and much faster than traditional scientific and statistical algorithms, thereby offering a practical pathway to accelerate both the analysis of large datasets and the progress of scientific inquiry.
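
A minimal sketch of the underlying idea (not the paper's adaptive knot-selection algorithm): fit a cubic spline to the normal CDF on a fixed grid, check the worst-case error on a sample, and time it against scipy.stats.norm.cdf. The grid density here is hand-picked, whereas the paper refines knots adaptively to meet a prescribed error bound.

```python
# Minimal spline-approximation sketch with a hand-picked (non-adaptive) knot grid.
import time
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.stats import norm

knots = np.linspace(-8.0, 8.0, 4001)
spline = CubicSpline(knots, norm.cdf(knots))

x = np.random.uniform(-6.0, 6.0, 10_000_000)
sample = x[:100_000]
print("max abs error on sample:", np.max(np.abs(spline(sample) - norm.cdf(sample))))

t0 = time.perf_counter(); y_ref = norm.cdf(x); t1 = time.perf_counter()
t2 = time.perf_counter(); y_spl = spline(x);   t3 = time.perf_counter()
print(f"scipy.stats.norm.cdf: {t1 - t0:.3f} s   spline: {t3 - t2:.3f} s")
```
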

26 pages, 6809 KB  
Article
Intra-Urban CO2 Spatiotemporal Patterns and Driving Factors Using Multi-Source Data and AI Methods: A Case Study of Shanghai, China
by Leyi Pan, Qingyan Fu, Fan Yang, Yuchen Shao and Chao Liu
Sustainability 2025, 17(23), 10794; https://doi.org/10.3390/su172310794 - 2 Dec 2025
Abstract
Cities are major sources of anthropogenic carbon dioxide (CO₂) emissions, making the study of intra-urban CO₂ concentration patterns an emerging research priority. However, limited data availability and the complexity of urban environments have impeded detailed spatiotemporal analyses at the city scale. To address these challenges, an analysis supported by multi-source data and GeoAI methods is carried out to examine the spatial distribution, vertical variation, temporal dynamics, and driving factors of CO₂ concentrations in urban areas. We combined OCO-2 satellite-derived XCO₂ data (2014–2024) with ground-based measurements from the Shanghai Tower (August 2024 to March 2025), alongside meteorological and socioeconomic variables. The analysis employed spatial interpolation (inverse distance weighting), nonparametric testing (Mann–Whitney U test), time series decomposition, ordinary least squares (OLS) regression, and machine learning techniques including random forest and SHAP (SHapley Additive exPlanations) analysis. Results reveal that CO₂ concentrations are significantly higher in central urban districts compared to suburban areas, with notable spatial heterogeneity. Elevated levels were detected near ports and ferry routes, with airports and industrial emissions identified as principal contributors. Vertically, CO₂ concentrations decline with increasing altitude but exhibit a peak at mid-level heights. Temporally, a pronounced seasonal pattern was observed, characterized by higher concentrations in winter and lower levels in summer. Both OLS regression and machine learning models highlight proximity to emission sources, wind speed, and temperature as key determinants of spatial CO₂ variability, with these factors collectively explaining 67% of the variance in OLS models. This study demonstrates how multi-source data and advanced methods can capture the spatial, vertical, and seasonal dynamics and driving factors of urban CO₂ concentrations, offering insights for policy, planning, and mitigation.
(This article belongs to the Special Issue AI-Driven Innovations in Urban Resilience and Climate Adaptation)
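
The inverse distance weighting step can be illustrated in a few lines of NumPy; the power parameter and coordinate layout are assumptions rather than the study's configuration.

```python
# Illustrative inverse distance weighting (IDW) interpolator (assumed power p = 2).
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power: float = 2.0):
    """xy_obs: (N, 2) station coordinates, z_obs: (N,) observed values,
    xy_grid: (M, 2) target points; returns (M,) interpolated values."""
    xy_obs, xy_grid = np.asarray(xy_obs, float), np.asarray(xy_grid, float)
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power        # avoid division by zero at stations
    return (w @ np.asarray(z_obs, float)) / w.sum(axis=1)
```
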

31 pages, 15453 KB  
Article
Interpolative Estimates of Electric Vehicle Recharging Point Locations in the Context of Electromobility
by Dariusz Kloskowski, Norbert Chamier-Gliszczynski, Jakub Murawski and Mariusz Wasiak
Energies 2025, 18(23), 6281; https://doi.org/10.3390/en18236281 - 29 Nov 2025
Abstract
Electromobility is a key element of efforts to reduce transport emissions at points where transport tasks are carried out (e.g., along roads, in urban areas). At the same time, the implementation of electromobility, which as a whole encompasses the movement of people and cargo using electric vehicles (EVs), is strongly dependent on the deployment of EV charging points, which are part of the alternative fuel infrastructure. At the current stage of electromobility development, the process of deploying alternative fuel infrastructure along the TEN-T (Trans-European Transport Network) is underway, a process mandated by the AFIR (Regulation for the Deployment of Alternative Fuels Infrastructure). The AFIR regulation assumes the construction of infrastructure adapted to serve low- and zero-emission vehicles along the TEN-T network. The elements of the infrastructure under construction include a recharging pool, a recharging station, a recharging point for electric vehicles (EVs), and hydrogen refueling stations for fuel cell electric vehicles (FCEVs). It should be noted that infrastructure elements must be adapted to support light-duty electric vehicles (eLDVs) and heavy-duty electric vehicles (eHDVs). This approach expands the possibilities of using electric vehicles in passenger and freight transport within the TEN-T network. The aim of this article is to estimate the impact of electric vehicle charging points on electromobility in a selected area. During the research phase, spatial interpolation of electric vehicle charging points was conducted using GIS tools. The spatial interpolation of electric vehicle charging points presented in the article represents an innovative approach at the stage of analysis and development of alternative fuel infrastructure along the TEN-T network.

23 pages, 715 KB  
Article
Diffusion Dominated Drug Release from Cylindrical Matrices
by George Kalosakas and Eirini Gontze
Processes 2025, 13(12), 3850; https://doi.org/10.3390/pr13123850 - 28 Nov 2025
Abstract
Drug delivery from cylindrical tablets of arbitrary dimensions is discussed here, using the analytical solution of the diffusion equation. Utilizing dimensionless quantities, we show that the release profiles are determined by a unique parameter, represented by the aspect ratio of the cylindrical formulation. Fractional release curves are presented for different values of the aspect ratio, covering a range of many orders of magnitude. The corresponding release profiles lie in between the two opposite limits of release from thin slabs and two-dimensional radial release, pertinent to the cases of thin and long cylinders, respectively. In a search for a part of the delivery process closer to zero-order release, the release rate is calculated and is found to exhibit the typical behavior of purely diffusional release systems. Two simple fitting formulae, containing two parameters each, are considered to approximate the infinite series of the exact solution: the stretched exponential (Weibull) function and a recently suggested expression interpolating between the correct time dependencies at the initial and final stages of the process. The latter provides a better fitting in all cases. The variation of the fitting parameters with the aspect ratio of the device is presented for both fitting functions. We also calculate the characteristic release time, which is found to correspond to an amount of fractional release between 64% and around 68% depending on the cylindrical aspect ratio. We discuss how the last quantities can be used to estimate the drug diffusion coefficient from experimental release profiles and apply these ideas to published drug delivery data.
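
The stretched-exponential (Weibull) fit mentioned above can be sketched with scipy.optimize.curve_fit; the synthetic profile below is a placeholder for a real release curve, and the paper's alternative interpolating formula is not reproduced.

```python
# Hedged sketch: Weibull fit M(t)/M_inf = 1 - exp(-(t/tau)^b) to a placeholder profile.
import numpy as np
from scipy.optimize import curve_fit

def weibull_release(t, tau, b):
    return 1.0 - np.exp(-(t / tau) ** b)

t = np.linspace(0.01, 10.0, 50)                                   # placeholder time axis
frac = weibull_release(t, tau=2.5, b=0.8) + np.random.normal(0, 0.01, t.size)

(tau_hat, b_hat), _ = curve_fit(weibull_release, t, frac, p0=(1.0, 1.0))
print(f"tau ~ {tau_hat:.2f}, b ~ {b_hat:.2f}")
# For the Weibull form, t = tau corresponds to a fractional release of
# 1 - exp(-1) ~ 63.2%, close to the 64-68% range quoted for the exact solution.
```
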

16 pages, 4784 KB  
Article
FZC-TDE: The Algorithm for Real-Time Ultrasonic Stress Measurement at Low Sampling Rates
by Feifei Qiu, Bing Chen, Chunlang Luo, Jiakai Chen, Ziyong He, Jun Zhao and Guoqing Gou
Micromachines 2025, 16(12), 1340; https://doi.org/10.3390/mi16121340 - 27 Nov 2025
Abstract
Micro–nano-sized processing equipment requires high levels of precision, necessitating residual stress measurement to maintain stability. Ultrasonic stress measurement is an effective method but is hindered by high sampling-rate requirements, leading to excessive power consumption and hardware costs. This study presents a low-sampling-rate method based on the novel Frequency-domain Zero-padded Cross-correlation Time Delay Estimation (FZC-TDE) algorithm. Tensile validation experiments determined the minimum hardware sampling-rate requirement: rates below 25 MSps (even with interpolation) fail to characterize temporal delay variations effectively, and a rate of at least 20 times the signal frequency is required for ±10 MPa accuracy. The proposed FZC-TDE utilizes a frequency-domain fusion operation (frequency-domain zero-padding interpolation combined with cross-correlation) to enable real-time, high-resolution delay measurement at low rates. Comparative experiments show that time-domain interpolation methods (Linear, PCH, Cubic Spline) achieve similar stress estimation accuracy at the same rate (e.g., 7.4–8.7 MPa error at 100 MSps), while FZC-TDE (10.3 MPa error) offers superior computational efficiency. At 100 MSps, FZC-TDE maintains a stable computation time (~2.8 ms), while those of interpolation methods increase significantly (20–30 ms) due to higher oversampling factors. Furthermore, FZC-TDE reduces the number of arithmetic operations by 75% (2.26 million vs. ≥9.18 million for 128× oversampling on 1024 points) and exhibits slower computational load growth with oversampling ratios. Thus, FZC-TDE provides an optimal balance of acceptable accuracy and significantly enhanced efficiency, particularly for real-time or resource-constrained applications. This work reduces sampling-rate constraints and supports advancements in micro–nano-sized processing equipment and device performance.
(This article belongs to the Section A:Physics)
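
A hedged sketch of a frequency-domain zero-padded cross-correlation estimator in the spirit of FZC-TDE (not the authors' implementation): the cross-spectrum is zero-padded to upsample the correlation peak, giving sub-sample delay resolution without time-domain oversampling.

```python
# Hedged sketch of frequency-domain zero-padded cross-correlation delay estimation.
import numpy as np

def fzc_tde(x, y, fs, upsample: int = 16) -> float:
    """Estimate the delay of y relative to x (seconds), assuming y is roughly a delayed x."""
    n = x.size
    m = n * upsample
    R = np.fft.fft(y) * np.conj(np.fft.fft(x))       # cross-spectrum
    R_up = np.zeros(m, dtype=complex)                # zero-pad in the frequency domain
    half = n // 2
    R_up[:half] = R[:half]
    R_up[-(n - half):] = R[half:]
    r = np.fft.ifft(R_up).real                       # upsampled circular cross-correlation
    k = int(np.argmax(r))
    if k > m // 2:                                   # map peak index to a signed lag
        k -= m
    return k / (fs * upsample)

# Example: a 5 MHz burst sampled at 100 MSps, delayed by 3.7 samples (37 ns):
fs = 100e6
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 5e-6) / 2e-6) ** 2)
y = np.interp(t - 3.7 / fs, t, x, left=0.0, right=0.0)
print(fzc_tde(x, y, fs) * 1e9, "ns")                 # ~37 ns
```
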

24 pages, 16126 KB  
Article
Enhanced Lithium-Ion Battery State-of-Charge Estimation via Akima–Savitzky–Golay OCV-SOC Mapping Reconstruction and Bayesian-Optimized Adaptive Extended Kalman Filter
by Awang Abdul Hadi Isa, Sheik Mohammed Sulthan, Muhammad Norfauzi Dani and Soon Jiann Tan
Energies 2025, 18(23), 6192; https://doi.org/10.3390/en18236192 - 26 Nov 2025
Abstract
This paper introduces a novel Lithium-Ion Battery (LIB) State-of-Charge (SOC) estimation approach that integrates Akima–Savitzky–Golay curve reconstruction with a Bayesian-optimized, adaptive Extended Kalman Filter (EKF). The method addresses crucial SOC estimation challenges by means of three foundational advancements: (i) a refined open-circuit voltage (OCV)-SOC curve reconstruction grounded in Akima interpolation coupled with Savitzky–Golay filtering, (ii) an adaptive EKF weighting strategy, and (iii) systematic hyperparameter value optimization executed through Bayesian optimization. Comprehensive performance validation utilizes an extensive dataset collected from LG HG2 18650 cells across temperatures of −20 °C to 40 °C, incorporating multiple standard driving cycles—namely HPPC, UDDS, HWFET, LA92, and US06 cycles. The proposed method achieves an improved estimation accuracy with an average Root Mean Square Error (RMSE) of 2.65% over the different operating conditions and temperature variations. Notably, the method markedly enhances SOC estimation reliability in the critical mid-SOC range (20–80%), while preserving the computational overhead necessary for real-time integration into Battery Management Systems (BMSs). The adaptive weighting successfully compensates for the present physical limitations, thereby delivering a resilient SOC estimation tailored for Electric Vehicle (EV) battery applications.
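
The OCV-SOC curve-reconstruction step can be sketched in a few lines of SciPy: Savitzky–Golay smoothing of measured OCV points followed by Akima interpolation onto a dense SOC grid. The sample data and filter settings are assumptions, not the paper's dataset.

```python
# Minimal OCV-SOC reconstruction sketch (assumed sample data and filter settings).
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import Akima1DInterpolator

soc = np.linspace(0.0, 1.0, 21)                       # measured SOC breakpoints
ocv = 3.2 + 0.9 * soc + 0.05 * np.sin(6 * soc) + np.random.normal(0, 0.003, soc.size)

ocv_smooth = savgol_filter(ocv, window_length=7, polyorder=3)   # suppress measurement noise
ocv_of_soc = Akima1DInterpolator(soc, ocv_smooth)               # smooth, overshoot-resistant curve

soc_dense = np.linspace(0.0, 1.0, 1001)
ocv_dense = ocv_of_soc(soc_dense)        # dense OCV-SOC map usable in an EKF measurement model
```
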

28 pages, 1010 KB  
Review
Recent Advances in B-Mode Ultrasound Simulators
by Cindy M. Solano-Cordero, Nerea Encina-Baranda, Mailyn Pérez-Liva and Joaquin L. Herraiz
Appl. Sci. 2025, 15(23), 12535; https://doi.org/10.3390/app152312535 - 26 Nov 2025
Abstract
Ultrasound (US) imaging is one of the most accessible, non-invasive, and real-time diagnostic techniques in clinical medicine. However, conventional B-mode US suffers from intrinsic limitations such as speckle noise, operator dependence, and variability in image interpretation, which reduce diagnostic reproducibility and hinder skill acquisition. Because accurate image acquisition and interpretation rely heavily on the operator’s experience, mastering ultrasound requires extensive hands-on training under diverse anatomical and pathological conditions. Yet, traditional educational settings rarely provide consistent exposure to such variability, making simulation-based environments essential for developing and standardizing operator expertise. This scoping review synthesizes advances from 2014 to 2024 in B-mode ultrasound simulation, identifying 80 studies through structured searches in PubMed, Scopus, Web of Science, and IEEE. Simulation methods were organized into interpolative, wave-based, ray-based, and convolution-based models, as well as emerging Artificial Intelligence (AI)-driven approaches. The review emphasizes recent simulation engines and toolboxes reported in this period and highlights the growing role of learning-based pipelines (e.g., Generative Adversarial Networks (GANs) and diffusion) for realism, scalability, and data augmentation. The results show steady progress toward high realism and computational efficiency, including Graphics Processing Unit (GPU)-accelerated transport models, physics-informed convolution, and AI-enhanced translation and synthesis. Remaining challenges include the modeling of nonlinear and dynamic effects at scale, standardizing evaluation across tasks, and integrating physics with learning to balance fidelity and speed. These findings outline current capabilities and future directions for training, validation, and diagnostic support in ultrasound imaging.

22 pages, 7953 KB  
Article
Automated Evaluation of Layer Thickness Uniformity in 3D-Printed Cementitious Composites Using Deep Learning and Comparison with Manual Tracing Methods
by Jiseok Seo, Jun Lee and Bongchun Lee
Buildings 2025, 15(23), 4253; https://doi.org/10.3390/buildings15234253 - 25 Nov 2025
Abstract
Layer thickness uniformity critically influences the dimensional accuracy and mechanical performance of large-scale cementitious structures produced by material extrusion 3D printing. This study introduces a computer vision workflow that couples traditional preprocessing with a ResNet-50 convolutional neural network to automatically detect interlayer boundaries and quantify thickness variation. Hollow 50 × 50 × 50 mm specimens, printed from mixes optimized by void ratio (0.6–0.7) for fluidity and stackability, supplied 25 labeled RGB images for training and validation. The network achieved 96% training and 95% validation accuracy, generating boundary maps that required minimal linear interpolation. Pixel-based analysis yielded uniformity indices of 0.857–0.924, closely matching those from manual tracing (0.819–0.919) but with smaller standard deviations, indicating higher measurement stability and reduced sensitivity to lighting artifacts. The proposed method therefore provides an objective, reproducible alternative to labor-intensive manual evaluation and supports real-time prediction and control of dimensional errors during construction-scale 3D printing, advancing the precision and industrial applicability of additive manufacturing with cementitious composites. However, since this study was conducted under limited variable conditions, such as a simplified and repetitive experimental environment, a larger number of images will be required for model training to enable application under more general conditions.
(This article belongs to the Section Construction Management, and Computers & Digitization)
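
The pixel-based thickness measurement can be sketched as follows, assuming a binary mask of detected interlayer boundaries and a uniformity index defined as one minus the coefficient of variation; the paper's exact index definition may differ.

```python
# Hedged sketch: per-column layer thickness from a binary boundary mask,
# summarized with an assumed uniformity index of 1 - (std / mean).
import numpy as np

def layer_thickness_stats(boundary_mask):
    """boundary_mask: (H, W) array of 0/1 boundary pixels.
    Returns (mean thickness, std, uniformity index) in pixels."""
    thicknesses = []
    for col in range(boundary_mask.shape[1]):
        rows = np.flatnonzero(boundary_mask[:, col])
        if rows.size >= 2:
            thicknesses.extend(np.diff(rows))          # spacing between consecutive boundaries
    t = np.asarray(thicknesses, dtype=float)
    mean, std = t.mean(), t.std()
    return mean, std, 1.0 - std / mean
```
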
