Search Results (317)

Search Parameters:
Keywords = moving average filters

23 pages, 6440 KiB  
Article
A Gravity Data Denoising Method Based on Multi-Scale Attention Mechanism and Physical Constraints Using U-Net
by Bing Liu, Houpu Li, Shaofeng Bian, Chaoliang Zhang, Bing Ji and Yujie Zhang
Appl. Sci. 2025, 15(14), 7956; https://doi.org/10.3390/app15147956 - 17 Jul 2025
Abstract
Gravity and gravity gradient data serve as fundamental inputs for geophysical resource exploration and geological structure analysis. However, traditional denoising methods—including wavelet transforms, moving averages, and low-pass filtering—exhibit signal loss and limited adaptability under complex, non-stationary noise conditions. To address these challenges, this study proposes an improved U-Net deep learning framework that integrates multi-scale feature extraction and attention mechanisms. Furthermore, a Laplace consistency constraint is introduced into the loss function to enhance denoising performance and physical interpretability. Notably, the datasets used in this study are generated by the authors, involving simulations of subsurface prism distributions with realistic density perturbations (±20% of typical rock densities) and the addition of controlled Gaussian noise (5%, 10%, 15%, and 30%) to simulate field-like conditions, ensuring the diversity and physical relevance of training samples. Experimental validation on these synthetic datasets and real field datasets demonstrates the superiority of the proposed method over conventional techniques. For noise levels of 5%, 10%, 15%, and 30% in test sets, the improved U-Net achieves Peak Signal-to-Noise Ratios (PSNR) of 59.13 dB, 52.03 dB, 48.62 dB, and 48.81 dB, respectively, outperforming wavelet transforms, moving averages, and low-pass filtering by 10–30 dB. In multi-component gravity gradient denoising, our method excels in detail preservation and noise suppression, improving Structural Similarity Index (SSIM) by 15–25%. Field data tests further confirm enhanced identification of key geological anomalies and overall data quality improvement. In summary, the improved U-Net not only delivers quantitative advancements in gravity data denoising but also provides a novel approach for high-precision geophysical data preprocessing. Full article
(This article belongs to the Special Issue Applications of Machine Learning in Earth Sciences—2nd Edition)
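The 10–30 dB margins quoted above are on the standard PSNR scale. As a hedged sketch (the synthetic grid, noise level, and peak convention are illustrative assumptions, not the paper's data or code), the metric reduces to:

```python
import numpy as np

def psnr_db(reference, estimate, peak=None):
    """Peak Signal-to-Noise Ratio in dB; `peak` defaults to the
    dynamic range of the reference field."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    mse = np.mean((reference - estimate) ** 2)
    if peak is None:
        peak = reference.max() - reference.min()
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check on a synthetic, smoothly varying "gravity" grid.
rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                 np.cos(np.linspace(0, np.pi, 64)))
noisy = clean + 0.01 * rng.standard_normal(clean.shape)
score = psnr_db(clean, noisy)
```

Because PSNR is logarithmic in mean-squared error, the reported 10–30 dB gaps correspond to roughly 10x to 1000x reductions in MSE.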

19 pages, 3176 KiB  
Article
Deploying an Educational Mobile Robot
by Dorina Plókai, Borsa Détár, Tamás Haidegger and Enikő Nagy
Machines 2025, 13(7), 591; https://doi.org/10.3390/machines13070591 - 8 Jul 2025
Viewed by 551
Abstract
This study presents the development of a software solution for processing, analyzing, and visualizing sensor data collected by an educational mobile robot. The focus is on statistical analysis and identifying correlations between diverse datasets. The research utilized the PlatypOUs mobile robot platform, equipped with odometry and inertial measurement units (IMUs), to gather comprehensive motion data. To enhance the reliability and interpretability of the data, advanced data processing techniques—such as moving averages, correlation analysis, and exponential smoothing—were employed. Python-based tools, including Matplotlib and Visual Studio Code, were used for data visualization and analysis. The analysis provided key insights into the robot’s motion dynamics; specifically, its stability during linear movements and variability during turns. By applying moving average filtering and exponential smoothing, noise in the sensor data was significantly reduced, enabling clearer identification of motion patterns. Correlation analysis revealed meaningful relationships between velocity and acceleration during various motion states. These findings underscore the value of advanced data processing techniques in improving the performance and reliability of educational mobile robots. The insights gained in this pilot project contribute to the optimization of navigation algorithms and motion control systems, enhancing the robot’s future potential in STEM education applications. Full article
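The smoothing-plus-correlation step described above can be sketched on synthetic data (the signal shapes, noise level, and window length are assumptions, not the PlatypOUs pipeline):

```python
import numpy as np

# Moving-average smoothing of a noisy velocity trace, then a Pearson
# correlation between the smoothed velocity and its derived acceleration.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
velocity = np.sin(t) + 0.2 * rng.standard_normal(t.size)

window = 25
v_smooth = np.convolve(velocity, np.ones(window) / window, mode="same")
accel = np.gradient(v_smooth, t)             # numerical derivative

r = np.corrcoef(v_smooth, accel)[0, 1]       # Pearson coefficient
noise_before = np.var(velocity - np.sin(t))
noise_after = np.var(v_smooth - np.sin(t))
```

Smoothing before differentiating is the practical point: the moving average cuts the noise variance so the derived acceleration, and any correlation computed from it, is not swamped by sensor noise.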

17 pages, 1765 KiB  
Article
Automated Sidewalk Surface Detection Using Wearable Accelerometry and Deep Learning
by Do-Eun Park, Jong-Hoon Youn and Teuk-Seob Song
Sensors 2025, 25(13), 4228; https://doi.org/10.3390/s25134228 - 7 Jul 2025
Viewed by 276
Abstract
Walking-friendly cities not only promote health and environmental benefits but also play crucial roles in urban development and local economic revitalization. Typically, pedestrian interviews and surveys are used to evaluate walkability. However, these methods can be costly to implement at scale, as they demand considerable time and resources. To address the limitations in current methods for evaluating pedestrian pathways, we propose a novel approach utilizing wearable sensors and deep learning. This new method provides benefits in terms of efficiency and cost-effectiveness while ensuring a more objective and consistent evaluation of sidewalk surfaces. In the proposed method, we used wearable accelerometers to capture participants’ acceleration along the vertical (V), anterior-posterior (AP), and medio-lateral (ML) axes. This data is then transformed into the frequency domain using Fast Fourier Transform (FFT), a Kalman filter, a low-pass filter, and a moving average filter. A deep learning model is subsequently utilized to classify the conditions of the sidewalk surfaces using this transformed data. The experimental results indicate that the proposed model achieves a notable accuracy rate of 95.17%. Full article
(This article belongs to the Special Issue Sensors for Unsupervised Mobility Assessment and Rehabilitation)
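A hedged sketch of the frequency-domain step above (the sampling rate, window length, and component frequencies are illustrative assumptions): the FFT turns a windowed vertical-axis acceleration trace into magnitude features a classifier can consume.

```python
import numpy as np

fs = 100.0                       # Hz, assumed accelerometer rate
n = 200                          # 2 s analysis window
t = np.arange(n) / fs
gait = np.sin(2 * np.pi * 2.0 * t)            # ~2 Hz stepping component
texture = 0.3 * np.sin(2 * np.pi * 15.0 * t)  # surface-induced vibration
accel_v = gait + texture

# One-sided magnitude spectrum as the feature vector.
spectrum = np.abs(np.fft.rfft(accel_v)) / n
freqs = np.fft.rfftfreq(n, d=1 / fs)
dominant = np.sort(freqs[np.argsort(spectrum)[-2:]])
```

In the paper's pipeline, features like these (after the Kalman, low-pass, and moving-average stages) feed the deep model.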

21 pages, 4859 KiB  
Article
Improvement of SAM2 Algorithm Based on Kalman Filtering for Long-Term Video Object Segmentation
by Jun Yin, Fei Wu, Hao Su, Peng Huang and Yuetong Qixuan
Sensors 2025, 25(13), 4199; https://doi.org/10.3390/s25134199 - 5 Jul 2025
Viewed by 296
Abstract
The Segment Anything Model 2 (SAM2) has achieved state-of-the-art performance in pixel-level object segmentation for both static and dynamic visual content. Its streaming memory architecture maintains spatial context across video sequences, yet struggles with long-term tracking due to its static inference framework. SAM 2’s fixed temporal window approach indiscriminately retains historical frames, failing to account for frame quality or dynamic motion patterns. This leads to error propagation and tracking instability in challenging scenarios involving fast-moving objects, partial occlusions, or crowded environments. To overcome these limitations, this paper proposes SAM2Plus, a zero-shot enhancement framework that integrates Kalman filter prediction, dynamic quality thresholds, and adaptive memory management. The Kalman filter models object motion using physical constraints to predict trajectories and dynamically refine segmentation states, mitigating positional drift during occlusions or velocity changes. Dynamic thresholds, combined with multi-criteria evaluation metrics (e.g., motion coherence, appearance consistency), prioritize high-quality frames while adaptively balancing confidence scores and temporal smoothness. This reduces ambiguities among similar objects in complex scenes. SAM2Plus further employs an optimized memory system that prunes outdated or low-confidence entries and retains temporally coherent context, ensuring constant computational resources even for infinitely long videos. Extensive experiments on two video object segmentation (VOS) benchmarks demonstrate SAM2Plus’s superiority over SAM 2. It achieves an average improvement of 1.0 in J&F metrics across all 24 direct comparisons, with gains exceeding 2.3 points on SA-V and LVOS datasets for long-term tracking. The method delivers real-time performance and strong generalization without fine-tuning or additional parameters, effectively addressing occlusion recovery and viewpoint changes. 
By unifying motion-aware physics-based prediction with spatial segmentation, SAM2Plus bridges the gap between static and dynamic reasoning, offering a scalable solution for real-world applications such as autonomous driving and surveillance systems. Full article
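The constant-velocity motion model at the heart of the Kalman component can be sketched per image coordinate as a predict/update loop; the matrices and noise levels below are illustrative assumptions, not SAM2Plus's actual tuning.

```python
import numpy as np

F = np.array([[1.0, 1.0],   # state: [position, velocity], dt = 1 frame
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])  # only position is observed
Q = 1e-4 * np.eye(2)        # process noise
R = np.array([[1e-2]])      # measurement noise

x = np.array([[0.0], [0.0]])
P = np.eye(2)

def step(x, P, z=None):
    # Predict one frame ahead; skip the update while occluded.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Four measured frames moving at ~1 px/frame, then two occluded frames.
for z in [0.0, 1.0, 2.0, 3.0, None, None]:
    x, P = step(x, P, z)
predicted_position = float(x[0, 0])
estimated_velocity = float(x[1, 0])
```

Skipping the update on occluded frames lets the tracker coast on the learned velocity instead of locking onto a corrupted measurement, which is the occlusion-recovery behaviour described above.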

22 pages, 22557 KiB  
Article
Depth from 2D Images: Development and Metrological Evaluation of System Uncertainty Applied to Agricultural Scenarios
by Bernardo Lanza, Cristina Nuzzi and Simone Pasinetti
Sensors 2025, 25(12), 3790; https://doi.org/10.3390/s25123790 - 17 Jun 2025
Viewed by 310
Abstract
This article describes the development, experimental validation, and uncertainty analysis of a simple-to-use model for monocular depth estimation based on optical flow. The idea is deeply rooted in the agricultural scenario, for which vehicles that move around the field are equipped with low-cost cameras. In the experiment, the camera was mounted on a robot moving linearly at five different constant speeds looking at the target measurands (ArUco markers) positioned at different depths. The acquired data was processed and filtered with a moving average window-based filter to reduce noise in the estimated apparent depths of the ArUco markers and in the estimated optical flow image speeds. Two methods are proposed for model validation: a generalized approach and a complete approach that separates the input data according to their image speed to account for the exponential nature of the proposed model. The practical result obtained by the two analyses is that, to reduce the impact of uncertainty on depth estimates, it is best to have image speeds higher than 500–800 px/s. This is obtained by either moving the camera faster or by increasing the camera’s frame rate. The best-case scenario is achieved when the camera moves at 0.50–0.75 m/s and the frame rate is set to 60 fps (effectively reduced to 20 fps after filtering). As a further contribution, two practical examples are provided to offer guidance for untrained personnel in selecting the camera’s speed and camera characteristics. The developed code is made publicly available on GitHub. Full article
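A moving average window-based filter of the kind described above can be sketched as follows (the window length is an illustrative assumption, not the authors' setting):

```python
import numpy as np

def moving_average(x, window):
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

# Noisy apparent-depth estimates around a constant true depth.
rng = np.random.default_rng(2)
depth_true = 2.5                                     # metres
depth_noisy = depth_true + 0.05 * rng.standard_normal(300)
depth_filtered = moving_average(depth_noisy, window=15)

std_in = np.std(depth_noisy - depth_true)
std_out = np.std(depth_filtered - depth_true)
```

The `valid` mode trades samples for noise, in the same spirit as the effective frame-rate reduction (60 fps to 20 fps after filtering) noted above.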

27 pages, 4150 KiB  
Article
Improved Liquefaction Hazard Assessment via Deep Feature Extraction and Stacked Ensemble Learning on Microtremor Data
by Oussama Arab, Soufiana Mekouar, Mohamed Mastere, Roberto Cabieces and David Rodríguez Collantes
Appl. Sci. 2025, 15(12), 6614; https://doi.org/10.3390/app15126614 - 12 Jun 2025
Viewed by 334
Abstract
The reduction in disaster risk in urban regions due to natural hazards (e.g., earthquakes, landslides, floods, and tropical cyclones) is primarily a development matter that must be treated within the scope of a broader urban development framework. Natural hazard assessment is one of the turning points in mitigating disaster risk, which typically contributes to stronger urban resilience and more sustainable urban development. Regarding this challenge, our research proposes a new approach to the signal-processing chain and feature extraction from microtremor data, focused mainly on the Horizontal-to-Vertical Spectral Ratio (HVSR), to assess liquefaction potential as a natural hazard using AI. The key raw seismic features of site amplification and resonance are extracted from the data via bandpass filtering, Fourier Transformation (FT), calculation of the HVSR, and smoothing with moving averages. The main novelty is the integration of machine learning, particularly stacked ensemble learning, for liquefaction potential classification from imbalanced seismic datasets. For this approach, several models are used to account for class imbalance, enhancing classification performance and offering better insight into liquefaction risk based on microtremor data. The paper then proposes a liquefaction detection method based on deep learning with an autoencoder and stacked classifiers. The autoencoder compresses the data into a latent space, highlighting the liquefaction features, which are classified by multi-layer perceptron (MLP) and eXtreme Gradient Boosting (XGB) classifiers; a meta-model combines these outputs to emphasize rare liquefaction events. This methodology improved detection on an imbalanced dataset, although challenges remain in both interpretability and computational complexity.
We created a synthetic dataset of 1000 samples using realistic feature ranges that mimic the Rif data region to test model performance and conduct sensitivity analysis. Key seismic and geotechnical variables were included, confirming the amplification factor (Af) and seismic vulnerability index (Kg) as dominant predictors and supporting model generalizability in data-scarce regions. Our proposed method for liquefaction potential classification achieves 100% classification accuracy, 100% precision, and 100% recall, providing a new baseline. Compared to existing models such as XGB and MLP, the proposed model performs better in all metrics. This new approach could become a critical component in assessing liquefaction hazard, contributing to disaster mitigation and urban planning. Full article
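A hedged sketch of the HVSR stage described above (synthetic record; real pipelines also window, taper, and average segments, and the smoothing span here is an assumption): the ratio of the smoothed horizontal to vertical amplitude spectra peaks near the site's resonance frequency.

```python
import numpy as np

fs = 100.0                           # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)         # 60 s microtremor record
rng = np.random.default_rng(3)
# Synthetic record with a horizontal resonance near 2 Hz.
horiz = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
vert = 0.2 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)

def amp_spectrum(x):
    return np.abs(np.fft.rfft(x))

def smooth(x, window=11):            # moving-average smoothing of the spectra
    return np.convolve(x, np.ones(window) / window, mode="same")

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
hvsr = smooth(amp_spectrum(horiz)) / (smooth(amp_spectrum(vert)) + 1e-12)

f0 = freqs[np.argmax(hvsr)]          # fundamental resonance estimate
```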

18 pages, 2972 KiB  
Article
An Improved Extraction Scheme for High-Frequency Injection in the Realization of Effective Sensorless PMSM Control
by Indra Ferdiansyah and Tsuyoshi Hanamoto
World Electr. Veh. J. 2025, 16(6), 326; https://doi.org/10.3390/wevj16060326 - 11 Jun 2025
Viewed by 738
Abstract
High-frequency (HF) injection is a widely used technique for low-speed implementation of position sensorless permanent magnet synchronous motor control. A key component of this technique is the tracking loop control system, which extracts rotor position error and utilizes proportional–integral regulation as a position observer for estimating the rotor position. Generally, this process relies on band-pass filters (BPFs) and low-pass filters (LPFs) to modulate signals in the quadrature current to obtain rotor position error information. However, limitations in filter accuracy and dynamic response lead to prolonged convergence times and timing inconsistencies in the estimation process, which affects real-time motor control performance. To address these issues, this study proposes an exponential moving average (EMA)-based scheme for rotor position error extraction, offering a rapid response under dynamic conditions such as direction reversals, step speed changes, and varying loads. EMA is used to pass the original rotor position information carried by the quadrature current signal, which contains HF components, with a specified smoothing factor. Then, after the synchronous demodulation process, EMA is employed to extract rotor position error information for the position observer to estimate the rotor position. Due to its computational simplicity and fast response in handling dynamic conditions, the proposed method can serve as an alternative to BPF and LPF, which are commonly used for rotor position information extraction, while also reducing computational burden and improving performance. Finally, to demonstrate its feasibility and effectiveness in improving rotor position estimation accuracy, the proposed system is experimentally validated by comparing it with a conventional system. Full article
(This article belongs to the Special Issue Permanent Magnet Motors and Driving Control for Electric Vehicles)
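The EMA the authors substitute for the BPF/LPF pair is the one-pole recurrence y[n] = a*x[n] + (1 - a)*y[n-1]; the smoothing factor and signal frequencies below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def ema(x, alpha):
    """Exponential moving average with smoothing factor alpha."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y

# Demodulated quadrature-current term: slow position-error content
# plus residual high-frequency injection ripple (assumed frequencies).
fs = 10000.0
t = np.arange(0, 0.2, 1 / fs)
slow = 0.5 * np.sin(2 * np.pi * 5.0 * t)        # position-error content
ripple = 0.2 * np.sin(2 * np.pi * 1000.0 * t)   # HF injection residue
extracted = ema(slow + ripple, alpha=0.05)

ripple_left = np.std(extracted - ema(slow, alpha=0.05))
```

One multiply-accumulate per sample is the computational-simplicity argument: the recurrence passes the slow position-error content while strongly attenuating the injection-frequency ripple.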

37 pages, 8299 KiB  
Article
Machine Learning Innovations in Renewable Energy Systems with Integrated NRBO-TXAD for Enhanced Wind Speed Forecasting Accuracy
by Zhiwen Hou, Jingrui Liu, Ziqiu Shao, Qixiang Ma and Wanchuan Liu
Electronics 2025, 14(12), 2329; https://doi.org/10.3390/electronics14122329 - 6 Jun 2025
Viewed by 478
Abstract
In the realm of renewable energy, harnessing wind power efficiently is crucial for establishing a low-carbon power system. However, the intermittent and uncertain nature of wind speed poses significant challenges for accurate prediction, which is essential for effective grid integration and dispatch management. To address this challenge, this paper introduces a novel hybrid model, NRBO-TXAD, which integrates a Newton–Raphson-based optimizer (NRBO) with a Transformer and XGBoost, further enhanced by adaptive denoising techniques. The interquartile range–adaptive moving average filter (IQR-AMAF) method is employed to preprocess the data by removing outliers and smoothing the data, thereby improving the quality of the input. The NRBO efficiently optimizes the hyperparameters of the Transformer, thereby enhancing its learning performance. Meanwhile, XGBoost is utilized to compensate for any residual prediction errors. The effectiveness of the proposed model was validated using two real-world wind speed datasets. Among eight models, including LSTM, Informer, and hybrid baselines, NRBO-TXAD demonstrated superior performance. Specifically, for Case 1, NRBO-TXAD achieved a mean absolute percentage error (MAPE) of 11.24% and a root mean square error (RMSE) of 0.2551. For Case 2, the MAPE was 4.90%, and the RMSE was 0.2976. Under single-step forecasting, the MAPE for Case 2 was as low as 2.32%. Moreover, the model exhibited remarkable robustness across multiple time steps. These results confirm the model’s effectiveness in capturing wind speed fluctuations and long-range dependencies, making it a reliable solution for short-term wind forecasting. This research not only contributes to the field of signal analysis and machine learning but also highlights the potential of hybrid models in addressing complex prediction tasks within the context of artificial intelligence. Full article
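A hedged reading of the IQR-AMAF preprocessing (the paper's exact algorithm may differ): fence samples at 1.5 IQR, then replace flagged outliers with a local moving average of the surrounding valid samples.

```python
import numpy as np

def iqr_amaf(x, window=5):
    """Flag samples outside the 1.5*IQR fences and replace them with a
    moving average of nearby in-fence samples (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    cleaned = x.copy()
    for i in np.where((x < lo) | (x > hi))[0]:
        left, right = max(0, i - window), min(len(x), i + window + 1)
        neighbours = x[left:right]
        ok = (neighbours >= lo) & (neighbours <= hi)
        cleaned[i] = neighbours[ok].mean()
    return cleaned

# Wind-speed trace (m/s, synthetic) with one sensor spike.
wind = np.array([5.1, 5.3, 5.0, 4.9, 25.0, 5.2, 5.1, 5.0, 4.8, 5.2])
cleaned = iqr_amaf(wind)
```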

20 pages, 21534 KiB  
Article
Smoothing Techniques for Improving COVID-19 Time Series Forecasting Across Countries
by Uliana Zbezhkhovska and Dmytro Chumachenko
Computation 2025, 13(6), 136; https://doi.org/10.3390/computation13060136 - 3 Jun 2025
Viewed by 661
Abstract
Accurate forecasting of COVID-19 case numbers is critical for timely and effective public health interventions. However, epidemiological data’s irregular and noisy nature often undermines the predictive performance. This study examines the influence of four smoothing techniques—the rolling mean, the exponentially weighted moving average, a Kalman filter, and seasonal–trend decomposition using Loess (STL)—on the forecasting accuracy of four models: LSTM, the Temporal Fusion Transformer (TFT), XGBoost, and LightGBM. Weekly case data from Ukraine, Bulgaria, Slovenia, and Greece were used to assess the models’ performance over short- (3-month) and medium-term (6-month) horizons. The results demonstrate that smoothing enhanced the models’ stability, particularly for neural architectures, and the model selection emerged as the primary driver of predictive accuracy. The LSTM and TFT models, when paired with STL or the rolling mean, outperformed the others in their short-term forecasts, while XGBoost exhibited greater robustness over longer horizons in selected countries. An ANOVA confirmed the statistically significant influence of the model type on the MAPE (p = 0.008), whereas the smoothing method alone showed no significant effect. These findings offer practical guidance for designing context-specific forecasting pipelines adapted to epidemic dynamics and variations in data quality. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
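The effect the study measures can be illustrated with the MAPE of a naive persistence forecast on raw versus rolling-mean-smoothed counts (synthetic weekly series; the study's models and data are far richer than this):

```python
import numpy as np

def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def rolling_mean(x, window=3):
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Synthetic weekly case counts: a slow wave plus reporting noise.
rng = np.random.default_rng(4)
weeks = np.arange(40)
cases = 1000 + 300 * np.sin(weeks / 6.0) + 80 * rng.standard_normal(40)

# Persistence ("last value") forecasts, one week ahead.
raw_mape = mape(cases[1:], cases[:-1])
smooth = rolling_mean(cases)
smooth_mape = mape(smooth[1:], smooth[:-1])
```

Smoothing lowers the week-to-week noise that a persistence baseline (and, by extension, a learned forecaster) otherwise has to absorb.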

30 pages, 15147 KiB  
Article
Analysis of Numerical Instability Factors and Geometric Reconstruction in 3D SIMP-Based Topology Optimization Towards Enhanced Manufacturability
by Longbao Chen and Ding Zhou
Appl. Sci. 2025, 15(11), 6195; https://doi.org/10.3390/app15116195 - 30 May 2025
Viewed by 394
Abstract
The advancement of topology optimization (TO) and additive manufacturing (AM) has significantly enhanced structural design flexibility and the potential for lightweight structures. However, challenges such as intermediate density, mesh dependency, checkerboard patterns, and local extrema in TO can lead to suboptimal performance. Moreover, existing AM technologies confront geometric constraints that limit their application. This study investigates minimum compliance as the objective function and volume as the constraint, employing the solid isotropic material with penalization method, density filtering, and the method of moving asymptotes. It examines how factors like mesh type, mesh size, volume fraction, material properties, initial density, filter radius, and penalty factor influence the TO results for a metallic gooseneck chain. The findings suggest that material properties primarily affect numerical variations along the TO path, with minimal impact on structural configuration. For both hexahedral and tetrahedral mesh types, a recommended mesh size is identified where the results show less than a 1% difference across varying mesh sizes. An initial density of 0.5 is advised, with a filter radius of approximately 2.3 to 2.5 times the average unit edge length for hexahedral meshes and 1.3 to 1.5 times for tetrahedral meshes. The suggested penalty factor ranges are 3–4 for hexahedral meshes and 2.5–3.5 for tetrahedral meshes. The optimal geometric reconstruction model achieves weight reductions of 23.46% and 22.22% compared to the original model while satisfying static loading requirements. This work contributes significantly to the integration of TO and AM in engineering, laying a robust foundation for future design endeavors. Full article
(This article belongs to the Section Additive Manufacturing Technologies)
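The density filtering mentioned above can be sketched in 1D: each element density becomes a distance-weighted average of its neighbours within the filter radius. The linear "hat" weights below follow common SIMP implementations, but the example is an assumption, not the paper's code.

```python
import numpy as np

def density_filter(rho, radius):
    """Distance-weighted moving average of element densities; `radius`
    is in units of the average element edge length."""
    n = len(rho)
    filtered = np.empty(n)
    for i in range(n):
        w = np.maximum(0.0, radius - np.abs(np.arange(n) - i))
        filtered[i] = np.sum(w * rho) / np.sum(w)
    return filtered

# A 0/1 checkerboard-like pattern is exactly the artefact filtering removes.
rho = np.tile([0.0, 1.0], 10)
smoothed = density_filter(rho, radius=2.4)   # within the 2.3-2.5 advice above
```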

19 pages, 6575 KiB  
Article
A Bluetooth Indoor Positioning System Based on Deep Learning with RSSI and AoA
by Yongjie Yang, Hao Yang and Fandi Meng
Sensors 2025, 25(9), 2834; https://doi.org/10.3390/s25092834 - 30 Apr 2025
Cited by 1 | Viewed by 1034
Abstract
Traditional received signal strength indicator (RSSI)-based and angle of arrival (AoA)-based positioning methods are highly susceptible to multipath effects, signal attenuation, and noise interference in complex indoor environments, which significantly degrade positioning accuracy. To mitigate the impact of the above deterioration, we propose a deep learning-based Bluetooth indoor positioning system, which employs a Kalman filter (KF) to reduce the angular error in AoA measurements and utilizes a median filter (MF) and moving average filter (MAF) to mitigate fluctuations in RSSI-based distance measurements. In the deep learning network architecture, we propose a convolutional neural network (CNN)–multi-head attention (MHA) model. During the training process, the backpropagation (BP) algorithm is used to compute the gradient of the loss function and update the parameters of the entire network, gradually optimizing the model’s performance. Experimental results demonstrate that our proposed indoor positioning method achieves an average error of 0.29 m, which represents a significant improvement compared to traditional RSSI and AoA methods. Additionally, it displays superior positioning accuracy when contrasted with numerous emerging indoor positioning methodologies. Full article
(This article belongs to the Section Navigation and Positioning)
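A hedged sketch of the RSSI branch described above (the 1 m reference power `p0` and path-loss exponent `n` are assumed values): median-filter the RSSI stream to suppress spikes, then invert the log-distance path-loss model.

```python
import numpy as np

def median_filter(x, window=5):
    half = window // 2
    x = np.asarray(x, dtype=float)
    return np.array([np.median(x[max(0, i - half):i + half + 1])
                     for i in range(len(x))])

def rssi_to_distance(rssi, p0=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10**((p0 - rssi) / (10 n))."""
    return 10.0 ** ((p0 - rssi) / (10.0 * n))

# RSSI stream (dBm) with one multipath spike at index 3.
rssi = np.array([-65.0, -66.0, -65.0, -90.0, -64.0, -66.0, -65.0])
filtered = median_filter(rssi)
distance = rssi_to_distance(filtered[3])    # spike suppressed before ranging
```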

24 pages, 1377 KiB  
Article
Impact of Temporal Resolution on Autocorrelative Features of Cerebral Physiology from Invasive and Non-Invasive Sensors in Acute Traumatic Neural Injury: Insights from the CAHR-TBI Cohort
by Nuray Vakitbilir, Rahul Raj, Donald E. G. Griesdale, Mypinder Sekhon, Francis Bernard, Clare Gallagher, Eric P. Thelin, Logan Froese, Kevin Y. Stein, Andreas H. Kramer, Marcel J. H. Aries and Frederick A. Zeiler
Sensors 2025, 25(9), 2762; https://doi.org/10.3390/s25092762 - 27 Apr 2025
Viewed by 386
Abstract
Therapeutic management during the acute phase of traumatic brain injury (TBI) relies on continuous multimodal cerebral physiologic monitoring to detect and prevent secondary injury. These high-resolution data streams come from various invasive/non-invasive sensor technologies and challenge clinicians, as they are difficult to integrate into management algorithms and prognostic models. Data reduction techniques, like moving average filters, simplify data but may fail to address statistical autocorrelation and could introduce new properties, affecting model utility and interpretation. This study uses the CAnadian High-Resolution TBI (CAHR-TBI) dataset to examine the impact of temporal resolution changes (1 min to 24 h) on autoregressive integrated moving average (ARIMA) modeling for raw and derived cerebral physiologic signals. Stationarity tests indicated that the majority of the signals required first-order differencing to address persistent trends. A grid search identified optimal ARIMA parameters (p,d,q) for each signal and resolution. Subgroup analyses revealed population-specific differences in temporal structure, and small-scale forecasting using optimal parameters confirmed model adequacy. Variations in optimal structures across signals and patients highlight the importance of tailoring ARIMA models for precise interpretation and performance. Findings show that both raw and derived indices exhibit intrinsic ARIMA components regardless of resolution. Ignoring these features risks compromising the significance of models developed from such data. This underscores the need for careful resolution considerations in temporal modeling for TBI care. Full article
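The first-order differencing the stationarity tests called for is the d = 1 term of ARIMA(p, d, q): model the changes rather than the levels. A synthetic drifting signal shows the effect (values are illustrative, not CAHR-TBI data):

```python
import numpy as np

# A signal with a persistent trend fails stationarity; its first
# differences do not, which is why d = 1 suffices for most streams.
rng = np.random.default_rng(5)
minutes = np.arange(500)
signal = 10.0 + 0.02 * minutes + rng.standard_normal(500)   # drifting mean

diffed = np.diff(signal)          # y[t] - y[t-1]

first_half_mean = signal[:250].mean()
second_half_mean = signal[250:].mean()
diff_mean_shift = abs(diffed[:250].mean() - diffed[250:].mean())
```

The level series has a clearly different mean in each half (the trend); the differenced series does not, so an ARMA model on the differences is well posed.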

17 pages, 9440 KiB  
Article
RACFME: Object Tracking in Satellite Videos by Rotation Adaptive Correlation Filters with Motion Estimations
by Xiongzhi Wu, Haifeng Zhang, Chao Mei, Jiaxin Wu and Han Ai
Symmetry 2025, 17(4), 608; https://doi.org/10.3390/sym17040608 - 16 Apr 2025
Viewed by 348
Abstract
Video satellites provide high-temporal-resolution remote sensing images that enable continuous monitoring of the ground for applications such as target tracking and airport traffic detection. In this paper, we address the problems of object occlusion and the tracking of rotating objects in satellite videos by introducing a rotation-adaptive tracking algorithm for correlation filters with motion estimation (RACFME). Our algorithm proposes the following improvements over the KCF method: (a) A rotation-adaptive feature enhancement module (RA) is proposed to obtain the rotated image block by affine transformation combined with the target rotation direction prior, which overcomes the disadvantage of HOG features lacking rotation adaptability, improves tracking accuracy while ensuring real-time performance, and solves the problem of tracking failure due to insufficient valid positive samples when tracking rotating targets. (b) Based on the correlation between peak response and occlusion, an occlusion detection method for vehicles and ships in satellite video is proposed. (c) Motion estimations are achieved by combining Kalman filtering with motion trajectory averaging, which solves the problem of tracking failure in the case of object occlusion. The experimental results show that the proposed RACFME algorithm can track a moving target with a 95% success score, and the RA module and ME both play an effective role. Full article
(This article belongs to the Special Issue Advances in Image Processing with Symmetry/Asymmetry)
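The motion-estimation idea in the abstract above (a Kalman filter prediction blended with the average of recent trajectory steps while the target is occluded) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the constant-velocity model, the 50/50 blend, and the `window` parameter are all assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2D constant-velocity Kalman filter (state: x, y, vx, vy)."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)      # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)       # we only measure position
        self.Q = q * np.eye(4)                         # process noise
        self.R = r * np.eye(2)                         # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def estimate_position(kf, history, occluded, measurement=None, window=5):
    """When visible, trust the detector; when occluded, blend the KF
    prediction with the last position advanced by the average recent step."""
    pred = kf.predict()
    if occluded:
        if len(history) >= 2:
            steps = np.diff(np.asarray(history[-window:], float), axis=0)
            avg_step = steps.mean(axis=0)
            pred = 0.5 * pred + 0.5 * (np.asarray(history[-1]) + avg_step)
        pos = pred
    else:
        kf.update(measurement)
        pos = np.asarray(measurement, float)
    history.append(tuple(pos))
    return pos
```

Feeding a few visible frames and then an occluded one shows the estimate continuing along the established trajectory instead of stalling.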
35 pages, 19616 KiB  
Article
Frequency-Adaptive Current Control of a Grid-Connected Inverter Based on Incomplete State Observation Under Severe Grid Conditions
by Min Kang, Sung-Dong Kim and Kyeong-Hwa Kim
Energies 2025, 18(8), 1879; https://doi.org/10.3390/en18081879 - 8 Apr 2025
Abstract
A grid-connected inverter (GCI) plays a crucial role in facilitating stable and efficient power delivery, especially under severe and complex grid conditions. Harmonic distortion and imbalance of the grid voltages may degrade the quality of the grid-injected current. Moreover, inductive-capacitive (LC) grid impedance and grid frequency fluctuation also degrade current control performance and stability. To overcome these issues, this study presents a frequency-adaptive current control strategy for a GCI based on incomplete state observation under severe grid conditions. When LC grid impedance exists, it introduces additional states into the GCI system model. However, since the grid inductance current is unmeasurable, the state feedback control design is limited. To overcome this limitation, this study adopts a state feedback control approach based on incomplete state observation, designing the controller only with the available states. The proposed control strategy incorporates feedback controllers with ten states, an integral controller, and resonant controllers for robust inverter operation. To reduce reliance on additional sensing devices, a discrete-time full-state current observer is utilized. In particular, to avoid both the grid frequency dependency of the system model and a complex online discretization process, the observer is designed in the stationary reference frame. Additionally, a moving average filter (MAF)-based phase-locked loop (PLL) is incorporated for accurate frequency detection against distorted grid voltages. To evaluate the performance of the designed control strategy, simulations and experiments are executed under severe grid conditions, including grid frequency changes, unbalanced grid voltages, harmonic distortion, and LC grid impedance. Full article
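The MAF-based PLL mentioned above exploits a basic property of the moving average filter: a window spanning exactly one period of a ripple component rejects that ripple and its integer harmonics while passing DC. A minimal sketch of the filter itself (the 100 Hz ripple, 10 kHz sampling rate, and amplitudes are illustrative assumptions, not values from the paper; a full PLL would wrap this around a loop filter and oscillator):

```python
import numpy as np
from collections import deque

class MovingAverageFilter:
    """Sliding-window average. A window spanning exactly one ripple period
    cancels that ripple and its integer multiples, leaving the DC component."""
    def __init__(self, n):
        self.n = n
        self.buf = deque([0.0] * n, maxlen=n)
        self.total = 0.0

    def step(self, x):
        self.total += x - self.buf[0]   # drop oldest sample, add newest
        self.buf.append(x)
        return self.total / self.n

# Demo: a q-axis-voltage-like signal, DC plus 100 Hz ripple (the kind of
# double-frequency ripple unbalanced voltages produce in a 50 Hz synchronous
# frame). At fs = 10 kHz, a 100-sample window spans one ripple period, so
# the MAF recovers the DC component almost exactly once the window fills.
fs = 10_000.0
maf = MovingAverageFilter(100)
out = 0.0
for k in range(300):
    t = k / fs
    out = maf.step(1.0 + 0.2 * np.sin(2 * np.pi * 100.0 * t))
```

After the window fills, `out` sits at the DC value 1.0 because the ripple averages to zero over a full period.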
19 pages, 4998 KiB  
Article
Computer Vision-Based Robotic System Framework for the Real-Time Identification and Grasping of Oysters
by Hao-Ran Qu, Jue Wang, Lang-Rui Lei and Wen-Hao Su
Appl. Sci. 2025, 15(7), 3971; https://doi.org/10.3390/app15073971 - 3 Apr 2025
Abstract
This study addresses the labor-intensive and safety-critical challenges of manual oyster processing with an advanced robotic intelligent sorting system. Central to this system is the integration of a high-resolution vision module, dual operational controllers, and the collaborative AUBO-i3 robot, all harmonized through a Robot Operating System (ROS) framework. A specialized oyster image dataset was curated and augmented to train a robust You Only Look Once version 8 Oriented Bounding Box (YOLOv8-OBB) model, further enhanced through the incorporation of MobileNet Version 4 (MobileNetV4). This optimization reduced the number of model parameters by 50% and lowered the computational load by 23% in terms of GFLOPS (giga floating-point operations per second). To capture oyster motion dynamically on a conveyor belt, a Kalman filter (KF) combined with a low-pass filter was employed to predict oyster trajectories, improving noise reduction and motion stability; this approach achieves superior noise reduction compared to traditional moving-average methods. The system achieved a 95.54% success rate in static gripping tests and 84% under dynamic conditions. These advancements mark a significant step toward revolutionizing seafood processing, offering substantial gains in operational efficiency, reducing potential contamination risks, and paving the way for fully automated, unmanned production systems in the seafood industry. Full article
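The trajectory-smoothing step described above can be illustrated with a first-order exponential low-pass on the detected centroid plus a constant-velocity extrapolation to the grasp point. This is a hedged sketch under assumptions: the scalar state, the `alpha` value, and the `grasp_point` helper are illustrative, not the authors' code, and the paper's full pipeline pairs the low-pass with a Kalman filter.

```python
class LowPass:
    """First-order exponential low-pass: y += alpha * (x - y), 0 < alpha <= 1.
    Smaller alpha smooths detector jitter more but lags the true motion more."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.y = None

    def step(self, x):
        # First sample initializes the state; later samples are blended in.
        self.y = x if self.y is None else self.y + self.alpha * (x - self.y)
        return self.y

def grasp_point(prev_smoothed, curr_smoothed, frame_dt, lead_time):
    """Extrapolate the filtered centroid forward by the gripper's lead time,
    assuming a roughly constant belt speed between frames."""
    v = (curr_smoothed - prev_smoothed) / frame_dt
    return curr_smoothed + v * lead_time
```

With a 10 Hz camera (`frame_dt = 0.1` s) and a 0.2 s gripper lead time, a belt moving the filtered centroid from 0.0 to 1.0 between frames yields a grasp target of 3.0 along the belt axis.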