Search Results (144)

Search Parameters:
Keywords = auto calibration

26 pages, 4138 KB  
Article
Self-Supervised Cascade Denoising Auto-Encoder for Accurate Spatial Positioning of Target by Fusing Uncalibrated Video and Low-Cost GNSS
by Xiaofei Zeng, Ruliang He, Songchen Han, Wei Li, Menglong Yang and Binbin Liang
Remote Sens. 2026, 18(8), 1161; https://doi.org/10.3390/rs18081161 - 13 Apr 2026
Viewed by 349
Abstract
Accurate measurement of the spatial position of targets in a fixed camera is critical in remote sensing applications. Visual spatial positioning methods that rely solely on images are susceptible to adverse factors such as inaccurate camera calibration, imprecise image target detection, and incorrect feature point selection. Complementary to images, the ubiquitous Global Navigation Satellite System (GNSS) data can provide spatial positions of targets, but most of them are low-cost GNSSs with significant positioning noise. In order to fuse these two valuable but flawed positioning measurements to improve the accuracy and stability of spatial positioning, we propose a deep learning multi-modal spatial positioning method by fusing sequential uncalibrated video images and low-cost GNSSs. Firstly, a self-supervised cascade denoising auto-encoder (SCDAE) architecture is built to endow the auto-encoder with robustness to noise in the raw inputs. Then, based on the SCDAE and Bayesian optimal estimation, a Bayesian self-supervised multi-modal fusion positioning method SCDAE-MFP is presented to achieve accurate and stable spatial positioning by self-supervised manifold learning. Specifically, to provide visual self-supervision to the SCDAE-MFP, a visual position denoising auto-encoder module based on dual unsupervised learning is proposed. Extensive experimental results on public datasets showed that SCDAE-MFP outperformed five other classical and state-of-the-art baseline methods by an average of 56.79% in reducing positioning errors. Full article
(This article belongs to the Special Issue GNSS and Multi-Sensor Integrated Precise Positioning and Applications)
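The fusion idea in the abstract above, combining two noisy position measurements via Bayesian optimal estimation, reduces in the simplest Gaussian case to inverse-variance weighting. The sketch below illustrates only that generic principle, not the SCDAE-MFP model; the positions and sigmas are invented.

```python
# Generic inverse-variance (Gaussian) fusion of two noisy measurements of
# the same quantity, e.g. a visual position estimate and a low-cost GNSS
# fix. Illustrative only; values are made up.

def fuse(x1, sigma1, x2, sigma2):
    """Return (fused estimate, fused variance) for two independent
    Gaussian measurements x1 ~ N(mu, sigma1^2), x2 ~ N(mu, sigma2^2)."""
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
    var = 1.0 / (w1 + w2)               # fused variance is always smaller
    mean = (w1 * x1 + w2 * x2) * var    # precision-weighted average
    return mean, var

# Equally uncertain visual and GNSS fixes (sigma = 2 m each):
pos, var = fuse(10.0, 2.0, 14.0, 2.0)
```

With equal uncertainties the fused estimate is the plain average and the variance halves; a noisier sensor is automatically down-weighted.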

14 pages, 1143 KB  
Article
Accuracy and Reproducibility of Digital Area and Depth Measurements of Surface Wounds: Benchtop and Clinical Validation
by Ron Linden, Perry V. Mayer, Rose Raizman, Hanna Varonina, Laura M. Jones-Donaldson and Danielle Dunham
Diagnostics 2026, 16(7), 1055; https://doi.org/10.3390/diagnostics16071055 - 1 Apr 2026
Viewed by 309
Abstract
Background/Objectives: Accurate and reproducible wound measurement is essential for monitoring healing and guiding treatment decisions. Conventional ruler-based techniques are prone to geometric overestimation and operator variability. This study evaluated the accuracy and reproducibility of the MolecuLightDX wound imaging device for measuring wound surface area and depth compared with ruler-based measurements and ground truth digital photography methods. Methods: This investigation comprised two companion studies: a prospective, paired, multicenter clinical study comparing MolecuLightDX measurement with the ruler method against an image-based ground truth, and a bench and clinical validation of the AutoDepth feature against a calibrated three-dimensional optical scanner. The area study included 17 benchtop wound models and enrolled 27 patients (33 wounds; area range: 0.56–23.04 cm²) across two wound care centers, and the AutoDepth study included 17 benchtop wound models and 34 clinical wounds (depth range: 0.06–4.13 cm). Accuracy, intra- and inter-user variability, and agreement were assessed using the mean percentage error (MPE), coefficient of variation (CV), intraclass correlation coefficients (ICC), and Bland–Altman analysis. Results: The device demonstrated high accuracy and reproducibility for both wound surface area and depth measurements compared with ruler-based and ground truth digital photography methods. The MPE for surface area was <10%, representing a tenfold improvement over ruler estimation (77.9%). For wound area, intra- and inter-user CVs were <10%, and for depth, ICCs were ≈0.99. Conclusions: The MolecuLightDX device provides accurate and consistent wound area and depth measurements across diverse wound types, demonstrating superior accuracy and reproducibility compared with conventional ruler-based methods and supporting its integration into wound assessment workflows.
(This article belongs to the Section Clinical Diagnosis and Prognosis)
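For readers unfamiliar with the agreement statistics quoted above, MPE and CV have simple textbook definitions. The sketch below is a generic illustration with invented measurements, not the study's analysis code.

```python
# Generic definitions of mean percentage error (MPE) and coefficient of
# variation (CV) as used for measurement-accuracy studies. The sample
# areas below are hypothetical, not data from the paper.

def mean_percentage_error(measured, truth):
    """Mean of per-wound absolute percentage errors vs. ground truth."""
    return sum(abs(m - t) / t * 100 for m, t in zip(measured, truth)) / len(truth)

def coefficient_of_variation(repeats):
    """CV (%) of repeated measurements of one wound: sample sd / mean * 100."""
    n = len(repeats)
    mean = sum(repeats) / n
    var = sum((x - mean) ** 2 for x in repeats) / (n - 1)
    return (var ** 0.5) / mean * 100

device = [2.10, 5.05, 0.58]   # hypothetical device areas, cm^2
truth  = [2.00, 5.00, 0.56]   # hypothetical ground-truth areas, cm^2
mpe = mean_percentage_error(device, truth)
cv  = coefficient_of_variation([2.10, 2.05, 2.08])
```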

18 pages, 9422 KB  
Article
A SAM2-Driven RGB-T Annotation Pipeline with Thermal-Guided Refinement for Semantic Segmentation in Search-and-Rescue Scenes
by Andrés Salas-Espinales, Ricardo Vázquez-Martín and Anthony Mandow
Modelling 2026, 7(2), 50; https://doi.org/10.3390/modelling7020050 - 4 Mar 2026
Viewed by 671
Abstract
High-quality RGB–thermal infrared (RGB-T) semantic segmentation datasets are crucial for search-and-rescue (SAR) applications, yet their development is hindered by the scarcity of annotated ground-truth and by the challenges of thermal-camera calibration, which typically depends on heated targets with limited geometric definition. Recent approaches focus on using semantic segmentation annotation tools and transferring RGB masks to multi-spectral data, but they do not fully address the need for robust cross-modal geometric validation, quality control, or human-in-the-loop reliability assessment in RGB-T segmentation. To fill this gap, we propose a validated cross-modal annotation pipeline that combines deep correspondence matching, geometric transformation (affine or homography) of RGB-T pairs, and quantitative alignment validation. Our RGB-T pipeline integrates semi-automatic annotation based on the Segment Anything Model 2 (SAM2) in Label Studio, with guided human refinement, and incorporates quantitative cost and quality control via inter-annotator agreement before being used in downstream model training. Results across three annotators show that the proposed approach reduces annotation time by 36% while achieving high annotation quality (mean IoU = 74.9%) and strong inter-annotator agreement (mean pixel accuracy = 74.3%, Cohen’s κ = 65%). The pipeline was used to annotate a SAR-oriented RGB-T dataset comprising 306 image pairs, which in turn served to train two state-of-the-art RGB-T segmentation models. These findings demonstrate the practical value of the proposed methodology and establish a reproducible framework for generating reliable RGB-T semantic segmentation datasets, complementing and extending recent multispectral auto-labeling approaches.
(This article belongs to the Section Modelling in Artificial Intelligence)
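The inter-annotator agreement statistic quoted above, Cohen's κ, corrects raw agreement for agreement expected by chance. A generic sketch on toy label sequences (not the SAR dataset):

```python
# Cohen's kappa for two annotators' label sequences:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the chance agreement implied by each annotator's label marginals.
# The toy labels below are invented for illustration.
from collections import Counter

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] / n * cb[l] / n for l in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

ann1 = ["person", "person", "rubble", "rubble", "person", "rubble"]
ann2 = ["person", "person", "rubble", "person", "person", "rubble"]
kappa = cohens_kappa(ann1, ann2)
```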

24 pages, 8773 KB  
Article
Soil Displacement Estimation from Integrated Sensing Technologies in Data-Driven Models Biased by Temporal Coherence of PS-InSAR
by Raffaele Tarantini, Gaetano Miraglia, Stefania Coccimiglio, Rosario Ceravolo and Giuseppe Andrea Ferro
Land 2026, 15(2), 296; https://doi.org/10.3390/land15020296 - 10 Feb 2026
Viewed by 516
Abstract
Spaceborne Synthetic Aperture Radar (SAR) interferometry provides long-term displacement measurements, but the quality of Persistent Scatterer (PS) time series depends critically on temporal coherence. Low-coherence points often exhibit auto-uncorrelated behaviours, which may be relevant to discriminate fast phenomena. This work introduces a coherence-based framework that identifies the coherence threshold beyond which PS displacement series retain sufficient reliability to support modelling. The threshold is estimated by analysing how data uncertainty, inferred through Sparse Bayesian Learning (SBL) techniques, varies with coherence and by detecting abrupt changes in this relationship. Once the optimal threshold is established, only the most reliable PS are used to train an SBL regression model linking satellite line-of-sight displacement to soil temperature and surface humidity measured by a low-cost ground sensor. PS-Interferometric SAR (PS-InSAR) time series are derived from COSMO-SkyMed raw images. The SBL model employs compressive-sensing principles and latent-parameter dictionaries of basis functions, whose latent parameters are calibrated through a constrained multi-start optimisation of a normalised residual-based objective function, regularised by a sub-validation dataset. In this work, it is shown that the trained model enables temporally denser reconstruction of displacement histories than the satellite revisit cycle allows and enables continuous soil monitoring by comparing model predictions with newly acquired PS-InSAR data. Full article
(This article belongs to the Special Issue Ground Deformation Monitoring via Remote Sensing Time Series Data)

28 pages, 2068 KB  
Article
Autonomous Offroad Vehicle Real-Time Multi-Physics Digital Twin: Modeling and Validation
by Mattias Lehto, Torbjörn Lindbäck, Håkan Lideskog and Magnus Karlberg
Machines 2026, 14(1), 128; https://doi.org/10.3390/machines14010128 - 22 Jan 2026
Viewed by 407
Abstract
The use of physical vehicles and environments during vehicle research and development is highly resource-intensive, particularly for autonomous vehicles. Digital models are therefore increasingly used instead, and these require high levels of fidelity and validity. While the two aforementioned qualities are often lacking, an absence of versatility for multi-purpose use is even more prevalent in current digital models. In response to these challenges, this work presents a novel real-time multi-physics digital twin of an offroad vehicle with high levels of fidelity and validity, both regarding the vehicle dynamics and hydraulics, as well as regarding the visual representation of the environment and the exteroceptive sensor emulation. The versatility of the digital twin enables its usage for vehicle development tasks concerning mechanical components and driveline, as well as for visual machine learning tasks, such as generation of auto-annotated visual training data. Development of control algorithms leveraging both visual input and mechanical systems is also enabled. Furthermore, the real-time capability allows for Hardware-in-the-Loop and Vehicle-in-the-Loop simulation. The modeling, calibration, and real-world validation of the digital twin are presented, with an emphasis on the vehicle dynamics and hydraulics. The demonstrated validity enables advancements in the development of autonomous offroad vehicles.
(This article belongs to the Special Issue Advances in Autonomous Vehicles Dynamics and Control, 2nd Edition)

34 pages, 2281 KB  
Article
Spatiotemporal Lattice-Constrained Event Linking and Automatic Labeling for Cross-Document Accident Reports
by Wenhua Zeng, Wenhu Tang, Diping Yuan, Bo Zhang and Yuhui Zeng
Appl. Sci. 2026, 16(2), 595; https://doi.org/10.3390/app16020595 - 6 Jan 2026
Viewed by 419
Abstract
Constructing reusable accident-text corpora is hindered by anonymization, heterogeneous sources, and sparse labels, which complicate cross-document event linking. We propose a spatiotemporal lattice-constrained approach that encodes administrative hierarchies and temporal granularity, defines domain-informed consistency criteria, instantiates spatial/temporal relations via a subset of RCC-8 and Allen’s interval algebra, estimates anchor weights via smoothing with monotonic projection, and fuses signals using a constrained monotonic network with explicit probability calibration. An active-learning decision rule—combining maximum probability with a probability-gap criterion—supports scalable automatic labeling, and controlled augmentation leverages instruction-tuned LLMs under lattice constraints. Experiments show competitive ranking (Hit@1 = 41.51%, Hit@5 = 77.33%) and discrimination (ROC-AUC = 87.34%), with the best F1 (62.46%). The method yields the lowest calibration errors (Brier = 0.14; ECE = 1.97%), maintains performance across sources, and exhibits the smallest F1 fluctuation across thresholds (Δ = 1.7%). In deployment-oriented analyses, it auto-labels 77.7% of cases with 97.51% accuracy among high-confidence outputs while routing 22.3% to review, where the true-positive rate is 81.46%. These findings indicate that integrating structured constraints with calibrated probabilistic fusion enables accurate, auditable, and scalable event linking for accident-corpus construction. Full article
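The calibration figures quoted above (Brier = 0.14; ECE = 1.97%) can be read against the standard definitions of these metrics. The sketch below gives generic implementations on toy probabilities; it is not the paper's fusion network, and the five-bin choice and sample values are arbitrary.

```python
# Standard probability-calibration metrics: the Brier score (mean squared
# gap between predicted probability and 0/1 outcome) and the expected
# calibration error (ECE: per-bin |accuracy - confidence|, weighted by
# bin occupancy). Toy predictions below are invented.

def brier(probs, labels):
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def ece(probs, labels, n_bins=5):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    total, err = len(labels), 0.0
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)   # mean confidence in bin
            acc = sum(y for _, y in b) / len(b)    # empirical accuracy in bin
            err += len(b) / total * abs(acc - conf)
    return err

probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 0, 0, 0]
```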

15 pages, 1843 KB  
Article
Comparing Methods for Uncertainty Estimation of Paraganglioma Growth Predictions
by Evi M. C. Sijben, Vanessa Volz, Tanja Alderliesten, Peter A. N. Bosman, Berit M. Verbist, Erik F. Hensen and Jeroen C. Jansen
J. Otorhinolaryngol. Hear. Balance Med. 2026, 7(1), 3; https://doi.org/10.3390/ohbm7010003 - 6 Jan 2026
Viewed by 531
Abstract
Background: Paragangliomas of the head and neck are rare, benign and indolent to slow-growing tumors. Not all tumors require immediate active intervention, and surveillance is a viable management strategy in a large proportion of cases. Treatment decisions are based on several tumor- and patient-related factors, with the tumor progression rate being a predominant determinant. Accurate prediction of tumor progression has the potential to significantly improve treatment decisions by helping to identify patients who are likely to require active treatment in the future. It furthermore enables better-informed timing for follow-up, allowing early intervention for those who will ultimately need it, and optimization of the use of resources (such as MRI scans). Crucial to this is having reliable estimates of the uncertainty associated with a future growth forecast, so that this can be taken into account in the decision-making process. Methods: For various tumor growth prediction models, two methods for uncertainty estimation were compared: a historical-based one and a Bayesian one. We also investigated how incorporating either tumor-specific or general estimates of auto-segmentation uncertainty impacts the results of growth prediction. The performance of the uncertainty estimates was examined both from a technical and a practical perspective. Study design: Method comparison study. Results: Data of 208 patients were used, comprising 311 paragangliomas and 1501 volume measurements, resulting in 2547 tumor growth predictions (a median of 10 predictions per tumor). As expected, the uncertainty increased with the length of the prediction horizon and decreased with the inclusion of more tumor measurement data in the prediction model. The historical method resulted in estimated confidence intervals where the actual value fell within the estimated 95% confidence interval 94% of the time. 
However, this method resulted in confidence intervals that were too wide to be clinically useful (often over 200% of the predicted volume), and showed poor ability to differentiate growing and stable tumors. The estimated confidence intervals of the Bayesian method were much narrower. However, the 95% credible intervals were too narrow, with the true tumor volume falling within them only 78% of the time, indicating underestimation of uncertainty and insufficient calibration. Despite this, the Bayesian method showed markedly better ability to distinguish between growing and stable tumors, which arguably has the most practical value. When combining all growth models, the Bayesian method using tumor-specific auto-segmentation uncertainties resulted in an 86% correct classification of growing and non-growing tumors. Conclusions: Of the methods evaluated for predicting paraganglioma progression, the Bayesian method is the most useful in the considered context, because it shows the best ability to discriminate between growing and non-growing tumors. To determine how these methods could be used and what their value is for patients, they should be further evaluated in a clinical setting.
(This article belongs to the Section Head and Neck Surgery)
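The 94% and 78% figures above are empirical coverage rates: the share of true tumor volumes that fall inside the predicted 95% intervals. A minimal generic sketch, with invented interval and volume values:

```python
# Empirical coverage of prediction intervals: the fraction of true values
# lying within their predicted [low, high] interval. A well-calibrated
# 95% interval should achieve roughly 0.95. All numbers are invented.

def coverage(intervals, truths):
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

intervals = [(0.8, 1.6), (2.0, 3.0), (0.5, 0.9), (1.1, 1.4)]  # cm^3, hypothetical
truths    = [1.2, 2.4, 1.0, 1.3]                              # hypothetical
cov = coverage(intervals, truths)
```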

22 pages, 1236 KB  
Article
An Industrial Framework for Cold-Start Recommendation in Few-Shot and Zero-Shot Scenarios
by Xulei Cao, Wenyu Zhang, Feiyang Jiang and Xinming Zhang
Information 2025, 16(12), 1105; https://doi.org/10.3390/info16121105 - 15 Dec 2025
Viewed by 1356
Abstract
With the rise of online advertising, e-commerce industries, and new media platforms, recommendation systems have become an essential product form that connects users with a vast number of candidates. A major challenge in recommendation systems is the cold-start problem, where the absence of historical interaction data for new users and items leads to poor recommendation performance. We first analyze the causes of the cold-start problem, highlighting the limitations of existing embedding models when faced with a lack of interaction data. To address this, we classify the features of models into three categories, leveraging the Trans Block mapping to transfer features into the semantic space of missing features. Then, we propose a model-agnostic industrial framework (MAIF) with the Auto-Selection serving mechanism to address the cold-start recommendation problem in few-shot and zero-shot scenarios without requiring training from scratch. This framework can be applied to various online models without altering the prediction for warm entities, effectively avoiding the “seesaw phenomenon” between cold and warm entities. It improves prediction accuracy and calibration performance in three cold-start scenarios of recommendation systems. Finally, both the offline experiments on real-world industrial datasets and the online advertising system on the Dazhong Dianping app validate the effectiveness of our approach, showing significant improvements in recommendation performance for cold-start scenarios. Full article

22 pages, 1663 KB  
Article
Interpretable AutoML for Predicting Unsafe Miner Behaviors via Psychological-Contract Signals
by Yong Yan and Jizu Li
AI 2025, 6(12), 314; https://doi.org/10.3390/ai6120314 - 3 Dec 2025
Viewed by 935
Abstract
Occupational safety in high-risk sectors, such as mining, depends heavily on understanding and predicting workers’ behavioural risks. However, existing approaches often overlook the psychological dimension of safety, particularly how psychological-contract violations (PCV) between miners and their organizations contribute to unsafe behavior, and they rarely leverage interpretable artificial intelligence. This study bridges that gap by developing an explainable AutoML framework that integrates AutoGluon, SHAP, and LIME to classify miners’ safety behaviors using psychological and organizational indicators. An empirically calibrated synthetic dataset of 5000 miner profiles (20 features) was used to train multiclass (Safe, Moderate, and Unsafe) and binary (Safe and Unsafe) classifiers. The WeightedEnsemble_L2 model achieved the best performance, with 97.6% accuracy (multiclass) and 98.3% accuracy (binary). Across tasks, Post-Intervention Score, Fatigue Level, and Supervisor Support consistently emerge as high-impact features. SHAP summarizes global importance patterns, while LIME provides per-case rationale, enabling auditable, actionable guidance for safety managers. We outline ethics and deployment considerations (human-in-the-loop review, transparency, bias checks) and discuss transfer to real-world logs as future work. Results suggest that interpretable AutoML can bridge behavioural safety theory and operational decision-making by producing high-accuracy predictions with transparent attributions, informing targeted interventions to reduce unsafe behaviours in high-risk mining contexts. Full article

32 pages, 8299 KB  
Article
The Auto Sensor Test as an AE Signal Source in Concrete Specimens
by Magdalena Bacharz, Michał Teodorczyk and Jarosław Szulc
Materials 2025, 18(22), 5084; https://doi.org/10.3390/ma18225084 - 8 Nov 2025
Cited by 1 | Viewed by 749
Abstract
Numerous artificial sources of acoustic waves have been described in the literature, which are designed to replicate the process by which actual damage occurs in a given material. Knowledge of the velocity with which an acoustic wave propagates is important here, both in order to correctly locate the signal source and to determine the degree of material degradation or the location of damage that has already occurred in the medium. This work presents the results of laboratory tests comparing two sources of artificial waves in terms of determining their parameters: the Hsu–Nielsen source and a sensor with the Auto Sensor Test (AST) function. The AST function allows the sensors to send and receive an elastic wave and is used to calibrate the sensor before, during, or after the test. In this study, the impact of the positioning of the sensors on the element being tested, their spacing, and the distance of the wave source from the sensor on selected parameters of the recorded waves is analyzed: velocity, amplitude, energy, rise time, waveform shape, and wavelet maps. This work demonstrates that a sensor with the AST function can be an effective alternative to the Hsu–Nielsen source in diagnostic studies.
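Linear (one-dimensional) source location from an arrival-time difference is the textbook use of the wave velocity mentioned above. The sketch below shows that generic formula only, not the authors' setup; sensor positions, velocity, and arrival times are invented.

```python
# Classic 1-D acoustic-emission source location between two sensors on a
# line: with arrival times t1, t2 at positions x1, x2 and wave velocity v,
# the source lies at x = (x1 + x2)/2 - v*(t2 - t1)/2 (derived from
# t_i = t0 + |x - x_i| / v for a source between the sensors).
# All numeric values below are made up for illustration.

def locate_1d(x1, x2, t1, t2, v):
    """Source position (same units as x1, x2) from arrival-time difference."""
    return (x1 + x2) / 2 - v * (t2 - t1) / 2

# Sensors 1 m apart on a concrete beam, wave velocity 4000 m/s:
source_x = locate_1d(0.0, 1.0, 7.5e-5, 1.75e-4, 4000.0)
```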

15 pages, 3027 KB  
Article
Artificial Intelligence as a Diagnostic Tool in Preoperative Surgical Planning for Early Non-Small Cell Lung Cancer: A Single-Center Experience
by Zeljko Garabinovic, Milan Savic, Nikola Colic, Jelena Rakocevic, Maja Ercegovac, Milos Mitrovic, Katarina Lukic, Jelica Vukmirovic, Jelena Vasic Madzarevic, Stefan Stevanovic, Gordana Bisevac Peric, Miljana Bubanja and Aleksandra Pavic
J. Clin. Med. 2025, 14(21), 7609; https://doi.org/10.3390/jcm14217609 - 27 Oct 2025
Viewed by 996
Abstract
Background: Lung cancer remains the leading cause of cancer-related mortality worldwide, with non-small cell lung cancer (NSCLC) accounting for the majority of cases. Radiomics and artificial intelligence (AI) have emerged as promising tools for quantitative imaging analysis and precision staging. This study aimed to evaluate the ability of an AI-based radiomics model to preoperatively predict tumor (T) and nodal (N) stage, lymphovascular invasion (LVI), and postoperative complications in patients with early-stage NSCLC. Material and Methods: This retrospective study included 51 consecutive patients who underwent anatomical lobectomy with systematic lymph node dissection between 2019 and 2024, at the Clinic for Thoracic Surgery of the University Clinical Center of Serbia. Quantitative imaging features were extracted from preoperative CT scans using the Lesion Scout with Auto ID module (syngo.via VB50 MM, Siemens Healthineers). Radiomics and clinical predictors were analyzed using regularized logistic regression (LASSO) with five-fold cross-validation. Model performance was assessed using AUC, accuracy, sensitivity, specificity, precision, and F1 score, and calibration was evaluated using the Hosmer–Lemeshow test. Groups were compared using parametric and non-parametric tests. Correlation between the variables was assessed using Spearman’s rank correlation coefficient. All p-values less than 0.05 were considered significant. Results: The AI-based model showed excellent performance for predicting the T component (training AUC = 0.89; test AUC = 0.86; F1 = 0.81) and acceptable calibration (p = 0.41). Nodal metastasis (OR = 0.108; 95% CI: 0.011–1.069; p = 0.057) and LVI (OR = 0.519; 95% CI: 0.139–1.937; p = 0.329) were not significantly predicted. Emphysema was identified as a significant independent predictor of postoperative complications (χ2 = 5.13; p = 0.024). 
Conclusions: The AI-driven radiomics model demonstrated strong predictive ability for the T component and identified emphysema as a clinically relevant predictor of postoperative complications. Full article

26 pages, 6031 KB  
Article
Model-Based Design and Sensitivity Optimization of Frequency-Output Pressure Sensors for Real-Time Monitoring in Intelligent Rowing Systems
by Iaroslav Osadchuk, Oleksandr Osadchuk, Serhii Baraban, Andrii Semenov and Mariia Baraban
Electronics 2025, 14(20), 4049; https://doi.org/10.3390/electronics14204049 - 15 Oct 2025
Viewed by 766
Abstract
This study presents a model-driven approach to the design, calibration, and application of frequency-output pressure sensors integrated within an intelligent system for real-time monitoring of rowing performance. The proposed system captures biomechanical parameters of the “boat–rower” complex across 50 parallel channels with a temporal resolution of 8–12 ms. At the core of the sensing architecture are parametric pressure transducers incorporating strain-gauge primary elements and microelectronic auto-generator circuits featuring negative differential resistance (NDR). These oscillating circuits convert mechanical stress into high-frequency output signals in the 1749.9–1751.9 MHz range, with pressure sensitivities from 0.365 kHz/kPa to 1.370 kHz/kPa. The sensor models are derived using physical energy conversion principles, enabling the formulation of analytical expressions for transformation and sensitivity functions. These models simplify sensitivity tuning and allow clear interpretation of how structural and electronic parameters influence output frequency. The system architecture eliminates the need for analog-to-digital converters and signal amplifiers, reducing cost and power consumption, while enabling wireless ultra high frequency (UHF) transmission of sensor data. Integrated algorithms analyze the influence of biomechanical variables on athlete performance, enabling real-time diagnostics. The proposed model-based methodology offers a scalable and accurate solution for intelligent sports instrumentation and beyond. Full article
(This article belongs to the Special Issue Wearable Sensors for Human Position, Attitude and Motion Tracking)
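As a rough illustration of how a frequency-output sensor is read back, assume, as a first-order simplification of the transformation functions described above, that the output frequency shifts linearly with pressure, f = f0 + S·p. The base frequency and sensitivity below are taken from the quoted ranges; the linearity itself is an illustrative assumption, not the paper's full model.

```python
# Inverting a linear frequency-vs-pressure characteristic, f = f0 + S*p.
# f0 (1749.9 MHz, expressed in kHz) and S (1.370 kHz/kPa) come from the
# ranges quoted in the abstract; the linear form is an assumption made
# here for illustration only.

def pressure_kpa(f_khz, f0_khz=1.7499e6, s_khz_per_kpa=1.370):
    """Recover applied pressure (kPa) from a measured output frequency (kHz)."""
    return (f_khz - f0_khz) / s_khz_per_kpa

# A 137 kHz upward shift corresponds to 100 kPa at this sensitivity:
p = pressure_kpa(1.7499e6 + 137.0)
```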

24 pages, 5065 KB  
Article
Benchmark Dataset and Deep Model for Monocular Camera Calibration from Single Highway Images
by Wentao Zhang, Wei Jia and Wei Li
Sensors 2025, 25(18), 5815; https://doi.org/10.3390/s25185815 - 18 Sep 2025
Viewed by 1404
Abstract
Single-image based camera auto-calibration holds significant value for improving perception efficiency in traffic surveillance systems. However, existing approaches face dual challenges: scarcity of real-world datasets and poor adaptability to multi-view scenarios. This paper presents a systematic solution framework. First, we constructed a large-scale synthetic dataset containing 36 highway scenarios using the CARLA 0.9.15 simulation engine, generating approximately 336,000 virtual frames with precise calibration parameters. The dataset achieves statistical consistency with real-world scenes by incorporating diverse view distributions, complex weather conditions, and varied road geometries. Second, we developed DeepCalib, a deep calibration network that explicitly models perspective projection features through the triplet attention mechanism. This network simultaneously achieves road direction vanishing point localization and camera pose estimation using only a single image. Finally, we adopted a progressive learning paradigm: robust pre-training on synthetic data establishes universal feature representations in the first stage, followed by fine-tuning on real-world datasets in the second stage to enhance practical adaptability. Experimental results indicate that DeepCalib attains an average calibration precision of 89.6%. Compared to conventional multi-stage algorithms, our method achieves a single-frame processing speed of 10 frames per second, showing robust adaptability to dynamic calibration tasks across diverse surveillance views. Full article
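The road-direction vanishing point that DeepCalib localizes can be illustrated with classical projective geometry: two parallel lane markings meet, in the image, at the vanishing point, which is computable with homogeneous-coordinate cross products. This is a generic sketch with made-up pixel coordinates, not the network itself.

```python
# Vanishing point as the intersection of two image lines, in homogeneous
# coordinates: the line through two points is their cross product, and
# two lines intersect at the cross product of the lines. The lane-marking
# endpoints below are invented pixel coordinates.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def vanishing_point(p1, p2, q1, q2):
    """Intersection of line(p1,p2) and line(q1,q2); points are (x, y) pixels."""
    l1 = cross((*p1, 1.0), (*p2, 1.0))   # left lane marking as a line
    l2 = cross((*q1, 1.0), (*q2, 1.0))   # right lane marking as a line
    x, y, w = cross(l1, l2)              # homogeneous intersection
    return (x / w, y / w)

# Two lane markings converging toward the horizon:
vp = vanishing_point((100, 700), (300, 400), (900, 700), (700, 400))
```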
30 pages, 651 KB  
Article
A Fusion of Statistical and Machine Learning Methods: GARCH-XGBoost for Improved Volatility Modelling of the JSE Top40 Index
by Israel Maingo, Thakhani Ravele and Caston Sigauke
Int. J. Financial Stud. 2025, 13(3), 155; https://doi.org/10.3390/ijfs13030155 - 25 Aug 2025
Cited by 3 | Viewed by 2986
Abstract
Volatility modelling is a key feature of financial risk management, portfolio optimisation, and forecasting, particularly for market indices such as the JSE Top40 Index, which serves as a benchmark for the South African stock market. This study investigates volatility modelling of the JSE Top40 Index log-returns from 2011 to 2025 using a hybrid approach that integrates statistical and machine learning techniques through a two-step approach. The ARMA(3,2) model was chosen as the optimal mean model, using the auto.arima() function from the forecast package in R (version 4.4.0). Several alternative variants of GARCH models, including sGARCH(1,1), GJR-GARCH(1,1), and EGARCH(1,1), were fitted under various conditional error distributions (i.e., STD, SSTD, GED, SGED, and GHD). The choice of the model was based on AIC, BIC, HQIC, and LL evaluation criteria, and ARMA(3,2)-EGARCH(1,1) was the best model according to the lowest evaluation criteria. Residual diagnostic results indicated that the model adequately captured autocorrelation, conditional heteroskedasticity, and asymmetry in JSE Top40 log-returns. Volatility persistence was also detected, confirming the persistence attributes of financial volatility. Thereafter, the ARMA(3,2)-EGARCH(1,1) model was coupled with XGBoost using standardised residuals extracted from ARMA(3,2)-EGARCH(1,1) as lagged features. The data was split into training (60%), testing (20%), and calibration (20%) sets. Based on the lowest values of forecast accuracy measures (i.e., MASE, RMSE, MAE, MAPE, and sMAPE), along with prediction intervals and their evaluation metrics (i.e., PICP, PINAW, PICAW, and PINAD), the hybrid model captured residual nonlinearities left by the standalone ARMA(3,2)-EGARCH(1,1) and demonstrated improved forecasting accuracy. The hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model outperforms the standalone ARMA(3,2)-EGARCH(1,1) model across all forecast accuracy measures. This highlights the robustness and suitability of the hybrid ARMA(3,2)-EGARCH(1,1)-XGBoost model for financial risk management in emerging markets and signifies the strengths of integrating statistical and machine learning methods in financial time series modelling. Full article
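The data-preparation half of the two-step hybrid described above can be sketched in a few lines: lag the standardized GARCH residuals into a supervised feature matrix, then split it chronologically 60/20/20. This is an illustrative sketch, not the authors' code; the helper names are invented, and in a real pipeline `X` and `y` would be produced from fitted ARMA-EGARCH residuals and passed to an XGBoost regressor.

```python
def make_lagged_features(residuals, n_lags):
    """Turn a residual series into (X, y) pairs where each row of X holds
    the n_lags most recent residuals [r_{t-1}, ..., r_{t-n_lags}] and the
    target y is r_t. Rows before the first full lag window are dropped."""
    X, y = [], []
    for t in range(n_lags, len(residuals)):
        X.append([residuals[t - k] for k in range(1, n_lags + 1)])
        y.append(residuals[t])
    return X, y

def split_60_20_20(n):
    """Chronological 60/20/20 split of n samples into training, testing,
    and calibration index ranges (no shuffling, to respect time order)."""
    a, b = int(n * 0.6), int(n * 0.8)
    return range(0, a), range(a, b), range(b, n)
```

The chronological (unshuffled) split matters here: shuffling would leak future residuals into the training set and overstate forecast accuracy.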
19 pages, 12064 KB  
Article
Three-Dimensional Printed Stimulating Hybrid Smart Bandage
by Małgorzata A. Janik, Michał Pielka, Petro Kovalchuk, Michał Mierzwa and Paweł Janik
Sensors 2025, 25(16), 5090; https://doi.org/10.3390/s25165090 - 16 Aug 2025
Cited by 1 | Viewed by 1860
Abstract
The treatment of chronic wounds and pressure sores is an important challenge in the context of public health and the effectiveness of patient treatment. Therefore, new methods are being developed to reduce or, in extreme cases, to initiate and conduct the wound healing process. This article presents an innovative smart bandage, programmable using a smartphone, which generates small-amplitude impulse vibrations. The communication between the smart bandage and the smartphone is realized using BLE. The programmability of the smart bandage allows for personalized therapy. Owing to the built-in MEMS sensor, the smart bandage makes it possible to monitor its operation during rehabilitation and to implement an auto-calibration procedure. The flexible, openwork mechanical structure of the dressing was made using 3D printing technology, thanks to which the solution is easy to implement and can be used together with traditional dressings to create hybrid ones. Miniature electronic circuits and actuators controlled by a PWM signal were designed as replaceable elements; thus, the openwork structure can be treated as single-use. The smart bandage containing six actuators presented in this article generates oscillations in the range from about 40 Hz to 190 Hz. The system generates low-amplitude vibrations, below 1 g. The actuators were operated at a voltage of 1.65 V to reduce energy consumption. For comparison, the actuators were also operated at the nominal voltage of 3.17 V, as specified by the manufacturer. Full article
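Driving actuators over a programmable 40-190 Hz range with PWM typically comes down to computing a timer period from the MCU clock. The sketch below is generic and hypothetical (the clock frequency, prescaler, and function names are assumptions, not details from the article): it derives the timer top value for a target vibration frequency and reports the frequency actually achievable after integer rounding.

```python
def pwm_settings(clock_hz, prescaler, target_hz):
    """For a timer counting 0..top at clock_hz/prescaler ticks per second,
    return (top, actual_hz): the period register value that best matches
    target_hz, and the frequency actually produced after rounding."""
    top = round(clock_hz / (prescaler * target_hz)) - 1
    actual_hz = clock_hz / (prescaler * (top + 1))
    return top, actual_hz

def duty_compare(top, duty):
    """Compare-register value for a duty cycle in [0, 1]."""
    return round(duty * (top + 1))
```

For example, with a hypothetical 1 MHz timer clock and a prescaler of 8, a 100 Hz target lands exactly on a period register of 1249; frequencies near the 190 Hz end of the range incur slightly larger rounding error because the period counts are smaller.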