Search Results (4,284)

Search Parameters:
Keywords = adaptive filtering

21 pages, 2478 KB  
Article
Novel Adaptive Location Calibration Approach for High-Speed Railway Track Measurement Using Integrated BDS/Total Station Data
by Yong Zou, Jinguang Jiang, Jiaji Wu and Weiping Jiang
Appl. Sci. 2026, 16(6), 2958; https://doi.org/10.3390/app16062958 - 19 Mar 2026
Abstract
Precise measurement of the geometric state of high-speed railway track is a prerequisite for smooth and safe operation. Current track inspection trolleys, which integrate only an inertial navigation system (INS) and a total station (TS), rely entirely on the track control network (CPIII) deployed along the track to calibrate their absolute location and suppress INS errors. Because of this heavy dependence on surrounding CPIII points, the method faces severe challenges in operational efficiency and cost. To address this issue, this study exploits the fast and precise positioning capability of the Chinese Beidou System (BDS) and proposes a novel adaptive location calibration approach using tightly integrated BDS/TS data. Within a Kalman filtering framework, the approach fuses BDS observations with TS distance measurements in the observation domain, and the number of CPIII points to be observed is adaptively reduced according to the surrounding environment. As a result, the absolute location of track inspection trolleys can be calibrated quickly and accurately without INS data, greatly reducing dependence on CPIII points. Experiments were conducted under two typical scenarios: open-sky and blocked BDS signals. The results demonstrate that, under open-sky conditions, the BDS-only solution achieves positioning errors of less than 1.0 cm in the north, east, and up directions within 5 min, eliminating reliance on the control network. In obstructed scenarios, where the BDS-only solution fails to converge to the 1 cm level within 5 min, the tightly integrated BDS/TS approach combined with CPIII data converges quickly in the north and east directions, with positioning errors of less than 1 cm. The proposed approach provides a novel location calibration scheme for track geometric state measurement in different environments, effectively reducing the dependence of track measurement operations on CPIII points and significantly enhancing measurement efficiency and flexibility. Full article
(This article belongs to the Section Earth Sciences)
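
The abstract above describes fusing BDS position observations with total-station distance measurements in the observation domain of a Kalman filter. As a rough illustration only (not the authors' implementation), the sketch below performs a single Kalman measurement update in which a 3-D position state is corrected jointly by a BDS position fix and a TS range to a known CPIII point; all noise values and coordinates are invented for the example.

```python
import numpy as np

def kalman_update(x, P, z, h, H, R):
    """Generic Kalman/EKF measurement update: state x, covariance P,
    measurement z, predicted measurement h = h(x), Jacobian H, noise R."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - h)                      # corrected state
    P = (np.eye(len(x)) - K @ H) @ P         # corrected covariance
    return x, P

# --- toy setup (all numbers are illustrative, not from the paper) ---
x = np.array([10.0, 5.0, 1.0])               # trolley position (N, E, U), metres
P = np.eye(3) * 0.5**2                        # prior uncertainty

# 1) BDS position fix observes the state directly
z_bds = np.array([10.02, 4.99, 1.03])
H_bds = np.eye(3)
R_bds = np.eye(3) * 0.01**2                   # ~1 cm BDS noise (assumed)
x, P = kalman_update(x, P, z_bds, H_bds @ x, H_bds, R_bds)

# 2) Total-station range to a known CPIII point (nonlinear -> linearize)
cpiii = np.array([30.0, -10.0, 2.0])          # hypothetical CPIII coordinates
diff = x - cpiii
rng_pred = np.linalg.norm(diff)
H_ts = (diff / rng_pred).reshape(1, 3)        # Jacobian of the range w.r.t. position
z_ts = np.array([25.02])                      # measured slant distance (assumed)
R_ts = np.array([[0.002**2]])                 # ~2 mm TS noise (assumed)
x, P = kalman_update(x, P, z_ts, np.array([rng_pred]), H_ts, R_ts)

print("fused position:", x)
```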

23 pages, 1748 KB  
Article
Thermal Niche Differentiation Shapes the Hibernating Bat Assemblages in Bulgarian Caves Across an Elevational Gradient
by Heliana Dundarova, Ilya Acosta-Pankov, Elena Nedyalkova, Andrea Lubenova, Maksim Kolev, Krasimir Kirov, Krasimir Lakovski, Olya Genova, Valeri Parvanov, Plamenka Iskrenova, Vladimir Trifonov and Tsenka Chassovnikarova
Biology 2026, 15(6), 484; https://doi.org/10.3390/biology15060484 - 19 Mar 2026
Abstract
Elevation is a strong proxy for the thermal environment because it causes a predictable drop in temperature and food availability. This restricts cave-dwelling bats to species with specific metabolic traits, such as torpor or migration to avoid cold stress. In this context, we aimed to reveal how thermal niche differentiation structures 25 cave-dwelling bat assemblages along elevation gradients in two of the largest Bulgarian mountains—Stara Planina and Rhodopi. Multivariate PERMANOVA showed significant differences in bat assemblages among elevation groups (F = 1.616, p = 0.046), with altitude and temperature explaining 32.4% of the variance (p = 0.001). A high degree of species turnover (91.12% dissimilarity), driven by temperature niches, was observed: mesophilic Rhinolophus species dominated warm, low-elevation caves, while cold-adapted Myotis species were more common at high elevations. SIMPER analysis identified R. euryale as an indicator in low-elevation caves (p = 0.012) and the M. myotis/blythii complex at high elevations (p = 0.003). Alpha diversity showed no variation across elevation groups (p = 0.293), indicating that species turnover occurs without overall changes to local diversity. Mid-elevation assemblages lacked specific indicator species and resembled high-elevation communities, forming an ecotone. Thermal niche partitioning, as a physiological filter, shapes cave-dwelling bat assemblages and affects climate change range-shift predictions. Full article
(This article belongs to the Section Ecology)

33 pages, 31831 KB  
Article
Spherical Geodesic Bounds and a k-Circle Coverage Formulation
by Josiah Lansang and Faramarz F. Samavati
ISPRS Int. J. Geo-Inf. 2026, 15(3), 135; https://doi.org/10.3390/ijgi15030135 - 18 Mar 2026
Abstract
In this article, we introduce analogues of classic Euclidean bounds, including spherical caps, geodesic axis-aligned bounding boxes (AABBs), geodesic oriented bounding boxes (OBBs), and geodesic k-discrete oriented polytopes (k-DOPs). We also formulate k-circle coverage, a union of variable-radius caps solved by a binary integer program over candidates generated from Discrete Global Grid System (DGGS)-based rasterization. As all constructions run directly on the spherical surface, S2, they preserve geodesic distances and avoid projection distortion. We benchmark these methods on seven country boundary polygons consisting of thousands of points, and report construction time, memory, tightness, and query throughput. Results show our analytic geodesic bounds deliver orders of magnitude improvements over exact tests, with trade-offs in tightness: spherical caps are fastest but loosest; geodesic OBBs are a strong balance; geodesic k-DOPs consistently have the tightest bounds. k-circle coverage has spherical cap query speed while also having locally adaptive fits; construction time increases with DGGS resolution. Altogether, these bounds specific to the sphere provide practical, conservative filters for globe-scale Digital Earth queries. Full article
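
For readers who want a concrete picture of the simplest bound discussed above, the following sketch (not the authors' code) builds a conservative spherical cap around a set of unit vectors on S² and runs the cheap containment test; the cap centre here is simply the normalized mean direction, which is a valid but not necessarily minimal choice.

```python
import numpy as np

def bounding_cap(points):
    """Conservative spherical cap for unit vectors on S^2.
    Returns (axis, max_angle): every input point lies within max_angle of axis."""
    axis = points.mean(axis=0)
    axis /= np.linalg.norm(axis)              # cap centre = normalized mean direction
    cos_angles = np.clip(points @ axis, -1.0, 1.0)
    max_angle = np.arccos(cos_angles.min())   # geodesic (angular) radius
    return axis, max_angle

def cap_may_contain(axis, max_angle, q):
    """Cheap conservative filter: False means q is definitely outside the region."""
    return np.arccos(np.clip(q @ axis, -1.0, 1.0)) <= max_angle

def lonlat_to_unit(lon_deg, lat_deg):
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.column_stack([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])

# toy boundary vertices (illustrative only, not a real country polygon)
pts = lonlat_to_unit([10, 12, 15, 11], [45, 47, 44, 46])
axis, ang = bounding_cap(pts)
query = lonlat_to_unit([13], [45])[0]
print("angular radius (deg):", np.degrees(ang),
      "query may be inside:", cap_may_contain(axis, ang, query))
```
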
19 pages, 1050 KB  
Article
Research on Fire Smoke Recognition Algorithm with Image Enhancement for Unconventional Scenarios in Under-Construction Nuclear Power Plants
by Tingren Wang, Guangwei Liu, Kai Yu and Baolin Yao
Fire 2026, 9(3), 128; https://doi.org/10.3390/fire9030128 - 17 Mar 2026
Abstract
Accurate identification of fire smoke is a key step in early fire prevention and control. Traditional intelligent video and image processing technologies are significantly restricted by environmental factors, with weak anti-interference capability and limited ability to distinguish fire smoke, leading to high false alarm rates. To address this problem, this paper proposes a smoke detection method for unconventional fields of view based on image enhancement. The method improves the Retinex algorithm by integrating improved guided filtering, adaptive brightness correction, and joint CLAHE-WWGIF processing, providing targeted optimization for the interference factors specific to under-construction nuclear power plants, such as water mist, low illumination, and equipment occlusion. First, the improved Retinex algorithm is applied to increase image brightness and contrast, retain edge details while avoiding halo artifacts, reduce noise, and enhance visual features. Then, the sample dataset is assembled and the YOLOv11 object detection algorithm is used to accurately identify and localize smoke targets. Experimental data show that the method achieves smoke identification accuracies of 93.6% and 92.3% in interference-prone scenarios such as dark nights and water mist, respectively, with response times of only 1.8 s and 2.1 s. In practical on-site applications at nuclear power plant construction sites, the method is integrated into an “edge computing + distributed deployment” hardware system, achieving real-time smoke detection in core areas such as nuclear and conventional islands with a false alarm rate below 5% and a detection delay of ≤300 ms, meeting the stringent safety monitoring requirements of nuclear power projects. Overall, the method can be applied effectively to smoke detection in unconventional fields of view, accurately identifies smoke, significantly reduces the false alarm rate of fire detection, and provides reliable technical support for the safety of under-construction nuclear power plants. Full article
(This article belongs to the Special Issue Fire Risk Management and Emergency Prevention)
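
The enhancement pipeline above combines Retinex-style illumination correction with CLAHE; the exact CLAHE-WWGIF combination is specific to the paper. As a generic sketch only, the snippet below applies a single-scale Retinex step followed by OpenCV's CLAHE to the luminance channel; the file name and all parameters are placeholders.

```python
import cv2
import numpy as np

def enhance(bgr, sigma=30, clip=2.0, tiles=8):
    """Single-scale Retinex on luminance followed by CLAHE (illustrative only)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = l.astype(np.float32) + 1.0

    # Retinex: reflectance ~ log(image) - log(smoothed illumination estimate)
    illum = cv2.GaussianBlur(l, (0, 0), sigma)
    refl = np.log(l) - np.log(illum + 1.0)
    refl = cv2.normalize(refl, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # CLAHE boosts local contrast tile by tile without over-amplifying noise globally
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    refl = clahe.apply(refl)

    return cv2.cvtColor(cv2.merge([refl, a, b]), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    img = cv2.imread("frame.jpg")             # hypothetical input frame path
    if img is not None:
        cv2.imwrite("frame_enhanced.jpg", enhance(img))
```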

27 pages, 8038 KB  
Article
Adaptive Measurement Noise Covariance Estimation for GNSS/INS Tightly Coupled Integration Using a Linear-Attention Transformer with Residual Sparse Denoising and Channel Attentions
by Ning Wang and Fanming Liu
Information 2026, 17(3), 294; https://doi.org/10.3390/info17030294 - 17 Mar 2026
Abstract
Tightly coupled GNSS/INS is a widely adopted architecture for UAVs and ground vehicles. In this study, a Kalman-filter-based fusion framework integrates inertial data with satellite observables, including pseudorange and Doppler-derived range rate, to sustain precise navigation when GNSS quality degrades. A key bottleneck is that many pipelines rely on fixed or overly simplified measurement-noise covariance models, which cannot track the nonstationary statistics of real observations. To address this issue, we develop an adaptive covariance estimator built on a Transformer enhanced with three modules: a Linear-Attention layer, a Residual Sparse Denoising Autoencoder (R-SDAE), and a lightweight residual channel-attention block (LRCAM). The estimator predicts the measurement-noise covariance online. R-SDAE distills sparse, outlier-resistant features from noisy ephemeris; LRCAM reweights informative channels via residual gating; and Linear Attention preserves long-range spatiotemporal dependencies while reducing attention cost from O(N²) to O(N). A predictive factor further modulates the covariance for improved efficiency and adaptability. Experimental results on real road-test data show that the proposed method achieves sub-meter positioning accuracy in open-sky conditions and preserves meter-level accuracy with improved robustness under GNSS-degraded urban scenarios, outperforming the compared adaptive-filtering baselines and neural covariance estimators and thereby demonstrating superior positioning accuracy and stability. Full article
(This article belongs to the Section Artificial Intelligence)
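
The core idea above is to let a learned model predict the measurement-noise covariance R online instead of keeping it fixed. The toy sketch below (not the paper's network) only shows where such a predicted R plugs into a standard Kalman measurement update; the stand-in "estimator" that scales R with a carrier-to-noise feature is an assumption made for illustration.

```python
import numpy as np

def predict_R(cn0_dbhz, base_sigma=2.0):
    """Stand-in for the learned covariance estimator: weaker signals get larger
    pseudorange noise. The actual paper uses a Linear-Attention Transformer."""
    scale = 10 ** ((45.0 - np.asarray(cn0_dbhz)) / 20.0)   # heuristic, illustrative
    return np.diag((base_sigma * np.clip(scale, 0.5, 10.0)) ** 2)

def update(x, P, z, H, R):
    """Standard Kalman measurement update with the externally supplied R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

# toy 2-state example with two pseudorange-like measurements (fabricated numbers)
x, P = np.zeros(2), np.eye(2) * 25.0
H = np.eye(2)
z = np.array([1.2, -0.8])
R = predict_R([44.0, 32.0])          # second channel is noisier, so it is down-weighted
x, P = update(x, P, z, H, R)
print(x, np.diag(P))
```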

23 pages, 4658 KB  
Article
LUCIDiT: A Lean Urban Comfort Intelligent Digital Twin for Quick Mean Radiant Temperature Assessment
by Michele Baia, Giacomo Pierucci and Carla Balocco
Atmosphere 2026, 17(3), 305; https://doi.org/10.3390/atmos17030305 - 17 Mar 2026
Abstract
The intensification of Global Warming and Urban Heat Island phenomena necessitates advanced, computationally effective tools for evaluating outdoor thermal comfort and microclimatic dynamics by means of Mean Radiant Temperature assessment. However, existing high-resolution physical models often suffer from prohibitive computational costs. This research proposes LUCIDiT (Lean Urban Comfort Intelligent Digital Twin), a physically based modeling framework implemented for a quick mean radiant temperature assessment inside complex urban morphologies. The method integrates a simplified balance of mutual radiative heat exchanges with recursive time-series filtering to account for the thermal inertia of different urban materials, alongside greenery heat exchange due to evapotranspiration. This architecture creates an operational urban comfort digital twin that reduces computational times by orders of magnitude for large-scale mappings, without sacrificing physical accuracy. Validation against drone-acquired thermographic data and the established Urban Multi-scale Environmental Predictor model demonstrates high reliability and coherence with the real physical phenomena and context. The application to an urban pilot site in Florence reveals that strategic interventions, such as substituting impervious surfaces with irrigated greenery and arboreal canopies, can mitigate radiant loads by up to 20 °C. Findings show that the proposed urban comfort digital twin can be a robust, scalable instrument for designing evidence-based climate adaptation strategies and quick testing mitigation scenarios to enhance urban resilience. Full article
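
Two ingredients named above can be made concrete with textbook relations: mean radiant temperature as a view-factor-weighted fourth-power average of surrounding surface temperatures, and recursive (exponential) filtering of a surface-temperature series as a simple stand-in for thermal inertia. The sketch below uses only these standard relations and is not the LUCIDiT implementation; all numbers are illustrative.

```python
import numpy as np

def mean_radiant_temperature(surface_temps_c, view_factors):
    """Long-wave-only MRT: fourth-power average of surrounding surface
    temperatures weighted by view factors (normalized to sum to 1)."""
    t_k = np.asarray(surface_temps_c) + 273.15
    f = np.asarray(view_factors) / np.sum(view_factors)
    return (np.sum(f * t_k**4)) ** 0.25 - 273.15

def recursive_filter(series, alpha=0.2):
    """First-order recursive (exponential) filter: higher alpha means the
    surface follows the forcing faster (lower thermal inertia)."""
    out, state = [], series[0]
    for v in series:
        state = state + alpha * (v - state)
        out.append(state)
    return np.array(out)

asphalt = recursive_filter([30, 38, 45, 50, 48, 42], alpha=0.15)   # sluggish surface
print("filtered asphalt temps:", np.round(asphalt, 1))
print("MRT (deg C):", round(mean_radiant_temperature(
    [asphalt[-1], 25.0, 18.0], view_factors=[0.4, 0.4, 0.2]), 1))
```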

25 pages, 2748 KB  
Article
Development and Modeling of an Advanced Power Supply System for Electrostatic Precipitators to Improve Environmental Efficiency
by Askar Abdykadyrov, Amandyk Tuleshov, Nurzhigit Smailov, Zhandos Dosbayev, Sunggat Marxuly, Yerlan Sarsenbayev, Beket Muratbekuly and Nurlan Kystaubayev
Designs 2026, 10(2), 34; https://doi.org/10.3390/designs10020034 - 17 Mar 2026
Abstract
This study presents the engineering design and system-level modeling of a high-frequency power supply architecture for electrostatic precipitators intended to improve particulate removal efficiency and operational stability. Atmospheric air pollution by fine particulate matter (PM₂.₅) remains one of the most critical challenges in environmental protection and public health. Although electrostatic precipitators (ESPs) are widely used for industrial gas cleaning, the efficiency and stability of conventional 50 Hz power supplies are limited under conditions of strongly nonlinear corona discharge and high-resistivity dust. This paper presents the development and investigation of an advanced high-frequency power supply system for electrostatic precipitators based on a coupled electrical–electrophysical mathematical model. The work follows an engineering design methodology that integrates converter topology selection, electrophysical modeling of corona discharge, and control-oriented system optimization. The proposed model provides a unified description of electric field formation, space charge accumulation, ion transport, and particle motion in the corona discharge region. The simulation results show that in the operating voltage range of 10–100 kV, the electric field strength reaches (2–5)·10⁶ V/m, the ion concentration stabilizes in the range of 10¹³–10¹⁵ m⁻³, and the particle drift velocity increases from approximately 0.05 to 0.3 m/s, leading to an increase in collection efficiency from about 55% to 93%. It is demonstrated that the proposed system ensures stable output voltage regulation within ±2.5–5% even under strongly nonlinear load conditions. The use of an LC output filter (C = 1–10 nF, L = 10–100 mH) reduces the voltage ripple from about 14% to 1.4–4.8% and significantly improves the transient response. In addition, adaptive adjustment of the pulse repetition frequency in the range of 10–200 kHz makes it possible to reduce energy consumption by 12–18% while simultaneously increasing the collection efficiency by 8–15%. The obtained results confirm that the proposed high-frequency power supply architecture provides a physically well-founded and energy-efficient solution for improving the environmental performance and operational stability of electrostatic precipitators. Full article
(This article belongs to the Section Energy System Design)
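
The link between particle drift velocity and collection efficiency quoted above follows the classic Deutsch-Anderson relation, eta = 1 - exp(-w*A/Q). The sketch below simply evaluates that relation for drift velocities in the 0.05-0.3 m/s range mentioned in the abstract; the collecting area and gas flow rate are invented placeholders, not the paper's values.

```python
import numpy as np

def deutsch_anderson_efficiency(w_mps, area_m2, flow_m3ps):
    """Deutsch-Anderson ESP collection efficiency: eta = 1 - exp(-w * A / Q)."""
    return 1.0 - np.exp(-np.asarray(w_mps) * area_m2 / flow_m3ps)

# hypothetical precipitator geometry (not from the paper)
A, Q = 500.0, 30.0                      # collecting area [m^2], gas flow [m^3/s]
for w in (0.05, 0.1, 0.2, 0.3):         # drift velocities spanning the abstract's range
    eta = deutsch_anderson_efficiency(w, A, Q)
    print(f"w = {w:.2f} m/s -> collection efficiency = {eta * 100:.1f}%")
```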

26 pages, 4255 KB  
Article
The Filtering-Based Multi-Innovation Hierarchical Fractional Least Mean Square Algorithm for Parameter Estimation of Bilinear-in-Parameter Autoregressive System
by Yan-Cheng Zhu, Huai-Yu Wu, Hui Qi, Zhi-Huan Chen, Zhen-Hua Zhu and Mian Hu
Fractal Fract. 2026, 10(3), 197; https://doi.org/10.3390/fractalfract10030197 - 17 Mar 2026
Abstract
This paper considers fractional parameter identification algorithms for the bilinear-in-parameter autoregressive (AR-BIP) system. A data filtering technique is introduced to improve parameter estimation accuracy, whereby the data of the identification model are passed through a filter before estimation. The filtering-based hierarchical fractional least mean square algorithm (F-HFLMS) and the filtering-based multi-innovation hierarchical fractional least mean square algorithm (F-MHFLMS) are proposed for effective and accurate parameter estimation of the AR-BIP system. By applying multi-innovation theory and expanding the scalar innovation into an innovation vector, F-MHFLMS makes fuller use of the system's input and output data. The performance of the F-MHFLMS algorithm is compared with the F-HFLMS strategy for the AR-BIP system using the mean square error (MSE) and the average predicted output error. The effectiveness and accuracy of the F-HFLMS and F-MHFLMS algorithms are demonstrated in numerical experiments with different noise variances, fractional orders, and innovation lengths. Compared with the F-HFLMS algorithm, the F-MHFLMS algorithm achieves more accurate and robust parameter estimation. Full article
(This article belongs to the Section Numerical and Computational Methods)
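
The multi-innovation idea above replaces the scalar innovation of a standard LMS update with a vector of the p most recent innovations. The sketch below shows a plain (integer-order) multi-innovation LMS identifying a toy FIR system; the fractional-order and hierarchical aspects of F-MHFLMS, and the AR noise model, are deliberately omitted.

```python
import numpy as np

def multi_innovation_lms(u, d, order=4, p=3, mu=0.05):
    """Identify FIR coefficients from input u and desired output d with a
    multi-innovation LMS update (an innovation vector of length p per step)."""
    w = np.zeros(order)
    for k in range(order + p - 1, len(u)):
        # stack the p most recent regressors and the corresponding innovations
        Phi = np.array([u[k - j - order + 1:k - j + 1][::-1] for j in range(p)])
        E = np.array([d[k - j] for j in range(p)]) - Phi @ w
        w = w + mu * Phi.T @ E                  # vector-innovation update
    return w

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.5, 0.3, 0.1])        # toy system to recover
u = rng.standard_normal(2000)
d = np.convolve(u, true_w, mode="full")[:len(u)] + 0.01 * rng.standard_normal(len(u))
print("true:", true_w, "estimated:", np.round(multi_innovation_lms(u, d), 3))
```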

24 pages, 2763 KB  
Article
Dynamic Hierarchical Fusion for Space Multi-Target Passive Tracking with Limited Field-of-View
by Jizhe Wang, Di Zhou, Runle Du and Jiaqi Liu
Aerospace 2026, 13(3), 282; https://doi.org/10.3390/aerospace13030282 - 17 Mar 2026
Abstract
Space-based multi-target passive tracking is critical for space situational awareness, but faces severe challenges due to the limited field-of-view (FoV) and directional ambiguity of onboard sensors. These constraints often lead to target loss, poor observability, and decreased estimation accuracy. To address these issues, different fusion architectures have been explored. While centralized measurement-level fusion offers superior accuracy for estimating target states, distributed estimation-level fusion provides greater reliability for estimating the number of targets. To adaptively leverage these two complementary strengths, a dynamic hierarchical fusion method through real-time optimization of the fusion topology is proposed. Specifically, at each decision epoch, sensor nodes are dynamically partitioned into local fusion nodes (LFNs) and detection-only nodes (DONs). Each LFN receives measurements from selected DONs and executes an iterated-correction Gaussian-mixture probability hypothesis density filter. Subsequently, LFNs share and fuse their estimates using the intensity-dependent arithmetic average fusion. This dynamic process is achieved by applying a sensor management scheme based on partially observable Markov decision process (POMDP). To ensure accurate cardinality estimation, the reward function in POMDP utilizes the posterior expected number of targets. The resultant optimization is efficiently solved using a binary particle swarm optimization algorithm. Numerical and hardware-in-the-loop simulations demonstrate the effectiveness of the proposed method in balancing the accuracy of target number and state estimation. Full article
(This article belongs to the Section Astronautics & Space Science)
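
Arithmetic-average (AA) fusion of Gaussian-mixture intensities, mentioned above, amounts to taking a weighted union of the local mixtures' components, so the fused expected target count is the weighted average of the local counts. A minimal sketch of just that fusion step (the PHD filtering, clustering, and sensor management are omitted), with fabricated component values:

```python
import numpy as np

def aa_fuse(mixtures, fusion_weights):
    """Arithmetic-average fusion of Gaussian-mixture intensities.
    Each mixture is a list of (weight, mean, cov); the fused intensity is the
    union of all components with weights scaled by each node's fusion weight."""
    fused = []
    for mix, omega in zip(mixtures, fusion_weights):
        fused.extend((omega * w, m, P) for (w, m, P) in mix)
    return fused

# two local GM-PHD intensities from two sensor nodes (illustrative numbers)
node_a = [(0.9, np.array([0.0, 0.0]), np.eye(2)),
          (0.8, np.array([5.0, 1.0]), np.eye(2))]
node_b = [(0.7, np.array([0.1, -0.1]), np.eye(2) * 2.0)]

fused = aa_fuse([node_a, node_b], fusion_weights=[0.5, 0.5])
print("fused expected target count:", sum(w for w, _, _ in fused))   # 0.5*1.7 + 0.5*0.7
```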

30 pages, 1713 KB  
Article
Safe-Calibrated TCN–Transformer Transfer Learning for Reliable Battery SoH Estimation Under Lab-to-Field Domain Shift
by Kumbirayi Nyachionjeka and Ehab H. E. Bayoumi
World Electr. Veh. J. 2026, 17(3), 149; https://doi.org/10.3390/wevj17030149 - 17 Mar 2026
Abstract
Battery state-of-health (SoH) estimation is central to transportation electrification because it conditions safety limits, warranty accounting, power capability management, and long-horizon fleet optimization. Although deep temporal architectures can achieve high laboratory accuracy, field deployment is frequently limited by laboratory (Lab)-to-field (L2F) domain shift that alters input statistics, feature definitions, and noise regimes. Under such a shift, predictors may remain strongly monotonic, preserving degradation ordering, yet become operationally unreliable due to systematic output distortion (e.g., compression or warping of the SoH scale). A deployment-complete L2F transfer learning pipeline is presented, built around a gated Temporal Convolutional Network (TCN)–Transformer fusion backbone, domain-specific adapters and heads, alignment-regularized fine-tuning, and row-level inference via sliding-window overlap averaging. To address the dominant deployment failure mode, a Safe Calibration stage robustly filters calibration pairs and selects among candidate calibrators under a strict do-no-harm criterion. On an unseen deployment stream (2154 labeled rows), overlap-averaged raw inference achieves MAE = 0.0439, RMSE = 0.0501, and R² = 0.7451, consistent with mid-to-high SoH range compression, while Safe Calibration (Isotonic-Balanced selected) corrects the nonlinear scaling without violating monotonic structure, improving results to MAE = 0.0188, RMSE = 0.0252, and R² = 0.9357. To obtain a complete understanding of the challenges caused by domain shift, the evaluation is extended to additional architecture baselines, including TCN-only, Transformer-only, Gated Recurrent Unit (GRU), and Long Short-Term Memory (LSTM) models, and a Ridge regression baseline. Explicit alignment and calibration ablations are also added, covering CORAL off/on and calibration set to none vs. Safe-Global vs. Context-Aware, under identical leakage-safe splits and the same overlap-averaged deployment inference operator. This work goes beyond peak-score reporting and examines the robustness of the pipeline under domain shift, quantified across four random seeds and multiple deployment streams, with uncertainty summarized via mean ± std and bootstrap confidence intervals for the mean absolute error (MAE) and root-mean-square error (RMSE) computed from per-example absolute errors. Full article
(This article belongs to the Section Storage Systems)
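
The "Safe Calibration" step above can be pictured as fitting a monotone map from raw SoH predictions to reference SoH on held-out calibration pairs and keeping it only if it does not hurt error, i.e., a do-no-harm check. A generic sketch with scikit-learn's isotonic regression (not the paper's balanced variant), on fabricated data with an artificially compressed prediction scale:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def safe_calibrate(pred_cal, true_cal, pred_new):
    """Fit a monotone calibration map on calibration pairs and apply it to new
    predictions only if it reduces MAE on the calibration set (do-no-harm)."""
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(pred_cal, true_cal)
    mae_raw = np.mean(np.abs(pred_cal - true_cal))
    mae_cal = np.mean(np.abs(iso.predict(pred_cal) - true_cal))
    if mae_cal <= mae_raw:                      # accept only if it helps
        return iso.predict(pred_new)
    return np.asarray(pred_new)                 # otherwise keep raw predictions

rng = np.random.default_rng(1)
true_soh = rng.uniform(0.7, 1.0, 60)
raw_pred = 0.85 + 0.4 * (true_soh - 0.85) + 0.01 * rng.standard_normal(60)  # compressed scale
cal, new = np.arange(30), np.arange(30, 60)
calibrated = safe_calibrate(raw_pred[cal], true_soh[cal], raw_pred[new])
print("raw MAE:", np.mean(np.abs(raw_pred[new] - true_soh[new])).round(4),
      "calibrated MAE:", np.mean(np.abs(calibrated - true_soh[new])).round(4))
```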

18 pages, 1620 KB  
Article
Adaptive Knowledge Tracing with Dynamic Memory and Reinforcement Learning
by Li Li, Zheng Duan, Zhi Zhou and Lian Liu
Sensors 2026, 26(6), 1878; https://doi.org/10.3390/s26061878 - 17 Mar 2026
Abstract
Accurately assessing students’ knowledge states and dynamically adapting instructional interactions to their cognitive levels are fundamental to optimizing personalized learning. However, conventional knowledge tracing (KT) approaches are constrained by three critical limitations: data sparsity undermines prediction robustness, the neglect of forgetting behavior misrepresents real learning processes, and static knowledge-state modeling fails to capture learners’ dynamic cognitive changes. To overcome these shortcomings, this study proposes DRAKT (Dynamic Reinforcement learning-based Adaptive Knowledge Tracing), a novel model that introduces two key innovations: (1) a Q-learning-based knowledge-state adjustment mechanism, which dynamically updates mastery levels via a reward structure integrated with the Ebbinghaus forgetting curve; and (2) a dynamic memory update module that combines a gated recurrent unit (GRU) with attention-based filtering to capture long-term learning dependencies and suppress irrelevant memory traces. Experiments conducted on three public ASSISTments datasets (2009, 2012, and 2017) demonstrate that DRAKT consistently outperforms state-of-the-art baselines. On ASSISTments2017 and ASSISTments2009, DRAKT achieves AUC scores of 82.08% and 81.47%, respectively, surpassing the second-best model (GKT) by 2.75–6.57 percentage points in AUC and 4.77–5.75 percentage points in accuracy. In practice, DRAKT offers a reliable technical foundation for enabling personalized learning-path recommendation and real-time cognitive adaptation in intelligent educational systems. Full article
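
The abstract's first innovation couples a Q-learning update with the Ebbinghaus forgetting curve, so the reward for a practice interaction depends on how much of the skill is predicted to have been retained since the last exposure. The toy sketch below is only one way to wire that coupling; the reward shape, state encoding, and mastery dynamics are assumptions, not DRAKT's actual design.

```python
import math, random

def retention(elapsed_days, strength=2.0):
    """Ebbinghaus-style forgetting curve: retained fraction decays exponentially
    with time since last practice; larger strength means slower forgetting."""
    return math.exp(-elapsed_days / strength)

def q_update(q, state, action, reward, next_state, actions=(0, 1),
             alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update on a dict-backed Q table."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

random.seed(0)
q = {}
mastery, last_practice = 0.4, 0
for day in range(1, 15):
    state = round(mastery, 1)                   # coarse knowledge-state bucket
    action = random.choice((0, 1))              # 1 = practice this skill today
    if action == 1:
        r = retention(day - last_practice)      # how much is still remembered
        correct = random.random() < mastery * r
        reward = (1.0 if correct else -0.5) * (2.0 - r)   # forgetting-weighted reward
        mastery = min(1.0, mastery * r + 0.2)   # practice restores and builds mastery
        last_practice = day
    else:
        reward = 0.0
    q_update(q, state, action, reward, round(mastery, 1))

print({k: round(v, 3) for k, v in q.items()})
```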

18 pages, 2493 KB  
Article
Improved Kernel Correlation Filtering Algorithm Integrating Scale Adaptation and Occlusion Redetection
by Tianbo Liu, Yuya Wang, Hong Sun and Shuai Yuan
Appl. Sci. 2026, 16(6), 2843; https://doi.org/10.3390/app16062843 - 16 Mar 2026
Abstract
To address the limitations of the Kernelized Correlation Filter (KCF) in handling scale variation and occlusion during visual tracking, this paper proposes a scale-adaptive and occlusion-robust KCF-based tracking method. The proposed approach integrates the Histogram of Oriented Gradients (HOGs) and Color Name (CN) features to fully exploit pixel-level information, thereby improving the accuracy of target localization. On this basis, a sub-region-based scale adaptation mechanism is introduced. Specifically, the target is partitioned into multiple sub-regions, and the KCF classifier is applied to each sub-region to estimate its center position. The relative displacement among these sub-region centers is then utilized to estimate target scale variation, enabling adaptive scale tracking. In addition, an occlusion-aware mechanism is designed to enhance robustness under occlusion. During tracking, occlusion detection is performed, and once occlusion is detected, template updating is suspended. Oriented FAST and Rotated BRIEF (ORB) features extracted from the template are subsequently matched with features from subsequent frames to re-acquire the target. Experimental results on the OTB2013 and OTB2015 benchmarks demonstrate that the proposed method achieves competitive precision and success rates compared with the baseline KCF and other representative trackers, while satisfying real-time tracking requirements using only CPU resources, indicating its practical applicability in resource-constrained environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
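
At the heart of KCF-style trackers is a kernelized ridge regression solved in the Fourier domain, where training reduces to an element-wise division. The sketch below shows that closed-form step with a Gaussian kernel correlation on a single-channel patch; the multi-feature (HOG + CN) extraction, sub-region scale estimation, and ORB-based redetection described above are omitted, and the test data are synthetic.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation between patch x and all cyclic shifts of z,
    computed with FFTs (single-channel patches only)."""
    cross = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
    d2 = np.maximum(np.sum(x**2) + np.sum(z**2) - 2.0 * cross, 0.0)
    return np.exp(-d2 / (sigma**2 * x.size))

def kcf_train(x, y, lam=1e-4):
    """Closed-form KCF training in the Fourier domain: alpha_f = y_f / (k_xx_f + lambda)."""
    return np.fft.fft2(y) / (np.fft.fft2(gaussian_correlation(x, x)) + lam)

def kcf_detect(alphaf, x_model, z):
    """Response map over all cyclic shifts of the search patch z."""
    kzf = np.fft.fft2(gaussian_correlation(z, x_model))
    return np.real(np.fft.ifft2(kzf * alphaf))

# toy 32x32 patch with a bright blob, and a Gaussian target peaked at zero shift
size = 32
yy, xx = np.mgrid[0:size, 0:size]
patch = np.exp(-((yy - 16)**2 + (xx - 16)**2) / 20.0)
target = np.exp(-(yy**2 + xx**2) / 4.0)

alphaf = kcf_train(patch, target)
search = np.roll(patch, (3, 2), axis=(0, 1))        # the object moved by (3, 2) pixels
resp = kcf_detect(alphaf, patch, search)
# the peak location encodes the translation (up to the FFT's circular-shift convention)
print("response peak at:", np.unravel_index(np.argmax(resp), resp.shape))
```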

25 pages, 1694 KB  
Article
Tool-Health Digital Twin for CNC Predictive Maintenance via Innovation-Adaptive Sensor Fusion and Uncertainty-Aware Prognostics
by Zhuming Cao, Lihua Chen, Chunhui Li, Laifa Zhu and Zhengjian Deng
Machines 2026, 14(3), 335; https://doi.org/10.3390/machines14030335 - 16 Mar 2026
Abstract
A tool-health digital twin for CNC predictive maintenance is developed and operationalised as a fusion-and-state-estimation core that produces a latent tool-health trajectory (wear level and wear-rate dynamics) from multi-rate sensor streams for diagnosis and remaining useful life (RUL) forecasting under strict edge latency constraints. The scope is tool-health–informed maintenance decisions (condition-based tool replacement/scheduling), rather than a comprehensive maintenance twin for all CNC subsystems. Multi-rate vibration, spindle-current, and temperature signals are synchronized and windowed, and a linear state-space model with Kalman filtering and innovation-guided adaptive noise estimation stabilizes the latent health state across operating-regime changes. The fused state is then used by compact sequence learners, an LSTM for edge feasibility, and a compact Transformer as a higher-accuracy comparison, to output fault categories and RUL estimates. Predictive uncertainty is quantified via a Monte Carlo dropout and linked to reliability-aware actions through a simple alarm/defer/schedule policy, while SHAP provides feature-level interpretability. On a CNC testbed, fusion improves fault F1 from 0.811 to 0.892 and PR-AUC from 0.867 to 0.918 while reducing RUL RMSE from 10.4 to 8.1 cycles; the compact Transformer reaches 0.903 F1 and 7.9-cycle RMSE at higher inference time. The end-to-end pipeline remains within a ≤100 ms breakdown, maintains in-band innovation statistics, supports rehearsal-based updates under drift, and is additionally evaluated on external tool-wear and turbofan datasets. Full article
(This article belongs to the Section Advanced Manufacturing)
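
"Innovation-guided adaptive noise estimation," as used above, commonly refers to matching the filter's assumed measurement-noise covariance to the sample covariance of recent innovations. A generic sketch of that adaptation inside a scalar random-walk Kalman filter (not the paper's multi-rate, multi-sensor pipeline), with a fabricated wear signal whose noise level changes mid-run:

```python
import numpy as np
from collections import deque

def adaptive_kf(zs, q=1e-4, r0=1.0, window=30):
    """Scalar random-walk Kalman filter whose measurement noise R is re-estimated
    from the sample covariance of recent innovations: R ~ cov(v) - H P- H^T."""
    x, p, r = 0.0, 1.0, r0
    innovations = deque(maxlen=window)
    estimates = []
    for z in zs:
        p += q                                  # predict (random-walk state)
        v = z - x                               # innovation
        innovations.append(v)
        if len(innovations) == window:
            c_v = np.mean(np.square(list(innovations)))
            r = max(c_v - p, 1e-6)              # innovation-based R estimate
        k = p / (p + r)                         # measurement update
        x += k * v
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates), r

rng = np.random.default_rng(2)
true_wear = np.cumsum(np.full(400, 0.01))       # slowly growing tool wear (toy signal)
noise_std = np.where(np.arange(400) < 200, 0.05, 0.3)   # noise level jumps mid-run
est, r_final = adaptive_kf(true_wear + noise_std * rng.standard_normal(400))
print("final adapted R:", round(r_final, 3), "(late-phase measurement variance is 0.09)")
```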

23 pages, 2885 KB  
Article
AI-Controlled Modular Decoy Generation for Reconstruction-Resistant Hybrid and Multi-Cloud Storage Systems
by Munir Ahmed and Jiann-Shiun Yuan
Electronics 2026, 15(6), 1231; https://doi.org/10.3390/electronics15061231 - 16 Mar 2026
Abstract
Although cloud storage is widely trusted by users and enterprises, externally stored encrypted and fragmented data remain vulnerable to reconstruction and inference attacks following partial exposure. Existing decoy-based defenses often rely on static configurations or randomly generated artifacts that can be filtered during adversarial analysis. This paper presents an Artificial Intelligence (AI)-controlled modular decoy generation method to enhance reconstruction resistance in distributed storage systems. The method operates as a system-agnostic post-fragmentation layer and does not require modification of encryption or storage architecture. Given encrypted fragments as input, decoys are generated using a supervised Extreme Gradient Boosting (XGBoost) regression model that adapts decoy quantity based on system telemetry and resource conditions. Decoys maintain statistical alignment with real encrypted fragments in size and Shannon entropy characteristics. To address scalability, the method is evaluated across small, medium, and large deployments comprising up to 413 externally exposed fragments and compared against fixed-ratio (10%, 20%) and randomized baselines. Experimental evaluation demonstrates increased adversarial uncertainty without altering legitimate reconstruction procedures or encryption mechanisms. Kolmogorov–Smirnov analysis indicates no statistically significant difference between AI-generated decoys and real fragments, whereas baseline decoys produce significant deviations in size and entropy distributions, supporting reconstruction resistance at scale in multi-cloud environments. Full article
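
Two pieces of the pipeline above map directly to common library calls: a gradient-boosted regressor that predicts how many decoys to emit from system telemetry, and a Kolmogorov-Smirnov test checking that decoy sizes are statistically indistinguishable from real fragment sizes. A hedged sketch with xgboost and scipy; the telemetry features, target function, and size distributions are all fabricated for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp
from xgboost import XGBRegressor

rng = np.random.default_rng(3)

# fabricated telemetry: [cpu_load, free_storage_gb, exposed_fragments]
X = rng.uniform([0.1, 10, 20], [0.9, 500, 400], size=(300, 3))
y = 0.15 * X[:, 2] * (1 - X[:, 0]) + rng.normal(0, 1, 300)   # toy "ideal decoy count"

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)
n_decoys = int(round(model.predict(np.array([[0.4, 120.0, 250.0]]))[0]))

# generate decoy sizes matched to the real fragment-size distribution
real_sizes = rng.lognormal(mean=10, sigma=0.3, size=250)       # real fragments (bytes)
decoy_sizes = rng.lognormal(mean=10, sigma=0.3, size=max(n_decoys, 1))

stat, p_value = ks_2samp(real_sizes, decoy_sizes)
print(f"decoys to emit: {n_decoys}, KS p-value: {p_value:.3f}"
      " (large p => decoys statistically blend in)")
```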

22 pages, 41698 KB  
Article
Contrastive Learning in Stock Keeping Unit Image Recognition
by Wiktor Kępiński and Grzegorz Sarwas
Appl. Sci. 2026, 16(6), 2810; https://doi.org/10.3390/app16062810 - 14 Mar 2026
Abstract
Self-supervised contrastive learning has become an effective approach for visual representation learning when large-scale annotation is impractical. In this study, we evaluate three widely used methods—SimCLR, MoCo v2, and BYOL—for large-scale stock keeping unit (SKU) recognition in retail environments. Experiments are conducted on the RP2K benchmark and a domain-specific in-house dataset (InSKU) using both linear probing and full fine-tuning. Under the original RP2K configuration with extended self-supervised pre-training, SimCLR achieves the highest Top-1 accuracy under linear evaluation (94.98%). In contrast, BYOL attains the highest performance under full fine-tuning (99.22% Top-1 accuracy). After filtering and deduplicating the dataset to reduce class imbalance and near-duplicate samples, MoCo v2 achieves competitive, and in some cases superior, linear performance under a reduced training budget. Cross-domain evaluation on InSKU indicates that SimCLR generalises more effectively under frozen-encoder constraints, whereas BYOL and MoCo v2 require full adaptation. These results highlight the sensitivity of contrastive representations to dataset composition, optimisation regime, and domain shift, providing practical guidance for deployment in dynamic retail settings. Full article
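
The SimCLR method evaluated above trains its encoder with the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss. A small NumPy sketch of that loss for one batch of paired augmented views; the encoder, augmentations, and training loop are omitted, and the embeddings below are random stand-ins.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for embedding pairs (z1[i], z2[i]) of the same image.
    Each embedding's positive is its counterpart; the other 2N-2 are negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)        # cosine-similarity space
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                           # exclude self-similarity
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    denom = np.log(np.sum(np.exp(sim), axis=1))
    log_prob = sim[np.arange(2 * n), positives] - denom
    return -np.mean(log_prob)

rng = np.random.default_rng(4)
anchor = rng.standard_normal((8, 128))
positive = anchor + 0.1 * rng.standard_normal((8, 128))      # embeddings of augmented views
print("NT-Xent loss:", round(float(nt_xent_loss(anchor, positive)), 3))
```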
