Search Results (216)

Search Parameters:
Keywords = appearance-temporal features

19 pages, 1198 KB  
Article
GSMTNet: Dual-Stream Video Anomaly Detection via Gated Spatio-Temporal Graph and Multi-Scale Temporal Learning
by Di Jiang, Huicheng Lai, Guxue Gao, Dan Ma and Liejun Wang
Electronics 2026, 15(6), 1200; https://doi.org/10.3390/electronics15061200 - 13 Mar 2026
Abstract
Video Anomaly Detection aims to identify video segments containing abnormal events. Detecting anomalies relies heavily on temporal modeling, particularly when anomalies exhibit only subtle deviations from normal events. However, most existing methods inadequately model the heterogeneity in spatiotemporal relationships, especially the dynamic interactions between human pose and video appearance. To address this, we propose GSMTNet, a dual-stream heterogeneous unsupervised network integrating gated spatio-temporal graph convolution and multi-scale temporal learning. First, we introduce a dynamic graph structure learning module, which leverages gated spatio-temporal graph convolutions with manifold transformations to model latent spatial relationships via human pose graphs. This is coupled with a normalizing flow-based density estimation module to model the probability distribution of normal samples in a latent space. Second, we design a hybrid dilated temporal module that employs multi-scale temporal feature learning to simultaneously capture long- and short-term dependencies, thereby enhancing the separability between normal patterns and potential deviations. Finally, we propose a dual-stream fusion module to hierarchically integrate features learned from pose graphs and raw video sequences, followed by a prediction head that computes anomaly scores from the fused features. Extensive experiments demonstrate state-of-the-art performance, achieving 86.81% AUC on ShanghaiTech and 70.43% on UBnormal, outperforming existing methods in rare anomaly scenarios. Full article
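The normalizing-flow density estimation step above can be illustrated in miniature: score a clip's latent feature by how unlikely it is under a model fitted to normal samples. The sketch below is a simplification, with a plain Gaussian (squared Mahalanobis distance as a stand-in for negative log-density) in place of the paper's flow, and purely synthetic latent features:

```python
import numpy as np

def fit_normal_model(latents):
    """Fit a Gaussian to latent features of normal training clips
    (a simplified stand-in for a normalizing-flow density model)."""
    mu = latents.mean(axis=0)
    cov = np.cov(latents, rowvar=False) + 1e-6 * np.eye(latents.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, prec):
    """Higher score = lower density under the normal model."""
    d = x - mu
    return float(d @ prec @ d)  # squared Mahalanobis distance ~ -log density

rng = np.random.default_rng(0)
normal_latents = rng.normal(0.0, 1.0, size=(500, 8))   # features of normal clips
mu, prec = fit_normal_model(normal_latents)

score_normal = anomaly_score(rng.normal(0.0, 1.0, size=8), mu, prec)
score_abnormal = anomaly_score(np.full(8, 6.0), mu, prec)  # far from normal mode
```

A real implementation would replace the Gaussian with the learned flow and feed in features from the pose-graph and video streams rather than random vectors.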

23 pages, 2397 KB  
Article
Video Anomaly Detection Through Spatial–Temporal Feature Relocalization and Calibrated Trajectory Modeling
by Jie Xu, Chenglizhao Chen, Xinyu Liu, Mengke Song and Huaye Zhang
Electronics 2026, 15(6), 1199; https://doi.org/10.3390/electronics15061199 - 13 Mar 2026
Abstract
To address the limitations of existing video anomaly detection methods that overly rely on pixel-space reconstruction and are sensitive to background noise and object scale variations, a self-supervised contrastive learning approach that integrates spatial–temporal feature relocalization with camera-calibrated trajectory modeling is proposed. The proposed method takes spatial–temporal feature relocalization as the core task and constructs a feature-level contrastive learning mechanism to guide the model to focus on discriminative local appearance variations and global temporal semantic evolution. While suppressing background interference and scale-related noise, the method enhances the modeling of fine-grained appearance anomalies and global action-related temporal anomalies. Furthermore, camera calibration is introduced to recover continuous object trajectories in physical space, and a temporal aggregation module is designed to jointly model object motion patterns in pixel space and physical space, thereby improving the model’s ability to perceive complex anomalous behaviors. Experimental results on multiple public video anomaly detection benchmarks demonstrate that the proposed method consistently outperforms existing approaches, validating its effectiveness and generalization capability. Full article
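The camera-calibration step described above, recovering continuous object trajectories in physical space, can be sketched with a planar (ground-plane) homography. The matrix `H` and the pixel track below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical ground-plane homography H (pixels -> metres); in practice it
# comes from camera calibration, as in the trajectory-recovery step above.
H = np.array([[0.02, 0.0,   -6.4],
              [0.0,  0.05, -12.0],
              [0.0,  0.001,  1.0]])

def pixel_to_ground(u, v):
    """Project a pixel foot point onto the ground plane (homogeneous divide)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# A short pixel-space track of one object's bounding-box foot point
track_px = [(320, 240), (330, 244), (341, 249)]
track_m = [pixel_to_ground(u, v) for u, v in track_px]

# Physical-space displacement between consecutive frames (metres/frame),
# the kind of scale-corrected motion cue a temporal module can aggregate
speeds = [np.hypot(x2 - x1, y2 - y1)
          for (x1, y1), (x2, y2) in zip(track_m, track_m[1:])]
```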

23 pages, 13360 KB  
Article
Lumina-4DGS: Illumination-Robust Four-Dimensional Gaussian Splatting for Dynamic Scene Reconstruction
by Xiaoqiang Wang, Qing Wang, Yang Sun and Shengyi Liu
Sensors 2026, 26(5), 1650; https://doi.org/10.3390/s26051650 - 5 Mar 2026
Viewed by 196
Abstract
High-fidelity 4D reconstruction of dynamic scenes is pivotal for immersive simulation yet remains challenging due to the photometric inconsistencies inherent in multi-view sensor arrays. Standard 3D Gaussian Splatting (3DGS) strictly adheres to the brightness constancy assumption, failing to distinguish between intrinsic scene radiance and transient brightness shifts caused by independent auto-exposure (AE), auto-white-balance (AWB), and non-linear ISP processing. This misalignment often forces the optimization process to compensate for spectral discrepancies through incorrect geometric deformation, resulting in severe temporal flickering and spatial floating artifacts. To address these limitations, we present Lumina-4DGS, a robust framework that harmonizes spatiotemporal geometry modeling with a hierarchical exposure compensation strategy. Our approach explicitly decouples photometric variations into two levels: a Global Exposure Affine Module that neutralizes sensor-specific AE/AWB fluctuations and a Multi-Scale Bilateral Grid that residually corrects spatially varying non-linearities, such as vignetting, using luminance-based guidance. Crucially, to prevent these powerful appearance modules from masking geometric flaws, we introduce a novel SSIM-Gated Optimization mechanism. This strategy dynamically gates the gradient flow to the exposure modules based on structural similarity. By ensuring that photometric enhancement is only activated when the underlying geometry is structurally reliable, we effectively prioritize geometric accuracy over photometric overfitting. Extensive experiments validate the quantitative superiority of Lumina-4DGS. On the Waymo Open Dataset, our method achieves a state-of-the-art Full Image PSNR of 31.12 dB while minimizing geometric errors to a Depth RMSE of 1.89 m and Chamfer Distance of 0.215 m. 
Furthermore, on our highly challenging self-collected surround-view dataset featuring severe unconstrained illumination shifts, Lumina-4DGS yields a significant 2.13 dB PSNR improvement over recent driving-scene baselines. These results confirm that our framework achieves photorealistic, exposure-invariant novel view synthesis while maintaining superior geometric consistency across heterogeneous camera inputs. Full article
(This article belongs to the Section Optical Sensors)
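The Global Exposure Affine idea above can be approximated, per camera and channel, by fitting a gain/offset pair between rendered and observed images. Below is a minimal least-squares sketch on synthetic data; the paper's module is optimized jointly with the scene and SSIM-gated, none of which is modeled here:

```python
import numpy as np

def fit_exposure_affine(rendered, observed):
    """Fit per-channel gain/offset (a, b) so that a*rendered + b ~= observed,
    a simplified version of global exposure affine compensation."""
    params = []
    for c in range(rendered.shape[-1]):
        r = rendered[..., c].ravel()
        o = observed[..., c].ravel()
        A = np.stack([r, np.ones_like(r)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, o, rcond=None)
        params.append((a, b))
    return params

def apply_affine(img, params):
    out = np.empty_like(img)
    for c, (a, b) in enumerate(params):
        out[..., c] = a * img[..., c] + b
    return out

rng = np.random.default_rng(1)
rendered = rng.uniform(0.0, 1.0, size=(16, 16, 3))
observed = 1.2 * rendered + 0.05          # simulated AE/AWB brightness shift
params = fit_exposure_affine(rendered, observed)
corrected = apply_affine(rendered, params)
residual = float(np.abs(corrected - observed).max())
```

Absorbing the photometric shift this way leaves the geometry optimizer free to explain only true scene structure, which is the motivation for gating it so it cannot mask geometric errors.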

24 pages, 10647 KB  
Article
Spatio-Temporal Feature Fusion for Anti-UAV Detection: Integrating Inter-Frame Dynamics and Appearance
by Yake Zhang, Xiaoxi Fu, Yunfeng Zhou, Xiaojun Guo, Bei Sun, Yinglong Wang and Yongping Zhai
Sensors 2026, 26(5), 1492; https://doi.org/10.3390/s26051492 - 27 Feb 2026
Viewed by 208
Abstract
In order to improve the detection capability of low-slow-small UAV targets in complex backgrounds, this paper introduces a novel method that combines spatio-temporal information, which includes (1) an improved YOLO detector for small UAV detection, (2) a motion target detection module, and (3) an integrated combination strategy for static and dynamic judgment. We first provide an improved YOLOv11 static detection method combining SPD-Conv, BiFPN, and a detection head for high-resolution layers; we then design a dynamic target-detection algorithm that helps the YOLO method capture minor movement features, and finally introduce a fusion strategy for static detection and dynamic judgment. The experimental results on small UAV datasets, covering various sky, mountain and building backgrounds, show that the proposed approach increases Precision, Recall, and mAP50 by 12.1%, 29.5%, and 29.6%, respectively, compared with the baseline YOLO11 detector. The proposed MSM-YOLO achieves Precision, Recall, and mAP50 of 94%, 92%, and 86.3%, enabling the effective detection of small UAV targets in complex scenarios. Moreover, the ablation experiments also prove the effectiveness of each module. The proposed method was further deployed on a redesigned RK3588 embedded system, achieving 100 fps after optimization, showing effectiveness and practicality for air-to-air UAV detection applications. Full article
(This article belongs to the Section Sensors and Robotics)
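The static/dynamic fusion idea above can be illustrated with the simplest possible motion cue, inter-frame differencing, used to boost a static detector's confidence when its box overlaps moving pixels. All thresholds and the boost value below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=25):
    """Inter-frame difference as a simple dynamic-target cue
    (a stand-in for a full motion target detection module)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

def fuse(conf_static, box, mask, boost=0.15):
    """Raise a static detection's confidence when its box overlaps motion."""
    x0, y0, x1, y1 = box
    moving = mask[y0:y1, x0:x1].mean() > 0.2
    return min(1.0, conf_static + boost) if moving else conf_static

prev_frame = np.zeros((64, 64), dtype=np.uint8)
frame = prev_frame.copy()
frame[10:20, 30:40] = 200                 # small bright target moved in
mask = motion_mask(prev_frame, frame)

conf_moving = fuse(0.45, (30, 10, 40, 20), mask)   # box over the moving blob
conf_static = fuse(0.45, (0, 40, 10, 50), mask)    # box over static background
```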

18 pages, 3416 KB  
Article
Early Drowsiness Detection via Second-Order Derivative Analysis of Heart Rate Variability: A Non-Contact ECG Approach with Machine Learning
by Fabrice Vaussenat, Abhiroop Bhattacharya, Julie Payette, Alireza Saidi, Victor Bellemin, Geordi-Gabriel Renaud-Dumoulin, Sylvain G. Cloutier and Ghyslain Gagnon
Sensors 2026, 26(4), 1348; https://doi.org/10.3390/s26041348 - 20 Feb 2026
Viewed by 272
Abstract
Drowsy driving contributes to roughly 20% of traffic fatalities, yet most detection systems rely on behavioral cues that appear only after impairment has set in. Here we ask whether first and second derivatives of heart rate variability (HRV) can detect pre-crash states earlier than conventional approaches. Twenty-five participants completed 49 driving simulator sessions while we recorded cardiac activity through capacitive ECG electrodes embedded in the seat backrest—a non-contact method that avoids the privacy concerns of camera-based monitoring. To prevent circular evaluation, ground truth labels were based solely on crash proximity rather than HRV-derived scores. The combined HRV feature set (conventional metrics plus derivatives) achieved AUC = 0.863 for pre-crash prediction; derivatives alone reached only AUC = 0.573, indicating their value as complementary rather than standalone features. Driving performance indicators remained the strongest predictors (AUC = 0.999). Temporally, derivative-based detection preceded behavioral manifestations by 5–8 min and crash events by 6.8 ± 2.3 min. Across 1591 crashes and 6.78 million data points, we found that HRV derivatives capture physiological changes that precede overt impairment, though their utility depends on integration with other feature types. Full article
(This article belongs to the Special Issue Sensor for Biomedical and Machine Learning Applications)
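The first- and second-derivative HRV features at the heart of this study can be computed with finite differences on a uniformly resampled RR-interval series. The series below is synthetic and purely illustrative, not data from the experiment:

```python
import numpy as np

# Toy RR-interval series (ms) on a uniform 1 Hz grid; a gradual lengthening
# plus an oscillation stands in for real HRV dynamics (illustrative only).
t = np.linspace(0, 600, 601)                       # 10 min
rr = 800 + 0.002 * t**2 + 5 * np.sin(0.1 * t)      # ms

rr_d1 = np.gradient(rr, t)        # first derivative: rate of HRV change
rr_d2 = np.gradient(rr_d1, t)     # second derivative: acceleration of change

# Windowed summary feature of the kind fed to the classifiers
win = 60
d2_rms = np.sqrt(np.mean(rr_d2[-win:] ** 2))
```

The paper's finding is that such derivative features help only in combination with conventional HRV metrics, which matches their role here as additional columns in a feature matrix rather than standalone predictors.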

23 pages, 2371 KB  
Article
Machine-Learning Crop-Type Mapping Sensitivity to Feature Selection and Hyperparameter Tuning
by Mayra Perez-Flores, Frédéric Satgé, Jorge Molina-Carpio, Renaud Hostache, Ramiro Pillco-Zolá, Diego Tola, Elvis Uscamayta-Ferrano, Lautaro Bustillos, Marie-Paule Bonnet and Celine Duwig
Remote Sens. 2026, 18(4), 563; https://doi.org/10.3390/rs18040563 - 11 Feb 2026
Viewed by 284
Abstract
To improve crop yields and incomes, farmers consistently adapt their practices to climate and market fluctuations, resulting in highly variable crop field distribution and coverage in space and time. As these dynamics illustrate farmers’ challenges, up-to-date crop-type mapping is essential for understanding farmers’ needs and supporting their adoption of sustainable practices. With global coverage and frequent temporal observations, remote sensing data are generally integrated into machine learning models to monitor crop dynamics. Unlike physically based models, which are comparatively straightforward to use, machine learning models require extensive user interaction to implement. In this context, this study assesses how sensitive the models’ outputs are to feature selection and hyperparameter tuning, as both processes rely on user judgment. To achieve this, Sentinel-1 (S1) and Sentinel-2 (S2) features are integrated into five distinct models (Random Forest (RF), Support Vector Machine (SVM), Light Gradient Boosting (LGB), Histogram-based Gradient Boosting (HGB), and Extreme Gradient Boosting (XGB)), considering several feature selection (Variance Inflation Factor (VIF) and Sequential Feature Selector (SFS)) and hyperparameter tuning (grid search) setups. Results show that the pre-modeling feature selection (VIF) discards features that the wrapper method (SFS) keeps, resulting in less reliable crop-type mapping. Additionally, hyperparameter tuning appears to be sensitive to the input features, and applying it after any feature selection improved the crop-type mapping. In this context, a three-step nested modeling setup, consisting of initial hyperparameter tuning followed by wrapper feature selection (SFS) and additional hyperparameter tuning, leads to the most reliable model outputs. 
For the study region, LGB and XGB (SVM) are the most (least) suitable models for crop-type mapping, and model reliability improves when integrating S1 and S2 features rather than considering S1 or S2 alone. Finally, crop-type maps are derived across different regions and time periods to highlight the benefits of the proposed method for monitoring crop dynamics in space and time. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry (Third Edition))
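The VIF criterion used for pre-modeling feature selection has a direct definition: VIF_j = 1/(1 − R²_j), where R²_j comes from regressing feature j on all other features. A minimal NumPy sketch on synthetic (non-Sentinel) data:

```python
import numpy as np

def vif_scores(X):
    """Variance Inflation Factor per feature: VIF_j = 1 / (1 - R^2_j),
    with R^2_j from least-squares regression of feature j on the others."""
    Xc = X - X.mean(axis=0)                    # center each feature
    vifs = []
    for j in range(X.shape[1]):
        y = Xc[:, j]
        others = np.delete(Xc, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ coef
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(vifs)

rng = np.random.default_rng(2)
a = rng.normal(size=200)
b = rng.normal(size=200)
# Column 2 is nearly a copy of column 0, so both get inflated VIFs
X = np.stack([a, b, a + 0.05 * rng.normal(size=200)], axis=1)
vifs = vif_scores(X)
```

A feature whose VIF exceeds a chosen cutoff (commonly 5 or 10) is flagged as collinear and dropped before modeling, which is exactly how VIF can discard features that a wrapper method such as SFS, judging by predictive value, would have kept.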

51 pages, 5486 KB  
Article
Deception Detection from Five-Channel Wearable EEG on LieWaves: A Reproducible Baseline for Subject-Dependent and Subject-Independent Evaluation
by Șerban-Teodor Nicolescu, Felix-Constantin Adochiei, Florin-Ciprian Argatu, Bogdan-Adrian Enache and George-Călin Serițan
Sensors 2026, 26(3), 1027; https://doi.org/10.3390/s26031027 - 4 Feb 2026
Viewed by 325
Abstract
Deception detection with low-channel wearable EEG requires protocols that generalize across people while remaining practical for portable devices. Using the public LieWaves dataset (27 subjects recorded with a five-channel Emotiv Insight headset), we evaluate to what extent five-channel head-mounted EEG can support lie–truth discrimination under both subject-independent and subject-dependent evaluations. For the subject-independent setting, we train a compact Residual Network with Squeeze-and-Excitation blocks (ResNet-SE) model on raw overlapping windows with focal loss, light data augmentation, and grouped cross-validation by subject; out-of-fold window probabilities are averaged per session and converted to labels using a single decision threshold estimated from the cross-validated session scores. For the subject-dependent setting, we adopt an overlapping short-window Residual Temporal Convolutional Network with Squeeze-and-Excitation and Attention (Res-TCN-SE-Attention) model that fuses raw EEG with discrete wavelet transform (DWT)-based spectral and handcrafted band-power and Hjorth features, using an 80/10/10 split at the recording/session level (stratified by session label), so that all windows from a given session are assigned to a single subset; because each subject contributes two sessions, the same subject may still appear across subsets via different sessions. The subject-independent model attains 66.70% session-level accuracy with an AUC of 0.58 on unseen subjects, underscoring the difficulty of person-independent generalization from low-channel wearable EEG. Because practical deployment requires generalization to previously unseen individuals, we treat the subject-independent evaluation as the primary estimate of real-world generalization. 
In contrast, the subject-dependent pipeline reaches 99.94% window-level accuracy under the overlapping sliding-window (OSW) setting with a session-disjoint split (no session contributes windows to more than one subset). This near-ceiling performance reflects the optimistic nature of subject-dependent evaluation with highly overlapping windows, even when avoiding within-session train–test overlap, and should not be interpreted as a meaningful indicator of deception-detection capability under realistic deployment constraints. These results suggest limited, above-chance separability between lie and truth sessions in LieWaves using a five-channel wearable EEG under the studied protocol; however, performance remains far from deployment-ready and is strongly shaped by evaluation design. Explicit reporting of both protocols, together with clear rules for windowing, aggregation, and threshold selection, supports more reproducible and comparable benchmarking. Full article
(This article belongs to the Section Biomedical Sensors)
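The windowing and session-level aggregation rules described above (overlapping short windows, per-session averaging of window probabilities, a single decision threshold) can be sketched as follows; the signal and probabilities are synthetic stand-ins, not LieWaves data:

```python
import numpy as np

def sliding_windows(x, win, step):
    """Overlapping short windows over one session's EEG samples."""
    return np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, step)])

def session_label(window_probs, threshold=0.5):
    """Average per-window probabilities, then apply one decision threshold,
    mirroring the session-level aggregation described above."""
    return int(window_probs.mean() >= threshold)

session = np.arange(1000, dtype=float)       # stand-in for one EEG channel
wins = sliding_windows(session, win=128, step=64)   # 50% overlap

# Hypothetical per-window lie probabilities from a classifier
probs = np.array([0.4, 0.7, 0.8, 0.6])
label = session_label(probs)
```

Note how heavily consecutive windows overlap; this is why window-level accuracy under subject-dependent evaluation can approach the ceiling without implying deployable performance.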

23 pages, 2302 KB  
Article
Learnable Feature Disentanglement with Temporal-Complemented Motion Enhancement for Micro-Expression Recognition
by Yu Qian, Shucheng Huang and Kai Qu
Entropy 2026, 28(2), 180; https://doi.org/10.3390/e28020180 - 4 Feb 2026
Viewed by 330
Abstract
Micro-expressions (MEs) are involuntary facial movements that reveal genuine emotions, holding significant value in fields like deception detection and psychological diagnosis. However, micro-expression recognition (MER) is fundamentally challenged by the entanglement of subtle emotional motions with identity-specific features. Traditional methods, such as those based on Robust Principal Component Analysis (RPCA), attempt to separate identity and motion components through fixed preprocessing and coarse decomposition. However, these methods can inadvertently remove subtle emotional cues and are disconnected from subsequent module training, limiting the discriminative power of features. Inspired by the Bruce–Young model of facial cognition, which suggests that facial identity and expression are processed via independent neural routes, we recognize the need for a more dynamic, learnable disentanglement paradigm for MER. We propose LFD-TCMEN, a novel network that introduces an end-to-end learnable feature disentanglement framework. The network is synergistically optimized by a multi-task objective unifying orthogonality, reconstruction, consistency, cycle, identity, and classification losses. Specifically, the Disentangle Representation Learning (DRL) module adaptively isolates pure motion patterns from subject-specific appearance, overcoming the limitations of static preprocessing, while the Temporal-Complemented Motion Enhancement (TCME) module integrates purified motion representations—highlighting subtle facial muscle activations—with optical flow dynamics to comprehensively model the spatiotemporal evolution of MEs. Extensive experiments on CAS(ME)3 and DFME benchmarks demonstrate that our method achieves state-of-the-art cross-subject performance, validating the efficacy of the proposed learnable disentanglement and synergistic optimization. Full article
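One common form of the orthogonality objective used in such disentanglement frameworks penalizes the cosine similarity between identity and motion embeddings. The sketch below uses random features and is not the paper's exact loss formulation:

```python
import numpy as np

def orthogonality_loss(identity_feats, motion_feats):
    """Penalize correlation between identity and motion embeddings; driving
    per-sample cosine similarity to zero pushes the two feature spaces apart."""
    idn = identity_feats / np.linalg.norm(identity_feats, axis=1, keepdims=True)
    mot = motion_feats / np.linalg.norm(motion_feats, axis=1, keepdims=True)
    cos = np.sum(idn * mot, axis=1)          # per-sample cosine similarity
    return float(np.mean(cos ** 2))

rng = np.random.default_rng(3)
f = rng.normal(size=(32, 16))

# Entangled case: motion features are almost a copy of identity features
loss_entangled = orthogonality_loss(f, f + 0.1 * rng.normal(size=(32, 16)))
# Disentangled case: independent random features are near-orthogonal
loss_disentangled = orthogonality_loss(f, rng.normal(size=(32, 16)))
```

In an end-to-end network this term is summed with the reconstruction, consistency, cycle, identity, and classification losses so the disentanglement is learned jointly rather than fixed by preprocessing.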

20 pages, 4296 KB  
Article
Occlusion-Aware Multi-Object Tracking in Vineyards via SAM-Based Visibility Modeling
by Yanan Wang, Hagsong Kim, Muhammad Fayaz, Lien Minh Dang, Hyeonjoon Moon and Kang-Won Lee
Electronics 2026, 15(3), 621; https://doi.org/10.3390/electronics15030621 - 1 Feb 2026
Viewed by 292
Abstract
Multi-object tracking (MOT) in vineyard environments remains challenging due to frequent and long-term occlusions caused by dense foliage, overlapping grape clusters, and complex plant structures. These characteristics often result in identity switches and fragmented trajectories when using conventional tracking methods. This paper proposes OATSAM-Track, an occlusion-aware multi-object tracking framework designed for vineyard fruit monitoring. The framework integrates lightweight MobileSAM-assisted instance segmentation to estimate target visibility and occlusion severity. Occlusion-state reasoning is further incorporated into temporal association, appearance memory updating, and identity recovery. An adaptive temporal memory mechanism selectively updates appearance features according to predicted occlusion states, reducing identity drift under partial and severe occlusions. To facilitate occlusion-aware evaluation, an extended vineyard multi-object tracking dataset (GrapeOcclusionMOTS) with SAM-refined instance masks and fine-grained occlusion annotations is constructed. The experimental results demonstrate that OATSAM-Track improves identity consistency and tracking robustness compared to representative baseline trackers, particularly under medium and severe occlusion scenarios. These results indicate that explicit occlusion modeling is beneficial for reliable fruit monitoring in precision agriculture. Full article
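The adaptive temporal memory idea, updating a track's appearance feature only when the target is visible enough, can be sketched as a visibility-gated exponential moving average. The gating rule and constants below are illustrative assumptions, not the paper's mechanism:

```python
import numpy as np

def update_memory(memory, feat, visibility, alpha=0.9, vis_gate=0.5):
    """EMA update of a track's appearance feature, gated by predicted
    visibility so heavily occluded observations do not pollute the memory."""
    if visibility < vis_gate:
        return memory                         # freeze under severe occlusion
    # trust the new observation more when the target is clearly visible
    a = alpha + (1.0 - alpha) * (1.0 - visibility)
    return a * memory + (1.0 - a) * feat

memory = np.ones(4)                 # stored appearance embedding of one track
occluded_obs = np.zeros(4)          # embedding of a mostly hidden detection

frozen = update_memory(memory, occluded_obs, visibility=0.2)
updated = update_memory(memory, occluded_obs, visibility=0.9)
```

Freezing the memory during severe occlusion is what prevents identity drift: when the grape cluster reappears, it is still matched against a clean appearance template.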

33 pages, 10879 KB  
Article
Explainable AI-Enhanced Ensemble Protocol Using Gradient-Boosted Models for Zero-False-Alarm Seizure Detection from EEG
by Abdul Rehman and Sungchul Mun
Sensors 2026, 26(3), 863; https://doi.org/10.3390/s26030863 - 28 Jan 2026
Viewed by 446
Abstract
Epilepsy affects over 50 million people worldwide, yet automated seizure detection systems either achieve moderate sensitivity with excessive false alarms or rely on uninterpretable deep networks. This study presents a patient-independent EEG-based seizure detection framework that achieved zero false alarms in 24 h with 95% sensitivity in a retrospective evaluation on a CHB–MIT pediatric cohort (n = 6 seizure-positive patients). The pipeline extracts 27 time-, frequency-, and nonlinear-domain features from 5 s windows and trains five ensemble classifiers (XGBoost, CatBoost, LightGBM, Extra Trees, Random Forest) using strict leave-one-subject-out cross-validation. All models achieved segment-level AUC ≥ 0.99. Under zero-false-alarm constraints, XGBoost attained perfect specificity with 0.922 sensitivity. SHAP and LIME analyses suggested candidate EEG biomarkers that appear consistent with known ictal signatures, including temporo-parietal theta-band power, amplitude variability (IQR, RMS), and Hjorth activity. External validation on the Siena Scalp EEG Database (12 adult patients, 37 seizures) demonstrated cross-dataset generalization with 95% event-level sensitivity (Extra Trees) and AUC of 0.86 (Random Forest). Temporal lobe channels dominated feature importance in both datasets, confirming consistent biomarker identification across pediatric and adult populations. These findings demonstrate that calibrated gradient-boosted ensembles using interpretable EEG features achieve clinically safe seizure detection with cross-dataset generalizability. Full article
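The strict leave-one-subject-out protocol used above has a precise meaning: every fold holds out all records of exactly one patient, so no subject ever appears in both train and test. A minimal sketch with hypothetical (subject, segment) records:

```python
def leave_one_subject_out(records):
    """Yield one (train, test) split per subject, where the held-out fold
    contains every record of exactly that subject -- the patient-independent
    evaluation protocol described above."""
    subjects = sorted({s for s, _ in records})
    for held_out in subjects:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield held_out, train, test

# (subject_id, segment_id) stand-ins for windowed EEG feature rows
records = [("chb01", 0), ("chb01", 1), ("chb02", 2), ("chb03", 3), ("chb03", 4)]
folds = list(leave_one_subject_out(records))
```

Grouping by subject rather than by segment is the detail that makes the reported sensitivity patient-independent; a random segment-level split would leak subject-specific EEG signatures across folds.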

28 pages, 2206 KB  
Article
Cross-Modal Temporal Graph Transformers for Explainable NFT Valuation and Information-Centric Risk Forecasting in Web3 Markets
by Fang Lin, Yitong Yang and Jianjun He
Information 2026, 17(2), 112; https://doi.org/10.3390/info17020112 - 23 Jan 2026
Viewed by 337
Abstract
NFT prices are shaped by heterogeneous signals including visual appearance, textual narratives, transaction trajectories, and on-chain interactions, yet existing studies often model these factors in isolation and rarely unify multimodal alignment, temporal non-stationarity, and heterogeneous relational dependencies in a leakage-safe forecasting setting. We propose MM-Temporal-Graph, a cross-modal temporal graph transformer framework for explainable NFT valuation and information-centric risk forecasting. The model encodes image, text, transaction time series, and blockchain behavioral features, constructs a heterogeneous NFT interaction graph (co-transaction, shared creator, wallet relation, and price co-movement), and jointly performs relation-aware graph attention and global temporal–structural transformer reasoning with an adaptive fusion gate. A contrastive multimodal alignment objective improves robustness under market drift, while a risk-aware regularizer and a multi-source risk index enable early warning and interpretable attribution across modalities, time segments, and relational neighborhoods. On MultiNFT-T, MM-Temporal-Graph improves MAE from 0.162 to 0.153 and R2 from 0.823 to 0.841 over the strongest multimodal graph baseline, and achieves 87.4% early risk detection accuracy. These results support accurate, robust, and explainable NFT valuation and proactive risk monitoring in Web3 markets. Full article
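The adaptive fusion gate can be illustrated as a learned sigmoid gate that decides, per dimension, how much to take from each stream. The weights below are random stand-ins for learned parameters, and only two of the four modalities are shown:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse(graph_emb, temporal_emb, W, b):
    """Gated fusion of two modality embeddings: a sigmoid gate computed from
    both inputs mixes them per dimension (a toy two-stream version)."""
    gate = sigmoid(np.concatenate([graph_emb, temporal_emb]) @ W + b)
    return gate * graph_emb + (1.0 - gate) * temporal_emb

rng = np.random.default_rng(4)
d = 8
W = rng.normal(scale=0.1, size=(2 * d, d))   # stand-in for learned weights
b = np.zeros(d)

g = rng.normal(size=d)       # relation-aware graph attention output
t = rng.normal(size=d)       # temporal transformer output
fused = fuse(g, t, W, b)
```

Because the gate lies in (0, 1), each fused dimension is a convex combination of the two streams, which keeps the fusion interpretable for the attribution analysis the paper describes.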

22 pages, 2756 KB  
Article
DACL-Net: A Dual-Branch Attention-Based CNN-LSTM Network for DOA Estimation
by Wenjie Xu and Shichao Yi
Sensors 2026, 26(2), 743; https://doi.org/10.3390/s26020743 - 22 Jan 2026
Viewed by 229
Abstract
While deep learning methods are increasingly applied in the field of DOA estimation, existing approaches generally feed the real and imaginary parts of the covariance matrix directly into neural networks without optimizing the input features, which prevents classical attention mechanisms from improving accuracy. This paper proposes a spatio-temporal fusion model named DACL-Net for DOA estimation. The spatial branch applies a two-dimensional Fourier transform (2D-FT) to the covariance matrix, causing angles to appear as peaks in the magnitude spectrum. This operation transforms the original covariance matrix into a dark image with bright spots, enabling the convolutional neural network (CNN) to focus on the bright-spot components via an attention module. Additionally, a spectrum attention mechanism (SAM) is introduced to enhance the extraction of temporal features in the time branch. The model learns simultaneously from two data branches and finally outputs DOA results through a linear layer. Simulation results demonstrate that DACL-Net outperforms existing algorithms in terms of accuracy, achieving an RMSE of 0.04° at an SNR of 0 dB. Full article
(This article belongs to the Section Communications)
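The 2D Fourier transform trick described above can be reproduced on a noise-free uniform linear array: the covariance matrix of a single source becomes a bright spot in the 2D magnitude spectrum, which is what the attention module can latch onto. A small NumPy sketch (array size and angle are arbitrary choices):

```python
import numpy as np

M = 16                                   # ULA sensors, half-wavelength spacing
theta = np.deg2rad(20.0)                 # true source direction
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))   # steering vector
R = np.outer(a, a.conj())                # noise-free covariance matrix

# Zero-padded 2D Fourier transform of R: the source shows up as a single
# bright spot in the magnitude spectrum ("dark image with bright spots").
spectrum = np.abs(np.fft.fft2(R, s=(64, 64)))
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
```

The peak bin k relates to the angle through 2*pi*k/64 ≈ pi*sin(theta), so for theta = 20° the spot lands near bin 11; multiple sources would produce multiple spots for the CNN to attend to.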

15 pages, 1900 KB  
Article
Exploratory Analysis of Coagulation and Fibrinolysis Trajectories After IL-6 Antagonist Therapy in COVID-19: A Case Series
by Emőke Henrietta Kovács, Máté Rottler, Zoltán Ruszkai, Csanád Geréd, Tamás Kiss, Margit Csata, Barbara Réger, Rita Jakabfi-Csepregi, István Papp, Caner Turan, Péter Hegyi, János Fazakas, Zsolt Molnár and Krisztián Tánczos
Biomedicines 2026, 14(1), 254; https://doi.org/10.3390/biomedicines14010254 - 22 Jan 2026
Cited by 1 | Viewed by 451
Abstract
Background/Objectives: Severe COVID-19 is marked by IL-6-driven inflammation, endothelial injury, and dysregulated coagulation. Although IL-6 antagonists improve clinical outcomes, their effects on the temporal evolution of coagulation and fibrinolysis remain insufficiently defined. This study characterizes inflammatory, endothelial, coagulation, and fibrinolytic trajectories following IL-6 receptor blockade in critically ill COVID-19 patients. Methods: In this prospective, exploratory multicenter case series (ClinicalTrials.gov NCT05218369), 15 ICU patients with PCR- or antigen-confirmed COVID-19 received tocilizumab per protocol. Serial sampling at five timepoints (T0–T4) included routine laboratories, comprehensive viscoelastic hemostatic assays (ClotPro®), and ELISA-based endothelial and fibrinolytic biomarkers. Analyses were primarily descriptive, emphasizing temporal patterns through boxplots; paired Wilcoxon tests with FDR correction contextualized within-patient changes. Results: Patients exhibited marked inflammation, hyperfibrinogenemia, endothelial activation, and delayed fibrinolysis at baseline. IL-6 blockade induced rapid suppression of CRP and PCT, progressive declines in fibrinogen, and modest platelet increases. In contrast, vWF antigen and activity further increased, indicating persistent endothelial dysfunction. Viscoelastic testing showed preserved thrombin generation and sustained high clot firmness, while biochemical markers (rising PAI-1, modest PAP increase, and progressively increasing D-dimer) and VHA indices suggested ongoing antifibrinolytic activity despite resolution of systemic inflammation. Conclusions: IL-6 antagonism was associated with rapid attenuation of systemic inflammation but was not accompanied by normalization of endothelial activation or fibrinolytic resistance. 
The observed hemostatic profile was consistent with attenuation of inflammation-associated coagulation features, while endothelial and prothrombotic alterations appeared to persist during follow-up, warranting further investigation in larger controlled studies. Full article
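The abstract above mentions paired Wilcoxon tests with FDR correction to contextualize within-patient changes. As a minimal illustration only (the study's exact statistical pipeline is not reproduced here), the sketch below implements the Benjamini–Hochberg adjustment, the most common FDR correction, in plain numpy; the input would be the raw p-values from the paired Wilcoxon tests across biomarkers.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values for a family of tests."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # BH step-up: scale the i-th smallest p-value by m / i
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.clip(adj, 0.0, 1.0)
    # Return adjusted values in the original test order
    out = np.empty(m)
    out[order] = adj
    return out
```

Adjusted p-values below the chosen FDR level (e.g., 0.05) would then flag biomarkers whose within-patient change survives multiplicity correction.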

20 pages, 8055 KB  
Article
Research on an Underwater Visual Enhancement Method Based on Adaptive Parameter Optimization in a Multi-Operator Framework
by Zhiyong Yang, Shengze Yang, Yuxuan Fu and Hao Jiang
Sensors 2026, 26(2), 668; https://doi.org/10.3390/s26020668 - 19 Jan 2026
Abstract
Underwater images often suffer from luminance attenuation, structural degradation, and color distortion due to light absorption and scattering in water. The variations in illumination and color distribution across different water bodies further increase the uncertainty of these degradations, making traditional enhancement methods that rely on fixed parameters, such as underwater dark channel prior (UDCP) and histogram equalization (HE), unstable in such scenarios. To address these challenges, this paper proposes a multi-operator underwater image enhancement framework with adaptive parameter optimization. To achieve luminance compensation, structural detail enhancement, and color restoration, a collaborative enhancement pipeline was constructed using contrast-limited adaptive histogram equalization (CLAHE) with highlight protection, texture-gated and threshold-constrained unsharp masking (USM), and mild saturation compensation. Building upon this pipeline, an adaptive multi-operator parameter optimization strategy was developed, where a unified scoring function jointly considers feature gains, geometric consistency of feature matches, image quality metrics, and latency constraints to dynamically adjust the CLAHE clip limit, USM gain, and Gaussian scale under varying water conditions. Subjective visual comparisons and quantitative experiments were conducted on several public underwater datasets. Compared with conventional enhancement methods, the proposed approach achieved superior structural clarity and natural color appearance on the EUVP and UIEB datasets, and obtained higher quality metrics on the RUIE dataset (Average Gradient (AG) = 0.5922, Underwater Image Quality Measure (UIQM) = 2.095). 
On the UVE38K dataset, the proposed adaptive optimization method improved the oriented FAST and rotated BRIEF (ORB) feature counts by 12.5%, inlier matches by 9.3%, and UIQM by 3.9% over the fixed-parameter baseline, while the adjacent-frame matching visualization and stability metrics such as inlier ratio further verified the geometric consistency and temporal stability of the enhanced features. Full article
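The enhancement pipeline described above combines CLAHE with highlight protection, texture-gated unsharp masking (USM), and saturation compensation under adaptive parameter control. As a rough sketch only, not the authors' implementation: the code below replaces per-tile CLAHE with a simpler global percentile stretch, works on a single luminance channel, and uses hypothetical stand-in parameters (`usm_gain`, `sigma`, `texture_thresh`) for the quantities the paper tunes adaptively.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, truncated at 3 sigma."""
    radius = max(int(3 * sigma), 1)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def enhance(gray, usm_gain=1.0, sigma=2.0, texture_thresh=0.02):
    """Toy two-stage enhancement on a luminance image in [0, 1]."""
    # Stage 1: luminance compensation via a global percentile stretch
    # (stand-in for CLAHE with highlight protection)
    lo, hi = np.percentile(gray, [2, 98])
    stretched = np.clip((gray - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    # Stage 2: texture-gated unsharp masking -- only amplify detail
    # where the high-frequency residual exceeds a threshold
    blurred = gaussian_blur(stretched, sigma)
    detail = stretched - blurred
    gate = np.abs(detail) > texture_thresh
    return np.clip(stretched + usm_gain * detail * gate, 0.0, 1.0)
```

In the paper's framework, a scoring function over feature gains, match consistency, quality metrics, and latency would select these parameters per water condition rather than fixing them as done here.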
(This article belongs to the Section Sensing and Imaging)

12 pages, 2099 KB  
Case Report
Dual Genetic Diagnosis of Prader–Willi Syndrome and TMC1-Related Severe Congenital Hearing Loss: Diagnostic Challenges and Cochlear Implant Outcomes
by Pinelopi Samara, Michail Athanasopoulos, Evangelia Koudoumnaki, Nikolaos Markatos and Ioannis Athanasopoulos
Diagnostics 2026, 16(2), 300; https://doi.org/10.3390/diagnostics16020300 - 17 Jan 2026
Abstract
Background and Clinical Significance: Prader–Willi syndrome (PWS) is an imprinting disorder not typically associated with severe congenital sensorineural hearing loss (SNHL). When profound SNHL is present in an infant with a known syndrome, an independent monogenic etiology should be considered. We report the first molecularly confirmed case of PWS co-occurring with biallelic pathogenic TMC1 variants causing congenital SNHL, outlining diagnostic challenges, cochlear implant (CI) outcomes, and implications for blended phenotypes. Case Presentation: A male infant with PWS due to a paternal 15q11.2–q13 deletion failed newborn hearing screening. Diagnostic auditory brainstem response and auditory steady-state response confirmed bilateral severe-to-profound SNHL. Temporal bone CT/MRI were normal. Comprehensive genetic testing identified compound heterozygous TMC1 variants consistent with autosomal recessive DFNB7/11 hearing loss, plus two variants of uncertain significance in SERPINB6 and EPS8L2. Sequential bilateral cochlear implantation was performed (left ear at 14 months, right at 20 months), followed by auditory–verbal therapy. Over four years, the child showed steady improvements in hearing and early-speech development. Conclusions: Early genomic evaluation is essential when clinical features appear atypical for a known syndrome. Identifying TMC1-related deafness enabled timely cochlear implantation and measurable gains. This case highlights that severe congenital SNHL in a syndromic infant may reflect a distinct monogenic disorder rather than phenotypic expansion of the primary syndrome, emphasizing the importance of recognizing blended phenotypes to guide precision-care strategies in rare disorders. Full article
