Search Results (171)

Search Parameters:
Keywords = incremental information extraction

22 pages, 1509 KB  
Article
ICTD: Combination of Improved CNN–Transformer and Enhanced Deep Canonical Correlation Analysis for Eye-Movement Emotion Classification
by Cong Zhang, Xisheng Li, Jiannan Chi, Ming Cao, Qingfeng Gu and Jiahui Liu
Brain Sci. 2026, 16(3), 330; https://doi.org/10.3390/brainsci16030330 - 19 Mar 2026
Viewed by 216
Abstract
Background/Objectives: Emotion classification based on eye-movement features has become a widely adopted approach due to the simplicity of data acquisition and the strong association between ocular responses and emotional states. However, several challenges remain for existing emotion recognition methods, including the relatively weak correlation between eye-movement features and emotional labels and the fact that key features are not prominently represented. Methods: To address the above limitations, this study proposes an improved CNN–transformer combined with an enhanced deep canonical correlation analysis network (ICTD). The proposed method first performs preprocessing and reconstruction of raw eye-movement signals to extract informative features. Subsequently, convolutional neural networks (CNNs) and transformer architectures are employed to capture local and global features, respectively. In addition, an incremental feature feedforward network is incorporated to enhance the transformer, enabling the model to assign higher importance to salient feature information. Finally, the extracted representations are processed through deep canonical correlation analysis based on cosine similarity in order to generate classification outcomes. Results: Experiments conducted on the SEED-IV, SEED-V, and eSEE-d datasets demonstrate that the proposed ICTD framework consistently outperforms baseline approaches and attains the best classification results. (1) On the eSEE-d dataset, three-category arousal and valence classification reaches 81.8% and 85.2%, respectively; (2) on the SEED-IV dataset, four-category emotion classification reaches 91.2%; (3) finally, on the SEED-V dataset, five-category emotion classification reaches 85.1%. Conclusions: The proposed ICTD framework effectively improves feature representation and classification performance, showing strong potential for practical emotion recognition and physiological signal analysis.
Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
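The cosine-similarity criterion behind the enhanced deep CCA stage can be illustrated with a minimal NumPy sketch. This is a simplified stand-in, not the authors' network: the function name and batch layout are assumptions, and it only scores how well two paired feature views align (a loss near 0 means the views are aligned).

```python
import numpy as np

def cosine_alignment_loss(view_a, view_b, eps=1e-8):
    """Cosine-similarity alignment between paired feature views.

    view_a, view_b: (batch, dim) feature matrices from two branches.
    Returns 1 - mean cosine similarity, so 0 means perfect alignment
    and 1 means the paired features are orthogonal on average.
    """
    num = np.sum(view_a * view_b, axis=1)
    den = np.linalg.norm(view_a, axis=1) * np.linalg.norm(view_b, axis=1) + eps
    return 1.0 - float(np.mean(num / den))

# Views pointing in the same directions give a loss near 0.
a = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([[2.0, 0.0], [0.0, 1.0]])
print(cosine_alignment_loss(a, b))
```

In a full pipeline this scalar would be minimized jointly with the classification objective so that the CNN and transformer branches produce correlated representations.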

17 pages, 1568 KB  
Article
Traffic-Oriented Three-Dimensional Vehicle Reconstruction Using Fixed Roadside Monocular Camera Sensors
by Chu Zhang, Yuxin Zhang, Liangbin Li and Xianhua Cai
Sensors 2026, 26(4), 1324; https://doi.org/10.3390/s26041324 - 18 Feb 2026
Viewed by 299
Abstract
Fixed roadside monocular cameras are widely used as low-cost sensing devices in intelligent transportation systems; however, extracting reliable three-dimensional (3D) information from such sensors remains challenging due to limited baselines, long observation distances, and moving vehicles. This paper presents a traffic-oriented 3D vehicle reconstruction framework based on monocular image sequences captured by fixed roadside camera sensors. Semantic and non-semantic vehicle feature points are jointly exploited to balance structural consistency and surface completeness, and a feature-map-consistency-based optimization strategy is introduced to refine feature point localization and reduce reprojection errors. In addition, an optimized incremental Structure-from-Motion (SfM) pipeline incorporating traffic-aware initialization, keyframe selection, and local bundle adjustment is developed to improve reconstruction efficiency. Experiments on real-world traffic surveillance videos show that the proposed method reduces the mean reprojection error by 13.6% and shortens reconstruction time by 43.9% compared with widely used incremental SfM systems. Full article
(This article belongs to the Collection 3D Imaging and Sensing System)
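The mean reprojection error that the feature-map-consistency optimization reduces can be computed with a minimal pinhole-camera sketch. The function names and the [R|t] pose parameterization here are assumptions for illustration, not the authors' code.

```python
import numpy as np

def reproject(K, R, t, X):
    """Project 3D world points X (N, 3) through intrinsics K and pose [R|t]."""
    Xc = X @ R.T + t                     # world frame -> camera frame
    uv = Xc @ K.T                        # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

def mean_reprojection_error(K, R, t, X, observed):
    """Mean Euclidean distance between reprojected and observed pixels."""
    residuals = reproject(K, R, t, X) - observed
    return float(np.mean(np.linalg.norm(residuals, axis=1)))
```

Bundle adjustment in an SfM pipeline minimizes exactly this kind of residual over all camera poses and 3D points simultaneously.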

26 pages, 1731 KB  
Article
Time-Varying Linkages Between Survey-Based Financial Risk Tolerance and Stock Market Dynamics: Signal Decomposition and Regime-Switching Evidence
by Wookjae Heo
Mathematics 2026, 14(4), 667; https://doi.org/10.3390/math14040667 - 13 Feb 2026
Viewed by 263
Abstract
This study examines how aggregate financial risk tolerance (FRT), measured from repeated survey responses, co-evolves with stock-market dynamics over time. The observed FRT index is treated as a noisy preference signal containing both gradual drift and episodic deviations, and its market relevance is evaluated under time variation, frequency components, and stress regimes. Using monthly data that align the survey-based FRT index with market returns and risk measures, a three-part econometric design is implemented. First, a time-varying parameter VAR (TVP-VAR) characterizes bidirectional, non-constant linkages between FRT and market outcomes. Second, signal-extraction methods decompose FRT into a smooth “normal” component and a high-frequency “abnormal” component (with robustness to alternative filters) to test whether short-run deviations contain distinct information for volatility and downside risk. Third, a Markov-switching specification assesses state dependence by testing whether the FRT–market relationship differs between low-stress and high-stress regimes. Across specifications, the FRT–market linkage is strongly state dependent: the sign and magnitude of FRT effects drift over time and differ across regimes, with high-frequency FRT deviations aligning more closely with risk dynamics than the smooth component. Predictive validation is provided via out-of-sample forecasting of next-month market risk using elastic net and gradient boosting relative to an AR(1) benchmark; explainability analysis (SHAP) indicates that abnormal FRT contributes incremental predictive content beyond standard market-state variables. Overall, the framework offers a mathematically transparent approach to modeling survey-based preference signals in markets and supports regime-aware forecasting and risk-management applications. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning in Real-Life Processes)
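The split of the FRT index into a smooth "normal" component and a high-frequency "abnormal" residual can be sketched with a simple moving-average filter. The paper uses more sophisticated signal-extraction filters with robustness checks, so this is only an illustrative stand-in with an assumed window length.

```python
import numpy as np

def decompose(series, window=12):
    """Split a monthly series into a smooth 'normal' component (centered
    moving average) and a high-frequency 'abnormal' residual.

    Edge-padding keeps the output the same length as the input, and the
    two components always sum back to the original series.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(np.asarray(series, float), pad, mode="edge")
    smooth = np.convolve(padded, kernel, mode="same")[pad:pad + len(series)]
    return smooth, series - smooth
```

The residual is the candidate carrier of the short-run information that the abstract reports aligning with volatility and downside risk.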

17 pages, 1091 KB  
Article
ASD Recognition Through Weighted Integration of Landmark-Based Handcrafted and Pixel-Based Deep Learning Features
by Asahi Sekine, Abu Saleh Musa Miah, Koki Hirooka, Najmul Hassan, Md. Al Mehedi Hasan, Yuichi Okuyama, Yoichi Tomioka and Jungpil Shin
Computers 2026, 15(2), 124; https://doi.org/10.3390/computers15020124 - 13 Feb 2026
Viewed by 533
Abstract
Autism Spectrum Disorder (ASD) is a neurological condition that affects communication and social interaction skills, with individuals experiencing a range of challenges that often require specialized care. Automated systems for recognizing ASD face significant challenges due to the complexity of identifying distinguishing features from facial images. This study proposes an incremental advancement in ASD recognition by introducing a dual-stream model that combines handcrafted facial-landmark features with deep learning-based pixel-level features. The model processes images through two distinct streams to capture complementary aspects of facial information. In the first stream, facial landmarks are extracted using MediaPipe (v0.10.21), with a focus on 137 symmetric landmarks. The face’s position is adjusted using in-plane rotation based on eye-corner angles, and geometric features along with 52 blendshape features are processed through Dense layers. In the second stream, RGB image features are extracted using pre-trained CNNs (e.g., ResNet50V2, DenseNet121, InceptionV3) enhanced with Squeeze-and-Excitation (SE) blocks, followed by feature refinement through Global Average Pooling (GAP) and Dense layers. The outputs from both streams are fused using weighted concatenation through a softmax gate, followed by further feature refinement for classification. This hybrid approach significantly improves the ability to distinguish between ASD and non-ASD faces, demonstrating the benefits of combining geometric and pixel-based features. The model achieved an accuracy of 96.43% on the Kaggle dataset and 97.83% on the YTUIA dataset. Statistical hypothesis testing further confirms that the proposed approach provides a statistically meaningful advantage over strong baselines, particularly in terms of classification correctness and robustness across datasets.
While these results are promising, they show incremental improvements over existing methods, and future work will focus on optimizing performance to exceed current benchmarks. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain (3rd Edition))
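The weighted concatenation through a softmax gate can be sketched as below. In the real model the gate logits would be learned parameters; here they are passed in directly, and all names are illustrative.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def gated_concat(f_landmark, f_pixel, gate_logits):
    """Weighted concatenation of two feature streams.

    A 2-way softmax gate produces one scalar weight per stream; each
    stream is scaled by its weight before the concatenation that feeds
    the classification head.
    """
    g = softmax(gate_logits)
    return np.concatenate([g[0] * f_landmark, g[1] * f_pixel])
```

With equal logits the gate falls back to an unweighted 50/50 blend, which makes the behavior easy to check.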

19 pages, 3571 KB  
Article
Few-Shot Class-Incremental SAR Target Recognition Based on Dynamic Task-Adaptive Classifier
by Dan Li, Feng Zhao, Yong Li and Wei Cheng
Remote Sens. 2026, 18(3), 527; https://doi.org/10.3390/rs18030527 - 6 Feb 2026
Viewed by 369
Abstract
Current synthetic aperture radar automatic target recognition (SAR ATR) tasks face challenges including limited training samples and poor generalization capability to novel classes. To address these issues, few-shot class-incremental learning (FSCIL) has emerged as a promising research direction. Few-shot learning facilitates the expedited adaptation to novel tasks utilizing a limited number of labeled samples, whereas incremental learning concentrates on the continuous refinement of the model as new categories are incorporated without eradicating previously learned knowledge. Although both methodologies present potential resolutions to the challenges of sample scarcity and class evolution in SAR target recognition, they are not without their own set of difficulties. Fine-tuning with emerging classes can perturb the feature distribution of established classes, culminating in catastrophic forgetting, while training exclusively on a handful of new samples can induce bias towards older classes, leading to distribution collapse and overfitting. To surmount these limitations and satisfy practical application requirements, we propose a Few-Shot Class-Incremental SAR Target Recognition method based on a Dynamic Task-Adaptive Classifier (DTAC). This approach underscores task adaptability through a feature extraction module, a task information encoding module, and a classifier generation module. The feature extraction module discerns both target-specific and task-specific characteristics, while the task information encoding module modulates the network parameters of the classifier generation module based on pertinent task information, thereby improving adaptability. Our innovative classifier generation module, honed with task-specific insights, dynamically assembles classifiers tailored to the current task, effectively accommodating a variety of scenarios and novel class samples. 
Our extensive experiments on SAR datasets demonstrate that our proposed method generally outperforms the baselines in few-shot class-incremental SAR target recognition. Full article

22 pages, 9313 KB  
Article
Road-Type-Specific Streetscape Renewal Effects on Urban Beauty Perception: A Spatiotemporal SHAP Analysis Using Historical Street Views
by Wenhan Li, Yinzhe Li, Lingling Zhang, Jiahui Gao, Shanshan Xie and Yan Feng
Buildings 2026, 16(3), 653; https://doi.org/10.3390/buildings16030653 - 4 Feb 2026
Viewed by 304
Abstract
Amid China’s shift from a model of urban “incremental expansion” to one focused on “stock optimization”, the renewal of streetscapes has taken center stage as a critical approach to improving the human experience within urban environments. However, empirical insight into how visual interventions affect aesthetic perception across different road types remains notably limited. This study addresses that gap through a spatiotemporal investigation of Zhengzhou’s streetscape transformations between 2017 and 2022. Major roads were categorized into four functional types—freeway, under-freeway, regular road, and tunnel—to better capture perceptual variation. Leveraging a Fully Convolutional Network (FCN), we extracted nine visual components from historical street views and paired them with crowd-sourced “beauty” ratings from the MIT Place Pulse 2.0 dataset. Statistical analyses, including paired t-tests and Kernel Density Estimation (KDE), indicated marked improvements in perceived beauty following renewal, with the exception of tunnel segments. Through Random Forest (RF) regression and SHapley Additive exPlanations (SHAP) interpretation, greening emerged as the most influential driver of aesthetic enhancement—most prominently on regular roads (SHAP = 2.246). The impact of renewal was found to be context-specific: green belts were most effective in under-freeway areas (SHAP = +0.8), while improvements to pavement (SHAP = +0.97) and street vitality were key for regular roads. Notably, SHAP analysis revealed non-linear relationships, such as diminishing perceptual returns when green coverage exceeded certain thresholds. These findings inform a “visual renewal–perceptual response” framework, offering data-driven guidance for adaptive, human-centered upgrades in high-density urban settings. Full article
(This article belongs to the Special Issue Advanced Study on Urban Environment by Big Data Analytics)
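The paired t-test used to compare perceived-beauty scores before and after renewal reduces to the statistic below. This is a minimal NumPy sketch (no p-value lookup) with hypothetical variable names, not the authors' analysis code.

```python
import numpy as np

def paired_t(before, after):
    """Paired t-statistic for per-segment scores before/after renewal.

    A positive t means scores increased on average; significance would
    be judged against a t-distribution with n - 1 degrees of freedom.
    """
    d = np.asarray(after, float) - np.asarray(before, float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))
```

Kernel Density Estimation would then be applied to the same score samples to compare the full before/after distributions rather than just their means.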

33 pages, 5039 KB  
Article
Sub-Hourly Multi-Horizon Quantile Forecasting of Photovoltaic Power Using Meteorological Data and a HybridCNN–STTransformer
by Guldana Taganova, Alma Zakirova, Assel Abdildayeva, Bakhyt Nurbekov, Zhanar Akhayeva and Talgat Azykanov
Algorithms 2026, 19(2), 123; https://doi.org/10.3390/a19020123 - 3 Feb 2026
Viewed by 377
Abstract
The rapid deployment of photovoltaic generation increases uncertainty in power-system operation and strengthens the need for ultra-short-term forecasts with reliable uncertainty estimates. Point-forecasting approaches alone are often insufficient for dispatch and reserve decisions because they do not quantify risk. This study investigates probabilistic forecasting of short-horizon solar generation using quantile regression on a public dataset of solar output and meteorological variables. This study proposes a hybrid attention–convolution model that combines an attention-based encoder to capture long-range temporal dependencies with a causal temporal convolution module that extracts fast local fluctuations using only past information, preventing information leakage. The two representations are fused and decoded jointly across multiple future horizons to produce consistent quantile trajectories. Experiments against representative machine-learning and deep-learning baselines show improved probabilistic accuracy and competitive central forecasts, while illustrating an important sharpness–calibration trade-off relevant to risk-aware grid operation. Key novelties include a multi-horizon quantile formulation at 15 min resolution for one-hour-ahead PV increments, a HybridCNN–STTransformer that fuses causal temporal convolutions with Transformer attention, and a horizon-token decoder that models inter-horizon dependencies to produce consistent multi-step quantile trajectories; reliability/sharpness diagnostics and post hoc calibration are discussed for operational risk-aware use. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
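Quantile forecasters of this kind are typically trained with the pinball (quantile) loss; the abstract does not spell out the exact objective, so the following is a textbook sketch rather than the paper's loss.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Pinball loss for quantile level q in (0, 1).

    Under-forecasting is penalized by q and over-forecasting by 1 - q,
    so minimizing it drives y_pred toward the q-th conditional quantile.
    """
    err = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.maximum(q * err, (q - 1.0) * err)))
```

Averaging this loss over a grid of quantile levels and forecast horizons yields the kind of multi-horizon quantile trajectories described above.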

20 pages, 6530 KB  
Article
Multi-Center Prototype Feature Distribution Reconstruction for Class-Incremental SAR Target Recognition
by Ke Zhang, Bin Wu, Peng Li, Zhi Kang and Lin Zhang
Sensors 2026, 26(3), 979; https://doi.org/10.3390/s26030979 - 3 Feb 2026
Viewed by 299
Abstract
In practical applications of deep learning-based Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems, new target categories emerge continuously. This requires the systems to learn incrementally—acquiring new knowledge while retaining previously learned information. To mitigate catastrophic forgetting in Class-Incremental Learning (CIL), this paper proposes a CIL method for SAR ATR named Multi-center Prototype Feature Distribution Reconstruction (MPFR). It has two core components. First, a Multi-scale Hybrid Attention feature extractor is designed. Trained via a feature space optimization strategy, it fuses and extracts discriminative features from both SAR amplitude images and Attribute Scattering Center data, while preserving feature space capacity for new classes. Second, each class is represented by multiple prototypes to capture complex feature distributions. Old class knowledge is retained by modeling their feature distributions through parameterized Gaussian diffusion, alleviating feature confusion in incremental phases. Experiments on public SAR datasets show MPFR achieves superior performance compared to existing approaches, including recent SAR-specific CIL methods. Ablation studies validate each component’s contribution, confirming MPFR’s effectiveness in addressing CIL for SAR ATR without storing historical raw data. Full article
(This article belongs to the Section Radar Sensors)
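Representing each class by multiple prototypes leads naturally to a nearest-center decision rule. The sketch below shows only that rule; the Gaussian-diffusion modeling of old-class distributions and the attention-based feature extractor are omitted, and all names are illustrative.

```python
import numpy as np

def predict_multicenter(x, prototypes):
    """Nearest-prototype classification with several centers per class.

    prototypes: dict mapping class label -> (n_centers, dim) array.
    A sample takes the label of its single closest center across all
    classes, so multimodal class distributions are handled naturally.
    """
    best_label, best_dist = None, np.inf
    for label, centers in prototypes.items():
        d = np.min(np.linalg.norm(centers - x, axis=1))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Because old classes are summarized by their prototypes (plus a distribution model), no historical raw SAR images need to be stored between incremental phases.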

25 pages, 3222 KB  
Article
Progressive Attention-Enhanced EfficientNet–UNet for Robust Water-Body Mapping from Satellite Imagery
by Mohamed Ezz, Alaa S. Alaerjan, Ayman Mohamed Mostafa, Noureldin Laban and Hind H. Zeyada
Sensors 2026, 26(3), 963; https://doi.org/10.3390/s26030963 - 2 Feb 2026
Viewed by 415
Abstract
The sustainable management of water resources and the development of climate-resilient infrastructure depend on the precise identification of water bodies in satellite imagery. This paper presents a novel deep learning architecture that integrates a convolutional block attention module (CBAM) into a modified EfficientNet–UNet backbone. This integration allows the model to prioritize informative features and spatial areas. The model robustness is ensured through a rigorous training regimen featuring five-fold cross-validation, dynamic test-time augmentation, and optimization with the Lovász loss function. The final model achieved the following values on the independent test set: precision = 90.67%, sensitivity = 86.96%, specificity = 96.18%, accuracy = 93.42%, Dice score = 88.78%, and IoU = 79.82%. These results demonstrate improvement over conventional segmentation pipelines, highlighting the effectiveness of attention mechanisms in extracting complex water-body patterns and boundaries. The key contributions of this paper include the following: (i) adaptation of CBAM within a UNet-style architecture tailored for remote sensing water-body extraction; (ii) a rigorous ablation study detailing the incremental impact of decoder complexity, attention integration, and loss function choice; and (iii) validation of a high-fidelity, computationally efficient model ready for deployment in large-scale water-resource and ecosystem-monitoring systems. Our findings show that attention-guided segmentation networks provide a robust pathway toward high-fidelity and sustainable water-body mapping. Full article
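The reported Dice score and IoU can be computed from binary masks as follows; this is the standard formulation, not code taken from the paper.

```python
import numpy as np

def dice_iou(pred, target):
    """Dice score and IoU for binary segmentation masks.

    pred, target: arrays of 0/1 (or bool) of the same shape.
    Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|.
    """
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    return float(dice), float(inter / union)
```

Dice weights the overlap against the two mask sizes while IoU weights it against their union, which is why the paper's Dice (88.78%) exceeds its IoU (79.82%).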

20 pages, 827 KB  
Article
Mood in the Market: Forecasting IPO Activity with Music Sentiment and LSTM
by Qinxu Ding, Chong Guan and Yinghui Yu
FinTech 2026, 5(1), 12; https://doi.org/10.3390/fintech5010012 - 2 Feb 2026
Viewed by 649
Abstract
We examine whether aggregate “music mood” derived from globally popular songs can help forecast primary equity issuance. We build a Friday-anchored weekly panel that merges SEC EDGAR counts of priced Initial Public Offerings (IPOs) with features from the Spotify Daily Top 200 (audio descriptors such as valence, energy, danceability, tempo, and loudness) and Genius-scraped lyrics. We extract lyric sentiment by tokenizing the lyrics and aggregating lexicon-based affect scores (valence and arousal) into popularity-weighted weekly indices. To address sparsity and regime shifts in issuance, we train a leakage-safe Long Short-Term Memory (LSTM) network on a smoothed target—the forward 4-week sum of IPOs—and obtain next-week forecasts by dividing the predicted sum by 4. On a chronological holdout, a single LSTM with look-back K = 8 outperforms strong baselines—reducing MAE by 13.9%, RMSE by 15.9%, and mean Poisson deviance by 27.6% relative to the best baseline in each metric. Furthermore, we adopt SHapley Additive exPlanations (SHAP) to explain our LSTM model, showing that IPO persistence remains the dominant driver but that music and lyrics covariates contribute an incremental and robust signal. These results suggest that aggregate music sentiment contains economically meaningful information about near-term IPO activity. Full article
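The smoothed-target construction, a forward 4-week sum whose prediction is later divided by 4 to recover a next-week forecast, can be sketched as below. This is an illustrative reconstruction assuming weekly counts in a plain array, not the authors' pipeline.

```python
import numpy as np

def forward_sum_target(counts, horizon=4):
    """Smoothed LSTM target: at week t, the sum of IPO counts over the
    next `horizon` weeks (t+1 .. t+horizon).

    The last `horizon` weeks have no complete forward window and are
    returned as NaN; a next-week forecast is recovered downstream by
    dividing the predicted sum by `horizon`.
    """
    counts = np.asarray(counts, float)
    out = np.full(len(counts), np.nan)
    for t in range(len(counts) - horizon):
        out[t] = counts[t + 1:t + 1 + horizon].sum()
    return out
```

Summing forward rather than predicting a single sparse weekly count smooths the target, which is the stated remedy for sparsity and regime shifts in issuance.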

22 pages, 3309 KB  
Article
Simultaneous Incremental Map-Prediction-Driven UAV Trajectory Planning for Unknown Environment Exploration
by Jianing Tang, Guoran Jiang, Jingkai Yang and Sida Zhou
Aerospace 2026, 13(2), 139; https://doi.org/10.3390/aerospace13020139 - 30 Jan 2026
Viewed by 335
Abstract
Efficient autonomous exploration in unknown environments is a core challenge for Unmanned Aerial Vehicle (UAV) applications in unstructured settings. The primary challenges are exploration speed, coverage efficiency, and the autonomous, efficient, and obstacle- and threat-avoiding global guidance of the UAV under local observational information. This paper proposes an autonomous exploration method driven by simultaneous incremental map prediction and the fusion of global frontier information to enhance the exploration efficiency of UAVs in unknown unstructured environments. Based on generative deep learning, we introduce an incremental map prediction method for 3D unstructured mountainous terrain, enabling the simultaneous acquisition of map predictions and their uncertainty estimates. Map prediction and trajectory planning are conducted concurrently: using the simultaneously predicted 3D map and its confidence (i.e., the uncertainty estimates), an overlap analysis is conducted between the flyable areas in the predicted map and the high-confidence regions. Dynamic guidance subspaces are generated by extracting global frontier points, within which shortest-time optimization is adopted for trajectory planning to maximize information gain and coverage per step. Experimental results demonstrate that, compared with classical methods, our proposed approach achieves significant performance improvements in key metrics, including map coverage rate, total exploration time, and average path length. Full article
(This article belongs to the Section Aeronautics)
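Global frontier points, free cells bordering unknown space, can be extracted from an occupancy grid as in this simplified 2D sketch. The paper works with predicted 3D maps and confidence weighting, so this is only the core idea with assumed cell encodings.

```python
import numpy as np

def frontier_cells(grid):
    """Frontier cells in a 2D occupancy grid.

    Encoding assumed here: 0 = free, 1 = occupied, -1 = unknown.
    A frontier is a free cell 4-adjacent to at least one unknown cell.
    Returns a list of (row, col) indices in row-major order.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers
```

Clusters of such cells are the natural seeds for the dynamic guidance subspaces in which the trajectory optimization is run.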

23 pages, 8146 KB  
Article
A Cattle Behavior Recognition Method Based on Graph Neural Network Compression on the Edge
by Hongbo Liu, Ping Song, Xiaoping Xin, Yuping Rong, Junyao Gao, Zhuoming Wang and Yinglong Zhang
Animals 2026, 16(3), 430; https://doi.org/10.3390/ani16030430 - 29 Jan 2026
Viewed by 396
Abstract
Cattle behavior is closely related to their health status, and monitoring cattle behavior using intelligent devices can assist herders in achieving precise and scientific livestock management. Current behavior recognition algorithms are typically executed on server platforms, resulting in increased power consumption due to data transmission from edge devices and hindering real-time computation. An edge-based cattle behavior recognition method via Graph Neural Network (GNN) compression is proposed in this paper. Firstly, this paper proposes a wearable device that integrates data acquisition and model inference. This device achieves low-power edge inference through a high-performance embedded microcontroller. Secondly, a sequential residual model tailored for single-frame data based on Inertial Measurement Unit (IMU) and displacement information is proposed. The model incrementally extracts deep features through two Residual Blocks (Resblocks), enabling effective cattle behavior classification. Finally, a compression method based on GNNs is introduced to adapt to edge devices’ limited storage and computational resources. The method adopts GNNs as the backbone of the Actor–Critic model to autonomously search for an optimal pruning strategy under Floating-Point Operations (FLOPs) constraints. The experimental results demonstrate the effectiveness of the proposed method in cattle behavior classification. Moreover, enabling real-time inference on edge devices significantly reduces computational latency and power consumption, highlighting the proposed method’s advantages for low-power, long-term operation. Full article
(This article belongs to the Section Cattle)

20 pages, 1516 KB  
Article
Fast NOx Emission Factor Accounting for Hybrid Electric Vehicles with Dictionary Learning-Based Incremental Dimensionality Reduction
by Hao Chen, Jianan Chen, Feiyang Zhao and Wenbin Yu
Energies 2026, 19(3), 680; https://doi.org/10.3390/en19030680 - 28 Jan 2026
Viewed by 208
Abstract
Amid growing global environmental challenges, precise and efficient vehicle emission management plays a critical role in achieving energy-saving and emission-reduction goals. At the same time, the rapid development of connected vehicles and autonomous driving technologies has generated a large amount of high-dimensional vehicle operation data. This not only provides a rich data foundation for refined emission accounting but also raises higher demands for the construction of accounting models. Therefore, this study aims to develop an accurate and efficient accounting model for precise nitrogen oxide (NOx) emission accounting in hybrid electric vehicles (HEVs). A systematic approach is proposed that combines incremental dimensionality reduction with advanced regression algorithms to achieve refined and efficient emission accounting based on multiple variables. Specifically, the dimensionality of the real driving emission (RDE) data is first reduced using feature selection and t-distributed stochastic neighbor embedding (t-SNE) feature extraction to capture key parameter information and reduce subsequent computational complexity. Next, an incremental dimensionality reduction method based on dictionary learning is employed to efficiently embed new data into the low-dimensional space through straightforward matrix operations. Given the computational cost of the dictionary-learning training process, this study introduces the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) for accelerated iterative optimization and enhances computational efficiency through parameter optimization while maintaining the accuracy of dictionary learning. Subsequently, a prediction model for the NOx emission-factor correction factor is trained using the low-dimensional data obtained from the t-SNE embeddings, enabling direct computation of the corresponding correction factor for new incremental low-dimensional embeddings.
Finally, validation on independent HEV datasets shows that parameter K improves to 1 ± 0.05 and R2 increases up to 0.990, laying a foundation for constructing an emission accounting model with broad applicability based on multiple variables. Full article
(This article belongs to the Collection State of the Art Electric Vehicle Technology in China)
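The incremental embedding step described in this abstract amounts to sparse coding of each new sample against a fixed learned dictionary, accelerated with FISTA. A minimal sketch of that idea follows (not the authors' implementation; the dictionary `D`, regularization weight `lam`, and iteration count are illustrative assumptions):

```python
import numpy as np

def fista_sparse_code(D, y, lam=0.01, n_iter=300):
    """Embed a new sample y into a low-dimensional code x by solving
    min_x 0.5*||D x - y||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)               # gradient of the smooth term at z
        w = z - grad / L                       # gradient step
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

Once the dictionary is trained, embedding each incoming RDE sample reduces to this iteration, i.e. simple matrix operations, which is what makes the incremental step cheap.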
21 pages, 1342 KB  
Article
TSCL-LwF: A Cross-Subject Emotion Recognition Model via Multi-Scale CNN and Incremental Learning Strategy
by Chunting Wan, Xing Tang, Cong Hu, Juan Yang, Shaorong Zhang and Dongyi Chen
Brain Sci. 2026, 16(1), 84; https://doi.org/10.3390/brainsci16010084 - 9 Jan 2026
Abstract
Background/Objectives: Wearable affective human–computer interaction increasingly relies on sparse-channel EEG signals to ensure comfort and practicality in real-life scenarios. However, the limited information provided by sparse-channel EEG, together with pronounced inter-subject variability, makes reliable cross-subject emotion recognition particularly challenging. Methods: To address these challenges, we propose TSCL-LwF, a cross-subject emotion recognition model based on sparse-channel EEG that combines a multi-scale convolutional network (TSCL) with an incremental learning strategy based on Learning without Forgetting (LwF). TSCL captures the spatio-temporal characteristics of sparse-channel EEG, employing convolutional branches with diverse receptive fields to extract and fuse interaction information within the local prefrontal area. The incremental learning strategy introduces a limited set of labeled target-domain data and incorporates a knowledge-distillation loss to retain source-domain knowledge while enabling rapid target-domain adaptation. Results: Experiments on the DEAP dataset show that TSCL-LwF achieves accuracies of 77.26% for valence classification and 80.12% for arousal classification; it also exhibits superior accuracy on the self-collected EPPVR dataset. Conclusions: Successful cross-subject emotion recognition from sparse-channel EEG will facilitate the development of wearable EEG technologies with practical applications.
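The Learning-without-Forgetting strategy described above combines a supervised loss on the new target-domain labels with a knowledge-distillation term that keeps the adapted model's softened outputs close to those of the frozen source model. A minimal NumPy sketch of such a combined objective follows (a generic LwF loss, not the paper's exact formulation; the temperature `T` and mixing weight `alpha` are illustrative assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, old_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on the new (target-domain) labels plus a distillation
    term pulling the new model's softened outputs toward the frozen old model's."""
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(len(labels)), labels] + 1e-12))
    q_old = softmax(old_logits, T)          # soft targets from the old model
    q_new = softmax(new_logits, T)
    kd = -np.mean(np.sum(q_old * np.log(q_new + 1e-12), axis=-1)) * T * T
    return alpha * ce + (1.0 - alpha) * kd
```

The distillation term is what preserves source-domain knowledge: it penalizes the adapted model whenever its predictions drift away from the old model's on the same inputs.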
15 pages, 2695 KB  
Article
Opportunistic Osteoporosis Screening in Breast Cancer Using AI-Derived Vertebral BMD from Routine CT: Validation Against QCT and Multivariable Diagnostic Modeling
by Jiayi Pu, Wenqin Zhou, Miao Wei, Wen Li, Yan Xiao, Jia Xie and Fajin Lv
J. Clin. Med. 2026, 15(2), 512; https://doi.org/10.3390/jcm15020512 - 8 Jan 2026
Abstract
Background/Objectives: Breast cancer survivors face an elevated risk of treatment-related bone loss, yet routine bone health assessment remains underutilized. Opportunistic bone density extraction from routine CT may address this gap. This study validated AI-derived vertebral bone mineral density (AI-vBMD) from non-contrast thoracoabdominal CT for osteoporosis screening and assessed its diagnostic value beyond clinical variables. Methods: This retrospective study included 332 breast cancer patients; AI-vBMD was successfully extracted in 325 (98%). Quantitative CT (QCT) served as the reference standard. Agreement between AI-vBMD and QCT-vBMD was assessed using Pearson correlation, Bland–Altman analysis, and weighted kappa for QCT-defined osteoporosis (<80 mg/cm3). Nested logistic regression models compared a clinical model with and without AI-vBMD. Discrimination [area under the curve (AUC)], calibration, and clinical utility [decision-curve analysis (DCA)] were evaluated. Results: AI-vBMD showed strong correlation with QCT-vBMD (r = 0.98, p < 0.001), minimal bias (mean difference +1.82 mg/cm3), and excellent agreement for osteoporosis classification (weighted κ = 0.90). AI-vBMD alone achieved excellent discrimination for osteoporosis (AUC = 0.986). Integrating AI-vBMD into the clinical model yielded significantly higher diagnostic performance (AUC 0.988 vs. 0.879; p < 0.001) and superior net benefit across relevant decision thresholds. Conclusions: AI-derived vertebral BMD from routine CT serves as a reliable QCT-aligned imaging biomarker for opportunistic osteoporosis assessment in breast cancer patients and adds significant incremental diagnostic value beyond clinical information alone.
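The Bland–Altman analysis used above to assess agreement between AI-vBMD and QCT-vBMD reduces to the bias (mean paired difference) and the 95% limits of agreement. A minimal sketch of that computation follows (generic method, run here on synthetic data, not the study's measurements):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement between two measurement methods:
    returns the bias (mean difference) and the 95% limits of agreement."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                     # sample SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa
```

A bias near zero with narrow limits of agreement, as reported for AI-vBMD vs. QCT-vBMD (+1.82 mg/cm3), indicates the two methods can be used interchangeably within those limits.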