Search Results (376)

Search Parameters:
Keywords = spectral–spatial–temporal features

26 pages, 11926 KB  
Article
STC-DeepLAINet: A Transformer-GCN Hybrid Deep Learning Network for Large-Scale LAI Inversion by Integrating Spatio-Temporal Correlations
by Huijing Wu, Ting Tian, Qingling Geng and Hongwei Li
Remote Sens. 2025, 17(24), 4047; https://doi.org/10.3390/rs17244047 - 17 Dec 2025
Abstract
Leaf area index (LAI) is a pivotal biophysical parameter linking vegetation physiological processes and macro-ecological functions. Accurate large-scale LAI estimation is indispensable for agricultural management, climate change research, and ecosystem modeling. However, existing methods fail to efficiently extract integrated spatial-spectral-temporal features and lack targeted modeling of spatio-temporal dependencies, compromising the accuracy of LAI products. To address this gap, we propose STC-DeepLAINet, a Transformer-GCN hybrid deep learning architecture integrating spatio-temporal correlations via the following three synergistic modules: (1) a 3D convolutional neural network (CNN)-based spectral-spatial embedding module capturing intrinsic correlations between multi-spectral bands and local spatial features; (2) a spatio-temporal correlation-aware module that models temporal dynamics (by “time periods”) and spatial heterogeneity (by “spatial slices”) simultaneously; (3) a spatio-temporal pattern memory attention module that retrieves historically similar spatio-temporal patterns via an attention-based mechanism to improve inversion accuracy. Experimental results demonstrate that STC-DeepLAINet outperforms eight state-of-the-art methods (including traditional machine learning and deep learning networks) in a 500 m resolution LAI inversion task over China. Validated against ground-based measurements, it achieves a coefficient of determination (R2) of 0.827 and a root mean square error (RMSE) of 0.718, outperforming the GLASS LAI product. Furthermore, STC-DeepLAINet effectively captures LAI variability across typical vegetation types (e.g., forests and croplands).
This work establishes an operational solution for generating large-scale, high-precision LAI products, which can provide reliable data support for agricultural yield estimation and ecosystem carbon cycle simulation, while offering a new methodological reference for spatio-temporal correlation modeling in remote sensing inversion.
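The validation metrics reported above (R2 = 0.827, RMSE = 0.718 against ground measurements) follow their standard definitions. A minimal sketch, with hypothetical LAI values rather than the paper's data:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical ground-based LAI measurements vs. inverted values
lai_true = np.array([1.2, 2.5, 3.1, 4.0, 5.2])
lai_pred = np.array([1.0, 2.7, 3.0, 4.3, 5.0])
r2 = r2_score(lai_true, lai_pred)
err = rmse(lai_true, lai_pred)
```

Both metrics are computed on paired station/pixel samples; the paper's values come from its own validation set, not these toy numbers.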

16 pages, 2128 KB  
Article
Robust Motor Imagery–Brain–Computer Interface Classification in Signal Degradation: A Multi-Window Ensemble Approach
by Dong-Geun Lee and Seung-Bo Lee
Biomimetics 2025, 10(12), 832; https://doi.org/10.3390/biomimetics10120832 - 12 Dec 2025
Abstract
Electroencephalography (EEG)-based brain–computer interface (BCI) mimics the brain’s intrinsic information-processing mechanisms by translating neural oscillations into actionable commands. In motor imagery (MI) BCI, imagined movements evoke characteristic patterns over the sensorimotor cortex, forming a biomimetic channel through which internal motor intentions are decoded. However, this biomimetic interaction is highly vulnerable to signal degradation, particularly in mobile or low-resource environments where low sampling frequencies obscure these MI-related oscillations. To address this limitation, we propose a robust MI classification framework that integrates spatial, spectral, and temporal dynamics through a filter bank common spatial pattern with time segmentation (FBCSP-TS). This framework classifies motor imagery tasks into four classes (left hand, right hand, foot, and tongue), segments EEG signals into overlapping time domains, and extracts frequency-specific spatial features across multiple subbands. Segment-level predictions are combined via soft voting, reflecting the brain’s distributed integration of information and enhancing resilience to transient noise and localized artifacts. Experiments performed on BCI Competition IV datasets 2a (250 Hz) and 1 (100 Hz) demonstrate that FBCSP-TS outperforms CSP and FBCSP. A paired t-test confirms that accuracy at 110 Hz is not significantly different from that at 250 Hz (p > 0.05), supporting the robustness of the proposed framework. Optimal temporal parameters (window length = 3.5 s, moving length = 0.5 s) further stabilize transient-signal capture and improve SNR. External validation yielded a mean accuracy of 0.809 ± 0.092 and Cohen’s kappa of 0.619 ± 0.184, confirming strong generalizability. By preserving MI-relevant neural patterns under degraded conditions, this framework advances practical, biomimetic BCI suitable for wearable and real-world deployment.
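The segment-level soft voting described above can be sketched in a few lines: average the per-window class probabilities, then take the argmax. The window probabilities below are invented for illustration, not taken from the paper:

```python
import numpy as np

def soft_vote(segment_probs):
    """Average per-segment class probabilities and pick the argmax class.

    segment_probs: (n_segments, n_classes) array of per-window softmax outputs.
    """
    mean_probs = segment_probs.mean(axis=0)
    return int(np.argmax(mean_probs)), mean_probs

# Toy example: 3 overlapping windows voting over the 4 MI classes
# (left hand, right hand, foot, tongue)
probs = np.array([
    [0.6, 0.2, 0.1, 0.1],   # window 1 favours class 0
    [0.3, 0.4, 0.2, 0.1],   # window 2 is noisy
    [0.5, 0.3, 0.1, 0.1],   # window 3 favours class 0
])
label, avg = soft_vote(probs)
```

Averaging before the argmax is what gives the ensemble its resilience: a single noisy window (window 2) cannot flip the decision on its own.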

26 pages, 16103 KB  
Article
Integrating Phenological Features with Time Series Transformer for Accurate Rice Field Mapping in Fragmented and Cloud-Prone Areas
by Tiantian Xu, Peng Cai, Hangan Wei, Huili He and Hao Wang
Sensors 2025, 25(24), 7488; https://doi.org/10.3390/s25247488 - 9 Dec 2025
Abstract
Accurate identification and monitoring of rice cultivation areas are essential for food security and sustainable agricultural development. However, regions with frequent cloud cover, high rainfall, and fragmented fields often face challenges due to the absence of temporal features caused by cloud and rain interference, as well as spectral confusion from scattered plots, which hampers rice recognition accuracy. To address these issues, this study employs a Satellite Image Time Series Transformer (SITS-Former) model, enhanced with the integration of diverse phenological features to improve rice phenology representation and enable precise rice identification. The methodology constructs a rice phenological feature set that combines temporal, spatial, and spectral information. Through its self-attention mechanism, the model effectively captures growth dynamics, while multi-scale convolutional modules help suppress interference from non-rice land covers. The study utilized Sentinel-2 satellite data to analyze rice distribution in Wuxi City. The results demonstrated an overall classification accuracy of 0.967, with the estimated planting area matching 91.74% of official statistics. Compared to traditional rice distribution analysis methods, such as Random Forest, this approach achieves both higher accuracy and finer spatial detail. It effectively addresses the challenge of identifying fragmented rice fields in regions with persistent cloud cover and heavy rainfall, providing accurate mapping of cultivated areas in difficult climatic conditions while offering valuable baseline data for yield assessments.
(This article belongs to the Section Smart Agriculture)
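The self-attention mechanism that lets a time-series transformer weigh acquisition dates against each other reduces to a scaled dot-product over the sequence. A minimal numpy sketch, with random projections standing in for learned weights (nothing here comes from SITS-Former itself):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a time series of embeddings.

    x: (T, d) sequence of per-date feature vectors; wq/wk/wv: (d, d) projections.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 6, 4          # e.g. 6 acquisition dates, 4 spectral features each
x = rng.normal(size=(T, d))
w = [rng.normal(size=(d, d)) for _ in range(3)]
out, attn = self_attention(x, *w)
```

Each row of `attn` is a probability distribution over the other dates, which is how the model can bridge gaps left by cloud-masked acquisitions.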

24 pages, 10480 KB  
Article
Detecting Abandoned Cropland in Monsoon-Influenced Regions Using HLS Imagery and Interpretable Machine Learning
by Sinyoung Park, Sanae Kang, Byungmook Hwang and Dongwook W. Ko
Agronomy 2025, 15(12), 2702; https://doi.org/10.3390/agronomy15122702 - 24 Nov 2025
Abstract
Abandoned cropland has been expanding due to complex socio-economic factors such as urbanization, demographic shifts, and declining agricultural profitability. As abandoned cropland simultaneously brings ecological, environmental, and social risks and benefits, quantitative monitoring is essential to assess its overall impact. Satellite image-based spatial data are suitable for identifying spectral characteristics related to crop phenology, and recent research has advanced in detecting large-scale abandoned cropland through changes in time-series spectral characteristics. However, frequent cloud cover and highly fragmented croplands, which vary across regions and climatic conditions, still pose significant challenges for satellite-based detection. This study combined Harmonized Landsat and Sentinel-2 (HLS) imagery, offering high temporal (2–3 days) and spatial (30 m) resolution, with the eXtreme Gradient Boosting (XGBoost) algorithm to capture seasonal spectral variations among rice paddy, upland fields, and abandoned croplands. An XGBoost model with a Balanced Bagging Classifier (BBC) was used to mitigate class imbalance. The model achieved an accuracy of 0.84, a Cohen's kappa of 0.71, and an F2 score of 0.84. SHapley Additive exPlanations (SHAP) analysis identified major features such as NIR (May–June), SWIR2 (January), MCARI (September), and BSI (January–April), reflecting phenological differences among cropland types. Overall, this study establishes a robust framework for large-scale cropland monitoring that can be adapted to different regional and climatic settings.
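The F2 score reported above is the F-beta score with beta = 2, which weights recall (catching abandoned parcels) more heavily than precision. A small sketch with invented labels, purely to show the formula:

```python
import numpy as np

def fbeta(y_true, y_pred, beta=2.0):
    """F-beta score for a binary label; beta=2 emphasises recall."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical parcel labels: 1 = abandoned, 0 = cultivated
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
f2 = fbeta(y_true, y_pred)
```

Here precision and recall are both 0.75, so F2 is 0.75 as well; in general F2 sits closer to recall than to precision whenever the two differ.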

19 pages, 4048 KB  
Article
Transformer Attention-Guided Dual-Path Framework for Bearing Fault Diagnosis
by Saif Ullah, Wasim Zaman and Jong-Myon Kim
Appl. Sci. 2025, 15(23), 12431; https://doi.org/10.3390/app152312431 - 23 Nov 2025
Abstract
Reliable bearing fault diagnosis plays an important role in maintaining the safety and performance of rotating machinery in industrial systems. Although deep learning models have achieved remarkable success in this field, their dependence on a single feature-extraction approach often restricts the diversity of learned representations and limits diagnostic accuracy. To overcome this limitation, this study proposes an attention-guided dual-path framework that integrates spatial and time–frequency feature learning with transformer-based classification for precise fault identification. In the proposed framework, vibration signals collected from an experimental bearing test rig are simultaneously processed through two complementary pipelines: one converts the signals into two-dimensional matrix images to extract spatial features, while the other transforms them into continuous wavelet transform (CWT) scalograms to capture fine-grained temporal and spectral information. The extracted features are fused through a lightweight transformer encoder with an attention mechanism that dynamically emphasizes the most informative representations. This fusion enables the model to effectively capture cross-domain dependencies and enhance discriminative capability. Experimental validation on an industrial vibration dataset demonstrates that the proposed model achieves 99.87% classification accuracy, outperforming conventional CNN and transformer-based approaches. The results confirm that integrating multi-domain features with attention-driven fusion significantly improves the robustness and generalization of deep learning models for intelligent bearing fault diagnosis.
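A CWT scalogram like the one feeding the second pipeline is just the modulus of the signal convolved with scaled wavelets. A naive real-Morlet sketch in plain numpy (the paper does not specify its wavelet parameters; the 5 rad/s centre frequency and the scales below are illustrative assumptions):

```python
import numpy as np

def morlet_scalogram(signal, scales, fs):
    """Naive continuous wavelet transform with a real Morlet wavelet.

    Returns |CWT| with shape (len(scales), len(signal)); the wavelet at scale s
    peaks near frequency 5 / (2*pi*s) Hz.
    """
    n = len(signal)
    t = np.arange(-n // 2, n // 2) / fs
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        u = t / s
        psi = np.exp(-u**2 / 2) * np.cos(5 * u) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, psi, mode="same"))
    return out

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * t)            # toy 50 Hz "bearing tone"
scales = np.array([0.002, 0.016, 0.064])    # ~398 Hz, ~50 Hz, ~12 Hz bands
scal = morlet_scalogram(sig, scales, fs)
```

For a pure 50 Hz tone, the middle scale (centre frequency roughly 49.7 Hz) carries nearly all the energy, which is exactly the time–frequency localisation the scalogram pipeline exploits.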

27 pages, 4718 KB  
Article
Data Augmentation and Interpolation Improves Machine Learning-Based Pasture Biomass Estimation from Sentinel-2 Imagery
by Blessing N. Azubuike, Anna Chlingaryan, Martin Correa-Luna, Cameron E. F. Clark and Sergio C. Garcia
Remote Sens. 2025, 17(23), 3787; https://doi.org/10.3390/rs17233787 - 21 Nov 2025
Abstract
Accurate pasture biomass (PB) estimation is critical for tactical grazing management, yet traditional satellite-derived vegetation indices such as Normalised Difference Vegetation Index (NDVI) saturate when canopy density exceeds about 3 t DM ha−1. This limits predictive accuracy because the spectral signal plateaus under dense vegetation, masking further biomass increases. To address this limitation, this study integrated multiple data sources to improve PB estimation in dairy systems. The dataset combined Sentinel-2 spectral bands, rising plate-meter (RPM) PB measurements, daily weather data, and paddock management features. A total of 3161 paired RPM–satellite observations were collected from 80 paddocks across 16 New South Wales dairy farms between November 2021 and July 2024. Eight regression algorithms and four predictor configurations were evaluated using robust cross-validation, including an 80:20 farm/paddock-stratified train–test-set split. The XGBoost model using full-band reflectance and concurrent weather data achieved strong baseline performance (R2 = 0.63; MAE = 243 kg DM ha−1) on non-interpolated data, outperforming NDVI-based models. To address temporal gaps between field readings and satellite imagery, Multiquadric interpolation was applied to RPM data, adding roughly 30% new observations. This enhanced dataset improved test performance to R2 = 0.70 and MAE = 216 kg DM ha−1, with gains maintained on external validations (R2 = 0.41/0.48; MAE = 267/235 kg DM ha−1). A progressive training strategy, which refreshed model parameters with seasonally aligned data, further reduced errors by 30% compared to static models and sustained performance even when farms or seasons were excluded. 
This fortified Sentinel-2 modelling workflow, combining RPM interpolation and progressive calibration, achieved accuracy comparable to the commercial Pasture.io platform (R2 = 0.66; MAE = 240 kg DM ha−1), which uses satellite imagery with higher temporal and spatial resolution, demonstrating potential for automated recalibration and near real-time, paddock-level decision support in pasture-based dairy systems.
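The multiquadric interpolation used to densify the RPM record solves for radial-basis weights so the interpolant passes exactly through every observation. A 1-D sketch with hypothetical pasture readings (the shape parameter `eps` and the sample values are assumptions, not the paper's settings):

```python
import numpy as np

def multiquadric_interp(x_obs, y_obs, x_new, eps=1.0):
    """1-D multiquadric RBF interpolation: phi(r) = sqrt(1 + (eps*r)^2).

    Solves A w = y so the interpolant reproduces every observation, then
    evaluates sum_j w_j * phi(|x - x_j|) at the new points.
    """
    phi = lambda r: np.sqrt(1.0 + (eps * r) ** 2)
    A = phi(np.abs(x_obs[:, None] - x_obs[None, :]))
    w = np.linalg.solve(A, y_obs)
    return phi(np.abs(x_new[:, None] - x_obs[None, :])) @ w

# Hypothetical weekly RPM biomass readings (kg DM/ha), densified to daily values
days = np.array([0.0, 7.0, 14.0, 21.0])
pb = np.array([1800.0, 2100.0, 2500.0, 2300.0])
daily = multiquadric_interp(days, pb, np.arange(0.0, 22.0))
```

Because the kernel matrix is nonsingular for distinct sample points, the interpolated curve honours the field measurements while supplying values on the satellite overpass dates between them.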

20 pages, 1411 KB  
Article
A Hybrid AI Framework for Integrated Predictive Maintenance and Mineral Quality Assessment in Mining
by Wanji Mwale, Zhixiang Liu and Kavimbi Chipusu
Appl. Sci. 2025, 15(22), 12222; https://doi.org/10.3390/app152212222 - 18 Nov 2025
Abstract
In the mining industry, operational efficiency, equipment reliability, and mineral quality assessment are paramount for cost-effective and sustainable production. Traditional approaches often address equipment maintenance and quality control as separate challenges, leading to suboptimal operational synergy. This paper proposes a novel artificial intelligence (AI) framework that integrates predictive maintenance with real-time mineral quality assessment through advanced sensor fusion and deep learning. Our model leverages a hybrid architecture, combining Convolutional Neural Networks (CNNs) for analyzing visual and spectral data of iron ore with Long Short-Term Memory (LSTM) networks for processing temporal sensor data (vibration, thermal, acoustic) from critical equipment like crushers and conveyors. A dedicated fusion layer synthesizes these spatial and temporal features to simultaneously predict equipment failure probability and classify mineral quality. Validated on a real-world dataset from active iron ore mines, the system demonstrates a significant 20–30% reduction in projected maintenance downtime and a 15% improvement in mineral classification accuracy compared to baseline models while achieving real-time inference speeds of less than 10 milliseconds. This work underscores the transformative potential of unified AI-driven systems in enhancing the intelligence, resilience, and productivity of modern mining operations.
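The dual-head fusion layer described above, one sigmoid head for failure probability and one softmax head for quality class, can be sketched with plain numpy. Everything here (embedding sizes, random weights, three quality classes) is a hypothetical stand-in for the paper's trained network:

```python
import numpy as np

def fusion_heads(spatial_feat, temporal_feat, w_fail, w_qual):
    """Concatenate spatial (CNN) and temporal (LSTM) embeddings, then emit
    a failure probability (sigmoid head) and quality-class probabilities
    (softmax head) from the shared fused vector."""
    z = np.concatenate([spatial_feat, temporal_feat])
    p_fail = 1.0 / (1.0 + np.exp(-(w_fail @ z)))     # scalar in (0, 1)
    logits = w_qual @ z
    e = np.exp(logits - logits.max())
    return p_fail, e / e.sum()

rng = np.random.default_rng(1)
spatial = rng.normal(size=8)     # stand-in for a CNN embedding of ore imagery
temporal = rng.normal(size=8)    # stand-in for an LSTM embedding of sensor data
p, q = fusion_heads(spatial, temporal,
                    rng.normal(size=16), rng.normal(size=(3, 16)))
```

Sharing the fused vector between both heads is what couples the two tasks: gradients from the maintenance objective and the quality objective both shape the same representation.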

44 pages, 10199 KB  
Article
Predictive Benthic Habitat Mapping Reveals Significant Loss of Zostera marina in the Puck Lagoon, Baltic Sea, over Six Decades
by Łukasz Janowski, Anna Barańska, Krzysztof Załęski, Maria Kubacka, Monika Michałek, Anna Tarała, Michał Niemkiewicz and Juliusz Gajewski
Remote Sens. 2025, 17(22), 3725; https://doi.org/10.3390/rs17223725 - 15 Nov 2025
Abstract
This research presents a comprehensive analysis of the spatial extent and temporal change in benthic habitats within the Puck Lagoon in the southern Baltic Sea, utilizing integrated machine learning classification and multi-sourced remote sensing. Object-based image analysis was integrated with Random Forest, Support Vector Machine, and K-Nearest Neighbors algorithms for benthic habitat classification based on airborne bathymetric LiDAR (ALB), multibeam echosounder (MBES), satellite bathymetry, and high-resolution aerial photography. Ground-truth data collected during 2023 field surveys were supplemented with long temporal datasets (2010–2023) for seagrass meadow analysis. Boruta feature selection showed that geomorphometric variables (aspect, slope, and terrain ruggedness index) and optical features (ALB intensity and spectral bands) were the most significant discriminators in each classification case. Binary classification models were more effective (93.3% accuracy in the presence/absence of Zostera marina) than the advanced multi-class models (43.3% for EUNIS Level 4/5), highlighting the inherent trade-off between ecological complexity and map validity. Change detection between contemporary and 1957 habitat data revealed extensive Zostera marina loss, with 84.1–99.0% cover reduction across modeling frameworks. Seagrass coverage declined from 61.15% of the study area to just 9.70% or 0.63%, depending on the model. Seasonal mismatch may inflate loss estimates by 5–15%, but even adjusted values (70–94%) indicate severe ecosystem degradation. Spatial exchange components exhibited patterns of habitat change, whereas net losses in total were many orders of magnitude larger than any redistribution in space. These findings recorded the most severe seagrass habitat destruction ever described within Baltic Sea ecosystems and emphasize the imperative for conservation action at the landscape level.
The methodological framework provides a reproducible model for analogous change detection analysis in shallow nearshore habitats, creating critical baselines to inform restoration planning and biodiversity conservation activities. It also demonstrated both the capabilities and limitations of automatic techniques for habitat monitoring.
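The headline loss figures follow directly from the reported cover percentages: relative loss is the drop in cover divided by the initial cover. A two-line check using the numbers in the abstract:

```python
def cover_loss(pct_before, pct_after):
    """Relative loss of habitat cover between two mapped epochs, in percent."""
    return 100.0 * (pct_before - pct_after) / pct_before

# 1957 baseline cover was 61.15% of the study area; the two modern models
# map 9.70% and 0.63% respectively (values from the abstract).
loss_model_a = cover_loss(61.15, 9.70)   # lower bound of the 84.1-99.0% range
loss_model_b = cover_loss(61.15, 0.63)   # upper bound of that range
```

This reproduces the 84.1–99.0% range quoted above, confirming the two bounds correspond to the two model outcomes.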

29 pages, 16051 KB  
Article
Research on fMRI Image Generation from EEG Signals Based on Diffusion Models
by Xiaoming Sun, Yutong Sun, Junxia Chen, Bochao Su, Tuo Nie and Ke Shui
Electronics 2025, 14(22), 4432; https://doi.org/10.3390/electronics14224432 - 13 Nov 2025
Abstract
Amid rapid advances in intelligent medicine, decoding brain activity from electroencephalogram (EEG) signals has emerged as a critical technical frontier for brain–computer interfaces and medical AI systems. Given the inherent spatial resolution limitations of an EEG, researchers frequently integrate functional magnetic resonance imaging (fMRI) to enhance neural activity representation. However, fMRI acquisition is inherently complex. Consequently, efforts increasingly focus on cross-modal transformation methods that map EEG signals to fMRI data, thereby extending EEG applications in neural mechanism studies. The central challenge remains generating high-fidelity fMRI images from EEG signals. To address this, we propose a diffusion model-based framework for cross-modal EEG-to-fMRI generation. To cope with pronounced noise contamination in electroencephalographic (EEG) signals acquired via simultaneous recording systems and temporal misalignments between EEGs and functional magnetic resonance imaging (fMRI), we first apply Fourier transforms to EEG signals and perform dimensionality expansion. This constructs a spatiotemporally aligned EEG–fMRI paired dataset. Building on this foundation, we design an EEG encoder integrating a multi-layer recursive spectral attention mechanism with a residual architecture. In response to the limited dynamic mapping capabilities and suboptimal image quality prevalent in existing cross-modal generation research, we propose a diffusion-model-driven EEG-to-fMRI generation algorithm. This framework unifies the EEG feature encoder and a cross-modal interaction module within an end-to-end denoising U-Net architecture. By leveraging the diffusion process, EEG-derived features serve as conditional priors to guide fMRI reconstruction, enabling high-fidelity cross-modal image generation.
Empirical evaluations on the resting-state NODDI dataset and the task-based XP-2 dataset demonstrate that our EEG encoder significantly enhances cross-modal representational congruence, providing robust semantic features for fMRI synthesis. Furthermore, the proposed cross-modal generative model achieves marked improvements in structural similarity, the root mean square error, and the peak signal-to-noise ratio in generated fMRI images, effectively resolving the nonlinear mapping challenge inherent in EEG–fMRI data.
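The diffusion process that the denoising U-Net inverts has a closed-form forward step: a target image is progressively noised as x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε. A minimal sketch with a generic linear beta schedule and a random array standing in for an fMRI slice (schedule and shapes are generic DDPM conventions, not the paper's configuration):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Variance-preserving forward diffusion at step t:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, abar_t = prod(1 - beta)."""
    abar = np.cumprod(1.0 - betas)[t]
    eps = rng.normal(size=x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # generic linear noise schedule
x0 = rng.normal(size=(8, 8))            # stand-in for an fMRI slice
xt, eps = forward_diffuse(x0, t=999, betas=betas, rng=rng)
```

At the final step ᾱ is vanishingly small, so x_t is almost pure noise; generation runs this process in reverse, with the EEG-derived features conditioning each denoising step.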

18 pages, 3175 KB  
Article
AudioFakeNet: A Model for Reliable Speaker Verification in Deepfake Audio
by Samia Dilbar, Muhammad Ali Qureshi, Serosh Karim Noon and Abdul Mannan
Algorithms 2025, 18(11), 716; https://doi.org/10.3390/a18110716 - 13 Nov 2025
Abstract
Deepfake audio refers to the generation of voice recordings using deep neural networks that replicate a specific individual’s voice, often for deceptive or fraudulent purposes. Although this has been an area of research for quite some time, deepfakes still pose substantial challenges for reliable true speaker authentication. To address the issue, we propose AudioFakeNet, a hybrid deep learning architecture that combines Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) units and Multi-Head Attention (MHA) mechanisms for robust deepfake detection. The CNN extracts spatial and spectral features, the LSTM captures temporal dependencies, and MHA sharpens the focus on informative audio segments. The model is trained using Mel-Frequency Cepstral Coefficients (MFCCs) from a publicly available dataset and was validated on a self-collected dataset, ensuring reproducibility. Performance comparisons with state-of-the-art machine learning and deep learning models show that our proposed AudioFakeNet achieves higher accuracy, better generalization, and a lower Equal Error Rate (EER). Its modular design allows for broader adaptability in fake-audio detection tasks, offering significant potential across diverse speech synthesis applications.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
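The Equal Error Rate used as the headline metric is the operating point where the false-accept rate (fakes passed as genuine) equals the false-reject rate (genuine speech rejected). A brute-force threshold sweep over invented scores, just to make the definition concrete:

```python
import numpy as np

def equal_error_rate(scores_genuine, scores_fake):
    """EER via threshold sweep: find where false-accept rate == false-reject
    rate (ties ignored) and return the mean of the two rates at that point."""
    thresholds = np.sort(np.concatenate([scores_genuine, scores_fake]))
    best = (1.0, 0.0)
    for th in thresholds:
        far = np.mean(scores_fake >= th)       # fakes accepted as genuine
        frr = np.mean(scores_genuine < th)     # genuine speech rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2.0

# Hypothetical detector scores (higher = more likely genuine)
genuine = np.array([0.9, 0.8, 0.7, 0.4])
fake = np.array([0.6, 0.3, 0.2, 0.1])
eer = equal_error_rate(genuine, fake)
```

With one genuine sample scoring below one fake, the sweep settles at FAR = FRR = 0.25, i.e. an EER of 25%; a perfect detector would reach 0.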

22 pages, 5996 KB  
Article
Comparative Analysis of Machine Learning Algorithms for Object-Based Crop Classification Using Multispectral Imagery
by Madjebi Collela Be, Antsa Sarobidy Randrianantenaina, James E. Kanneh, Yingchun Han, Yaping Lei, Xiaoyu Zhi, Shiwu Xiong, Yahui Jiao, Shilong Shang, Yunzhen Ma, Beifang Yang, Lin Tao and Yabing Li
Drones 2025, 9(11), 763; https://doi.org/10.3390/drones9110763 - 5 Nov 2025
Abstract
Unmanned Aerial Vehicles (UAVs) offer enhanced spatial and temporal resolution for agricultural remote sensing, surpassing traditional satellite-based methods. Given the abundance of evolving machine-learning methods for crop recognition, this study evaluates and compares five machine learning (ML) algorithms and tests an Ensemble Learning method as a sixth approach, integrated with object-based image analysis (OBIA) for crop-type classification using UAV multispectral imagery, aiming to identify the most effective model and produce a classification map based on the best-performing method. Image segmentation was performed using eCognition software, and spectral, index, and gray-level co-occurrence matrix (GLCM) features were extracted from the segmented objects. A machine learning model integrating multiple classification algorithms (SVM, ANN, RF, XGBoost, KNN, Ensemble Learning) with automated hyperparameter optimization was developed and executed in Google Colab using Python 3.10. All classifiers achieved accuracies exceeding 80% and Area Under the Curve (AUC) values above 0.9. SVM and ANN were the best individual classifiers, each with 94% accuracy, followed by XGBoost (93%), RF (92%), and KNN (89%). The Ensemble Learning method (SVM + ANN) outperformed all single models, with an accuracy of 95%. Cotton, maize, peanut, and soybean were classified with the highest accuracy, with index and GLCM features contributing most significantly, followed by spectral features. The integration of high-resolution UAV imagery with ML and OBIA demonstrates strong potential for automated crop-type classification, offering valuable support for precision agriculture applications.
(This article belongs to the Special Issue UAS in Smart Agriculture: 2nd Edition)
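An SVM + ANN soft-voting ensemble of the kind that won out here can be assembled in a few lines of scikit-learn. This sketch uses synthetic four-class data as a stand-in for the segmented-object features, and makes no claim about the study's actual hyperparameters:

```python
# A minimal soft-voting ensemble sketch; assumes scikit-learn is available.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-object spectral/index/GLCM features, 4 crop classes
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

ens = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=0)),
                ("ann", MLPClassifier(max_iter=2000, random_state=0))],
    voting="soft")                      # average the two class-probability vectors
ens.fit(Xtr, ytr)
acc = ens.score(Xte, yte)
```

`voting="soft"` averages per-class probabilities rather than counting hard votes, which is why the ensemble can edge past both base models when their errors are uncorrelated.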

50 pages, 16753 KB  
Article
Spectral Energy of High-Speed Over-Expanded Nozzle Flows at Different Pressure Ratios
by Manish Tripathi, Sławomir Dykas, Mirosław Majkut, Krystian Smołka, Kamil Skoczylas and Andrzej Boguslawski
Energies 2025, 18(21), 5813; https://doi.org/10.3390/en18215813 - 4 Nov 2025
Abstract
This paper addresses the long-standing question of understanding the origin and evolution of low-frequency unsteadiness interactions associated with shock waves impinging on a turbulent boundary layer in transonic flow (Mach: 1.1 to 1.3). To that end, high-speed experiments in a blowdown open-channel wind tunnel have been performed across a convergent–divergent nozzle for different expansion ratios (PR = 1.44, 1.6, and 1.81). Quantitative evaluation of the underlying spectral energy content has been obtained by processing time-resolved pressure transducer data and Schlieren images using the following spectral analysis methods: Fast Fourier Transform (FFT), Continuous Wavelet Transform (CWT), as well as coherence and time-lag evaluations. The images demonstrated an increased normal shock-wave influence for PR = 1.44, whereas the higher pressure ratios were linked with an increased oblique λ-foot influence. Hence, significant disparities associated with the overall stability, location, and amplitude of the shock waves, as well as quantitative assertions related to spectral energy segregation, have been inferred. A subsequent detailed spectral analysis revealed the presence of multiple discrete frequency peaks (magnitude and frequency of the peaks increasing with PR), with the lower peaks linked with large-scale shock-wave interactions and higher peaks associated with shear-layer instabilities and turbulence. Wavelet transform using the Morlet function illustrates the presence of varying intermittency, modulation in the temporal and frequency scales for different spectral events, and a pseudo-periodic spectral energy pulsation alternating between two frequency-specific events. Spectral analysis of the pixel densities related to different regions, called spatial FFT, highlights the increased influence of the feedback mechanism and coupled turbulence interactions for higher PR.
Collation of the subsequent coherence analysis with the previous results underscores that lower PR is linked with shock-separation dynamics being tightly coupled, whereas at higher PR values, global instabilities, vortex shedding, and high-frequency shear-layer effects govern the overall interactions, redistributing the spectral energy across a wider spectral range. Complementing these experiments, time-resolved numerical simulations based on a transient 3D RANS framework were performed. The simulations successfully reproduced the main features of the shock motion, including the downstream migration of the mean position, the reduction in oscillation amplitude with increasing PR, and the division of the spectra into distinct frequency regions. This confirms that the adopted 3D RANS approach provides a suitable predictive framework for capturing the essential unsteady dynamics of shock–boundary layer interactions across both temporal and spatial scales. This novel combination of synchronized Schlieren imaging with pressure transducer data, followed by application of advanced spectral analysis techniques, FFT, CWT, spatial FFT, coherence analysis, and numerical evaluations, linked image-derived propagation and coherence results directly to wall pressure dynamics, providing critical insights into how PR variation governs the spectral energy content and shock-wave oscillation behavior for nozzles. Thus, for low PR flows dominated by normal shock structure, global instability of the separation zone governs the overall oscillations, whereas higher PR, linked with dominant λ-foot structure, demonstrates increased feedback from the shear-layer oscillations, separation region breathing, as well as global instabilities. It is envisaged that epistemic understanding related to the spectral dynamics of low-frequency oscillations at different PR values derived from this study could be useful for future nozzle design modifications aimed at achieving optimal nozzle performance. 
The study could further assist the implementation of appropriate flow control strategies to alleviate these instabilities and improve thrust performance.
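As a rough illustration of the kind of spectral processing described in this abstract (not the authors' actual pipeline), the sketch below applies an FFT-based dominant-frequency estimate and magnitude-squared coherence to two synthetic pressure-transducer signals that share a common oscillation tone with a phase lag; the sampling rate, tone frequency, and noise level are all invented for the example.

```python
import numpy as np
from scipy.signal import coherence

def dominant_frequency(x, fs):
    # Frequency of the largest FFT magnitude, excluding the DC bin.
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[1:][np.argmax(spec[1:])]

rng = np.random.default_rng(0)
fs, f0 = 2048, 48.0
t = np.arange(4 * fs) / fs  # 4 s of synthetic transducer data
# Two sensors seeing the same shock-oscillation tone, phase-lagged,
# each with independent broadband noise.
p1 = np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size)
p2 = np.sin(2 * np.pi * f0 * t + 0.8) + 0.3 * rng.standard_normal(t.size)

peak = dominant_frequency(p1, fs)          # FFT: locate the dominant peak
f, Cxy = coherence(p1, p2, fs=fs, nperseg=512)  # Welch-based coherence
idx = int(np.argmin(np.abs(f - f0)))       # coherence bin nearest the tone
```

At the shared tone the coherence approaches unity while the noise-only bins stay low, which is the same contrast the abstract uses to separate tightly coupled shock-separation dynamics from broadband shear-layer effects.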

31 pages, 7049 KB  
Article
Objective Emotion Assessment Using a Triple Attention Network for an EEG-Based Brain–Computer Interface
by Lihua Zhang, Xin Zhang, Xiu Zhang, Changyi Yu and Xuguang Liu
Brain Sci. 2025, 15(11), 1167; https://doi.org/10.3390/brainsci15111167 - 29 Oct 2025
Abstract
Background: The assessment of emotion recognition holds growing significance in research on the brain–computer interface and human–computer interaction. Among diverse physiological signals, electroencephalography (EEG) occupies a pivotal position in affective computing due to its exceptional temporal resolution and non-invasive acquisition. However, EEG signals are inherently complex, characterized by substantial noise contamination and high variability, posing considerable challenges to accurate assessment. Methods: To tackle these challenges, we propose a Triple Attention Network (TANet), a triple-attention EEG emotion recognition framework that integrates Conformer, Convolutional Block Attention Module (CBAM), and Mutual Cross-Modal Attention (MCA). The Conformer component captures temporal feature dependencies, CBAM refines spatial channel representations, and MCA performs cross-modal fusion of differential entropy and power spectral density features. Results: We evaluated TANet on two benchmark EEG emotion datasets, DEAP and SEED. On SEED, using a subject-specific cross-validation protocol, the model reached an average accuracy of 98.51 ± 1.40%. On DEAP, we deliberately adopted a segment-level splitting paradigm—in line with influential state-of-the-art methods—to ensure a direct and fair comparison of model architecture under an identical evaluation protocol. This approach, designed specifically to assess fine-grained within-trial pattern discrimination rather than cross-subject generalization, yielded accuracies of 99.69 ± 0.15% and 99.67 ± 0.13% for the valence and arousal dimensions, respectively. Compared with existing benchmark approaches under similar evaluation protocols, TANet delivers substantially better results, underscoring the strong complementary effects of its attention mechanisms in improving EEG-based emotion recognition performance. 
Conclusions: This work provides both theoretical insights into multi-dimensional attention for physiological signal processing and practical guidance for developing high-performance, robust EEG emotion assessment systems.
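For readers unfamiliar with the two feature types that MCA fuses in this abstract, the sketch below computes differential entropy (under the usual Gaussian closed form) and Welch band power for one synthetic EEG-like channel; the band edges, sampling rate, and signal parameters are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import welch, butter, filtfilt

def differential_entropy(x):
    # Gaussian closed form used in EEG work: DE = 0.5 * ln(2 * pi * e * var)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_features(x, fs, lo, hi):
    # Band-limit the signal, then return (DE of the band, Welch band power).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    xb = filtfilt(b, a, x)
    f, psd = welch(x, fs=fs, nperseg=fs)       # 1 Hz frequency bins
    mask = (f >= lo) & (f <= hi)
    band_power = np.sum(psd[mask]) * (f[1] - f[0])
    return differential_entropy(xb), band_power

rng = np.random.default_rng(1)
fs = 256
t = np.arange(8 * fs) / fs
# Synthetic "alpha-dominant" channel: a 10 Hz rhythm plus broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

de_alpha, bp_alpha = band_features(eeg, fs, 8.0, 13.0)
de_gamma, bp_gamma = band_features(eeg, fs, 30.0, 45.0)
```

Both features rank the alpha band above the gamma band for this signal, which is the kind of complementary per-band evidence a cross-modal attention module can weigh.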
(This article belongs to the Section Neurotechnology and Neuroimaging)

19 pages, 4856 KB  
Article
Evaluation of Vegetation Restoration Effectiveness in the Jvhugeng Mining Area of the Muli Coalfield Based on Sentinel-2 and Gaofen Data
by Linxue Ju, Lei Chen, Junxing Liu, Sen Jiao, Yanxu Zhang, Zhonglin Ji and Caiya Yue
Land 2025, 14(11), 2151; https://doi.org/10.3390/land14112151 - 29 Oct 2025
Abstract
To address the serious ecological problems caused by long-term mining in the Muli Coalfield, a three-year ecological restoration project was initiated in 2020. The Jvhugeng mining area was the largest and most ecologically damaged area in the Muli Coalfield. Vegetation restoration is the core of mine ecological restoration. Scientific evaluation of the vegetation restoration status in the Jvhugeng mining area is significant for comprehensively revealing ecological restoration effectiveness in the Muli Coalfield. Exploiting the spectral and temporal advantages of Sentinel-2 and the high spatial resolution of GF-1/GF-6, fractional vegetation cover (FVC) and landscape pattern indices were determined separately. Thus, the vegetation restoration effectiveness and spatiotemporal dynamics of the Jvhugeng mining area from 2020 to 2023 were evaluated in terms of structural and functional dimensions. The results show that, from 2020 to 2023, vegetation cover extent (rising from 8.77 km² in 2020 to a peak of 17.93 km² in 2022 and then decreasing to 13.48 km² in 2023) and FVC (from 0.33 in 2020 to about 0.50 during 2021–2023) first increased sharply and then fluctuated. Vegetation regions with both high FVC and dominant landscape features likewise expanded rapidly and then fluctuated. Vegetation restoration demonstrated significant effectiveness, with the natural ecological environment restored to some extent and remaining stable. Newly vegetated regions had high FVC and pronounced landscape pattern characteristics. However, vegetation cover expansion also led to further fragmentation and morphological complexity of vegetation landscape patterns in the study area. The results can provide a basis for quantitatively assessing ecological restoration effectiveness in the Jvhugeng mining area and the Muli Coalfield as a whole.
They can also serve as a technical reference for dual-source data synergy in the dynamic monitoring and evaluation of vegetation restoration in other mining areas.
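FVC values like those quoted above are commonly derived from NDVI with a dimidiate pixel model, i.e., linear mixing between a bare-soil and a full-vegetation NDVI endmember. The sketch below shows that computation; the reflectance values and endmembers are made up for illustration and are not the study's calibration.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index from NIR and red reflectance.
    return (nir - red) / (nir + red + 1e-12)

def fvc_dimidiate(ndvi_px, ndvi_soil, ndvi_veg):
    # Dimidiate pixel model: linear mixing between the bare-soil and
    # full-vegetation NDVI endmembers, clipped to the physical range [0, 1].
    fvc = (ndvi_px - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

# Three hypothetical pixels, from sparse to dense vegetation.
nir = np.array([0.30, 0.45, 0.60])
red = np.array([0.25, 0.15, 0.05])
pixels = ndvi(nir, red)
fvc = fvc_dimidiate(pixels, ndvi_soil=0.05, ndvi_veg=0.90)
```

In practice the endmembers are usually estimated per scene (e.g., from NDVI histogram percentiles), which is why studies report them alongside the FVC maps.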
(This article belongs to the Section Land Use, Impact Assessment and Sustainability)

24 pages, 13390 KB  
Article
Performance of Acoustic, Electro-Acoustic and Optical Sensors in Precise Waveform Analysis of a Plucked and Struck Guitar String
by Jan Jasiński, Marek Pluta, Roman Trojanowski, Julia Grygiel and Jerzy Wiciak
Sensors 2025, 25(21), 6514; https://doi.org/10.3390/s25216514 - 22 Oct 2025
Abstract
This study presents a comparative performance analysis of three sensor technologies—microphone, magnetic pickup, and laser Doppler vibrometer—for capturing string vibration under varied excitation conditions: striking, plectrum plucking, and wire plucking. Two different magnetic pickups are included in the comparison. Measurements were taken at multiple excitation levels on a simplified electric guitar mounted on a stable platform with repeatable excitation mechanisms. The analysis focuses on each sensor’s capacity to resolve fine-scale waveform features during the initial attack while also taking into account its capability to measure general changes in instrument dynamics and timbre. We evaluate their ability to distinguish vibro-acoustic phenomena resulting from changes in excitation method and strength as well as measurement location. Our findings highlight the significant influence of sensor choice on observable string vibration. While the microphone captures the overall radiated sound, it lacks the required spatial selectivity and offers poor SNR performance, roughly 34 dB lower than that of the other methods. Magnetic pickups enable precise string-specific measurements, offering a compelling balance of accuracy and cost-effectiveness. Results show that their low-pass frequency characteristic limits temporal fidelity and must be accounted for when analysing general sound timbre. Laser Doppler vibrometers provide superior micro-temporal fidelity, which can have critical implications for physical modeling, instrument design, and advanced audio signal processing, but have severe practical limitations. Critically, we demonstrate that the required optical target, even when weighing as little as 0.1% of the string’s mass, alters the string’s vibratory characteristics by influencing RMS energy and spectral content.
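The 34 dB SNR gap quoted in this abstract is a ratio of RMS levels expressed in decibels. A minimal sketch of that computation follows, using a synthetic unit-amplitude tone and two artificial noise floors (none of these signals come from the paper's recordings):

```python
import numpy as np

def rms(x):
    # Root-mean-square level of a signal.
    return np.sqrt(np.mean(np.square(x)))

def snr_db(signal, noise):
    # SNR in decibels: 20 * log10 of the RMS amplitude ratio.
    return 20.0 * np.log10(rms(signal) / rms(noise))

fs = 8000
t = np.arange(fs) / fs                      # one second of samples
tone = np.sin(2 * np.pi * 100 * t)          # unit-amplitude 100 Hz "string" tone
noise_a = 0.001 * np.ones(t.size)           # quiet noise floor
noise_b = noise_a * 10 ** (34 / 20)         # same floor raised by 34 dB

# Raising the noise floor by a factor of 10**(34/20) lowers the SNR by 34 dB.
gap_db = snr_db(tone, noise_a) - snr_db(tone, noise_b)
```

The same amplitude-ratio convention (20 log10, not 10 log10) applies whenever the quantities compared are RMS voltages or pressures rather than powers.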
(This article belongs to the Special Issue Deep Learning for Perception and Recognition: Method and Applications)
