Search Results (147)

Search Parameters:
Keywords = random subspace

25 pages, 911 KB  
Article
Migraine and Epilepsy Discrimination Using DTCWT and Random Subspace Ensemble Classifier
by Tuba Nur Subasi and Abdulhamit Subasi
Mach. Learn. Knowl. Extr. 2026, 8(2), 35; https://doi.org/10.3390/make8020035 - 4 Feb 2026
Viewed by 232
Abstract
Migraine and epilepsy are common neurological disorders that share overlapping symptoms, such as visual disturbances and altered consciousness, making accurate diagnosis challenging. Although their underlying mechanisms differ, both conditions involve recurrent irregular brain activity, and traditional EEG-based diagnosis relies heavily on clinical interpretation, which may be subjective and insufficient for clear differentiation. To address this challenge, this study introduces an automated EEG classification framework combining Dual Tree Complex Wavelet Transform (DTCWT) for feature extraction with a Random Subspace Ensemble Classifier for multi-class discrimination. EEG data recorded under photic and nonphotic stimulation were analyzed to capture both temporal and frequency characteristics. DTCWT proved effective in modeling the non-stationary nature of EEG signals and extracting condition-specific features, while the ensemble classifier improved generalization by training multiple models on diverse feature subsets. The proposed system achieved an average accuracy of 99.50%, along with strong F-measure, AUC, and Kappa scores. Notably, although previous studies suggest heightened EEG activity in migraine patients during flash stimulation, findings here indicate that flash stimulation alone does not reliably distinguish migraine from epilepsy. Overall, this research highlights the promise of advanced signal processing and machine learning techniques in enhancing diagnostic precision for complex neurological disorders. Full article
(This article belongs to the Section Learning)
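
For readers who want to prototype the ensemble half of such a pipeline, below is a minimal sketch of the random subspace method using scikit-learn's BaggingClassifier. The synthetic matrix X merely stands in for DTCWT subband features (e.g., per-subband energy or entropy) extracted from EEG epochs; nothing here reproduces the authors' exact setup.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features standing in for DTCWT subband statistics per EEG epoch.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 64))       # 300 epochs x 64 wavelet features
y = rng.integers(0, 3, size=300)     # migraine / epilepsy / control labels

# Random subspace ensemble: every base tree sees a random half of the
# features, with no bootstrapping over samples (the classic RSM recipe).
rsm = BaggingClassifier(
    n_estimators=50,
    max_features=0.5,          # each learner trains on 50% of the features
    bootstrap=False,           # keep all samples ...
    bootstrap_features=False,  # ... and draw features without replacement
)
print(cross_val_score(rsm, X, y, cv=5).mean())
```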

30 pages, 1510 KB  
Article
An Improved Mantis Search Algorithm for Solving Optimization Problems
by Yanjiao Wang and Tongchao Dou
Biomimetics 2026, 11(2), 105; https://doi.org/10.3390/biomimetics11020105 - 2 Feb 2026
Viewed by 251
Abstract
The traditional mantis search algorithm (MSA) suffers from slow convergence and a high likelihood of becoming trapped in local optima in complex optimization scenarios. This paper proposes an improved mantis search algorithm (IMSA) to overcome these issues. An adaptive probability conversion factor is designed to control the proportion of individuals entering the search and attack phases, allowing the algorithm to transition smoothly from large-scale global exploration to fine-grained local search. In the search phase, a probability update strategy based on both the subspace and the full space dynamically adjusts the search range, significantly improving the algorithm’s adaptability to complex problems. An elite population screening mechanism, based on the dual criteria of Euclidean distance and fitness, is introduced to guide the direction of evolution. In the attack phase, an adaptive probability selection mechanism for base vectors dynamically adjusts the base vector selection strategy, sharpening the algorithm’s focus at different optimization stages. Finally, in the sexual cannibalism stage, inferior individuals are updated through directed random perturbation and introduced directly into the population via a non-greedy replacement strategy, which effectively counteracts the loss of population diversity. Experimental results on 29 test functions from the CEC2017 test set demonstrate that IMSA offers significant advantages in convergence speed, accuracy, and stability over the original MSA and five leading meta-heuristic algorithms. Full article
(This article belongs to the Section Biological Optimisation and Management)
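
As one concrete illustration of the dual-criteria idea, here is a toy elite-screening step that shortlists by fitness and then enforces spatial diversity via Euclidean distance. The sphere objective, the shortlist size, and the distance threshold are all stand-ins, not the IMSA formulation.

```python
import numpy as np

def elite_screening(pop, fitness, n_elite):
    """Pick elites by fitness, then keep only mutually distant ones so the
    elites can guide evolution toward diverse regions (illustrative only)."""
    shortlist = np.argsort(fitness)[: max(n_elite, len(pop) // 2)]
    elites = [shortlist[0]]                      # best individual always kept
    for idx in shortlist[1:]:
        if len(elites) == n_elite:
            break
        if min(np.linalg.norm(pop[idx] - pop[e]) for e in elites) > 0.5:
            elites.append(idx)                   # crude diversity filter
    return pop[np.array(elites)]

rng = np.random.default_rng(0)
pop = rng.normal(size=(20, 5))
fit = (pop ** 2).sum(axis=1)                     # sphere function stand-in
print(elite_screening(pop, fit, 3))
```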

12 pages, 1025 KB  
Article
Enhancing Whisper Fine-Tuning with Discrete Wavelet Transform-Based LoRA Initialization
by Liang Lan, Molin Fang, Yuxuan Chen, Daliang Wang and Wenyong Wang
Electronics 2026, 15(3), 586; https://doi.org/10.3390/electronics15030586 - 29 Jan 2026
Viewed by 181
Abstract
In low-resource automatic speech recognition (ASR) scenarios, parameter-efficient fine-tuning (PEFT) has become a crucial approach for adapting large pre-trained speech models. Although low-rank adaptation (LoRA) offers clear advantages in efficiency, stability, and deployment friendliness, its performance remains constrained because random initialization fails to capture the time–frequency structural characteristics of speech signals. To address this limitation, this work proposes a structured initialization mechanism that integrates LoRA with the discrete wavelet transform (DWT). By combining wavelet-based initialization, a multi-scale fusion mechanism, and a residual strategy, the proposed method constructs a low-rank adaptation subspace that better aligns with the local time–frequency properties of speech signals. Discrete Wavelet Transform-Based LoRA Initialization (DWTLoRA) enables LoRA modules to incorporate prior modeling of speech dynamics at the start of fine-tuning, substantially reducing the search space of ineffective directions during early training and improving convergence speed, training stability, and recognition accuracy under low-resource conditions. Experimental results on Sichuan dialect speech recognition based on the Whisper architecture demonstrate that the proposed DWTLoRA initialization outperforms standard LoRA and several PEFT baseline methods in terms of character error rate (CER) and training efficiency, confirming the critical role of signal-structure-aware initialization in low-resource ASR. Full article
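
The wavelet-structured initialization can be prototyped in a few lines with PyWavelets. In this sketch, dwt_lora_init is a hypothetical helper whose rows are shifted Daubechies decomposition filters; the shifting and normalization choices are assumptions for illustration, not the paper's DWTLoRA recipe.

```python
import numpy as np
import pywt

def dwt_lora_init(d_in: int, rank: int, wavelet: str = "db4") -> np.ndarray:
    """Hypothetical wavelet-aware init for the LoRA down-projection A:
    rows are circularly shifted low-/high-pass decomposition filters, so
    fine-tuning starts from a time-frequency-structured subspace."""
    w = pywt.Wavelet(wavelet)
    lo, hi = np.asarray(w.dec_lo), np.asarray(w.dec_hi)
    A = np.zeros((rank, d_in))
    for r in range(rank):
        filt = lo if r % 2 == 0 else hi          # alternate filter banks
        row = np.zeros(d_in)
        row[: len(filt)] = filt
        A[r] = np.roll(row, (r // 2) * (d_in // max(1, rank // 2)))
    return A / np.linalg.norm(A, axis=1, keepdims=True)

A = dwt_lora_init(d_in=512, rank=8)
B = np.zeros((512, 8))    # standard LoRA: B = 0, so the initial update is zero
print(A.shape, np.linalg.norm(B @ A))
```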

26 pages, 2823 KB  
Article
A Unified Online Assessment Framework for Pre-Fault and Post-Fault Dynamic Security
by Xin Li, Rongkun Shang, Qiao Zhao, Yaowei Zhang, Jingru Liu, Changjie Wu and Panfeng Guo
Energies 2026, 19(3), 673; https://doi.org/10.3390/en19030673 - 27 Jan 2026
Viewed by 237
Abstract
With the expansion of interconnection in power systems and the extensive adoption of phasor measurement units (PMUs), the secure operation of power systems has attracted increasing research attention. In this article, a unified online framework for pre-fault and post-fault dynamic security assessment (DSA) is proposed. First, the maximal information coefficient (MIC) and the random subspace method (RSM) are employed as feature engineering to select the key variables and enhance the diversity of the input data. Then, a deep forest (DF) regressor and classifier are used, respectively, to predict the security margin (SM) and security state (SS) during online pre-fault and post-fault DSA based on the selected variables. In pre-fault DSA, scenarios with a high SM are identified as stable, while those with a low SM are forwarded to post-fault DSA. In addition, a time self-adaptive scheme is employed to balance low response time against high prediction accuracy: it either outputs high-credibility predictions of an unstable SS or defers the decision until the end of the decision-making period, preventing unstable scenarios from being misclassified as stable. The unified framework, tested on the IEEE 39-bus system and a practical 1648-bus system modeled in PSS/E version 35, demonstrates significantly improved assessment accuracy and response times. Specifically, it achieves an average response time (ART) of 2.66 cycles for the IEEE 39-bus system and 3.13 cycles for the 1648-bus system while maintaining an accuracy exceeding 98%, surpassing widely used deep learning models. Full article
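
The two feature-engineering steps can be mocked up as follows, with scikit-learn's mutual_info_classif standing in for MIC and random forests standing in for the deep forest models; data, sizes, and thresholds are all synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier

# Toy PMU snapshots: 200 operating points x 40 variables, binary security state.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Step 1: keep the variables most informative about the security state.
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[-10:]

# Step 2: random subspace method -- several random subsets of the kept
# variables give downstream learners diverse views of the data.
subspaces = [rng.choice(keep, size=6, replace=False) for _ in range(5)]
models = [RandomForestClassifier(n_estimators=50, random_state=i).fit(X[:, s], y)
          for i, s in enumerate(subspaces)]
votes = np.mean([m.predict_proba(X[:, s])[:, 1]
                 for m, s in zip(models, subspaces)], axis=0)
print("training accuracy:", ((votes > 0.5).astype(int) == y).mean())
```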

23 pages, 1934 KB  
Article
Asymmetric Feature Weighting for Diversity-Enhanced Random Forests
by Ye Eun Kim, Seoung Yun Kim and Hyunjoong Kim
Symmetry 2026, 18(1), 73; https://doi.org/10.3390/sym18010073 - 1 Jan 2026
Viewed by 331
Abstract
Random Forest (RF) is one of the most widely used ensemble learning algorithms for classification and regression tasks. Its performance, however, depends not only on the accuracy of individual trees but also on the diversity among them. This study proposes a novel ensemble method, Heterogeneous Random Forest (HRF), which enhances ensemble diversity through adaptive and asymmetric feature weighting. Unlike conventional RF that treats all features equally during tree construction, HRF dynamically reduces the sampling probability of features that have been frequently selected—particularly those appearing near the root nodes of previous trees. This mechanism discourages repetitive feature usage and encourages a more balanced and heterogeneous ensemble structure. Simulation studies demonstrate that HRF effectively mitigates feature selection bias, increases structural diversity, and improves classification accuracy, particularly on datasets with low noise ratios and diverse feature cardinalities. Comprehensive experiments on 52 benchmark datasets further confirm that HRF achieves the highest overall performance and significant accuracy gains compared to standard ensemble methods. These results highlight that asymmetric feature weighting provides a simple yet powerful mechanism for promoting diversity and enhancing generalization in ensemble learning. Full article
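
The core mechanism, down-weighting features that recent trees split on near the root, fits in a short loop. The decay constant and depth schedule below are ad hoc assumptions; the paper's exact weighting rule may differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

n_feats, n_trees, m = X.shape[1], 25, 8
weights = np.ones(n_feats)
forest, subsets = [], []

for _ in range(n_trees):
    # Draw a feature subset with probability proportional to current weights.
    subset = rng.choice(n_feats, size=m, replace=False, p=weights / weights.sum())
    boot = rng.integers(0, len(X), size=len(X))           # bootstrap rows
    tree = DecisionTreeClassifier(max_depth=5).fit(X[boot][:, subset], y[boot])
    forest.append(tree)
    subsets.append(subset)

    # Penalize the features this tree used, most strongly near the root.
    t = tree.tree_
    depth = np.zeros(t.node_count, dtype=int)
    for node in range(t.node_count):                      # preorder: parents first
        for child in (t.children_left[node], t.children_right[node]):
            if child != -1:
                depth[child] = depth[node] + 1
    for node in range(t.node_count):
        if t.feature[node] >= 0:                          # internal split node
            weights[subset[t.feature[node]]] *= 0.5 ** (1.0 / (1 + depth[node]))

votes = np.mean([tr.predict(X[:, s]) for tr, s in zip(forest, subsets)], axis=0)
print("training accuracy:", ((votes > 0.5).astype(int) == y).mean())
```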

39 pages, 3961 KB  
Article
Traditional Machine Learning Outperforms EEGNet for Consumer-Grade EEG Emotion Recognition: A Comprehensive Evaluation with Cross-Dataset Validation
by Carlos Rodrigo Paredes Ocaranza, Bensheng Yun and Enrique Daniel Paredes Ocaranza
Sensors 2025, 25(23), 7262; https://doi.org/10.3390/s25237262 - 28 Nov 2025
Viewed by 1325
Abstract
Objective. Consumer-grade EEG devices have the potential for widespread brain–computer interface deployment but pose significant challenges for emotion recognition due to reduced spatial coverage and the variable signal quality encountered in uncontrolled deployment environments. While deep learning approaches have employed increasingly complex architectures, their efficacy on noisy consumer-grade signals and their cross-system generalizability remain unexplored. We present a comprehensive, systematic comparison of the EEGNet architecture, which has become a benchmark model for consumer-grade EEG analysis, versus traditional machine learning, examining when and why domain-specific feature engineering outperforms end-to-end learning in resource-constrained scenarios. Approach. We conducted a comprehensive within-dataset evaluation using the DREAMER dataset (23 subjects, Emotiv EPOC 14-channel) and challenging cross-dataset validation (DREAMER→SEED-VII transfer). Traditional ML employed domain-specific feature engineering (statistical, frequency-domain, and connectivity features) with random forest classification. Deep learning employed both optimized and enhanced EEGNet architectures, specifically designed for low-channel consumer EEG systems. For cross-dataset validation, we implemented progressive domain adaptation combining anatomical channel mapping, CORAL adaptation, and TCA subspace learning. Statistical validation included 345 comprehensive evaluations with fivefold cross-validation × 3 seeds × 23 subjects, Wilcoxon signed-rank tests, and Cohen’s d effect size calculations. Main results. Traditional ML achieved superior within-dataset performance (F1 = 0.945 ± 0.034 versus 0.567 for EEGNet architectures, p < 0.000001, Cohen’s d = 3.863, 67% improvement) across 345 evaluations. Cross-dataset validation demonstrated good performance (F1 = 0.619 versus 0.007) through systematic domain adaptation. Progressive improvements included anatomical channel mapping (5.8× improvement), CORAL domain adaptation (2.7× improvement), and TCA subspace learning (4.5× improvement). Feature analysis revealed that inter-channel connectivity patterns contributed 61% of the discriminative power. Traditional ML demonstrated superior computational efficiency (95% faster training, 10× faster inference) and excellent stability (CV = 0.036). Fairness validation experiments showed that the advantage of traditional ML persists even with minimal feature engineering (F1 = 0.842 vs. 0.646 for enhanced EEGNet), and robustness analysis revealed that deep learning degrades more under consumer-grade noise conditions (17% vs. <1% degradation). Significance. These findings challenge the assumption that architectural complexity universally improves biosignal processing performance in consumer-grade applications. Through the comparison of traditional ML against the EEGNet consumer-grade architecture, we highlight that domain-specific feature engineering and lightweight adaptation techniques can provide superior accuracy, stability, and practical deployment capabilities for consumer-grade EEG emotion recognition. While our empirical comparison focused on EEGNet, the underlying principles regarding data efficiency, noise robustness, and the value of domain expertise could extend to comparisons with other complex architectures facing similar constraints in further research. This comprehensive domain adaptation framework enables robust cross-system deployment, addressing critical gaps in real-world BCI applications.
Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
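
Of the three adaptation stages, CORAL has a simple closed form: whiten the source features, then re-color them with the target covariance. Below is a standard NumPy/SciPy rendition on synthetic features; it is not the authors' full progressive pipeline.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def coral(Xs: np.ndarray, Xt: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """CORrelation ALignment: match the source's second-order statistics to
    the target's (means are left untouched by plain CORAL)."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_white = Xs @ np.real(inv(sqrtm(Cs)))     # whiten source features
    return Xs_white @ np.real(sqrtm(Ct))        # re-color with target covariance

rng = np.random.default_rng(3)
Xs = rng.normal(size=(500, 14)) * 2.0 + 1.0     # source: wider spread
Xt = rng.normal(size=(500, 14))                 # target distribution
print(np.cov(coral(Xs, Xt), rowvar=False).round(1)[:3, :3])
```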

30 pages, 5334 KB  
Article
Fractal-Guided Token Pruning for Efficient Vision Transformers
by Seong Rok Kim and Minhyeok Lee
Fractal Fract. 2025, 9(12), 767; https://doi.org/10.3390/fractalfract9120767 - 25 Nov 2025
Viewed by 1630
Abstract
Vision Transformers achieve strong performance across computer vision tasks but suffer from quadratic computational complexity with respect to token count, limiting deployment in resource-constrained environments. Existing token pruning methods rely on attention scores to identify important tokens, but attention mechanisms capture query-specific relevance rather than intrinsic information content, potentially discarding tokens that carry information for subsequent layers or different downstream tasks. We propose fractal-guided token pruning, a method that leverages the correlation dimension Dcorr of token embeddings as a task-agnostic measure of geometric complexity. Our key insight is that tokens with high Dcorr span higher-dimensional manifolds in representation space, indicating complex patterns, while tokens with low Dcorr collapse to simpler structures representing redundant information. By computing a local Dcorr for each token and pruning those with the lowest values, our method retains geometrically complex tokens independent of attention-based relevance. The correlation dimension quantifies how token embeddings fill the representation space: embeddings from uniform background regions cluster tightly in low-dimensional subspaces (low Dcorr), while embeddings from complex textures or object boundaries spread across higher-dimensional manifolds (high Dcorr), reflecting their richer information content. Experiments on CIFAR-10 and CIFAR-100 with fine-tuned ViT-B/16 models show that fractal-guided pruning consistently outperforms random and norm-based pruning across all tested ratios. At forty percent pruning, fractal pruning maintains 92.26% accuracy on CIFAR-10 with only a 0.99 percentage point drop from the 93.25% baseline while achieving 1.17× speedup. Our approach provides a geometry-based criterion for token importance that complements attention-based methods and shows promising generalization between CIFAR-10 and CIFAR-100 datasets. Full article
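
A Grassberger–Procaccia-style estimate makes the geometric intuition concrete: a point set confined to a line yields a slope near 1, while a set filling the plane yields a slope near 2. The paper computes a local Dcorr per token embedding; this global sketch only illustrates the estimator.

```python
import numpy as np

def correlation_dimension(points: np.ndarray, radii: np.ndarray) -> float:
    """Slope of log C(r) vs log r, where C(r) is the fraction of point
    pairs closer than r (Grassberger-Procaccia correlation dimension)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = d[np.triu_indices(len(points), k=1)]
    C = np.array([(pairs < r).mean() for r in radii])
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

rng = np.random.default_rng(0)
line = np.column_stack([rng.uniform(size=400), np.zeros(400)])   # ~1-D set
plane = rng.uniform(size=(400, 2))                               # ~2-D set
radii = np.logspace(-2, -0.3, 10)
print(correlation_dimension(line, radii), correlation_dimension(plane, radii))
```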

18 pages, 308 KB  
Article
Comparative Analysis of Self-Labeled Algorithms for Predicting MOOC Dropout: A Case Study
by George Raftopoulos, Georgios Kostopoulos, Gregory Davrazos, Theodor Panagiotakopoulos, Sotiris Kotsiantis and Achilles Kameas
Appl. Sci. 2025, 15(22), 12025; https://doi.org/10.3390/app152212025 - 12 Nov 2025
Viewed by 513
Abstract
Massive Open Online Courses (MOOCs) have expanded global access to education but continue to struggle with high attrition rates. This study presents a comparative analysis of self-labeled Semi-Supervised Learning (SSL) algorithms for predicting learner dropout. Unlike traditional supervised models that rely solely on labeled data, self-labeled methods iteratively exploit both labeled and unlabeled instances, alleviating the scarcity of annotations in large-scale educational datasets. Using real-world MOOC data, ten self-labeled algorithms, including self-training, co-training, and tri-training variants, were evaluated across multiple labeled ratios. The experimental results show that ensemble-based methods, such as Co-training Random Forest, Co-Training by Committee, and Relevant Random Subspace co-training, achieve predictive accuracy comparable to that of fully supervised baselines even with as little as 4% labeled data. Beyond predictive performance, the findings highlight the scalability and cost-effectiveness of self-labeled SSL as a data-driven approach for enhancing learner retention in massive online learning. Full article
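
scikit-learn ships a generic self-training wrapper, shown below on synthetic data with a 4% labeled ratio. This reproduces only the simplest self-labeled scheme; the co-training and tri-training variants in the study require multiple feature views and committees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Toy MOOC-style data: 1000 learners x 10 activity features; the dropout
# label is known for only 4% of them (-1 marks unlabeled instances).
rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 10))
y_true = (X[:, 0] - 0.8 * X[:, 2] > 0).astype(int)
y = np.full(1000, -1)
labeled = rng.choice(1000, size=40, replace=False)
y[labeled] = y_true[labeled]

# Self-training: fit on labeled rows, pseudo-label confident unlabeled rows,
# refit, and repeat until no confident predictions remain.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100), threshold=0.9)
model.fit(X, y)
print("accuracy on all learners:", (model.predict(X) == y_true).mean())
```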

20 pages, 941 KB  
Article
Parameter Estimation of Weibull Distribution Using Constrained Search Space: An Application to Elevator Maintenance
by Khubab Ahmed, Huaqing Liu, Li Ke, Ray Tahir Mushtaq, Muhammad Zaman and Adnan Akhunzada
Machines 2025, 13(11), 1022; https://doi.org/10.3390/machines13111022 - 6 Nov 2025
Viewed by 621
Abstract
The Weibull distribution is widely used in reliability estimation across industries, but accurately identifying its parameters remains a challenging task. This research proposes an efficient method for estimating Weibull distribution parameters by combining the maximum likelihood method with optimization theory. First, the parameter estimation problem is formulated as an optimization problem. A constrained search space partitioning framework is introduced, leveraging parameter-specific minimum and maximum bounds for the shape, location, and scale parameters. By dividing the search space into smaller subspaces for each parameter, the method constrains the search direction, significantly reducing estimation time. To address the local optima problem common in heuristic algorithms, a randomness operator is integrated into the optimization process. The proposed constrained search space partitioning framework is implemented using a conventional g-best version of the particle swarm optimization algorithm with historical fault data. Experimental results demonstrate that the proposed scheme outperforms state-of-the-art methods and conventional optimization-based approaches in terms of estimation accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Data-Driven Fault Diagnosis for Machines and Systems)
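
A bare-bones g-best PSO over a box-constrained likelihood, shown below on synthetic failure times, conveys the optimization setup; the paper's subspace partitioning and randomness operator are omitted, and every constant here is illustrative.

```python
import numpy as np

def nll(theta, x):
    """Negative log-likelihood of the 3-parameter Weibull (shape k, scale
    lam, location loc); infeasible parameters get an infinite penalty."""
    k, lam, loc = theta
    z = x - loc
    if k <= 0 or lam <= 0 or np.any(z <= 0):
        return np.inf
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z / lam) - (z / lam) ** k)

rng = np.random.default_rng(2)
x = rng.weibull(1.8, size=200) * 3.0 + 10.0       # synthetic failure times
lo = np.array([0.1, 0.1, 0.0])                    # box bounds per parameter
hi = np.array([10.0, 20.0, x.min() - 1e-6])

n, iters = 30, 200
pos = rng.uniform(lo, hi, size=(n, 3))
vel = np.zeros((n, 3))
pbest, pval = pos.copy(), np.array([nll(p, x) for p in pos])
gbest = pbest[pval.argmin()]
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)              # respect the search box
    val = np.array([nll(p, x) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()]
print("k, lambda, loc =", gbest.round(3))
```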

10 pages, 372 KB  
Article
A Randomized Q-OR Krylov Subspace Method for Solving Nonsymmetric Linear Systems
by Gérard Meurant
Mathematics 2025, 13(12), 1953; https://doi.org/10.3390/math13121953 - 12 Jun 2025
Viewed by 834
Abstract
The most popular iterative methods for solving nonsymmetric linear systems are Krylov methods. Recently, an optimal Quasi-ORthogonal (Q-OR) method was introduced, which yields the same residual norms as the Generalized Minimum Residual (GMRES) method, provided GMRES is not stagnating. In this paper, we study how to introduce matrix sketching in this algorithm. It allows us to reduce the dimension of the problem in one of the main steps of the algorithm. Full article
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing for Applied Mathematics)
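
The sketching ingredient can be seen in isolation: multiplying a tall least-squares problem by a short Gaussian matrix makes the solve much cheaper at a modest accuracy cost. The Q-OR iteration around this step is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, s = 5000, 40, 200                    # tall system; sketch size s << m
V = rng.normal(size=(m, k))                # stand-in for a Krylov basis
b = rng.normal(size=m)

y_exact, *_ = np.linalg.lstsq(V, b, rcond=None)

S = rng.normal(size=(s, m)) / np.sqrt(s)   # Gaussian sketch
y_sketch, *_ = np.linalg.lstsq(S @ V, S @ b, rcond=None)

gap = abs(np.linalg.norm(b - V @ y_sketch) - np.linalg.norm(b - V @ y_exact))
print("relative residual gap:", gap / np.linalg.norm(b))
```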

13 pages, 10859 KB  
Article
A Lightning Very-High-Frequency Mapping DOA Method Based on L Array and 2D-MUSIC
by Chuansheng Wang, Nianwen Xiang, Zhaokun Li, Zengwei Lyu, Yu Yang and Huaifei Chen
Atmosphere 2025, 16(5), 486; https://doi.org/10.3390/atmos16050486 - 22 Apr 2025
Viewed by 1064
Abstract
Lightning Very-High-Frequency (VHF) radiation source mapping technology represents a pivotal advancement in the study of lightning discharge processes and their underlying physical mechanisms. This paper introduces a novel methodology for reconstructing lightning discharge channels by employing the Multiple Signal Classification (MUSIC) algorithm to estimate the Direction of Arrival (DOA) of lightning VHF radiation sources, specifically tailored for both non-uniform and uniform L-shaped arrays (2D-MUSIC). The proposed approach integrates the Random Sample Consensus (RANSAC) algorithm with 2D-MUSIC, thereby enhancing the precision and robustness of the reconstruction process. Initially, the array data are subjected to denoising via the Ensemble Empirical Mode Decomposition (EEMD) algorithm. Following this, the covariance matrix of the processed array data is decomposed to isolate the signal subspace, which corresponds to the signal components, and the noise subspace, which is orthogonal to the signal components. By exploiting the orthogonality between these subspaces, the method achieves an accurate estimation of the signal incidence direction, thereby facilitating the precise reconstruction of the lightning channel. To validate the feasibility of this method, comprehensive numerical simulations were conducted, revealing remarkable accuracy with elevation and azimuth angle errors both maintained below 1 degree. Furthermore, VHF non-uniform and uniform L-shaped lightning observation systems were established and deployed to analyze real lightning events occurring in 2021 and 2023. The empirical results demonstrate that the proposed method effectively reconstructs lightning channel structures across diverse L-shaped array configurations. This innovative approach significantly augments the capabilities of various broadband VHF arrays in radiation source imaging and makes a substantial contribution to the study of lightning development processes. The findings of this study underscore the potential of the proposed methodology to advance our understanding of lightning dynamics and enhance the accuracy of lightning channel reconstruction. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
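
The subspace split at the heart of MUSIC is easy to demonstrate in one dimension. The sketch below runs 1-D MUSIC on a uniform linear array with two synthetic sources; the paper's 2D-MUSIC on L-shaped arrays adds a second angle and EEMD denoising, neither of which appears here.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
M, d, snapshots = 8, 0.5, 400                 # sensors, spacing (wavelengths)
true_deg = [-20.0, 35.0]

def steering(theta_deg):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

A = np.column_stack([steering(t) for t in true_deg])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
noise = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + noise

R = X @ X.conj().T / snapshots                # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : M - 2]                      # noise subspace (smallest eigenvalues)

grid = np.arange(-90.0, 90.0, 0.2)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])          # peaks where steering is orthogonal
peaks, _ = find_peaks(spectrum)               # to the noise subspace
best = peaks[np.argsort(spectrum[peaks])[-2:]]
print("estimated DOAs (deg):", np.sort(grid[best]))
```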

27 pages, 6595 KB  
Article
Modeling Flood Susceptibility Utilizing Advanced Ensemble Machine Learning Techniques in the Marand Plain
by Ali Asghar Rostami, Mohammad Taghi Sattari, Halit Apaydin and Adam Milewski
Geosciences 2025, 15(3), 110; https://doi.org/10.3390/geosciences15030110 - 18 Mar 2025
Cited by 5 | Viewed by 2099
Abstract
Flooding is one of the most significant natural hazards in Iran, primarily due to the country’s arid and semi-arid climate, irregular rainfall patterns, and substantial changes in watershed conditions. These factors combine to make floods a frequent cause of disasters. In this case study, flood susceptibility patterns in the Marand Plain, located in the East Azerbaijan Province in northwest Iran, were analyzed using five machine learning (ML) algorithms: M5P model tree, Random SubSpace (RSS), Random Forest (RF), Bagging, and Locally Weighted Linear (LWL). The modeling process incorporated twelve meteorological, hydrological, and geographical factors affecting floods at 485 identified flood-prone points. The data were analyzed using a geographic information system, with the dataset divided into 70% for training and 30% for testing to build and validate the models. An information gain ratio and multicollinearity analysis were employed to assess the influence of various factors on flood occurrence, and flood-related variables were classified using quantile classification. The frequency ratio method was used to evaluate the significance of each factor. Model performance was evaluated using statistical measures, including the Receiver Operating Characteristic (ROC) curve. All models demonstrated robust performance, with an area under the ROC curve (AUROC) exceeding 0.90. Among the models, the LWL algorithm delivered the most accurate predictions, followed by RF, M5P, Bagging, and RSS. The LWL-generated flood susceptibility map classified 9.79% of the study area as having very high flood susceptibility, 20.73% as high, 38.51% as moderate, 29.23% as low, and 1.74% as very low. The findings of this research provide valuable insights for government agencies, local authorities, and policymakers in designing strategies to mitigate flood-related risks. This study offers a practical framework for reducing the impact of future floods through informed decision-making and risk management strategies. Full article

15 pages, 6241 KB  
Article
Modal Parameter Identification of the Improved Random Decrement Technique-Stochastic Subspace Identification Method Under Non-Stationary Excitation
by Jinzhi Wu, Jie Hu, Ming Ma, Chengfei Zhang, Zenan Ma, Chunjuan Zhou and Guojun Sun
Appl. Sci. 2025, 15(3), 1398; https://doi.org/10.3390/app15031398 - 29 Jan 2025
Viewed by 1296
Abstract
Commonly used methods for identifying modal parameters under environmental excitation assume that the unknown environmental input is a stationary white-noise sequence. For large-scale civil structures, actual environmental excitations, such as wind gusts and impact loads, usually cannot meet this condition and exhibit clearly non-stationary, non-white-noise characteristics. The theoretical basis of the stochastic subspace method is the time-domain state-space equation, which applies only to linear systems. Therefore, for non-stationary excitation, this paper proposes a stochastic subspace method based on the random decrement technique (RDT). First, the random decrement technique for non-stationary excitation is used to obtain the free-decay response of the measured signal, and then the stochastic subspace identification (SSI) method is used to identify the modal parameters. This not only improves the signal-to-noise ratio but also significantly improves computational efficiency. A non-stationary excitation is applied to a spatial grid structure model, and the RDT-SSI method is used to identify the modal parameters. The identification results show that the proposed method can identify structural modal parameters under non-stationary excitation. Applied to the actual health monitoring of a stadium grid structure, the method also obtains good identification results for frequency, damping ratio, and mode shape, while significantly improving computational efficiency. Full article
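
The random decrement step itself is only a triggered average. Below is a toy rendition on a synthetic single-mode response; the trigger level and segment length are arbitrary illustrative choices, and the SSI stage is not shown.

```python
import numpy as np

def random_decrement(signal: np.ndarray, trigger: float, seg_len: int) -> np.ndarray:
    """Average the segments that follow each up-crossing of the trigger
    level; random excitation averages out, leaving a free-decay estimate."""
    starts = np.where((signal[:-1] < trigger) & (signal[1:] >= trigger))[0]
    starts = starts[starts + seg_len < len(signal)]
    return np.mean([signal[s : s + seg_len] for s in starts], axis=0)

# Synthetic response: a 2 Hz, lightly damped mode driven by broadband noise.
fs = 100.0
t = np.arange(0, 600.0, 1 / fs)
rng = np.random.default_rng(8)
h = np.exp(-0.02 * 2 * np.pi * 2.0 * t[:500]) * np.sin(2 * np.pi * 2.0 * t[:500])
resp = np.convolve(rng.normal(size=t.size), h)[: t.size]

free_decay = random_decrement(resp, trigger=resp.std(), seg_len=400)
print(free_decay[:5])    # feed this free-decay estimate to SSI
```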

26 pages, 850 KB  
Article
Forecasting Half-Hourly Electricity Prices Using a Mixed-Frequency Structural VAR Framework
by Gaurav Kapoor, Nuttanan Wichitaksorn, Mengheng Li and Wenjun Zhang
Econometrics 2025, 13(1), 2; https://doi.org/10.3390/econometrics13010002 - 8 Jan 2025
Cited by 1 | Viewed by 2195
Abstract
Electricity price forecasting has been a topic of significant interest since the deregulation of electricity markets worldwide. The New Zealand electricity market runs primarily on renewable fuels, so weather metrics have a significant impact on electricity price and volatility. In this paper, we employ a mixed-frequency vector autoregression (MF-VAR) framework in which we propose a VAR specification of the reverse unrestricted mixed-data sampling (RU-MIDAS) model, called RU-MIDAS-VAR, to provide point forecasts of half-hourly electricity prices using several weather variables and electricity demand. A key focus of this study is the use of variational Bayes as an estimation technique and its comparison with other well-known Bayesian estimation methods. We separate forecasts for peak and off-peak periods in a day, since we are primarily concerned with forecasts for peak periods. Our forecasts, which include peak and off-peak data, show that weather variables and demand as regressors can replicate some key characteristics of electricity prices. We also find that the MF-VAR and RU-MIDAS-VAR models achieve similar forecast results. Using the LASSO, adaptive LASSO, and random subspace regression as dimension-reduction and variable-selection methods helps to improve forecasts: random subspace methods perform well for large parameter sets, while the LASSO significantly improves our forecasting results in all scenarios. Full article
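
Random subspace regression reduces to averaging fits over random regressor subsets. The helper below (random_subspace_regression, a hypothetical name) sketches the idea with plain OLS on synthetic data; subset sizes and model counts are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def random_subspace_regression(X, y, n_models=100, subset=10, seed=0):
    """Fit OLS on many random feature subsets and average the predictions,
    a simple dimension-reduction device for large regressor sets."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        cols = rng.choice(X.shape[1], size=subset, replace=False)
        models.append((cols, LinearRegression().fit(X[:, cols], y)))
    return lambda Xnew: np.mean([m.predict(Xnew[:, c]) for c, m in models], axis=0)

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 60))     # many candidate regressors (weather, demand, lags)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)
predict = random_subspace_regression(X[:400], y[:400])
print("hold-out RMSE:", np.sqrt(np.mean((predict(X[400:]) - y[400:]) ** 2)).round(3))
```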

15 pages, 376 KB  
Article
Larger Size Subspace Codes with Low Communication Overhead
by Lingling Wu and Yongfeng Niu
Symmetry 2025, 17(1), 65; https://doi.org/10.3390/sym17010065 - 2 Jan 2025
Viewed by 545
Abstract
Kötter and Kschischang proposed a coding algorithm for network error correction based on subspace codes, which, however, has a high communication overhead (100%). This paper improves upon their coding algorithm and presents a network error correction coding algorithm with lower communication overhead, comparable to that of classical random network coding. In particular, we utilize the inherent symmetry in subspace codes to optimize the construction process, leading to a more efficient algorithm. This paper also studies the construction of constant-dimension subspace codes using parallel construction and multilevel construction. By exploiting the symmetry in these methods, we generalize previous results and derive new lower bounds for constant-dimension subspace codes. Full article
(This article belongs to the Section Computer)
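
The underlying metric is straightforward to compute over GF(2): d(U, V) = dim(U + V) - dim(U ∩ V) = 2 dim(U + V) - dim U - dim V. The from-scratch rank routine below illustrates the metric only and is unrelated to the paper's specific constructions.

```python
import numpy as np

def rank_gf2(M: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                    # eliminate column c
        rank += 1
    return rank

def subspace_distance(U: np.ndarray, V: np.ndarray) -> int:
    """Kötter-Kschischang subspace distance between the row spaces of U, V."""
    return 2 * rank_gf2(np.vstack([U, V])) - rank_gf2(U) - rank_gf2(V)

U = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
V = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])
print(subspace_distance(U, V))   # dim(U+V)=3, dim U = dim V = 2 -> distance 2
```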
