Search Results (1,567)

Search Parameters:
Keywords = sparsity

36 pages, 730 KB  
Article
Activity Detection and Channel Estimation Based on Correlated Hybrid Message Passing for Grant-Free Massive Random Access
by Xiaofeng Liu, Xinrui Gong and Xiao Fu
Entropy 2025, 27(11), 1111; https://doi.org/10.3390/e27111111 (registering DOI) - 28 Oct 2025
Abstract
Massive machine-type communications (mMTC) in future 6G networks will involve a vast number of devices with sporadic traffic. Grant-free access has emerged as an effective strategy to reduce the access latency and processing overhead by allowing devices to transmit without prior permission, making accurate active user detection and channel estimation (AUDCE) crucial. In this paper, we investigate the joint AUDCE problem in wideband massive access systems. We develop an innovative channel prior model that captures the dual correlation structure of the channel using three state variables: active indication, channel supports, and channel values. By integrating Markov chains with coupled Gaussian distributions, the model effectively describes both the structural and numerical dependencies within the channel. We propose the correlated hybrid message passing (CHMP) algorithm based on Bethe free energy (BFE) minimization, which adaptively updates model parameters without requiring prior knowledge of user sparsity or channel priors. Simulation results show that the CHMP algorithm accurately detects active users and achieves precise channel estimation. Full article
(This article belongs to the Topic Advances in Sixth Generation and Beyond (6G&B))
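As a rough, self-contained illustration of the activity-detection problem behind grant-free access (not the CHMP algorithm itself, whose Markov-chain priors and message-passing updates are more involved), the following Python sketch computes per-device activity posteriors under a simple Bernoulli-Gaussian prior; all sizes and names are made up, and inter-device interference is ignored.

import numpy as np

rng = np.random.default_rng(0)
N, L, rho, sigma2 = 64, 32, 0.1, 0.01        # devices, pilot length, activity rate, noise variance

A = rng.normal(size=(L, N)) / np.sqrt(L)      # pilot (sensing) matrix, roughly unit-norm columns
active = rng.random(N) < rho                  # ground-truth activity
h = active * rng.normal(size=N)               # flat-fading channel gains
y = A @ h + np.sqrt(sigma2) * rng.normal(size=L)

# Per-device matched-filter statistic and Bernoulli-Gaussian posterior activity
# probability; a message-passing detector such as CHMP would refine these
# estimates iteratively and exploit correlations across subcarriers/antennas.
r = A.T @ y                                   # matched-filter output, one value per device
var1 = 1.0 + sigma2                           # variance of r if the device is active (unit-power channel)
llr = 0.5 * np.log(sigma2 / var1) + 0.5 * r**2 * (1/sigma2 - 1/var1)
p_active = 1 / (1 + (1 - rho) / rho * np.exp(-llr))

detected = p_active > 0.5
print("detected actives:", np.flatnonzero(detected))
print("true actives:    ", np.flatnonzero(active))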
31 pages, 34773 KB  
Article
Learning Domain-Invariant Representations for Event-Based Motion Segmentation: An Unsupervised Domain Adaptation Approach
by Mohammed Jeryo and Ahad Harati
J. Imaging 2025, 11(11), 377; https://doi.org/10.3390/jimaging11110377 (registering DOI) - 27 Oct 2025
Abstract
Event cameras provide microsecond temporal resolution, high dynamic range, and low latency by asynchronously capturing per-pixel luminance changes, thereby introducing a novel sensing paradigm. These advantages render them well-suited for high-speed applications such as autonomous vehicles and dynamic environments. Nevertheless, the sparsity of event data and the absence of dense annotations are significant obstacles to supervised learning for motion segmentation from event streams. Domain adaptation is also challenging due to the considerable domain shift between intensity images and event data. To address these challenges, we propose a two-phase cross-modality adaptation framework that transfers motion-segmentation knowledge from labeled RGB-flow data to unlabeled event streams. A dual-branch encoder extracts modality-specific motion and appearance features from RGB and optical flow in the source domain. Using reconstruction networks, event voxel grids are converted into pseudo-image and pseudo-flow modalities in the target domain. These modalities are subsequently re-encoded using frozen RGB-trained encoders. Multi-level consistency losses are imposed on features, predictions, and outputs to enforce domain alignment. Our design enables the model to acquire domain-invariant, semantically rich features through the use of shallow architectures, thereby reducing training costs and facilitating real-time inference with a lightweight prediction path. The proposed architecture, together with the hybrid loss function, effectively bridges the domain and modality gaps. We evaluate our method on two challenging benchmarks: EVIMO2, which incorporates real-world dynamics, high-speed motion, illumination variation, and multiple independently moving objects; and MOD++, which features complex object dynamics, collisions, and dense 1 kHz supervision in synthetic scenes. The proposed UDA framework achieves 83.1% and 79.4% accuracy on EVIMO2 and MOD++, respectively, outperforming existing state-of-the-art approaches, such as EV-Transfer and SHOT, by up to 3.6%. It is also lighter and faster, and delivers higher mIoU and F1 scores. Full article
(This article belongs to the Section Image and Video Processing)
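A minimal sketch of what multi-level consistency losses on features, predictions, and outputs could look like (illustrative only; the paper's exact loss terms, weights, and tensor layouts are not specified in the abstract):

import torch
import torch.nn.functional as F

def consistency_loss(feat_rgb, feat_evt, logits_rgb, logits_evt, mask_rgb, mask_evt,
                     w_feat=1.0, w_pred=1.0, w_out=1.0):
    """Illustrative multi-level alignment: feature, prediction, and output levels."""
    # Feature-level: L2 distance between intermediate representations.
    l_feat = F.mse_loss(feat_evt, feat_rgb.detach())
    # Prediction-level: KL divergence between class-probability maps.
    l_pred = F.kl_div(F.log_softmax(logits_evt, dim=1),
                      F.softmax(logits_rgb.detach(), dim=1), reduction="batchmean")
    # Output-level: agreement of the final segmentation masks.
    l_out = F.mse_loss(mask_evt, mask_rgb.detach())
    return w_feat * l_feat + w_pred * l_pred + w_out * l_out

# Toy shapes: batch 2, 16-dim features on an 8x8 grid, 2-class segmentation.
f_rgb, f_evt = torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8)
z_rgb, z_evt = torch.randn(2, 2, 8, 8), torch.randn(2, 2, 8, 8)
m_rgb, m_evt = torch.sigmoid(torch.randn(2, 1, 8, 8)), torch.sigmoid(torch.randn(2, 1, 8, 8))
print(consistency_loss(f_rgb, f_evt, z_rgb, z_evt, m_rgb, m_evt))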

21 pages, 3949 KB  
Article
Non-Iterative Shrinkage-Thresholding-Reconstructed Compressive Acquisition Algorithm for High-Dynamic GNSS Signals
by Zhuang Ma, Mingliang Deng, Hui Huang, Xiaohong Wang and Qiang Liu
Aerospace 2025, 12(11), 958; https://doi.org/10.3390/aerospace12110958 (registering DOI) - 27 Oct 2025
Abstract
Owing to the intrinsic sparsity of GNSS signals in the correlation domain, compressed sensing (CS) is attractive for the rapid acquisition of high-dynamic GNSS signals. However, the noise folding associated with compressed measurement inherently amplifies the pre-measurement noise, leading to an inevitable degradation of acquisition performance. In this paper, a novel CS-based GNSS signal acquisition algorithm is proposed, for the first time, that efficiently suppresses the amplified measurement noise at low computational complexity. Code-phase and frequency-bin compression matrices constructed offline in the correlation domain are used to obtain a real-time observation matrix, from which the correlation matrix of the GNSS signal is rapidly reconstructed via denoised back-projection and a non-iterative shrinkage-thresholding (NIST) operation. A detailed theoretical analysis and extensive numerical explorations are undertaken for the algorithm's computational complexity, the achievable acquisition performance, and the robustness of the algorithm to various Doppler frequencies. It is shown that, compared with classic orthogonal matching pursuit (OMP) reconstruction, NIST reconstruction gives rise to a 3.3 dB improvement in detection sensitivity with a computational complexity increase of <10%. Moreover, the NIST-reconstructed CS acquisition algorithm outperforms the conventional CS acquisition algorithm with frequency serial search (FSS) in terms of both acquisition performance and computational complexity. In addition, the variation in detection sensitivity is observed to be as low as 1.3 dB over a Doppler frequency range from 100 kHz to 200 kHz. Full article
(This article belongs to the Section Astronautics & Space Science)
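A toy sketch of the non-iterative idea, assuming a generic compression matrix and a heuristic noise-floor threshold (the paper's denoised back-projection and threshold design are not detailed in the abstract): back-project the compressed measurement once and apply a single soft-thresholding step instead of iterating as in ISTA or OMP.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 3                         # correlation-domain length, measurements, sparse peaks

x = np.zeros(n)                              # sparse correlation profile (code-phase peaks)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # compression matrix
y = Phi @ x + 0.05 * rng.normal(size=m)      # noisy compressed measurement

bp = Phi.T @ y                               # one back-projection to the correlation domain
tau = 3 * np.std(bp)                         # threshold set from the noise floor (heuristic)
x_hat = np.sign(bp) * np.maximum(np.abs(bp) - tau, 0.0)   # single shrinkage step

print("true peak bins:     ", np.sort(np.flatnonzero(x)))
print("recovered peak bins:", np.sort(np.flatnonzero(x_hat)))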

22 pages, 1512 KB  
Article
A Data-Driven Multi-Granularity Attention Framework for Sentiment Recognition in News and User Reviews
by Wenjie Hong, Shaozu Ling, Siyuan Zhang, Yinke Huang, Yiyan Wang, Zhengyang Li, Xiangjun Dong and Yan Zhan
Appl. Sci. 2025, 15(21), 11424; https://doi.org/10.3390/app152111424 - 25 Oct 2025
Viewed by 204
Abstract
Sentiment analysis plays a crucial role in domains such as financial news, user reviews, and public opinion monitoring, yet existing approaches face challenges when dealing with long and domain-specific texts due to semantic dilution, insufficient context modeling, and dispersed emotional signals. To address these issues, a multi-granularity attention-based sentiment analysis model built on a transformer backbone is proposed. The framework integrates sentence-level and document-level hierarchical modeling, a different-dimensional embedding strategy, and a cross-granularity contrastive fusion mechanism, thereby achieving unified representation and dynamic alignment of local and global emotional features. Static word embeddings combined with dynamic contextual embeddings enhance both semantic stability and context sensitivity, while the cross-granularity fusion module alleviates sparsity and dispersion of emotional cues in long texts, improving robustness and discriminability. Extensive experiments on multiple benchmark datasets demonstrate the effectiveness of the proposed model. On the Financial Forum Reviews dataset, it achieves an accuracy of 0.932, precision of 0.928, recall of 0.925, F1-score of 0.926, and AUC of 0.951, surpassing state-of-the-art baselines such as BERT and RoBERTa. On the Financial Product User Reviews dataset, the model obtains an accuracy of 0.902, precision of 0.898, recall of 0.894, and AUC of 0.921, showing significant improvements for short-text sentiment tasks. On the Financial News dataset, it achieves an accuracy of 0.874, precision of 0.869, recall of 0.864, and AUC of 0.895, highlighting its strong adaptability to professional and domain-specific texts. Ablation studies further confirm that the multi-granularity transformer structure, the different-dimensional embedding strategy, and the cross-granularity fusion module each contribute critically to overall performance improvements. Full article
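One possible, simplified reading of cross-granularity fusion, with a document-level vector attending over sentence-level vectors before the two views are merged (module names and dimensions are hypothetical, not the paper's architecture):

import torch
import torch.nn as nn

class CrossGranularityFusion(nn.Module):
    """Illustrative fusion of sentence-level and document-level representations."""
    def __init__(self, dim=128):
        super().__init__()
        self.query = nn.Linear(dim, dim)      # document vector attends over sentences
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, sent_repr, doc_repr):
        # sent_repr: (batch, num_sentences, dim), doc_repr: (batch, dim)
        q = self.query(doc_repr).unsqueeze(1)                   # (batch, 1, dim)
        attn = torch.softmax((q * sent_repr).sum(-1), dim=-1)   # (batch, num_sentences)
        local = (attn.unsqueeze(-1) * sent_repr).sum(1)         # attention-pooled local view
        return self.fuse(torch.cat([local, doc_repr], dim=-1))  # unified representation

fusion = CrossGranularityFusion()
out = fusion(torch.randn(4, 10, 128), torch.randn(4, 128))
print(out.shape)   # torch.Size([4, 128])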

25 pages, 739 KB  
Article
Cooperative Task Allocation for Unmanned Aerial Vehicle Swarm Using Multi-Objective Multi-Population Self-Adaptive Ant Lion Optimizer
by Chengze Li, Gengsong Li, Yi Liu, Qibin Zheng, Guoli Yang, Kun Liu and Xingchun Diao
Drones 2025, 9(11), 733; https://doi.org/10.3390/drones9110733 - 23 Oct 2025
Viewed by 240
Abstract
The rational allocation of tasks is a critical issue in enhancing the mission execution capability of unmanned aerial vehicle (UAV) swarms, which is difficult to solve exactly in polynomial time. Evolutionary-algorithm-based approaches are among the popular methods for addressing this problem. However, existing methods often suffer from insufficiently rigorous constraint settings and a focus on single-objective optimization. To address these limitations, this paper considers multiple types of constraints—including temporal constraints, time window constraints, and task integrity constraints—and establishes a model with optimization objectives comprising task reward, task execution cost, and task execution time. A multi-objective multi-population self-adaptive ant lion optimizer (MMSALO) is proposed to solve the problem. In MMSALO, a sparsity-based selection mechanism replaces roulette wheel selection, effectively enhancing the global search capability. A random boundary strategy is adopted to increase the randomness and diversity of ant movement around antlions, thereby improving population diversity. An adaptive position update strategy is employed to strengthen exploration in the early stages and exploitation in the later stages of the algorithm. Additionally, a preference-based elite selection mechanism is introduced to enhance optimization performance and improve the distribution of solutions. Finally, to handle complex multiple constraints, a double-layer encoding mechanism and an adaptive penalty strategy are implemented. Simulation experiments were conducted to validate the proposed algorithm. The results demonstrate that MMSALO exhibits superior performance in solving multi-task, multi-constraint task-allocation problems for UAV swarms. Full article
(This article belongs to the Section Artificial Intelligence in Drones (AID))
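An illustrative sparsity-based (crowding-distance-style) selection that could replace roulette-wheel selection, under the assumption that all objectives are minimized; this is a sketch of the general mechanism, not MMSALO's exact operator:

import numpy as np

def sparsity_selection(objectives, n_select):
    """Pick solutions from sparsely populated regions of the objective space.

    objectives: (pop, n_obj) array of minimised objectives (e.g., -reward, cost,
    time). Solutions far from their objective-space neighbours score higher and
    are selected first, improving the spread of the population.
    """
    pop, n_obj = objectives.shape
    sparsity = np.zeros(pop)
    for j in range(n_obj):
        order = np.argsort(objectives[:, j])
        span = objectives[order[-1], j] - objectives[order[0], j] + 1e-12
        sparsity[order[0]] = sparsity[order[-1]] = np.inf          # always keep boundary points
        sparsity[order[1:-1]] += (objectives[order[2:], j] -
                                  objectives[order[:-2], j]) / span
    return np.argsort(-sparsity)[:n_select]                        # most isolated first

objs = np.random.default_rng(2).random((20, 3))
print(sparsity_selection(objs, 5))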

23 pages, 746 KB  
Article
Modeling Viewing Engagement in Long-Form Video Through the Lens of Expectation-Confirmation Theory
by Yingjie Chen and Jin Zhang
Appl. Sci. 2025, 15(20), 11252; https://doi.org/10.3390/app152011252 - 21 Oct 2025
Viewed by 226
Abstract
Existing long-form video recommendation systems primarily rely on rating prediction or click-through rate estimation. However, the former is constrained by data sparsity, while the latter fails to capture actual viewing experiences. The accumulation of mid-playback abandonment behaviors undermines platform stickiness and commercial value. To address this issue, this paper seeks to improve viewing engagement. Grounded in Expectation-Confirmation Theory, this paper proposes the Long-Form Video Viewing Engagement Prediction (LVVEP) method. Specifically, LVVEP estimates user expectations from storyline semantics encoded by a pre-trained BERT model and refined via contrastive learning, weighted by historical engagement levels. Perceived experience is dynamically constructed using a GRU-based encoder enhanced with cross-attention and a neural tensor kernel, enabling the model to capture evolving preferences and fine-grained semantic interactions. The model parameters are optimized by jointly combining prediction loss with contrastive loss, achieving more accurate user viewing engagement predictions. Experiments conducted on real-world long-form video viewing records demonstrate that LVVEP outperforms baseline models, providing novel methodological contributions and empirical evidence to research on long-form video recommendation. The findings provide practical implications for optimizing platform management, improving operational efficiency, and enhancing the quality of information services in long-form video platforms. Full article
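A minimal sketch of a neural tensor interaction between an "expectation" vector and a "perceived experience" vector, as one way such a kernel can score their fine-grained match (dimensions, slice count, and the output head are assumptions, not LVVEP's exact design):

import torch
import torch.nn as nn

class NeuralTensorKernel(nn.Module):
    """Illustrative bilinear (tensor) interaction producing an engagement score."""
    def __init__(self, dim=64, slices=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)  # bilinear slices
        self.V = nn.Linear(2 * dim, slices)                          # linear interaction
        self.out = nn.Linear(slices, 1)                              # engagement score

    def forward(self, expect, perceive):
        # expect, perceive: (batch, dim)
        bilinear = torch.einsum("bi,kij,bj->bk", expect, self.W, perceive)
        linear = self.V(torch.cat([expect, perceive], dim=-1))
        return self.out(torch.tanh(bilinear + linear)).squeeze(-1)

ntk = NeuralTensorKernel()
print(ntk(torch.randn(8, 64), torch.randn(8, 64)).shape)   # torch.Size([8])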

20 pages, 18957 KB  
Article
Multi-Modal Data Fusion for 3D Object Detection Using Dual-Attention Mechanism
by Mengying Han, Benlan Shen and Jiuhong Ruan
Sensors 2025, 25(20), 6360; https://doi.org/10.3390/s25206360 - 14 Oct 2025
Viewed by 533
Abstract
To address the issue of missing feature information for small objects caused by the sparsity and irregularity of point clouds, as well as the poor detection performance on small objects due to their weak feature representation, this paper proposes a multi-modal 3D object detection method based on an improved PointPillars framework. First, LiDAR point clouds are fused with camera images at the data level, incorporating 2D semantic information to enhance small-object feature representation. Second, a Pillar-wise Channel Attention (PCA) module is introduced to emphasize critical features before converting pillar features into pseudo-image representations. Additionally, a Spatial Attention Module (SAM) is embedded into the backbone network to enhance spatial feature representation. Experiments on the KITTI dataset show that, compared with the baseline PointPillars, the proposed method significantly improves small-object detection performance. Specifically, under the bird’s-eye view (BEV) evaluation metrics, the Average Precision (AP) for pedestrians and cyclists increases by 7.06% and 3.08%, respectively; under the 3D evaluation metrics, these improvements are 4.36% and 2.58%. Compared with existing methods, the improved model also achieves relatively higher accuracy in detecting small objects. Visualization results further demonstrate the enhanced detection capability of the proposed method for small objects with different difficulty levels. Overall, the proposed approach effectively improves 3D object detection performance, particularly for small objects, in complex autonomous driving scenarios. Full article
(This article belongs to the Section Sensing and Imaging)
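An illustrative squeeze-and-excitation-style pillar-wise channel attention, applied to pillar features before they are scattered into the pseudo-image; the paper's exact PCA module may differ, and the shapes below are assumptions:

import torch
import torch.nn as nn

class PillarChannelAttention(nn.Module):
    """Illustrative channel re-weighting of pillar features."""
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, pillar_feats):
        # pillar_feats: (num_pillars, max_points, channels)
        squeeze = pillar_feats.mean(dim=1)             # per-pillar channel descriptor
        weights = self.fc(squeeze).unsqueeze(1)        # (num_pillars, 1, channels)
        return pillar_feats * weights                  # emphasize informative channels

pca = PillarChannelAttention()
print(pca(torch.randn(100, 32, 64)).shape)             # torch.Size([100, 32, 64])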

38 pages, 913 KB  
Article
Towards the Adoption of Recommender Systems in Online Education: A Framework and Implementation
by Alex Martínez-Martínez, Águeda Gómez-Cambronero, Raul Montoliu and Inmaculada Remolar
Big Data Cogn. Comput. 2025, 9(10), 259; https://doi.org/10.3390/bdcc9100259 - 14 Oct 2025
Viewed by 468
Abstract
The rapid expansion of online education has generated large volumes of learner interaction data, highlighting the need for intelligent systems capable of transforming this information into personalized guidance. Educational Recommender Systems (ERS) represent a key application of big data analytics and machine learning, offering adaptive learning pathways that respond to diverse student needs. For widespread adoption, these systems must align with pedagogical principles while ensuring transparency, interpretability, and seamless integration into Learning Management Systems (LMS). This paper introduces a comprehensive framework and implementation of an ERS designed for platforms such as Moodle. The system integrates big data processing pipelines to support scalability, real-time interaction, and multi-layered personalization, including data collection, preprocessing, recommendation generation, and retrieval. A detailed use case demonstrates its deployment in a real educational environment, underlining both technical feasibility and pedagogical value. Finally, the paper discusses challenges such as data sparsity, learner model complexity, and evaluation of effectiveness, offering directions for future research at the intersection of big data technologies and digital education. By bridging theoretical models with operational platforms, this work contributes to sustainable and data-driven personalization in online learning ecosystems. Full article

19 pages, 1396 KB  
Article
Sparse Keyword Data Analysis Using Bayesian Pattern Mining
by Sunghae Jun
Computers 2025, 14(10), 436; https://doi.org/10.3390/computers14100436 - 14 Oct 2025
Viewed by 291
Abstract
Keyword data analysis aims to extract and interpret meaningful relationships from large collections of text documents. A major challenge in this process arises from the extreme sparsity of document–keyword matrices, where the majority of elements are zeros due to zero inflation. To address this issue, this study proposes a probabilistic framework called Bayesian Pattern Mining (BPM), which integrates Bayesian inference into association rule mining (ARM). The proposed method estimates both the expected values and credible intervals of interestingness measures such as confidence and lift, providing a probabilistic evaluation of keyword associations. Experiments conducted on 9436 quantum computing patent documents, from which 175 representative keywords were extracted, demonstrate that BPM yields more stable and interpretable associations than conventional ARM. By incorporating credible intervals, BPM reduces the risk of biased decisions under sparsity and enhances the reliability of keyword-based technology analysis, offering a rigorous approach for knowledge discovery in zero-inflated text data. Full article
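A small sketch of the Bayesian idea: treating rule confidence as a Beta posterior gives an expected value and a credible interval rather than a bare ratio, which is what makes sparse co-occurrence counts easier to judge (the prior parameters and counts below are illustrative, not BPM's exact formulation):

from scipy import stats

def bayesian_confidence(n_xy, n_x, alpha=1.0, beta=1.0, level=0.95):
    """Illustrative Bayesian estimate of rule confidence P(Y | X).

    n_x: documents containing antecedent keyword X; n_xy: documents containing
    both X and Y. With a Beta(alpha, beta) prior, the posterior of the
    confidence is Beta(alpha + n_xy, beta + n_x - n_xy).
    """
    post = stats.beta(alpha + n_xy, beta + n_x - n_xy)
    lo, hi = post.interval(level)
    return post.mean(), (lo, hi)

# Sparse example: keyword X appears in 12 documents, X and Y co-occur in 5.
mean, ci = bayesian_confidence(n_xy=5, n_x=12)
print(f"confidence ~ {mean:.3f}, 95% credible interval = ({ci[0]:.3f}, {ci[1]:.3f})")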

19 pages, 2435 KB  
Article
A Lesion-Aware Patch Sampling Approach with EfficientNet3D-UNet for Robust Multiple Sclerosis Lesion Segmentation
by Hind Almaaz and Samia Dardouri
J. Imaging 2025, 11(10), 361; https://doi.org/10.3390/jimaging11100361 - 13 Oct 2025
Viewed by 325
Abstract
Accurate segmentation of multiple sclerosis (MS) lesions from 3D MRI scans is essential for diagnosis, disease monitoring, and treatment planning. However, this task remains challenging due to the sparsity, heterogeneity, and subtle appearance of lesions, as well as the difficulty in obtaining high-quality annotations. In this study, we propose EfficientNet3D-UNet, a deep learning framework that combines compound-scaled MBConv3D blocks with a lesion-aware patch sampling strategy to improve volumetric segmentation performance across multi-modal MRI sequences (FLAIR, T1, and T2). The model was evaluated against a conventional 3D U-Net baseline using standard metrics including Dice similarity coefficient, precision, recall, accuracy, and specificity. On a held-out test set, EfficientNet3D-UNet achieved a Dice score of 48.39%, precision of 49.76%, and recall of 55.41%, outperforming the baseline 3D U-Net, which obtained a Dice score of 31.28%, precision of 32.48%, and recall of 43.04%. Both models reached an overall accuracy of 99.14%. Notably, EfficientNet3D-UNet also demonstrated faster convergence and reduced overfitting during training. These results highlight the potential of EfficientNet3D-UNet as a robust and computationally efficient solution for automated MS lesion segmentation, offering promising applicability in real-world clinical settings. Full article
(This article belongs to the Section Medical Imaging)
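A minimal sketch of lesion-aware patch sampling, biasing patch centres toward lesion voxels so that sparse lesions appear in most training patches (the patch size, positive ratio, and clipping policy are assumptions, not the paper's exact strategy):

import numpy as np

def lesion_aware_patches(volume, lesion_mask, patch=32, n_patches=8, pos_ratio=0.7,
                         rng=np.random.default_rng(3)):
    """Illustrative sampling: most patches are centred on lesion voxels, the rest
    are drawn uniformly, so lesions are not drowned out by background."""
    lesion_idx = np.argwhere(lesion_mask)
    half, shape = patch // 2, np.array(volume.shape)
    patches = []
    for _ in range(n_patches):
        if len(lesion_idx) and rng.random() < pos_ratio:
            centre = lesion_idx[rng.integers(len(lesion_idx))]        # lesion-centred
        else:
            centre = rng.integers(half, shape - half)                 # random background
        centre = np.clip(centre, half, shape - half)                  # keep patch inside volume
        z, y, x = centre
        patches.append(volume[z-half:z+half, y-half:y+half, x-half:x+half])
    return np.stack(patches)

vol = np.random.rand(96, 96, 96).astype(np.float32)
mask = np.zeros_like(vol, dtype=bool); mask[40:44, 50:53, 60:62] = True
print(lesion_aware_patches(vol, mask).shape)          # (8, 32, 32, 32)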

24 pages, 7771 KB  
Article
Cross-Domain OTFS Detection via Delay–Doppler Decoupling: Reduced-Complexity Design and Performance Analysis
by Mengmeng Liu, Shuangyang Li, Baoming Bai and Giuseppe Caire
Entropy 2025, 27(10), 1062; https://doi.org/10.3390/e27101062 - 13 Oct 2025
Viewed by 261
Abstract
In this paper, a reduced-complexity cross-domain iterative detection for orthogonal time frequency space (OTFS) modulation is proposed that exploits channel properties in both time and delay–Doppler domains. Specifically, we first show that in the time-domain effective channel, the path delay only introduces interference among samples in adjacent time slots, while the Doppler becomes a phase term that does not affect the channel sparsity. This investigation indicates that the effects of delay and Doppler can be decoupled and treated separately. This “band-limited” matrix structure further motivates us to apply a reduced-size linear minimum mean square error (LMMSE) filter to eliminate the effect of delay in the time domain, while exploiting the cross-domain iteration for minimizing the effect of Doppler by noticing that the time and Doppler are a Fourier dual pair. Furthermore, we apply eigenvalue decomposition to the reduced-size LMMSE estimator, which makes the computational complexity independent of the number of cross-domain iterations, thus significantly reducing the computational complexity. The bias evolution and variance evolution are derived to evaluate the average MSE performance of the proposed scheme, which shows that the proposed estimators suffer from only negligible estimation bias in both time and DD domains. Particularly, the state (MSE) evolution is compared with bounds to verify the effectiveness of the proposed scheme. Simulation results demonstrate that the proposed scheme achieves almost the same error performance as the optimal detection, but only requires a reduced complexity. Full article
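A toy sketch of the complexity-saving trick: precompute the eigendecomposition of the (banded) channel Gram matrix once, so each cross-domain iteration only rescales eigenvalues when applying the reduced-size LMMSE filter (the sizes and variances below are made up, and the real scheme operates on the OTFS time-domain blocks):

import numpy as np

rng = np.random.default_rng(4)
n = 8                                            # reduced block size (adjacent time slots)
H = np.diag(rng.normal(size=n)) + np.diag(rng.normal(size=n-1), k=-1)   # banded delay channel
x = rng.normal(size=n)
y = H @ x + 0.1 * rng.normal(size=n)

# Eigendecomposition of H^H H is computed once; the per-iteration LMMSE filter
# (H^H H + sigma2 I)^{-1} H^H y then only rescales the eigenvalues, so the cost
# does not grow with the number of cross-domain iterations.
G = H.conj().T @ H
lam, U = np.linalg.eigh(G)                       # G = U diag(lam) U^H
Hty = H.conj().T @ y

for sigma2 in (0.1, 0.05, 0.01):                 # effective noise/prior variance per iteration
    x_hat = U @ ((U.conj().T @ Hty) / (lam + sigma2))
    print(f"sigma2={sigma2:.2f}  MSE={np.mean((x_hat - x)**2):.4f}")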

17 pages, 1106 KB  
Article
Calibrated Global Logit Fusion (CGLF) for Fetal Health Classification Using Cardiotocographic Data
by Mehret Ephrem Abraha and Juntae Kim
Electronics 2025, 14(20), 4013; https://doi.org/10.3390/electronics14204013 - 13 Oct 2025
Viewed by 242
Abstract
Accurate detection of fetal distress from cardiotocography (CTG) is clinically critical but remains subjective and error-prone. In this research, we present a leakage-safe Calibrated Global Logit Fusion (CGLF) framework that couples TabNet’s sparse, attention-based feature selection with XGBoost’s gradient-boosted rules and fuses their class probabilities through global logit blending followed by per-class vector temperature calibration. Class imbalance is addressed with SMOTE–Tomek for TabNet and one XGBoost stream (XGB–A), and class-weighted training for a second stream (XGB–B). To prevent information leakage, all preprocessing, resampling, and weighting are fitted only on the training split within each outer fold. Out-of-fold (OOF) predictions from the outer-train split are then used to optimize blend weights and fit calibration parameters, which are subsequently applied once to the corresponding held-out outer-test fold. Our calibration-guided logit fusion (CGLF) matches top-tier discrimination on the public Fetal Health dataset while producing more reliable probability estimates than strong standalone baselines. Under nested cross-validation, CGLF delivers comparable AUROC and overall accuracy to the best tree-based model, with visibly improved calibration and slightly lower balanced accuracy in some splits. We also provide interpretability and overfitting checks via TabNet sparsity, feature stability analysis, and sufficiency (k95) curves. Finally, threshold tuning under a balanced-accuracy floor preserves sensitivity to pathological cases, aligning operating points with risk-aware obstetric decision support. Overall, CGLF is a calibration-centric, leakage-controlled CTG pipeline that is interpretable and suited to threshold-based clinical deployment. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
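A minimal sketch of the fusion-plus-calibration step: blend the two models' class logits with a single global weight, then apply per-class (vector) temperature scaling; the weight and temperatures shown are placeholders for values that would be fitted on out-of-fold predictions, not the paper's fitted parameters:

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse_and_calibrate(logits_tabnet, logits_xgb, w, temps):
    """Illustrative CGLF-style step: global logit blending, then per-class temperatures."""
    blended = w * logits_tabnet + (1.0 - w) * logits_xgb   # global logit blending
    return softmax(blended / temps)                         # per-class (vector) temperature scaling

rng = np.random.default_rng(5)
logits_a = rng.normal(size=(6, 3))     # e.g., TabNet logits for 3 fetal-health classes
logits_b = rng.normal(size=(6, 3))     # e.g., XGBoost logits mapped to the same scale
temps = np.array([1.2, 0.9, 1.5])      # per-class temperatures (assumed, normally fitted OOF)
probs = fuse_and_calibrate(logits_a, logits_b, w=0.6, temps=temps)
print(probs.round(3), probs.sum(axis=1))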

24 pages, 829 KB  
Article
Transformer with Adaptive Sparse Self-Attention for Short-Term Photovoltaic Power Generation Forecasting
by Xingfa Zi, Feiyi Liu, Mingyang Liu and Yang Wang
Electronics 2025, 14(20), 3981; https://doi.org/10.3390/electronics14203981 - 11 Oct 2025
Viewed by 250
Abstract
Accurate short-term photovoltaic (PV) power generation forecasting is critical for the stable integration of renewable energy into the grid. This study proposes a Transformer model enhanced with an adaptive sparse self-attention (ASSA) mechanism for PV power forecasting. The ASSA framework employs a dual-branch attention structure that combines sparse and dense attention paths with adaptive weighting to effectively filter noise while preserving essential spatiotemporal features. This design addresses the critical issues of computational redundancy and noise amplification in standard self-attention by adaptively filtering irrelevant interactions while maintaining global dependencies in Transformer-based PV forecasting. In addition, a deep feedforward network and a feature refinement feedforward network (FRFN) adapted from the ASSA–Transformer are incorporated to further improve feature extraction. The proposed algorithms are evaluated using time-series data from the Desert Knowledge Australia Solar Centre (DKASC), with input features including temperature, relative humidity, and other environmental variables. Comprehensive experiments demonstrate that the ASSA models’ accuracy in short-term PV power forecasting increases with longer forecast horizons. For 1 h ahead forecasts, it achieves an R2 of 0.9115, outperforming all other models. Under challenging rainfall conditions, the model maintains a high prediction accuracy, with an R2 of 0.7463, a mean absolute error of 0.4416, and a root mean square error of 0.6767, surpassing all compared models. The ASSA attention mechanism enhances the accuracy and stability in short-term PV power forecasting with minimal computational overhead, increasing the training time by only 1.2% compared to that for the standard Transformer. Full article
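One plausible reading of a dual-branch adaptive sparse self-attention layer, mixing a dense softmax path with a sparse ReLU-squared path via learnable weights; this is an interpretation for illustration, not the paper's exact ASSA formulation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveSparseSelfAttention(nn.Module):
    """Illustrative dual-branch attention with adaptive branch weighting."""
    def __init__(self, dim=32):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.mix = nn.Parameter(torch.zeros(2))    # adaptive branch weights
        self.scale = dim ** -0.5

    def forward(self, x):                          # x: (batch, seq, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) * self.scale
        dense = F.softmax(scores, dim=-1)                        # keeps all interactions
        sparse = F.relu(scores) ** 2                             # suppresses weak/noisy links
        sparse = sparse / (sparse.sum(-1, keepdim=True) + 1e-6)  # renormalise
        w = torch.softmax(self.mix, dim=0)
        return (w[0] * dense + w[1] * sparse) @ v

attn = AdaptiveSparseSelfAttention()
print(attn(torch.randn(2, 24, 32)).shape)          # torch.Size([2, 24, 32])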

18 pages, 867 KB  
Article
Multi-Form Information Embedding Deep Neural Network for User Preference Mining
by Xuna Wang
Mathematics 2025, 13(20), 3241; https://doi.org/10.3390/math13203241 - 10 Oct 2025
Viewed by 361
Abstract
User preference mining uses rating data, item content, or comments to learn additional knowledge that supports the prediction task. For rating data, the usual approach is to take the rating matrix as the data source and collaborative filtering as the algorithm for predicting user preferences. Item content and comments are usually used in sentiment analysis or as auxiliary information for other algorithms. However, factors such as data sparsity, category diversity, and the numerical processing requirements of aspect-based sentiment analysis affect model performance. This paper proposes a hybrid method that uses a deep neural network as the basic structure, considers the complementarity of text and numeric data, and integrates numeric and text embeddings into the model. To construct the text-based embedding, the method extracts a summary of each text review and uses Doc2vec to convert the summary into a multi-dimensional vector. Experiments on two Amazon product datasets show that the proposed model consistently outperforms other baseline models, achieving an average reduction of 15.72% in RMSE, 24.13% in MAE, and 28.91% in MSE. These results confirm the effectiveness of the proposed method for learning user preferences. Full article
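A small sketch of the text-based embedding step with gensim's Doc2Vec, concatenated with a numeric feature as a stand-in for the model's numeric embedding (the summaries, vector size, and rating column are toy assumptions, not the paper's pipeline):

import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy review summaries (in practice, extracted from the full review texts first).
summaries = ["battery lasts long and charges fast",
             "screen cracked after a week, poor quality",
             "great value, would buy again"]
docs = [TaggedDocument(words=s.split(), tags=[i]) for i, s in enumerate(summaries)]
d2v = Doc2Vec(docs, vector_size=16, min_count=1, epochs=40)

ratings = np.array([[5.0], [1.0], [4.0]])                       # numeric signal per review
text_vecs = np.vstack([d2v.infer_vector(s.split()) for s in summaries])
features = np.hstack([text_vecs, ratings])                      # multi-form input to the DNN
print(features.shape)                                           # (3, 17)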

17 pages, 2169 KB  
Article
Identification of Missouri Precipitation Zones by Complex Wavelet Analysis
by Jason J. Senter and Anthony R. Lupo
Meteorology 2025, 4(4), 29; https://doi.org/10.3390/meteorology4040029 - 10 Oct 2025
Viewed by 246
Abstract
Understanding the intricate dynamics of precipitation patterns is essential for effective water resource management and climate adaptation in Missouri. Existing analyses of Missouri’s climate variability lack the spatial granularity needed to capture nuanced variations across climate divisions. The Missouri historical agricultural weather database, an open-source tool that contains key weather measurements gathered at Mesonet stations across the state, is beginning to fill in the data sparsity gaps. The aim of this study is to identify core patterns associated with ENSO in the global wavelet output. Using a continuous wavelet transform analysis on data from 32 stations (2000–2024), we identified significant precipitation cycles. Where previous studies used just four Automated Surface Observing Systems (ASOSs) located at airports across Missouri to characterize climate variability, this study uses an additional 28 from the Missouri Mesonet. The use of a global wavelet power spectrum analysis reveals that precipitation patterns, with the exception of southeast Missouri, have a distinct annual cycle. Furthermore, separating the stations based on the significance of their ENSO (El Niño–Southern Oscillation) signal results in the identification of three precipitation zones: an annual, ENSO, and residual zone. This spatial data analysis reveals that the Missouri climate division boundaries broadly capture the three precipitation zones found in this study. Additionally, the results suggest a corridor in central Missouri where precipitation is particularly sensitive to an ENSO signal. These findings provide critical insights for improved water resource management and climate adaptation strategies. Full article
(This article belongs to the Special Issue Early Career Scientists' (ECS) Contributions to Meteorology (2025))
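A minimal illustration of a global wavelet power spectrum with PyWavelets on a synthetic monthly series containing an annual cycle; the station data, wavelet choice, and significance testing used in the study are more involved:

import numpy as np
import pywt

# Synthetic monthly precipitation with an annual cycle plus noise (25 years).
months = np.arange(25 * 12)
precip = 3.0 + 1.5 * np.sin(2 * np.pi * months / 12) \
         + np.random.default_rng(6).normal(0, 0.8, months.size)

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(precip, scales, "morl", sampling_period=1.0)  # 1 month per sample
power = np.abs(coeffs) ** 2
global_power = power.mean(axis=1)            # time-averaged (global) wavelet spectrum
periods = 1.0 / freqs                        # period in months

peak = periods[np.argmax(global_power)]
print(f"dominant period ~ {peak:.1f} months")  # expect a peak near the 12-month annual cycle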
