Search Results (9,143)

Search Parameters:
Keywords = deep features extraction

22 pages, 771 KB  
Review
Recent Advances and Application of Machine Learning for Protein–Protein Interaction Prediction in Rice: Challenges and Future Perspectives
by Sarah Bernard Merumba, Habiba Omar Ahmed, Dong Fu and Pingfang Yang
Proteomes 2025, 13(4), 54; https://doi.org/10.3390/proteomes13040054 (registering DOI) - 27 Oct 2025
Abstract
Protein–protein interactions (PPIs) are central to understanding the complex molecular processes of plant growth, disease resistance, and stress responses. Machine learning (ML) has recently emerged as a powerful tool for predicting and analyzing PPIs, offering insights complementary to traditional experimental approaches. It can also account for proteoforms (distinct molecular variants of proteins arising from alternative splicing, genetic variation, and modifications), which can significantly influence PPI dynamics and specificity in rice. This review presents a comprehensive summary of ML-based methods for PPI prediction in rice (Oryza sativa), covering recent developments in algorithms, feature extraction, and computational resources. We present applications of these models in candidate gene discovery, annotation of uncharacterized proteins, identification of plant–pathogen interactions, and precision breeding. Case studies demonstrate the utility of ML-based methods in improving rice resistance to abiotic and biotic stresses. Additionally, this review highlights key challenges, such as limited data and model generalizability, and future directions, such as multi-omics integration, deep learning, and artificial intelligence (AI). It thus provides a roadmap for researchers aiming to use ML to generate predictive and mechanistic insights into rice PPI networks and thereby support crop improvement programs.
(This article belongs to the Special Issue Plant Genomics and Proteomics)
14 pages, 555 KB  
Article
A Symmetric Multiscale Detail-Guided Attention Network for Cardiac MR Image Semantic Segmentation
by Hengqi Hu, Bin Fang, Bin Duo, Xuekai Wei, Jielu Yan, Weizhi Xian and Dongfen Li
Symmetry 2025, 17(11), 1807; https://doi.org/10.3390/sym17111807 (registering DOI) - 27 Oct 2025
Abstract
Cardiac medical image segmentation can advance healthcare and embedded vision systems. In this paper, a symmetric semantic segmentation architecture for cardiac magnetic resonance (MR) images based on a symmetric multiscale detail-guided attention network is presented. In this model, detailed information and multiscale attention maps can be exploited more efficiently. A symmetric encoder and decoder are used to generate high-dimensional semantic feature maps and segmentation masks, respectively. First, a series of densely connected residual blocks is introduced for extracting high-dimensional semantic features. Second, an asymmetric detail-guided module is proposed. In this module, a feature pyramid extracts detailed information and generates detailed feature maps that guide the model during the training phase; these maps are used to extract deep multiscale features and to compute a detail loss against the corresponding encoder semantic features. Third, a series of multiscale upsampling attention blocks symmetric to the encoder is introduced in the decoder. For each upsampling attention block, feature fusion is first performed on the previous-level low-resolution features and the symmetric skip connection of the same layer, and then spatial and channel attention are used to enhance the features. Image gradients of the input images are also introduced at the end of the decoder. Finally, the model is trained with a detail loss and a segmentation loss to obtain the predicted segmentation masks. Our method demonstrates outstanding performance on a public cardiac MR image dataset, achieving strong results for endocardial and epicardial segmentation of the left ventricle (LV).
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
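The loss design described above pairs a segmentation objective with a detail-guidance term. As a rough illustration (not the authors' implementation), the following NumPy sketch combines a soft Dice segmentation loss with a mean-squared detail loss; the balancing weight lam and all tensor shapes are hypothetical.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted probability mask and a binary target."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def detail_loss(encoder_feat, detail_feat):
    """Mean-squared discrepancy between encoder features and detail-guidance features."""
    return np.mean((encoder_feat - detail_feat) ** 2)

def total_loss(pred_mask, gt_mask, encoder_feat, detail_feat, lam=0.4):
    """Weighted sum of the segmentation term and the detail-guidance term.

    lam is a hypothetical balancing weight; the paper does not state its value here.
    """
    return dice_loss(pred_mask, gt_mask) + lam * detail_loss(encoder_feat, detail_feat)

# Toy usage with random arrays standing in for network outputs.
rng = np.random.default_rng(0)
pred = rng.random((1, 128, 128))
gt = (rng.random((1, 128, 128)) > 0.5).astype(float)
enc = rng.standard_normal((1, 64, 32, 32))
det = rng.standard_normal((1, 64, 32, 32))
print(total_loss(pred, gt, enc, det))
```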
19 pages, 974 KB  
Article
Short-Duration Monofractal Signals for Heart Failure Characterization Using CNN-ELM Models
by Juan L. López, José A. Vásquez-Coronel, David Morales-Salinas, Daniel Toral Acosta, Romeo Selvas Aguilar and Ricardo Chapa Garcia
Appl. Sci. 2025, 15(21), 11453; https://doi.org/10.3390/app152111453 (registering DOI) - 27 Oct 2025
Abstract
Monofractal analysis offers a promising framework for characterizing cardiac dynamics, particularly in the early detection of heart failure. However, most existing approaches rely on long-duration physiological signals and do not explore the classification of disease severity. In this study, we propose a hybrid CNN-ELM model trained exclusively on synthetic monofractal time series of short length (128 to 512 samples), aiming to assess its ability to distinguish between healthy individuals and varying degrees of heart failure defined by the NYHA functional classification. Our results show that Hurst exponent distributions reflect the progressive loss of complexity in cardiac rhythms as heart failure severity increases. The model successfully classified both binary (healthy vs. sick) and multiclass (NYHA I–IV) scenarios by grouping Hurst exponent values (H = 0.1 to H = 0.9) into clinical categories, achieving peak accuracy ranges of 97.3–98.9% for binary classification and 96.2–98.8% for multiclass classification across signal lengths of 128, 256, and 512 samples. Importantly, the CNN-ELM architecture demonstrated fast training times and robust generalization, outperforming previous approaches based solely on support vector machines. These findings highlight the clinical potential of monofractal indices as non-invasive biomarkers of cardiovascular health and support the use of short synthetic signals for scalable, low-cost screening applications. Future work will extend this framework to multifractal and real-world clinical data and explore its integration into intelligent diagnostic systems.
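The ELM part of a CNN-ELM pipeline is what allows fast training: hidden weights are random and fixed, and only the output layer is solved in closed form. Below is a generic extreme-learning-machine head in NumPy, assuming hypothetical 64-dimensional CNN features; it sketches the general technique, not the authors' model.

```python
import numpy as np

class ELMHead:
    """Minimal extreme-learning-machine classifier head.

    Hidden weights are random and fixed; only the output weights are solved
    in closed form via a least-squares fit, which is what makes ELM training fast.
    """
    def __init__(self, n_features, n_hidden, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_features, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.n_classes = n_classes
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        H = self._hidden(X)                  # (n_samples, n_hidden)
        T = np.eye(self.n_classes)[y]        # one-hot targets
        self.beta = np.linalg.pinv(H) @ T    # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage: features stand in for CNN embeddings of short monofractal windows.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 64))
y = rng.integers(0, 2, size=200)
print(ELMHead(64, 128, 2).fit(X, y).predict(X[:5]))
```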
26 pages, 1617 KB  
Article
MemRoadNet: Human-like Memory Integration for Free Road Space Detection
by Sidra Shafiq, Abdullah Aman Khan and Jie Shao
Sensors 2025, 25(21), 6600; https://doi.org/10.3390/s25216600 (registering DOI) - 27 Oct 2025
Abstract
Detecting available road space is a fundamental task for autonomous driving vehicles, requiring robust image feature extraction methods that operate reliably across diverse sensor-captured scenarios. However, existing approaches process each input independently without leveraging Accumulated Experiential Knowledge (AEK), limiting their adaptability and reliability. To explore the impact of AEK, we introduce MemRoadNet, a Memory-Augmented (MA) semantic segmentation framework that integrates human-inspired cognitive architectures with deep-learning models for free road space detection. Our approach combines an InternImage-XL backbone with a UPerNet decoder and a Human-like Memory Bank system implementing episodic, semantic, and working memory subsystems. The memory system stores road experiences with emotional valences based on segmentation performance, enabling intelligent retrieval and integration of relevant historical patterns during training and inference. Experimental validation on the KITTI road, Cityscapes, and R2D benchmarks demonstrates that our single-modality RGB approach achieves performance competitive with complex multimodal systems while maintaining computational efficiency, achieving top performance among single-modality methods. The MA framework represents a significant advancement in sensor-based computer vision systems, bridging computational efficiency and segmentation quality for autonomous driving applications.
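To illustrate the retrieval idea behind a memory bank that stores experiences with valences, here is a minimal NumPy sketch based on cosine similarity; the single flat store and the scalar valence field are simplifications of the paper's episodic/semantic/working-memory design, and all names are hypothetical.

```python
import numpy as np

class MemoryBank:
    """Toy episodic memory: stores feature embeddings with a scalar 'valence'
    (e.g., past segmentation quality) and retrieves the most similar entries."""
    def __init__(self):
        self.keys, self.valences = [], []

    def store(self, embedding, valence):
        self.keys.append(embedding / (np.linalg.norm(embedding) + 1e-8))
        self.valences.append(valence)

    def retrieve(self, query, top_k=3):
        if not self.keys:
            return []
        q = query / (np.linalg.norm(query) + 1e-8)
        sims = np.stack(self.keys) @ q              # cosine similarity to each stored key
        order = np.argsort(sims)[::-1][:top_k]
        return [(int(i), float(sims[i]), self.valences[i]) for i in order]

# Toy usage: embeddings stand in for pooled road-scene features.
rng = np.random.default_rng(0)
bank = MemoryBank()
for _ in range(10):
    bank.store(rng.standard_normal(256), valence=float(rng.random()))
print(bank.retrieve(rng.standard_normal(256)))
```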
22 pages, 979 KB  
Article
Multi-Modal Semantic Fusion for Smart Contract Vulnerability Detection in Cloud-Based Blockchain Analytics Platforms
by Xingyu Zeng, Qiaoyan Wen and Sujuan Qin
Electronics 2025, 14(21), 4188; https://doi.org/10.3390/electronics14214188 (registering DOI) - 27 Oct 2025
Abstract
With the growth of trusted computing demand for big data analysis, cloud computing platforms are reshaping trusted data infrastructure by integrating Blockchain as a Service (BaaS), which uses elastic resource scheduling and heterogeneous hardware acceleration to support petabyte-level, secure multi-institution data exchange in medical, financial, and other fields. As the core hub of data-intensive scenarios, the BaaS platform combines privacy-preserving computation with process automation. However, its deep dependence on smart contracts introduces new code-layer vulnerabilities that can maliciously contaminate analysis results. Existing detection schemes are limited to a single-source data perspective, which makes it difficult to capture both global semantic associations and local structural details in a cloud computing environment, leading to bottlenecks in scalability and detection accuracy. To address these challenges, this paper proposes a smart contract vulnerability detection method based on multi-modal semantic fusion for cloud-based blockchain analysis platforms. First, the contract source code is parsed into an abstract syntax tree, and key code is located based on a predefined vulnerability feature set. Then, text features and graph-structure features of the key code are extracted in parallel and deeply fused. Finally, with attention enhancement, the vulnerability probability is output through a fully connected network. Experiments on Ethereum benchmark datasets show that the detection accuracy of our method for re-entrancy, timestamp, overflow/underflow, and delegatecall vulnerabilities reaches 92.2%, 96.3%, 91.4%, and 89.5%, respectively, surpassing previous methods. Our method also has the potential for practical deployment in cloud-based blockchain service environments.
(This article belongs to the Special Issue New Trends in Cloud Computing for Big Data Analytics)
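The fusion step, attention-weighting text and graph features before a fully connected scoring head, can be sketched as follows in NumPy. The weight vectors are random placeholders and the two-modality softmax attention is an assumption about the general form, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_and_score(text_feat, graph_feat, w_att, w_out, b_out):
    """Attention-weighted fusion of a text feature vector and a graph feature
    vector, followed by a sigmoid head that outputs a vulnerability probability.
    All weights here are random placeholders, not trained values."""
    modalities = np.stack([text_feat, graph_feat])      # (2, d)
    scores = modalities @ w_att                         # one scalar score per modality
    alpha = softmax(scores)                             # attention weights
    fused = (alpha[:, None] * modalities).sum(axis=0)   # fused feature, shape (d,)
    logit = fused @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
d = 128
prob = fuse_and_score(rng.standard_normal(d), rng.standard_normal(d),
                      rng.standard_normal(d), rng.standard_normal(d), 0.0)
print(f"vulnerability probability: {prob:.3f}")
```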
23 pages, 18947 KB  
Article
IOPE-IPD: Water Properties Estimation Network Integrating Physical Model and Deep Learning for Hyperspectral Imagery
by Qi Li, Mingyu Gao, Ming Zhang, Junwen Wang, Jingjing Chen and Jinghua Li
Remote Sens. 2025, 17(21), 3546; https://doi.org/10.3390/rs17213546 (registering DOI) - 26 Oct 2025
Abstract
Hyperspectral underwater target detection holds great potential for marine exploration and environmental monitoring. A key challenge lies in accurately estimating water inherent optical properties (IOPs) from hyperspectral imagery. To address this challenge, we propose a novel water IOP estimation network to support the interpretation of bathymetric models. We first formulate an IOP physical model that describes how the concentrations of chlorophyll, colored dissolved organic matter, and detrital material influence the absorption and backscattering coefficients. Building on this foundation, we propose an IOP estimation network integrating a physical model and deep learning (IOPE-IPD). This approach enables precise and physically interpretable estimation of the IOPs. Specifically, the IOPE-IPD network takes water spectra as input. The encoder extracts spectral features, while dual parallel decoders simultaneously estimate four key parameters. Based on these outputs, the absorption and backscattering coefficients of the water body are computed using the IOP physical model. Subsequently, the bathymetric model is employed to reconstruct the water spectrum. Under the constraint of a consistency loss, the retrieved spectrum is encouraged to closely match the input spectrum. To ensure the IOPE-IPD's applicability across various scenarios, multiple real and Jerlov-simulated aquatic environments were used. Comprehensive experimental results demonstrate the robustness and effectiveness of the proposed IOPE-IPD over the compared methods.
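A consistency loss of the kind described, reconstructing the spectrum from estimated constituent concentrations and comparing it with the input, might look like the sketch below. The linear mixing of basis spectra and the bb/(a + bb) forward model are deliberately simplified stand-ins for the paper's IOP and bathymetric models, and every constant is made up.

```python
import numpy as np

n_bands = 100
rng = np.random.default_rng(0)

# Hypothetical per-band basis spectra (placeholders, not real optical constants).
a_w, a_chl, a_cdom, a_det = (rng.random(n_bands) * 0.1 for _ in range(4))
b_w, b_chl, b_det = (rng.random(n_bands) * 0.01 for _ in range(3))

def iop_model(chl, cdom, det):
    """Map constituent concentrations to absorption a and backscattering bb by
    linear mixing of basis spectra (a simplified stand-in for the paper's model)."""
    a = a_w + chl * a_chl + cdom * a_cdom + det * a_det
    bb = b_w + chl * b_chl + det * b_det
    return a, bb

def forward_spectrum(a, bb):
    """Toy forward model: reflectance proportional to bb / (a + bb)."""
    return 0.08 * bb / (a + bb)

def consistency_loss(measured, chl, cdom, det):
    """Mean-squared mismatch between the measured spectrum and the one
    reconstructed from the estimated concentrations."""
    a, bb = iop_model(chl, cdom, det)
    return np.mean((forward_spectrum(a, bb) - measured) ** 2)

measured = forward_spectrum(*iop_model(1.2, 0.4, 0.2))   # pretend observation
print(consistency_loss(measured, chl=1.0, cdom=0.5, det=0.2))
```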
15 pages, 2225 KB  
Article
An Automatic Pixel-Level Segmentation Method for Coal-Crack CT Images Based on U2-Net
by Yimin Zhang, Chengyi Wu, Jinxia Yu, Guoqiang Wang and Yingying Li
Electronics 2025, 14(21), 4179; https://doi.org/10.3390/electronics14214179 (registering DOI) - 26 Oct 2025
Abstract
Automatically segmenting coal cracks in CT images is crucial for 3D reconstruction and for studying the physical properties of coal. This paper proposes an automatic pixel-level deep learning method, called Attention Double U2-Net, to enhance the segmentation accuracy of coal cracks in CT images. Because public coal CT image datasets are lacking, a pixel-level labeled coal crack dataset is first established through industrial CT scanning experiments and post-processing. The proposed method then uses a Double Residual U-Block structure (DRSU) based on U2-Net to improve feature extraction and fusion. Moreover, an attention module called the Atrous Asymmetric Fusion Non-Local Block (AAFNB) is proposed. The AAFNB module builds on the idea of the Asymmetric Non-Local block, gathering global information to enhance the segmentation results. Compared with previous state-of-the-art models, the proposed Attention Double U2-Net performs better on the coal crack CT image dataset across evaluation metrics including PA, mPA, mIoU, IoU, Precision, Recall, and Dice scores. The crack segmentation results obtained with this method are more accurate and efficient, providing experimental data and theoretical support for coalbed methane (CBM) exploration and the study of coal damage.
(This article belongs to the Section Artificial Intelligence)
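The evaluation metrics cited here (pixel accuracy, IoU, Dice) are standard; for reference, a minimal NumPy implementation for binary crack masks is shown below.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def dice(pred, target):
    """Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

def pixel_accuracy(pred, target):
    """Fraction of pixels labeled identically in both masks."""
    return (pred.astype(bool) == target.astype(bool)).mean()

# Toy usage with random masks standing in for crack predictions and ground truth.
rng = np.random.default_rng(0)
gt = rng.random((256, 256)) > 0.7
pr = rng.random((256, 256)) > 0.7
print(iou(pr, gt), dice(pr, gt), pixel_accuracy(pr, gt))
```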
26 pages, 1426 KB  
Article
Generalizable Hybrid Wavelet–Deep Learning Architecture for Robust Arrhythmia Detection in Wearable ECG Monitoring
by Ukesh Thapa, Bipun Man Pati, Attaphongse Taparugssanagorn and Lorenzo Mucchi
Sensors 2025, 25(21), 6590; https://doi.org/10.3390/s25216590 (registering DOI) - 26 Oct 2025
Abstract
This paper investigates electrocardiogram (ECG) rhythm classification using a progressive deep learning framework that combines time–frequency representations with complementary hand-crafted features. In the first stage, ECG signals from the PhysioNet Challenge 2017 dataset are transformed into scalograms and fed to diverse architectures, including a Simple Convolutional Neural Network (SimpleCNN), Residual Network with 18 Layers (ResNet-18), Convolutional Neural Network-Transformer (CNNTransformer), and Vision Transformer (ViT). ViT achieved the highest accuracy (0.8590) and F1-score (0.8524), demonstrating the feasibility of purely image-based ECG analysis, although scalograms alone showed variability across folds. In the second stage, scalograms were fused with scattering and statistical features, enhancing robustness and interpretability. FusionViT without dimensionality reduction achieved the best performance (accuracy = 0.8623, F1-score = 0.8528), while Fusion ResNet-18 offered a favorable trade-off between accuracy (0.8321) and inference efficiency (0.016 s per sample). Principal Component Analysis (PCA) reduced the feature dimensionality from 509 to 27, lowering the computational cost while maintaining competitive performance (FusionViT precision = 0.8590). The results highlight a trade-off between efficiency and fine-grained temporal resolution. Training-time augmentations mitigated class imbalance, enabling lightweight inference (0.006–0.043 s per sample). For real-world use, the framework can run on wearable ECG devices or mobile health apps. Scalogram transformation and feature extraction occur on-device or at the edge, with efficient models like ResNet-18 enabling near-real-time monitoring. Abnormal rhythm alerts can be sent instantly to users or clinicians. By combining visual and statistical signal features, optionally reduced with PCA, the framework achieves high accuracy, robustness, and efficiency for practical deployment.
(This article belongs to the Special Issue Human Body Communication)
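The PCA step that compresses the 509-dimensional fused feature vector to 27 components can be sketched with a plain SVD, as below; the data are random stand-ins, so the retained-variance figure printed here is not meaningful.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project features onto the top principal components via SVD.
    Returns the reduced features and the fraction of variance they retain."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    reduced = Xc @ Vt[:n_components].T
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return reduced, explained

# Toy usage mirroring the reported dimensionality (509 fused features -> 27).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 509))
Z, ratio = pca_reduce(X, 27)
print(Z.shape, round(float(ratio), 3))
```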
21 pages, 6893 KB  
Article
A Multi-Source Data-Driven Fracturing Pressure Prediction Model
by Zhongwei Zhu, Mingqing Wan, Yanwei Sun, Xuan Gong, Biao Lei, Zheng Tang and Liangjie Mao
Processes 2025, 13(11), 3434; https://doi.org/10.3390/pr13113434 (registering DOI) - 26 Oct 2025
Abstract
Accurate prediction of fracturing pressure is critical for operational safety and fracturing efficiency in unconventional reservoirs. Traditional physics-based models and existing deep learning architectures often struggle to capture the intense fluctuations and complex temporal dependencies observed in actual fracturing operations. To address these challenges, this paper proposes a multi-source data-driven fracturing pressure prediction model that integrates a Temporal Convolutional Network (TCN), Bidirectional Long Short-Term Memory (BiLSTM), and an attention mechanism, and introduces a feature selection mechanism for fracturing pressure prediction. The model employs the TCN to extract multi-scale local fluctuation features, the BiLSTM to capture long-term dependencies, and attention to adaptively adjust feature weights. A two-stage feature selection strategy combining correlation analysis and ablation experiments effectively eliminates redundant features and enhances model robustness. Field data from the Sichuan Basin were used for model validation. Results demonstrate that our method significantly outperforms baseline models (LSTM, BiLSTM, and TCN-BiLSTM) in mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²), particularly under high-fluctuation conditions. When integrated with slope-reversal analysis, it achieves sand blockage warnings up to 41 s in advance, offering substantial potential for real-time decision support in fracturing operations.
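The attention stage that adaptively re-weights temporal features before the prediction head can be illustrated with a generic additive attention pooling over a sequence of hidden states, as in the NumPy sketch below; the shapes and parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def temporal_attention(H, W, v):
    """Attention pooling over a sequence of hidden states H (T x d): score each
    time step, softmax the scores, and return the weighted context vector.
    W and v are placeholder parameters, not values from the paper."""
    scores = np.tanh(H @ W) @ v                  # (T,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                          # attention weights over time
    return alpha @ H                              # context vector, shape (d,)

# Toy usage: H stands in for fused TCN/BiLSTM features over 120 time steps.
rng = np.random.default_rng(0)
T, d = 120, 64
H = rng.standard_normal((T, d))
context = temporal_attention(H, rng.standard_normal((d, d)), rng.standard_normal(d))
pressure = context @ rng.standard_normal(d)      # toy linear head -> predicted pressure
print(pressure)
```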
18 pages, 9094 KB  
Article
Mycelial_Net: A Bio-Inspired Deep Learning Framework for Mineral Classification in Thin Section Microscopy
by Paolo Dell’Aversana
Minerals 2025, 15(11), 1112; https://doi.org/10.3390/min15111112 (registering DOI) - 25 Oct 2025
Abstract
This study presents the application of Mycelial_Net, a biologically inspired deep learning architecture, to the analysis and classification of mineral images in thin section under optical microscopy. The model, inspired by the adaptive connectivity of fungal mycelium networks, was trained on a test mineral image database to extract structural features and to classify various minerals. The performance of Mycelial_Net was evaluated in terms of accuracy, robustness, and adaptability, and compared against conventional convolutional neural networks. The results demonstrate that Mycelial_Net, properly integrated with Residual Networks (ResNets), offers superior analysis capabilities, interpretability, and resilience to noise and artifacts in petrographic images. This approach holds promise for advancing automated mineral identification and geological analysis through adaptive AI systems.
23 pages, 11997 KB  
Article
Deep Learning-Driven Automatic Segmentation of Weeds and Crops in UAV Imagery
by Jianghan Tao, Qian Qiao, Jian Song, Shan Sun, Yijia Chen, Qingyang Wu, Yongying Liu, Feng Xue, Hao Wu and Fan Zhao
Sensors 2025, 25(21), 6576; https://doi.org/10.3390/s25216576 (registering DOI) - 25 Oct 2025
Abstract
Accurate segmentation of crops and weeds is essential for enhancing crop yield, optimizing herbicide usage, and mitigating environmental impacts. Traditional weed management practices, such as manual weeding or broad-spectrum herbicide application, are labor-intensive, environmentally harmful, and economically inefficient. In response, this study introduces a novel precision agriculture framework integrating Unmanned Aerial Vehicle (UAV)-based remote sensing with advanced deep learning techniques, combining Super-Resolution Reconstruction (SRR) and semantic segmentation. This study is the first to integrate UAV-based SRR and semantic segmentation for tobacco fields, systematically evaluate recent Transformer and Mamba-based models alongside traditional CNNs, and release an annotated dataset that not only ensures reproducibility but also provides a resource for the research community to develop and benchmark future models. Initially, SRR enhanced the resolution of low-quality UAV imagery, significantly improving detailed feature extraction. Subsequently, to identify the optimal segmentation model for the proposed framework, semantic segmentation models incorporating CNN, Transformer, and Mamba architectures were used to differentiate crops from weeds. Among the evaluated SRR methods, RCAN achieved the best reconstruction performance, reaching a Peak Signal-to-Noise Ratio (PSNR) of 24.98 dB and a Structural Similarity Index (SSIM) of 69.48%. In semantic segmentation, the ensemble model integrating Transformer (DPT with DINOv2) and Mamba-based architectures achieved the highest mean Intersection over Union (mIoU) of 90.75%, demonstrating superior robustness across diverse field conditions. Additionally, comprehensive experiments quantified the impact of magnification factors, Gaussian blur, and Gaussian noise, identifying an optimal magnification factor of 4× and showing that the method is robust to common environmental disturbances at the optimal parameters. Overall, this research establishes an efficient, precise framework for crop cultivation management, offering valuable insights for precision agriculture and sustainable farming practices.
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)
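PSNR, one of the reconstruction metrics reported for the SRR stage, is straightforward to compute; a minimal NumPy version is shown below with stand-in images.

```python
import numpy as np

def psnr(reference, reconstructed, data_range=255.0):
    """Peak Signal-to-Noise Ratio between a reference image and a reconstruction."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Toy usage: a random patch stands in for a high-resolution UAV image,
# and a noisy copy stands in for the super-resolved reconstruction.
rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
sr = np.clip(hr + rng.normal(0, 5, hr.shape), 0, 255)
print(f"PSNR: {psnr(hr, sr):.2f} dB")
```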
25 pages, 17236 KB  
Article
Hierarchical Deep Learning Model for Identifying Similar Targets in UAV Imagery
by Dmytro Borovyk, Oleksander Barmak, Pavlo Radiuk and Iurii Krak
Drones 2025, 9(11), 743; https://doi.org/10.3390/drones9110743 (registering DOI) - 25 Oct 2025
Abstract
Accurate object detection in UAV imagery is critical for situational awareness, yet conventional deep learning models often struggle to distinguish between visually similar targets. To address this challenge, this study introduces a hierarchical deep learning architecture that decomposes the multi-class detection task into a structured, multi-level classification cascade. Our approach combines a high-recall Faster R-CNN for initial object proposal, specialized YOLO models for granular feature extraction, and a dedicated FT-Transformer for fine-grained classification. Experimental evaluation on a complex dataset demonstrated the effectiveness of this strategy. The hierarchical model achieved an aggregate F1-score of 93.9%, representing a 1.41% improvement over the 92.46% F1-score from a traditional, non-hierarchical baseline model. These results indicate that a modular, coarse-to-fine cascade can effectively reduce inter-class ambiguity, offering a scalable approach to improving object recognition in complex UAV-based monitoring environments. This work contributes a promising approach to developing more accurate and reliable situational awareness systems.
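The coarse-to-fine cascade (high-recall proposals, a coarse class, then a fine-grained classifier only for look-alike classes) can be sketched structurally as below. The three callables and the "_ambiguous" routing rule are hypothetical stand-ins for the Faster R-CNN, YOLO, and FT-Transformer stages named in the abstract.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np

@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # (x1, y1, x2, y2)
    label: str
    score: float

def hierarchical_detect(image,
                        propose: Callable,          # high-recall proposal stage
                        coarse_classify: Callable,  # coarse group classifier
                        fine_classify: Callable) -> List[Detection]:
    """Coarse-to-fine cascade: propose regions, assign a coarse class, then refine
    only ambiguous/similar classes with a dedicated fine-grained classifier."""
    results = []
    for box in propose(image):
        crop = image[box[1]:box[3], box[0]:box[2]]
        label, score = coarse_classify(crop)
        if label.endswith("_ambiguous"):            # route look-alikes to the fine stage
            label, score = fine_classify(crop)
        results.append(Detection(box, label, score))
    return results

# Toy usage with lambdas standing in for the three trained models.
img = np.zeros((100, 100, 3), dtype=np.uint8)
dets = hierarchical_detect(
    img,
    propose=lambda im: [(10, 10, 50, 50)],
    coarse_classify=lambda crop: ("vehicle_ambiguous", 0.7),
    fine_classify=lambda crop: ("truck", 0.9),
)
print(dets)
```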
29 pages, 7775 KB  
Article
Early Prediction of Student Performance Using an Activation Ensemble Deep Neural Network Model
by Hassan Bin Nuweeji and Ahmad Bassam Alzubi
Appl. Sci. 2025, 15(21), 11411; https://doi.org/10.3390/app152111411 (registering DOI) - 24 Oct 2025
Abstract
In recent years, academic performance prediction has grown into an active research field within the educational context. Early student performance prediction is crucial for enhancing educational outcomes and implementing timely interventions. Conventional approaches frequently struggle with the complexity of student profiles because they rely on a single activation function, which prevents them from effectively learning intricate patterns. In addition, these models can experience obstacles such as the vanishing gradient problem and computational complexity. Therefore, this study designs an Activation Ensemble Deep Neural Network (AcEn-DNN) model to address these challenges. The main contribution is a reliable student performance prediction model that comprises extensive data preprocessing, feature extraction, and an activation-ensemble DNN. By combining several activation functions, such as ReLU, tanh, sigmoid, and swish, the ensemble is able to learn the complex structure of student data, which leads to more accurate performance prediction. The AcEn-DNN model is trained and evaluated on the publicly available Student-mat.csv and Student-por.csv datasets and a real-time dataset. The experimental results show that the AcEn-DNN model achieved low error rates, with an MAE of 1.28, MAPE of 2.36, MSE of 4.55, and RMSE of 2.13 at a training percentage of 90%, confirming its robustness in modeling nonlinear relationships within student data. The model also achieved minimum error values of MAE 1.28, MAPE 2.97, MSE 4.77, and RMSE 2.18 with a K-fold value of 10 on the Student-mat.csv dataset. These findings highlight the model's potential for early identification of at-risk students, enabling educators to develop targeted learning strategies. This research contributes to educational data mining by advancing predictive modeling techniques for evaluating student performance.
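The core idea of an activation ensemble, applying several activation functions to the same pre-activation and mixing them, can be sketched in a few lines of NumPy; the fixed softmax-normalized weights below stand in for what would normally be learned parameters.

```python
import numpy as np

def relu(x):    return np.maximum(x, 0.0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def swish(x):   return x * sigmoid(x)

def ensemble_activation(x, weights):
    """Convex combination of several activation functions applied to the same
    pre-activation. 'weights' would normally be learned; here they are fixed."""
    acts = np.stack([relu(x), np.tanh(x), sigmoid(x), swish(x)])  # (4, ...)
    w = np.exp(weights - np.max(weights))
    w /= w.sum()                                    # softmax keeps the mix convex
    return np.tensordot(w, acts, axes=1)            # weighted sum over the 4 activations

# Toy usage on a random hidden-layer pre-activation.
rng = np.random.default_rng(0)
pre_activation = rng.standard_normal((4, 16))
out = ensemble_activation(pre_activation, weights=np.array([0.2, 0.1, 0.4, 0.3]))
print(out.shape)
```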
23 pages, 1659 KB  
Article
A Multi-View-Based Federated Learning Approach for Intrusion Detection
by Jia Yu, Guoqiang Wang, Nianfeng Shi, Raghav Saxena and Brian Lee
Electronics 2025, 14(21), 4166; https://doi.org/10.3390/electronics14214166 (registering DOI) - 24 Oct 2025
Abstract
Intrusion detection aims to identify unauthorized activities within computer networks or systems by classifying events into normal or abnormal categories. As modern scenarios often involve multi-source data, multi-view fusion deep learning methods are employed to leverage diverse viewpoints for enhancing security threat detection. This paper introduces a novel intrusion detection approach using multi-view fusion within a federated learning framework, proposing an integrated AE Neural SVM (AE-NSVM) model that combines auto-encoder (AE) multi-view feature extraction and Support Vector Machine (SVM) classification. The approach simultaneously learns representative features from multiple views and classifies network samples into normal or seven attack categories, while federated learning across clients ensures adaptability and robustness in diverse network environments. Experimental results on two benchmark datasets validate its superiority: on TON_IoT, the CAE-NSVM model achieves the highest F1-measure of 0.792 (1.4% higher than traditional pipeline systems); on UNSW-NB15, it delivers an F1-score of 0.829 with 73% shorter training time and 89% faster inference compared to baseline models. These results demonstrate the advantages of multi-view fusion in federated learning for balancing accuracy and efficiency in distributed intrusion detection systems.
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)
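Federated aggregation of client models is commonly done FedAvg-style; a minimal NumPy sketch is shown below. It illustrates the general mechanism only; the paper's specific aggregation scheme and the parameter names used here are assumptions.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: weight each client's parameters by its share of
    the total training data and sum. Each client model is a dict of arrays."""
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in keys}

# Toy usage: three clients with identically shaped (hypothetical) parameter dicts.
rng = np.random.default_rng(0)
clients = [{"enc.w": rng.standard_normal((8, 4)), "svm.w": rng.standard_normal(4)}
           for _ in range(3)]
global_model = federated_average(clients, client_sizes=[500, 300, 200])
print({k: v.shape for k, v in global_model.items()})
```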
21 pages, 49278 KB  
Article
Lightweight Attention Refined and Complex-Valued BiSeNetV2 for Semantic Segmentation of Polarimetric SAR Image
by Ruiqi Xu, Shuangxi Zhang, Chenchu Dong, Shaohui Mei, Jinyi Zhang and Qiang Zhao
Remote Sens. 2025, 17(21), 3527; https://doi.org/10.3390/rs17213527 (registering DOI) - 24 Oct 2025
Abstract
In semantic segmentation of polarimetric SAR (PolSAR) images, deep learning has become an important end-to-end approach that uses convolutional neural networks (CNNs) and other advanced architectures to extract features and classify the target region pixel by pixel. However, directly applying networks designed for optical images to PolSAR image segmentation discards the rich phase information in PolSAR data, which leads to unsatisfactory classification results. To make full use of polarization information, the complex-valued BiSeNetV2 with a bilateral segmentation structure is studied and extended in this work. Then, to further improve semantic feature extraction in the complex domain and to alleviate the imbalance of polarization channel responses, a complex-valued BiSeNetV2 with a lightweight attention module (LAM-CV-BiSeNetV2) is proposed for the semantic segmentation of PolSAR images. LAM-CV-BiSeNetV2 supports complex-valued operations, and a lightweight attention module (LAM) is designed and introduced at the end of the Semantic Branch to enhance the extraction of detailed features. Compared with the original BiSeNetV2, LAM-CV-BiSeNetV2 not only extracts the phase information of polarimetric SAR data more fully but also has stronger semantic feature extraction capabilities. Experimental results on the Flevoland and San Francisco datasets demonstrate that the proposed LAM performs better and more stably than other commonly used attention modules, and the proposed network consistently obtains better classification results than BiSeNetV2 and other known real-valued networks.
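Complex-valued layers of the kind used here are typically implemented by expanding complex multiplication into real arithmetic. The NumPy sketch below does this for a 1x1 convolution over channels; shapes and weights are toy values, not the paper's configuration.

```python
import numpy as np

def complex_conv2d_1x1(x_real, x_imag, w_real, w_imag):
    """1x1 complex-valued convolution over the channel dimension, expanded into
    real arithmetic: (a + bi)(c + di) = (ac - bd) + i(ad + bc).
    Inputs are (C_in, H, W); weights are (C_out, C_in). A 1x1 kernel keeps it short."""
    real = np.einsum("oc,chw->ohw", w_real, x_real) - np.einsum("oc,chw->ohw", w_imag, x_imag)
    imag = np.einsum("oc,chw->ohw", w_real, x_imag) + np.einsum("oc,chw->ohw", w_imag, x_real)
    return real, imag

# Toy usage: 6 complex input channels standing in for PolSAR features, 8 output channels.
rng = np.random.default_rng(0)
xr, xi = rng.standard_normal((2, 6, 32, 32))
wr, wi = rng.standard_normal((2, 8, 6))
yr, yi = complex_conv2d_1x1(xr, xi, wr, wi)
print(yr.shape, yi.shape)
```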