Search Results (547)

Search Parameters:
Keywords = CNN–RNN

30 pages, 9222 KiB  
Article
Using Deep Learning in Forecasting the Production of Electricity from Photovoltaic and Wind Farms
by Michał Pikus, Jarosław Wąs and Agata Kozina
Energies 2025, 18(15), 3913; https://doi.org/10.3390/en18153913 - 23 Jul 2025
Abstract
Accurate forecasting of electricity production is crucial for the stability of the entire energy sector. However, predicting future renewable energy production and its value is difficult because of the complex processes that govern generation from renewable sources. In this article, we examine the performance of basic deep learning models for electricity forecasting. We designed deep learning models including recurrent neural networks (RNNs), based mainly on long short-term memory (LSTM) networks and gated recurrent units (GRUs); convolutional neural networks (CNNs); temporal fusion transformers (TFTs); and combined architectures. To this end, we created our own benchmarks and used tools that automatically select network architectures and parameters. Data were obtained as part of an NCBR grant (the National Center for Research and Development, Poland) and contain daily records of all monitored parameters from individual solar and wind farms over the past three years. The experimental results indicate that the LSTM models significantly outperformed the other models in forecasting accuracy. Multilayer deep neural network (DNN) architectures are described, and results are provided for all methods. This publication is based on results obtained within the research and development project “POIR.01.01.01-00-0506/21”, realized in 2022–2023 and co-financed by the European Union under the Smart Growth Operational Programme 2014–2020.
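
A minimal sketch of the kind of LSTM forecaster benchmarked here, assuming a sliding window of daily farm telemetry as input; the feature count, window length, and layer sizes are illustrative choices, not the authors' configuration.

import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    # Map a window of daily farm telemetry to a next-day production estimate.
    def __init__(self, n_features=8, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # forecast from the last hidden state

model = LSTMForecaster()
x = torch.randn(32, 30, 8)              # 32 windows of 30 days x 8 hypothetical farm parameters
print(model(x).shape)                   # torch.Size([32, 1])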

26 pages, 829 KiB  
Article
Enhanced Face Recognition in Crowded Environments with 2D/3D Features and Parallel Hybrid CNN-RNN Architecture with Stacked Auto-Encoder
by Samir Elloumi, Sahbi Bahroun, Sadok Ben Yahia and Mourad Kaddes
Big Data Cogn. Comput. 2025, 9(8), 191; https://doi.org/10.3390/bdcc9080191 - 22 Jul 2025
Abstract
Face recognition (FR) in unconstrained conditions remains an open research topic and an ongoing challenge: facial images exhibit diverse expressions, occlusions, variations in illumination, and heterogeneous backgrounds. This work aims to produce an accurate and robust system for enhanced security and surveillance. A parallel hybrid deep learning model for feature extraction and classification is proposed, in which an ensemble of three parallel extraction branches learns the most representative features using CNNs and RNNs. 2D LBP and 3D mesh LBP features are computed on face images and fed as input to two RNNs, and a stacked autoencoder (SAE) merges the feature vectors extracted from the three CNN-RNN parallel branches. We tested the designed 2D/3D CNN-RNN framework on four standard datasets and achieved an accuracy of 98.9%; the hybrid deep learning model significantly improves FR over similar state-of-the-art methods. The proposed model was also tested on an unconstrained human-crowd dataset, with very promising results: an accuracy of 95%. Furthermore, our model shows an 11.5% improvement over similar hybrid CNN-RNN architectures, proving its robustness in complex environments where faces can undergo various transformations.
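
A minimal sketch of the parallel architecture described above, with three CNN-RNN branches fused by a stacked autoencoder; the LBP front-ends are omitted and all shapes are placeholder assumptions.

import torch
import torch.nn as nn

class Branch(nn.Module):
    # One parallel branch: CNN features summarized by a GRU (a stand-in for the paper's RNN choice).
    def __init__(self, out_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d((8, 8)))
        self.rnn = nn.GRU(8, out_dim, batch_first=True)
    def forward(self, x):                   # x: (batch, 1, H, W), e.g., an LBP map
        f = self.cnn(x).mean(1)             # (batch, 8, 8) read as an 8-step sequence
        _, h = self.rnn(f)
        return h[-1]                        # (batch, out_dim)

branches = nn.ModuleList([Branch() for _ in range(3)])
# Stacked autoencoder that merges the three branch outputs into one representation.
sae = nn.Sequential(nn.Linear(3 * 128, 96), nn.ReLU(), nn.Linear(96, 3 * 128))
x = torch.randn(4, 1, 64, 64)
fused = torch.cat([b(x) for b in branches], dim=1)
print(sae(fused).shape)                     # torch.Size([4, 384])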

31 pages, 7723 KiB  
Article
A Hybrid CNN–GRU–LSTM Algorithm with SHAP-Based Interpretability for EEG-Based ADHD Diagnosis
by Makbal Baibulova, Murat Aitimov, Roza Burganova, Lazzat Abdykerimova, Umida Sabirova, Zhanat Seitakhmetova, Gulsiya Uvaliyeva, Maksym Orynbassar, Aislu Kassekeyeva and Murizah Kassim
Algorithms 2025, 18(8), 453; https://doi.org/10.3390/a18080453 - 22 Jul 2025
Viewed by 111
Abstract
This study proposes an interpretable hybrid deep learning framework for classifying attention deficit hyperactivity disorder (ADHD) using EEG signals recorded during cognitively demanding tasks. The core architecture integrates convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) layers to jointly capture spatial and temporal dynamics. In addition to the final hybrid architecture, the CNN–GRU–LSTM model alone demonstrates excellent accuracy (99.63%) with minimal variance, making it a strong baseline for clinical applications. To evaluate the role of global attention mechanisms, transformer encoder models with two and three attention blocks, along with a spatiotemporal transformer employing 2D positional encoding, are benchmarked. A hybrid CNN–RNN–transformer model is introduced, combining convolutional, recurrent, and transformer-based modules into a unified architecture. To enhance interpretability, SHapley Additive exPlanations (SHAP) are employed to identify key EEG channels contributing to classification outcomes. Experimental evaluation using stratified five-fold cross-validation demonstrates that the proposed hybrid model achieves superior performance, with average accuracy exceeding 99.98%, F1-scores above 0.9999, and near-perfect AUC and Matthews correlation coefficients. In contrast, transformer-only models, despite high training accuracy, exhibit reduced generalization. SHAP-based analysis confirms the hybrid model’s clinical relevance. This work advances the development of transparent and reliable EEG-based tools for pediatric ADHD screening.
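
To make the interpretability step concrete, here is one way SHAP channel attributions might be computed for a trained EEG classifier; the toy linear model, channel count, and aggregation rule are stand-ins for the authors' setup, and the return shape of shap_values varies across SHAP versions.

import numpy as np
import shap
import torch
import torch.nn as nn

# Placeholder classifier standing in for the trained CNN-GRU-LSTM (19 EEG channels x 128 samples).
model = nn.Sequential(nn.Flatten(), nn.Linear(19 * 128, 2))
background = torch.randn(50, 1, 19, 128)                # baseline distribution required by DeepExplainer
explainer = shap.DeepExplainer(model, background)
sv = explainer.shap_values(torch.randn(8, 1, 19, 128))
adhd = sv[1] if isinstance(sv, list) else sv[..., 1]    # classic API returns one array per class
channel_importance = np.abs(adhd).mean(axis=(0, 1, 3))  # average |SHAP| over batch, depth, and time
print(channel_importance.argsort()[::-1][:5])           # indices of the five most influential channels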

27 pages, 3019 KiB  
Article
New Deep Learning-Based Approach for Source Code Generation: Application to Computer Vision Systems
by Wafa Alshehri, Salma Kammoun Jarraya and Arwa Allinjawi
AI 2025, 6(7), 162; https://doi.org/10.3390/ai6070162 - 21 Jul 2025
Viewed by 203
Abstract
Deep learning has enabled significant progress in source code generation, aiming to reduce the manual, error-prone, and time-consuming aspects of software development. While many existing models rely on recurrent neural networks (RNNs) with sequence-to-sequence architectures, these approaches struggle with the long and complex token sequences typical of source code. To address this, we propose a grammar-based convolutional neural network (CNN) combined with a tree-based representation to enhance accuracy and efficiency. Our model achieves state-of-the-art results on the benchmark HEARTHSTONE dataset, with a BLEU score of 81.4 and an Acc+ of 62.1%. We further evaluate the model on our proposed dataset, AST2CVCode, designed for computer vision applications, achieving 86.2 BLEU and 51.9% exact match (EM). Additionally, we introduce BLEU+, an enhanced evaluation metric tailored to functional correctness in code generation, under which our model scores 92.0% on AST2CVCode. These results demonstrate the effectiveness of our approach in both model architecture and evaluation methodology.
(This article belongs to the Section AI Systems: Theory and Applications)
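
Since BLEU and its BLEU+ extension drive the evaluation here, the snippet below shows plain BLEU on tokenized code with NLTK; the token sequences are invented for illustration, and BLEU+ itself is the authors' metric and is not reproduced.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference vs. generated code, both tokenized; smoothing avoids zero n-gram counts.
reference = [["def", "load", "(", "img_path", ")", ":", "return", "cv2", ".", "imread", "(", "img_path", ")"]]
candidate = ["def", "load", "(", "path", ")", ":", "return", "cv2", ".", "imread", "(", "path", ")"]
score = sentence_bleu(reference, candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")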

21 pages, 4238 KiB  
Article
Fault Prediction of Hydropower Station Based on CNN-LSTM-GAN with Biased Data
by Bei Liu, Xiao Wang, Zhaoxin Zhang, Zhenjie Zhao, Xiaoming Wang and Ting Liu
Energies 2025, 18(14), 3772; https://doi.org/10.3390/en18143772 - 16 Jul 2025
Viewed by 178
Abstract
Fault prediction for hydropower stations is crucial for the stable operation of generator set equipment, but traditional methods struggle with imbalanced and untrustworthy data. This paper proposes a fault detection method based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network with a generative adversarial network (GAN). First, a reliability mechanism based on principal component analysis (PCA) is designed to resolve the data bias caused by multiple monitoring devices. Then, the CNN-LSTM network predicts the time series data, and the GAN expands the fault data samples to counter the imbalanced data distribution. A multi-scale feature extraction network with time–frequency information is designed to improve fault detection accuracy, and a dynamic multi-task training algorithm is proposed to ensure the convergence and training efficiency of the deep models. Experimental results show that, compared with RNN, GRU, SVM, and threshold detection algorithms, the proposed method improves accuracy by 5.5%, 4.8%, 7.8%, and 9.3%, respectively, with at least a 160% improvement in fault recall.
(This article belongs to the Special Issue Optimal Schedule of Hydropower and New Energy Power Systems)
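
One plausible reading of the PCA-based reliability mechanism, sketched under the assumption that redundant monitoring devices should agree along a low-dimensional subspace; the sensor count and the weighting rule are illustrative, not the paper's exact design.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
readings = rng.normal(size=(1000, 6))                # rows: time steps, columns: redundant sensors
pca = PCA(n_components=2).fit(readings)
recon = pca.inverse_transform(pca.transform(readings))
residual = np.linalg.norm(readings - recon, axis=1)  # distance from the shared principal subspace
weights = 1.0 / (1.0 + residual)                     # down-weight samples that deviate (low trust)
print(weights[:5])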

18 pages, 721 KiB  
Article
An Adaptive Holt–Winters Model for Seasonal Forecasting of Internet of Things (IoT) Data Streams
by Samer Sawalha and Ghazi Al-Naymat
IoT 2025, 6(3), 39; https://doi.org/10.3390/iot6030039 - 10 Jul 2025
Viewed by 227
Abstract
In various applications, IoT temporal data play a crucial role in accurately predicting future trends. Traditional models, including Rolling Window, SVR-RBF, and ARIMA, suffer from a potential accuracy decrease because they generally train on all available data or on the most recent window, which can include noisy data. To address this issue, this paper proposes a new forecasting technique called Adaptive Holt–Winters (AHW). AHW utilizes two models grounded in an exponential smoothing methodology: the first is trained on the most current data window, whereas the second extracts information from a historical segment whose patterns are most analogous to the present. The outputs of the two models are then combined, improving prediction precision by focusing on the relevant data patterns. The effectiveness of AHW is evaluated against well-known models (Rolling Window, SVR-RBF, ARIMA, LSTM, CNN, RNN, and Holt–Winters) using metrics such as RMSE, MAE, p-values, and runtime. A comprehensive evaluation covers real-world datasets at different granularities (daily and monthly), including temperature from the National Climatic Data Center (NCDC), humidity and soil moisture measurements from the Basel City environmental system, and global intensity and global reactive power from the Individual Household Electric Power Consumption (IHEPC) dataset. The results demonstrate that AHW consistently attains higher forecasting accuracy across the tested datasets than the other models, indicating its efficacy in leveraging pertinent data patterns and offering a robust solution for temporal IoT data forecasting.
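
The core of AHW as described above, in minimal form: one Holt–Winters model on the most recent window, one on a historical segment assumed to be the most analogous, and a combined output. The segment choice and the simple average below are placeholders for the paper's selection and combination logic.

import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
series = 10 + np.sin(np.arange(400) * 2 * np.pi / 24) + rng.normal(0, 0.3, 400)  # toy seasonal IoT stream

recent = series[-120:]        # model 1: the most recent data window
analogous = series[96:216]    # model 2: a past segment assumed most similar to the present pattern
f1 = ExponentialSmoothing(recent, trend="add", seasonal="add", seasonal_periods=24).fit().forecast(24)
f2 = ExponentialSmoothing(analogous, trend="add", seasonal="add", seasonal_periods=24).fit().forecast(24)
forecast = (f1 + f2) / 2      # combined output focused on the relevant patterns
print(forecast[:5])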

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Viewed by 366
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly valuable in embedded emotion-aware systems, particularly in healthcare, wearable electronics, and human–machine interaction. Among EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance over traditional approaches. This advantage stems from their ability to extract complex features from raw EEG data, such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models, specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. Emotions are classified into four categories on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted on the DEAP dataset. The results indicate that the CNN and RNN-LSTM models achieve high classification performance, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming the shallow algorithms (MLP, SVM, kNN).
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
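
For reference, the four valence–arousal classes used above can be derived from DEAP-style self-ratings as follows; thresholding at the midpoint of the 1-9 scale is a common convention, not necessarily the authors' exact rule.

import numpy as np

ratings = np.array([[7.1, 8.0], [3.2, 7.5], [6.5, 2.1], [2.0, 3.3]])  # (valence, arousal) pairs, 1-9
valence_high = ratings[:, 0] > 5
arousal_high = ratings[:, 1] > 5
labels = np.select(
    [arousal_high & valence_high, ~arousal_high & valence_high,
     arousal_high & ~valence_high, ~arousal_high & ~valence_high],
    ["HAPV", "LAPV", "HANV", "LANV"])
print(labels)  # ['HAPV' 'HANV' 'LAPV' 'LANV']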

16 pages, 1322 KiB  
Article
Application of a Transfer Learning Model Combining CNN and Self-Attention Mechanism in Wireless Signal Recognition
by Wu Wei, Chenqi Zhu, Lifan Hu and Pengfei Liu
Sensors 2025, 25(13), 4202; https://doi.org/10.3390/s25134202 - 5 Jul 2025
Viewed by 208
Abstract
In this paper, we propose TransConvNet, a hybrid model combining Convolutional Neural Networks (CNNs), self-attention mechanisms, and transfer learning for wireless signal recognition under challenging conditions. The model effectively addresses challenges such as low signal-to-noise ratio (SNR), low sampling rates, and limited labeled data. The CNN module extracts local features and suppresses noise, while the self-attention mechanism within the Transformer encoder captures long-range dependencies in the signal. To enhance performance with limited data, we incorporate transfer learning by leveraging pre-trained models, ensuring faster convergence and improved generalization. Extensive experiments were conducted on a six-class wireless signal dataset, downsampled to 1 MSPS to simulate real-world constraints. The proposed TransConvNet achieved 92.1% accuracy, outperforming baseline models such as LSTM, CNN, and RNN across multiple evaluation metrics, including RMSE and R². The model demonstrated strong robustness under varying SNR conditions and exhibited superior discriminative ability, as confirmed by Precision–Recall and ROC curves. These results validate the effectiveness and robustness of the TransConvNet model for wireless signal recognition, particularly in resource-constrained and noisy environments.
(This article belongs to the Section Internet of Things)
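
A compact sketch of the CNN-plus-self-attention pattern the paper describes (the transfer-learning step is omitted); channel counts, depths, and the I/Q input format are assumptions, not TransConvNet's actual hyperparameters.

import torch
import torch.nn as nn

class TransConvSketch(nn.Module):
    # CNN front-end for local features and noise suppression; Transformer encoder for long-range dependencies.
    def __init__(self, n_classes=6, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(2, d_model, 7, stride=2, padding=3), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)
    def forward(self, x):                    # x: (batch, 2, samples), e.g., I/Q pairs
        t = self.cnn(x).transpose(1, 2)      # (batch, steps, d_model)
        return self.head(self.encoder(t).mean(dim=1))

model = TransConvSketch()
print(model(torch.randn(8, 2, 1024)).shape)  # torch.Size([8, 6])

For the transfer-learning part, the encoder weights would be initialized from a model pre-trained on a larger signal corpus before fine-tuning on the six-class task.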

25 pages, 2093 KiB  
Article
Deep Learning-Based Speech Enhancement for Robust Sound Classification in Security Systems
by Samuel Yaw Mensah, Tao Zhang, Nahid Al Mahmud and Yanzhang Geng
Electronics 2025, 14(13), 2643; https://doi.org/10.3390/electronics14132643 - 30 Jun 2025
Viewed by 597
Abstract
Deep learning has emerged as a powerful technique for speech enhancement, particularly in security systems where audio signals are often degraded by non-stationary noise. Traditional signal processing methods struggle in such conditions, making it difficult to detect critical sounds like gunshots, alarms, and unauthorized speech. This study investigates a hybrid deep learning framework that combines Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs) to enhance speech quality and improve sound classification accuracy in noisy security environments. The proposed model is trained and validated using real-world datasets containing diverse noise distortions, including VoxCeleb for benchmarking speech enhancement and UrbanSound8K and ESC-50 for sound classification. Performance is evaluated using industry-standard metrics such as Perceptual Evaluation of Speech Quality (PESQ), Short-Time Objective Intelligibility (STOI), and Signal-to-Noise Ratio (SNR). The architecture includes multi-layered neural networks, residual connections, and dropout regularization to ensure robustness and generalizability. Additionally, the paper addresses key challenges in deploying deep learning models for security applications, such as computational complexity, latency, and vulnerability to adversarial attacks. Experimental results demonstrate that the proposed DNN + GAN-based approach significantly improves speech intelligibility and classification performance in high-interference scenarios, offering a scalable solution for enhancing the reliability of audio-based security systems.
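
Scoring enhanced audio with the metrics named above is straightforward with the community pesq and pystoi packages; the random arrays below are stand-ins for real clean and enhanced utterances.

import numpy as np
from pesq import pesq      # pip install pesq
from pystoi import stoi    # pip install pystoi

fs = 16000
clean = np.random.randn(fs * 3).astype(np.float32)                  # placeholder clean speech, 3 s
enhanced = clean + 0.05 * np.random.randn(fs * 3).astype(np.float32)

print("PESQ (wideband):", pesq(fs, clean, enhanced, "wb"))
print("STOI:", stoi(clean, enhanced, fs, extended=False))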

25 pages, 2432 KiB  
Article
LogRESP-Agent: A Recursive AI Framework for Context-Aware Log Anomaly Detection and TTP Analysis
by Juyoung Lee, Yeonsu Jeong, Taehyun Han and Taejin Lee
Appl. Sci. 2025, 15(13), 7237; https://doi.org/10.3390/app15137237 - 27 Jun 2025
Viewed by 480
Abstract
As cyber threats become increasingly sophisticated, existing log-based anomaly detection models face critical limitations in adaptability, semantic interpretation, and operational automation. Traditional approaches based on CNNs, RNNs, and LSTMs struggle with inconsistent log formats and often lack interpretability. To address these challenges, we propose LogRESP-Agent, a modular AI framework built around a reasoning-based agent for log-driven security prediction and response. The architecture integrates three core capabilities: (1) LLM-based anomaly detection with semantic explanation, (2) contextual threat reasoning via Retrieval-Augmented Generation (RAG), and (3) recursive investigation enabled by a planning-capable LLM agent. This design supports automated, multi-step analysis over heterogeneous logs without reliance on fixed templates. Experimental results validate the approach on both binary and multi-class classification tasks: LogRESP-Agent achieved 99.97% accuracy and a 97.00% F1-score on the Monster-THC dataset, and 99.54% accuracy with a 99.47% F1-score in multi-class classification on the EVTX-ATTACK-SAMPLES dataset. These results confirm the agent's ability not only to detect complex threats but also to explain them in context, offering a scalable foundation for next-generation threat detection and response automation.
(This article belongs to the Special Issue Machine Learning and Its Application for Anomaly Detection)
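
A heavily simplified sketch of the retrieval step behind capability (2): embed TTP descriptions, retrieve the closest match for a log line, and assemble a prompt for the reasoning agent. The corpus, embedding model, and prompt format are all illustrative assumptions, and the LLM call itself is omitted.

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Toy ATT&CK-style knowledge base (illustrative text, not the paper's corpus).
docs = [
    "T1110 Brute Force: repeated failed logons followed by a success",
    "T1059 Command and Scripting Interpreter: PowerShell spawned by an Office process",
    "T1021 Remote Services: lateral movement over SMB admin shares",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

log_line = "4625 failed logon x50 for user admin, then 4624 success"
q = encoder.encode([log_line], normalize_embeddings=True)
context = docs[int((doc_vecs @ q.T).argmax())]     # cosine similarity on normalized vectors
prompt = f"Log: {log_line}\nRelevant TTP: {context}\nExplain whether this is anomalous."
print(prompt)                                      # would be sent to the planning-capable LLM agent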

27 pages, 6102 KiB  
Article
Inverse Kinematics for Robotic Manipulators via Deep Neural Networks: Experiments and Results
by Ana Calzada-Garcia, Juan G. Victores, Francisco J. Naranjo-Campos and Carlos Balaguer
Appl. Sci. 2025, 15(13), 7226; https://doi.org/10.3390/app15137226 - 26 Jun 2025
Viewed by 360
Abstract
This paper explores the application of Deep Neural Networks (DNNs) to the Inverse Kinematics (IK) problem in robotic manipulators. The IK problem, crucial for precision in robotic movements, involves determining the joint configurations that bring a manipulator to a desired position or orientation. Traditional analytical and numerical methods have limitations, especially for redundant manipulators, or incur high computational costs. Recent advances in machine learning, particularly DNNs, have shown promising results and appear well suited to these challenges. This study investigates several DNN architectures, namely Feed-Forward Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), for solving the IK problem, using the TIAGo robotic arm with seven Degrees of Freedom (DOFs). Different training datasets, normalization techniques, and orientation representations are tested, and custom metrics are introduced to evaluate position and orientation errors. The performance of these models is compared, with a focus on curriculum learning to optimize training. The results demonstrate the potential of DNNs to solve the IK problem efficiently while avoiding issues such as singularities, competing with traditional methods in precision and speed.
(This article belongs to the Special Issue Technological Breakthroughs in Automation and Robotics)
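
To ground the idea, a bare-bones MLP for IK regression is sketched below: a 6-D end-effector pose in, 7 joint angles out, trained against forward-kinematics-generated ground truth. Layer widths and the orientation encoding are illustrative, not the configurations tested in the paper.

import torch
import torch.nn as nn

ik_net = nn.Sequential(                   # pose (x, y, z, roll, pitch, yaw) -> 7 joint angles
    nn.Linear(6, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 7),
)
pose = torch.randn(64, 6)                 # batch of target poses
target_joints = torch.randn(64, 7)        # would come from sampling forward kinematics on the arm
loss = nn.functional.mse_loss(ik_net(pose), target_joints)
loss.backward()                           # one supervised training step (optimizer omitted)
print(loss.item())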

22 pages, 5197 KiB  
Article
Electrical Resistivity Tomography Methods and Technical Research for Hydrate-Based Carbon Sequestration
by Zitian Lin, Qia Wang, Shufan Li, Xingru Li, Jiajie Ye, Yidi Zhang, Haoning Ye, Yangmin Kuang and Yanpeng Zheng
J. Mar. Sci. Eng. 2025, 13(7), 1205; https://doi.org/10.3390/jmse13071205 - 21 Jun 2025
Viewed by 273
Abstract
This study focuses on the application of electrical resistivity tomography (ERT) for monitoring the growth of CO₂ hydrate in subsea carbon sequestration, aiming to provide technical support for the safety assessment of marine carbon storage. Single-target, dual-target, and multi-target hydrate samples were designed, and convolutional neural networks (CNNs), recurrent neural networks (RNNs), and residual neural networks (ResNets) were constructed and compared with traditional image reconstruction algorithms (e.g., back-projection) to quantitatively analyze ERT imaging accuracy. The experiments used boundary voltage as the input and the internal conductivity distribution as the output, employing the relative image error (RIE) and the image correlation coefficient (ICC) to evaluate algorithmic performance. The results demonstrate that the neural network algorithms, particularly RNNs, outperform traditional image reconstruction methods thanks to their strong noise resistance and nonlinear mapping capabilities. They significantly improve edge clarity in target identification, enabling precise capture of the hydrate distribution during carbon sequestration, which enhances the monitoring of CO₂ hydrate reservoir characteristics and provides reliable data support for reservoir safety assessment.
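
The two evaluation metrics can be written down directly; the definitions below are the common ones (the paper's exact normalization may differ), and the arrays are stand-ins for true and reconstructed conductivity fields.

import numpy as np

def rie(sigma_true, sigma_rec):
    # Relative image error: ||reconstructed - true|| / ||true||.
    return np.linalg.norm(sigma_rec - sigma_true) / np.linalg.norm(sigma_true)

def icc(sigma_true, sigma_rec):
    # Image correlation coefficient: Pearson correlation of the two conductivity maps.
    return np.corrcoef(sigma_true.ravel(), sigma_rec.ravel())[0, 1]

rng = np.random.default_rng(2)
true = rng.random((16, 16))               # placeholder conductivity distribution
rec = true + 0.1 * rng.normal(size=(16, 16))
print(rie(true, rec), icc(true, rec))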

23 pages, 1784 KiB  
Article
Signal-Specific and Signal-Independent Features for Real-Time Beat-by-Beat ECG Classification with AI for Cardiac Abnormality Detection
by I Hua Tsai and Bashir I. Morshed
Electronics 2025, 14(13), 2509; https://doi.org/10.3390/electronics14132509 - 20 Jun 2025
Viewed by 400
Abstract
ECG monitoring is central to the early detection of cardiac abnormalities. We compared 28 manually selected signal-specific features with 159 automatically extracted signal-independent descriptors from the MIT-BIH Arrhythmia Database. ANOVA reduced the descriptors to the 10 most informative attributes, which were evaluated alone and in combination with the signal-specific features using Random Forest, SVM, and deep neural networks (CNN, RNN, ANN, LSTM) under an inter-patient 80/20 split. Merging the two feature groups delivered the best results: a 128-layer CNN achieved 100% accuracy. Power profiling revealed that deeper models improve accuracy at the cost of runtime, memory, and CPU load, underscoring the trade-off faced in edge deployments. The proposed hybrid feature strategy provides beat-by-beat classification with a reduced feature count, enabling real-time ECG screening on wearable and IoT devices.
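
The ANOVA reduction step maps directly onto scikit-learn's F-test selector; the synthetic data below stands in for the 159 signal-independent descriptors.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Stand-in for 159 automatically extracted descriptors; keep the 10 with the highest ANOVA F-scores.
X, y = make_classification(n_samples=2000, n_features=159, n_informative=12, random_state=0)
selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
X_top10 = selector.transform(X)
print(X_top10.shape)                       # (2000, 10)
print(selector.get_support(indices=True))  # indices of the retained descriptors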

26 pages, 916 KiB  
Review
Integrating Artificial Intelligence in Next-Generation Sequencing: Advances, Challenges, and Future Directions
by Konstantina Athanasopoulou, Vasiliki-Ioanna Michalopoulou, Andreas Scorilas and Panagiotis G. Adamopoulos
Curr. Issues Mol. Biol. 2025, 47(6), 470; https://doi.org/10.3390/cimb47060470 - 19 Jun 2025
Cited by 1 | Viewed by 832
Abstract
The integration of artificial intelligence (AI) into next-generation sequencing (NGS) has revolutionized genomics, offering unprecedented advances in data analysis, accuracy, and scalability. This review explores the synergistic relationship between AI and NGS, highlighting its transformative impact across genomic research and clinical applications. AI-driven tools, including machine learning and deep learning, enhance every aspect of NGS workflows, from experimental design and wet-lab automation to bioinformatics analysis of the generated raw data. Key applications include variant calling, epigenomic profiling, transcriptomics, and single-cell sequencing, where AI models such as CNNs, RNNs, and hybrid architectures outperform traditional methods. In cancer research, AI enables precise tumor subtyping, biomarker discovery, and personalized therapy prediction, while in drug discovery it accelerates target identification and repurposing. Despite these advances, challenges persist, including data heterogeneity, model interpretability, and ethical concerns. This review also discusses the emerging role of AI in third-generation sequencing (TGS), addressing long-read-specific challenges such as fast and accurate basecalling and epigenetic modification detection. Future directions should focus on implementing federated learning to address data privacy, advancing interpretable AI to improve clinical trust, and developing unified frameworks for the seamless integration of multi-modal omics data. By fostering interdisciplinary collaboration, AI promises to unlock new frontiers in precision medicine, making genomic insights more actionable and scalable.
(This article belongs to the Special Issue Technological Advances Around Next-Generation Sequencing Application)

25 pages, 3921 KiB  
Article
Sensor-Driven Real-Time Recognition of Basketball Goal States Using IMU and Deep Learning
by Jiajin Zhang, Rong Guo, Yan Zhu, Yonglin Che, Yucheng Zeng, Lin Yu, Ziqiong Yang and Jianke Yang
Sensors 2025, 25(12), 3709; https://doi.org/10.3390/s25123709 - 13 Jun 2025
Viewed by 603
Abstract
In recent years, advances in artificial intelligence, machine vision, and the Internet of Things have significantly impacted sports analytics, particularly basketball, where accurate measurement and analysis of player performance have become increasingly important. This study proposes a real-time goal-state recognition system based on inertial measurement unit (IMU) sensors, covering four shooting outcomes: rebounds, swishes, other shots, and misses. IMU sensors installed around the basketball net capture real-time acceleration, angular velocity, and angular-change data to analyze the fluency and success rate of shooting execution. Five deep learning models are used to classify shot outcomes: a convolutional neural network (CNN), a recurrent neural network (RNN), long short-term memory (LSTM), CNN-LSTM, and CNN-LSTM-Attention. Experimental results indicate that the CNN-LSTM-Attention model outperformed the others, identifying goal states with an accuracy of 87.79% and demonstrating robust, efficient real-time recognition in complex sports environments. This accuracy supports the system's use in skill analysis and performance evaluation and lays a solid foundation for intelligent basketball training equipment, providing an efficient and practical solution for athletes and coaches.
(This article belongs to the Special Issue Sensor Technologies in Sports and Exercise)
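
A compact sketch of the best-performing architecture named above, assuming 9 IMU channels (3-axis acceleration, angular velocity, and angle) and fixed-length windows; all layer sizes are illustrative.

import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    # Classify IMU windows into the four goal states (rebound, swish, other shot, miss).
    def __init__(self, n_channels=9, n_classes=4, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(n_channels, 32, 5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):                         # x: (batch, channels, timesteps)
        h, _ = self.lstm(self.cnn(x).transpose(1, 2))
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time steps
        return self.head((w * h).sum(dim=1))      # attention-weighted temporal pooling

model = CNNLSTMAttention()
print(model(torch.randn(16, 9, 200)).shape)       # torch.Size([16, 4])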