

Advances in Machine Learning and Data Mining: Emerging Trends and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2025 | Viewed by 15393

Special Issue Editor


Dr. Donghai Guan
Guest Editor
College of Computer Science & Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Interests: data mining; social network analysis; multimodal learning; graph data analysis; time series analysis

Special Issue Information

Dear Colleagues,

Machine learning (ML) and data mining (DM) have significantly transformed various sectors by providing sophisticated techniques to analyze and extract valuable insights from vast datasets. As these technologies evolve, they offer new opportunities and challenges across diverse applications. The continued advancement in ML and DM is driven by emerging trends, innovative methodologies, and the need to address complex real-world problems. This Special Issue aims to explore these cutting-edge trends and their applications across various domains, reflecting these technologies' rapid progress and interdisciplinary impact.

This Special Issue's scope encompasses foundational advancements and practical applications of machine learning and data mining. We seek contributions highlighting novel approaches, theoretical advancements, and practical implementations. Topics of interest include but are not limited to emerging algorithms and models, applications across domains, interdisciplinary integration, data management and processing, and ethics and fairness.

Dr. Donghai Guan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • data mining
  • deep learning
  • predictive analytics
  • big data
  • natural language processing
  • cybersecurity
  • ethical AI
  • interdisciplinary applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

23 pages, 3517 KiB  
Article
The Optimal Design of an Inclined Porous Plate Wave Absorber Using an Artificial Neural Network Model
by Senthil Kumar Natarajan, Seokkyu Cho and Il-Hyoung Cho
Appl. Sci. 2025, 15(9), 4895; https://doi.org/10.3390/app15094895 - 28 Apr 2025
Viewed by 92
Abstract
This study seeks to optimize the shape of a wave absorber with an inclined porous plate using an artificial neural network (ANN) model to improve the operating efficiency and experimental accuracy of a square wave basin. As our numerical tool, we employed the dual boundary element method (DBEM) to avoid the rank deficiency problem occurring at the degenerate plate boundary with zero thickness. A quadratic velocity model incorporating a CFD-based drag coefficient was employed to account for energy dissipation across the porous plate. The developed DBEM tool was validated through comparisons with self-conducted experiments in a two-dimensional wave flume. The input features such as the inclined angle and plate length affect the performance of the wave absorber. These features have been optimized to minimize the averaged reflection coefficient and the installation space (spatial footprint) with the application of a trained ANN model. The dataset used for training the ANN model was created using the DBEM model. The trained model was subsequently utilized to predict the averaged reflection coefficient using a larger dataset, aiding in the determination of the optimal wave absorber design. In the optimization process of minimizing both reflected waves and spatial footprint, the weighting factors are assigned according to their relative importance to each other, using the weighted sum model (WSM) within the multi-criteria decision-making framework. It was found that the optimal design parameters of the non-dimensional plate length (l/h) and inclined angle (θ) are 1.46 and 5.34° when performing with a weighting factor ratio (80%: 20%) between reflection and spatial footprint. Full article
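
The weighted sum model (WSM) step described in the abstract can be sketched in a few lines: each candidate design gets a single score that trades off the averaged reflection coefficient against the spatial footprint. The candidate designs and criterion values below are hypothetical illustrations, not the study's data; only the 80%:20% weighting ratio comes from the abstract.

```python
def wsm_score(criteria, weights):
    """Lower-is-better weighted sum over normalized criteria."""
    return sum(w * c for w, c in zip(weights, criteria))

# hypothetical candidate designs: (averaged reflection coefficient,
# spatial footprint), both normalized to [0, 1], lower is better
candidates = {
    "A": (0.20, 0.80),
    "B": (0.35, 0.40),
    "C": (0.50, 0.10),
}
weights = (0.8, 0.2)  # the 80%:20% reflection-to-footprint ratio from the study

best = min(candidates, key=lambda k: wsm_score(candidates[k], weights))
print(best)  # the candidate with the smallest weighted score
```

With a heavier weight on reflection, the design that suppresses reflected waves wins even at the cost of a larger footprint, which mirrors the trade-off the authors resolve.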

14 pages, 4140 KiB  
Article
Integrating AI-Driven Predictive Analytics in Wearable IoT for Real-Time Health Monitoring in Smart Healthcare Systems
by Siriwan Kajornkasirat, Chanathip Sawangwong, Kritsada Puangsuwan, Napat Chanapai, Weerapat Phutthamongkhon and Supattra Puttinaovarat
Appl. Sci. 2025, 15(8), 4400; https://doi.org/10.3390/app15084400 - 16 Apr 2025
Viewed by 537
Abstract
The spread of infectious diseases, such as COVID-19, presents a significant problem for public health and healthcare systems. Digital technology plays an important role in achieving access to healthcare by enhancing device connectivity and information sharing. This study aimed to develop, implement, and demonstrate a tracking and surveillance system to enhance monitoring for emerging infectious diseases, focusing on COVID-19 patient profiling. The system integrates IoT-based wearable devices, an artificial intelligence (AI) camera for real-time monitoring, and a MySQL database for data management. The program uses Charts.js for data visualization and Longdo Map API for mapping, leveraging Jetson Nano boards, webcams, and Python (Version 3.9). We employed a classification technique to categorize patients into two groups: those with a positive mood and those with a negative mood. For comparing accuracies, we utilized three types of models: multilayer perceptron (MLP), support vector machine (SVM), and random forest. Model validation and evaluation were conducted using Python programming. The results of this study fall into three parts. The first part involved testing the monitoring and surveillance system. It was found that the system could receive information from the wearable device, display the received data in graph form, and notify the medical staff when examining symptoms to consider whether the patient should be taken to the hospital. The second part focused on testing the device, and it was found that it could measure body temperature, heart rate, and blood oxygen levels (SpO2) and send those data to the database. The third part involved an AI camera test, and it was found that the most suitable algorithm to analyze the patient’s facial expressions was Random Forest. The results show that the system supports hospitals in managing COVID-19 and similar diseases by enabling timely interventions through facial expression analysis. Full article
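
The model-comparison step described above (MLP vs. SVM vs. random forest on the same features) can be sketched with scikit-learn, assumed available here. The synthetic "vital-sign" features and the toy mood-label rule are illustrative only; the study used real wearable-sensor data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))            # e.g., temperature, heart rate, SpO2
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy positive/negative mood label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
}
# fit each model and record held-out accuracy for comparison
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
print(scores)
```

Ranking the held-out scores is the same selection logic by which the authors identified random forest as the most suitable algorithm for their facial-expression data.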

13 pages, 810 KiB  
Article
In Silico Methods for Assessing Cancer Immunogenicity—A Comparison Between Peptide and Protein Models
by Stanislav Sotirov and Ivan Dimitrov
Appl. Sci. 2025, 15(8), 4123; https://doi.org/10.3390/app15084123 - 9 Apr 2025
Viewed by 236
Abstract
Identifying and characterizing putative tumor antigens is essential to cancer vaccine development. Given the impracticality of isolating and evaluating each potential antigen individually, in silico prediction algorithms, especially those employing machine learning (ML) techniques, are indispensable. These algorithms substantially decrease the experimental workload required for discovering viable vaccine candidates, thereby accelerating the development process and enhancing the efficiency of identifying promising immunogenic targets. In this study, we employed six supervised ML methods on a dataset containing 546 experimentally validated immunogenic human tumor proteins and 548 non-immunogenic human proteins to develop models for immunogenicity prediction. These models included k-nearest neighbor (kNN), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). After validation through internal cross-validation and an external test set, the best-performing models (QDA, RF, and XGBoost) were selected for further evaluation. A comparison between the chosen protein models and our previously developed peptide models for tumor immunogenicity prediction revealed that the peptide models slightly outperformed the protein models. However, since both proteins and peptides can be subject to tumor immunogenicity assessment, evaluating each with the respective models is prudent. The three selected protein models are set to be integrated into the new version of the VaxiJen server. Full article

17 pages, 2144 KiB  
Article
Comparative Evaluation and Optimization of Neural Networks for Epileptic Magnetoencephalogram Classification
by Andreas Stylianou, Athanasia Kotini, Aikaterini Terzoudi and Adam Adamopoulos
Appl. Sci. 2025, 15(7), 3593; https://doi.org/10.3390/app15073593 - 25 Mar 2025
Viewed by 180
Abstract
The primary objective of this study is to evaluate and compare the classification performance of feed-forward neural networks (FFNNs) and one-dimensional convolutional neural networks (1D-CNNs) on magnetoencephalography (MEG) signals from epileptic patients. MEG signals were recorded using the NEUROMAG-122 whole-brain superconducting quantum interference device (SQUID), installed, and operated in our laboratory. The dataset comprised over 5000 MEG segments, each one with a duration of 1 s and sampled at a frequency of 256 Hz. Each segment was classified by expert neurologists as either epileptic or non-epileptic. The FFNN with five hidden layers demonstrated promising results, achieving a classification accuracy of approximately 92%. The 1D-CNN, utilizing four layers, achieved an accuracy of 90.4%, with a significantly reduced training time. Building on these findings, the study’s secondary objective was to enhance the artificial neural network (ANN) model by incorporating transfer learning–stacked generalization for FFNN in various configurations. These enhancements successfully improved the performance of the pretrained network by approximately 1%. Full article

21 pages, 4194 KiB  
Article
Predicting Olive Tree Chlorophyll Fluorescence Using Explainable AI with Sentinel-2 Imagery in Mediterranean Environment
by Leonardo Costanza, Beatriz Lorente, Francisco Pedrero Salcedo, Francesco Pasanisi, Vincenzo Giannico, Francesca Ardito, Carlota María Martí Martínez and Simone Pietro Garofalo
Appl. Sci. 2025, 15(5), 2746; https://doi.org/10.3390/app15052746 - 4 Mar 2025
Viewed by 760
Abstract
Chlorophyll fluorescence is a useful indicator of a plant’s physiological status, particularly under stress conditions. Remote sensing is an increasingly adopted technology in modern agriculture, allowing the acquisition of crop information (e.g., chlorophyll fluorescence) without direct contact, reducing fieldwork. The objective of this study is to improve the monitoring of olive tree fluorescence (Fv′/Fm′) via remote sensing in a Mediterranean environment, where the frequency of stress factors, such as drought, is increasing. An advanced approach combining explainable artificial intelligence and multispectral Sentinel-2 satellite data was developed to predict olive tree fluorescence. Field measurements were conducted in southeastern Italy on two olive groves: one irrigated and the other under rainfed conditions. Sentinel-2 reflectance bands and vegetation indices were used as predictors, and different machine learning algorithms were tested and compared. Random Forest showed the highest predictive accuracy, particularly when Sentinel-2 reflectance bands were used as predictors. Using spectral bands preserves more information per observation, enabling models to detect variations that vegetation indices might miss; raw reflectance data also minimize the potential bias that could arise from selecting specific indices. SHapley Additive exPlanations (SHAP) analysis was performed to explain the model, identifying key spectral regions associated with Fv′/Fm′, such as the red-edge and NIR. The results highlight the potential of integrating remote sensing and machine learning to improve olive grove management, providing a useful tool for early stress detection and targeted interventions. Full article

22 pages, 3475 KiB  
Article
Uncertainty-Aware Adaptive Multiscale U-Net for Low-Contrast Cardiac Image Segmentation
by A. S. M. Sharifuzzaman Sagar, Muhammad Zubair Islam, Jawad Tanveer and Hyung Seok Kim
Appl. Sci. 2025, 15(4), 2222; https://doi.org/10.3390/app15042222 - 19 Feb 2025
Viewed by 579
Abstract
Medical image analysis is critical for diagnosing and planning treatments, particularly in addressing heart disease, a leading cause of mortality worldwide. Precise segmentation of the left atrium, a key structure in cardiac imaging, is essential for detecting conditions such as atrial fibrillation, heart failure, and stroke. However, its complex anatomy, subtle boundaries, and inter-patient variations make accurate segmentation challenging for traditional methods. Recent advancements in deep learning, especially semantic segmentation, have shown promise in addressing these limitations by enabling detailed, pixel-wise classification. This study proposes a novel segmentation framework Adaptive Multiscale U-Net (AMU-Net) combining Convolutional Neural Networks (CNNs) and transformer-based encoder–decoder architectures. The framework introduces a Contextual Dynamic Encoder (CDE) for extracting multi-scale features and capturing long-range dependencies. An Adaptive Feature Decoder Block (AFDB), leveraging an Adaptive Feature Attention Block (AFAB) improves boundary delineation. Additionally, a Spectral Synthesis Fusion Head (SFFH) synthesizes spectral and spatial features, enhancing segmentation performance in low-contrast regions. To ensure robustness, data augmentation techniques such as rotation, scaling, and flipping are applied. Laplacian approximation is employed for uncertainty estimation, enabling interpretability and identifying regions of low confidence. Our proposed model achieves a Dice score of 93.35, a Precision of 94.12, and a Recall of 92.78, outperforming existing methods. Full article

19 pages, 10296 KiB  
Article
Extended Maximum Actor–Critic Framework Based on Policy Gradient Reinforcement for System Optimization
by Jung-Hyun Kim, Yong-Hoon Choi, You-Rak Choi, Jae-Hyeok Jeong and Min-Suk Kim
Appl. Sci. 2025, 15(4), 1828; https://doi.org/10.3390/app15041828 - 11 Feb 2025
Viewed by 531
Abstract
Recently, significant research efforts have been directed toward leveraging Artificial Intelligence for sensor data processing and system control. In particular, it is essential to determine the optimal path and trajectory by calculating sensor data for effective control systems. For instance, model-predictive control based on Proportional-Integral-Derivative models is intuitive, efficient, and provides outstanding control performance. However, challenges in tracking persist, which require active research and development to integrate and optimize the control system in terms of Machine Learning. Specifically, Reinforcement Learning, a branch of Machine Learning, has been used in several research fields to solve optimal control problems. In this paper, we propose an Extended Maximum Actor–Critic, a Reinforcement Learning-based method that combines the advantages of both value and policy to enhance the learning stability of actor–critic methods for the optimization of system control. The proposed method integrates the actor and the maximized actor in the learning process to evaluate and identify actions with the highest value, facilitating effective learning exploration. Additionally, to enhance the efficiency and robustness of the agent learning process, we propose Prioritized Hindsight Experience Replay, combining the advantages of Prioritized Experience Replay (PER) and Hindsight Experience Replay. To verify this, we performed evaluations and experiments examining the improved training stability in the MuJoCo environment, a Reinforcement Learning simulator. The proposed Prioritized Hindsight Experience Replay method significantly improves learning compared with the standard replay buffer and PER in experimental simulators such as the simple HalfCheetah-v4 and the complex Ant-v4. It also achieves a higher success rate than PER in FetchReach-v2, demonstrating the effectiveness of our proposed method in more complex reward environments. Full article
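
The prioritized-sampling idea that Prioritized Hindsight Experience Replay builds on can be sketched as follows: transitions with larger TD error are replayed more often. This is a minimal illustration only; hindsight goal relabeling is omitted, and the transitions and TD errors below are invented.

```python
import random

class PrioritizedReplayBuffer:
    """Replay buffer that samples transitions in proportion to |TD error|."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error, eps=1e-3):
        if len(self.buffer) >= self.capacity:  # evict the oldest entry
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + eps)  # eps keeps every item sampleable

    def sample(self, k):
        # sample with replacement, proportionally to priority
        return random.choices(self.buffer, weights=self.priorities, k=k)

random.seed(0)
buf = PrioritizedReplayBuffer(capacity=100)
buf.add(("s0", "a0", 0.0, "s1"), td_error=0.1)
buf.add(("s1", "a1", 1.0, "s2"), td_error=5.0)  # high-error transition
batch = buf.sample(10)
```

Because sampling weight follows TD-error magnitude, the rare informative transition dominates the batch, which is what makes prioritized replay more sample-efficient than a uniform buffer in sparse-reward tasks.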

10 pages, 2196 KiB  
Article
Deep Learning and Automatic Detection of Pleomorphic Esophageal Lesions—A Necessary Step for Minimally Invasive Panendoscopy
by Miguel Martins, Miguel Mascarenhas, Maria João Almeida, João Afonso, Tiago Ribeiro, Pedro Cardoso, Francisco Mendes, Joana Mota, Patrícia Andrade, Hélder Cardoso, Miguel Mascarenhas-Saraiva, João Ferreira and Guilherme Macedo
Appl. Sci. 2025, 15(2), 709; https://doi.org/10.3390/app15020709 - 13 Jan 2025
Viewed by 676
Abstract
Background: Capsule endoscopy (CE) improved the digestive tract assessment; yet, its reading burden is substantial. Deep-learning (DL) algorithms were developed for the detection of enteric and gastric lesions. Nonetheless, their application in the esophagus lacks evidence. The study aim was to develop a DL model for esophageal pleomorphic lesion (PL) detection. Methods: A bicentric retrospective study was conducted using 598 CE exams. Three different CE devices provided 7982 esophageal frames, including 2942 PL lesions. The data were divided into the training/validation and test groups, in a patient-split design. Three runs were conducted, each with unique patient sets. The sensitivity, specificity, accuracy, positive and negative predictive value (PPV and NPV), area under the conventional receiver operating characteristic curve (AUC-ROC), and precision–recall curve (AUC-PR) were calculated per run. The model’s diagnostic performance was assessed using the median and range values. Results: The median sensitivity, specificity, PPV, and NPV were 75.8% (63.6–82.1%), 95.8% (93.7–97.9%), 71.9% (50.0–90.1%), and 96.4% (94.2–97.6%), respectively. The median accuracy was 93.5% (91.8–93.8%). The median AUC-ROC and AUC-PR were 0.82 and 0.93. Conclusions: This study focused on the automatic detection of pleomorphic esophageal lesions, potentially enhancing the diagnostic yield of this type of lesion, compared to conventional methods. Specific esophageal DL algorithms may provide a significant contribution and bridge the gap for the implementation of minimally invasive CE-enhanced panendoscopy. Full article
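
The per-run diagnostic metrics reported above all follow directly from a binary confusion matrix. The counts below are invented for illustration, not the study's results.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# hypothetical frame-level counts for one run of a lesion detector
m = diagnostic_metrics(tp=75, fp=25, tn=575, fn=25)
print(m)
```

Note how, with far more lesion-free frames than lesion frames, accuracy and NPV can be high while PPV stays moderate; this class imbalance is why the abstract also reports the precision–recall AUC alongside the ROC AUC.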

26 pages, 3079 KiB  
Article
Analyzing Student Behavioral Patterns in MOOCs Using Hidden Markov Models in Distance Education
by Vassilios S. Verykios, Nikolaos S. Alachiotis, Evgenia Paxinou and Georgios Feretzakis
Appl. Sci. 2024, 14(24), 12067; https://doi.org/10.3390/app142412067 - 23 Dec 2024
Viewed by 869
Abstract
The log files of Massive Open Online Courses (MOOCs) reveal useful information that can help interpret student behavior. In this study, we focus on student performance based on their access to course resources and the grades they achieve. We define states as the Moodle resources and quiz grades for each student ID, considering participation in resources such as wikis and forums. We use efficient Hidden Markov Models to interpret the abundance of information provided in the Moodle log files. The transitions among certain resources for each student or groups of students are determined as behaviors. Other studies employ Machine Learning and Pattern Classification algorithms to recognize these behaviors. As an example, we visualize these transitions for individual learners. Additionally, we have created row and column charts to present our findings in a comprehensible manner. For implementing the proposed methodology, we use the R programming language. The dataset that we use was obtained from Kaggle and pertains to a MOOC of 4037 students. Full article
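
In the simplest case, the resource-transition behaviours described above reduce to an empirical Markov transition matrix estimated from a student's ordered event log. The event sequence below is invented for illustration; the study itself fits full Hidden Markov Models in R.

```python
from collections import Counter, defaultdict

# hypothetical ordered Moodle events for one student ID
events = ["forum", "wiki", "quiz", "forum", "quiz", "quiz", "wiki", "quiz"]

# count observed transitions between consecutive resources
counts = defaultdict(Counter)
for src, dst in zip(events, events[1:]):
    counts[src][dst] += 1

# normalize each row into transition probabilities
transition = {
    src: {dst: n / sum(row.values()) for dst, n in row.items()}
    for src, row in counts.items()
}
print(transition["quiz"])
```

Each row of `transition` sums to one, and comparing these rows across students (or groups of students) is what lets transition patterns be read as behaviours.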

47 pages, 6139 KiB  
Article
Consistent Vertical Federated Deep Learning Using Task-Driven Features to Construct Integrated IoT Services
by Soyeon Oh and Minsoo Lee
Appl. Sci. 2024, 14(24), 11977; https://doi.org/10.3390/app142411977 - 20 Dec 2024
Viewed by 824
Abstract
By training a multivariate deep learning model distributed across existing IoT services using vertical federated learning, expanded services could be constructed cost-effectively while preserving the independent data architecture of each service. Previously, we proposed a design approach for vertical federated learning considering IoT domain characteristics. Also, our previous method, designed leveraging our approach, achieved improved performance, especially in IoT domains, compared to other representative vertical federated learning mechanisms. However, our previous method was difficult to apply in real-world scenarios because its mechanism consisted of several options. In this paper, we propose a new vertical federated learning method, TT-VFDL-ST (Task-driven Transferred Vertical Federated Deep Learning using Self-Transfer partial training), a consistent single mechanism even in various real-world scenarios. The proposed method is also designed based on our previous design approach. However, the difference is that it leverages a newly proposed self-transfer partial training mechanism. The self-transfer partial training mechanism improved the MSE and accuracy of TT-VFDL-ST by 0.00262 and 12.08% on average compared to existing mechanisms. In addition, MSE and accuracy improved by up to 0.00290 and 5.08% compared to various options of our previous method. By applying the self-transfer partial training mechanism, TT-VFDL-ST could be used as a key solution to construct real-world-integrated IoT services. Full article

29 pages, 1921 KiB  
Article
Large Language Models and the Elliott Wave Principle: A Multi-Agent Deep Learning Approach to Big Data Analysis in Financial Markets
by Michał Wawer, Jarosław A. Chudziak and Ewa Niewiadomska-Szynkiewicz
Appl. Sci. 2024, 14(24), 11897; https://doi.org/10.3390/app142411897 - 19 Dec 2024
Cited by 3 | Viewed by 3515
Abstract
Traditional technical analysis methods face limitations in accurately predicting trends in today’s complex financial markets. Meanwhile, existing AI-driven approaches, while powerful in processing large datasets, often lack interpretability due to their black-box nature. This paper presents ElliottAgents, a multi-agent system that combines the Elliott wave principle with LLMs, showcasing the application of deep reinforcement learning (DRL) and natural language processing (NLP) in financial analysis. By integrating retrieval-augmented generation (RAG) and deep reinforcement learning (DRL), the system processes vast amounts of market data to identify Elliott wave patterns and generate actionable insights. The system employs a coordinated team of specialized agents, each responsible for specific aspects of analysis, from pattern recognition to investment strategy formulation. We tested ElliottAgents on both stock and cryptocurrency markets, evaluating its effectiveness in pattern identification and trend prediction across different time scales. Our experimental results demonstrate improvements in prediction accuracy when combining classical technical analysis with AI-driven approaches, particularly when enhanced by DRL-based backtesting process. This research contributes to the advancement of financial technology by introducing a scalable, interpretable framework that enhances market analysis capabilities, offering a promising new methodology for both practitioners and researchers. Full article

13 pages, 1062 KiB  
Article
Real-Time Computing Strategies for Automatic Detection of EEG Seizures in ICU
by Laura López-Viñas, Jose L. Ayala and Francisco Javier Pardo Moreno
Appl. Sci. 2024, 14(24), 11616; https://doi.org/10.3390/app142411616 - 12 Dec 2024
Viewed by 4280
Abstract
The development of interfaces for diagnosing seizures, which are often challenging to detect visually, is on the rise. However, their effectiveness is constrained by the need for diverse and extensive databases. This study aimed to create a seizure detection methodology that incorporates detailed information from each EEG channel and accounts for frequency band variations linked to the primary brain pathology leading to ICU admission, enhancing our ability to identify epilepsy onset. This study involved 460 video-electroencephalography recordings from 71 patients under monitoring. We applied signal preprocessing and conducted a numerical quantitative analysis in the frequency domain. Various machine learning algorithms were assessed for their efficacy. The k-nearest neighbours (KNN) model was the most effective in our overall sample, achieving an average F1 score of 0.76. For specific subgroups, different models showed superior performance: Decision Tree for ‘Epilepsy’ (average F1 score of 0.80) and ‘Craniencephalic Trauma’ (average F1 score of 0.84), and Random Forest for ‘Cardiorespiratory Arrest’ (average F1 score of 0.89) and ‘Brain Haemorrhage’ (average F1 score of 0.84). In the categorisation of seizure types, Linear Discriminant Analysis was most effective for focal seizures (average F1 score of 0.87), KNN for generalised (average F1 score of 0.84) and convulsive seizures (average F1 score of 0.88), and logistic regression for non-convulsive seizures (average F1 score of 0.83). Our study demonstrates the potential of using classifier models based on quantified EEG data for diagnosing seizures in ICU patients. The performance of these models varies significantly depending on the underlying cause of the seizure, highlighting the importance of tailored approaches. The automation of these diagnostic tools could facilitate early seizure detection. Full article
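
The frequency-domain quantification step described above can be sketched with NumPy: per-band spectral power of a short EEG segment. The 256 Hz sampling rate, the synthetic 10 Hz tone, and the band edges are assumptions for illustration, not details taken from the study.

```python
import numpy as np

fs = 256                                 # Hz, assumed sampling rate
t = np.arange(fs) / fs                   # one 1 s segment
signal = np.sin(2 * np.pi * 10 * t)      # synthetic 10 Hz (alpha-band) tone

freqs = np.fft.rfftfreq(fs, d=1 / fs)    # 0..128 Hz in 1 Hz bins
power = np.abs(np.fft.rfft(signal)) ** 2

# conventional EEG frequency bands (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {
    name: power[(freqs >= lo) & (freqs < hi)].sum()
    for name, (lo, hi) in bands.items()
}
dominant = max(band_power, key=band_power.get)
print(dominant)
```

Band-power vectors like `band_power`, computed per channel, are the kind of quantified features that classifiers such as KNN or Random Forest can then be trained on.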

15 pages, 2928 KiB  
Article
A Multi-Objective Optimization Framework for Peer-to-Peer Energy Trading in South Korea’s Tiered Pricing System
by Laura Kharatovi, Rahma Gantassi, Zaki Masood and Yonghoon Choi
Appl. Sci. 2024, 14(23), 11071; https://doi.org/10.3390/app142311071 - 28 Nov 2024
Viewed by 760
Abstract
This study proposes a multi-objective optimization framework for peer-to-peer (P2P) energy trading in South Korea’s tiered electricity pricing system. The framework employs the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) to optimize three conflicting objectives: minimizing consumer costs, maximizing prosumer benefits, and enhancing energy utilization. Using real microgrid data from a South Korean community, the framework’s performance is validated through simulations. The results highlight that MOEA/D achieved an optimal cost of KRW 32,205.0, a benefit of KRW 32,205.0, and an energy utilization rate of 57.46%, outperforming the widely used NSGA-II algorithm. Pareto front analysis demonstrates MOEA/D’s ability to generate diverse and balanced solutions, making it well suited for regulated energy markets. These findings underline the framework’s potential to improve energy efficiency, lower costs, and foster sustainable energy trading practices. This research offers valuable insights for advancing decentralized energy systems in South Korea and similar environments. Full article
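
The decomposition idea behind MOEA/D mentioned above can be sketched as follows: the multi-objective problem is split into scalar subproblems, each defined by a weight vector over the objectives, and each subproblem keeps its best candidate. The objective values and weight vectors below are illustrative, not the study's data, and the full algorithm also evolves candidates via neighbourhood-based variation, which is omitted here.

```python
import numpy as np

# candidate solutions: columns = (consumer cost, negated prosumer benefit,
# negated energy utilization), all normalized so that lower is better
solutions = np.array([
    [0.9, 0.2, 0.3],
    [0.4, 0.5, 0.4],
    [0.2, 0.8, 0.6],
])

# weight vectors defining the scalar subproblems
weights = np.array([
    [0.8, 0.1, 0.1],
    [1 / 3, 1 / 3, 1 / 3],
    [0.1, 0.1, 0.8],
])

# weighted-sum scalarization: scores[i, j] = subproblem i applied to solution j
scores = weights @ solutions.T
best = scores.argmin(axis=1)  # index of the best solution per subproblem
print(best)
```

Spreading the weight vectors across the simplex is what yields the diverse, balanced set of trade-off solutions the Pareto front analysis reports.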