Search Results (82)

Search Parameters:
Keywords = Naive Bayesian Classifier

19 pages, 1816 KB  
Article
Research on Synchronous Transfer Control Technology for Distribution Network Load Based on Imprecise Probability
by Hua Zhang, Cheng Long, Xueneng Su, Yiwen Gao and Wei Luo
Mathematics 2025, 13(20), 3299; https://doi.org/10.3390/math13203299 - 16 Oct 2025
Viewed by 426
Abstract
As the penetration rate of distributed power sources increases and distribution network structures grow increasingly complex, the uncertainty in switch action control during load transfer has become a critical issue affecting grid safety and reliability. Traditional control methods based on precise-probability predictive control are susceptible to bias introduced by prior settings under small-sample conditions, making it difficult to meet the stringent requirements of time-synchronized control. To address this, the study proposes an imprecise probability-based synchronous load transfer control method for distribution networks. By integrating the Imprecise Dirichlet Model (IDM) with a Naive Credal Classifier (NCC), it constructs an interval predictive control model for switching action timing. This approach effectively mitigates the prior-dependency issue and enhances estimation robustness under small-sample conditions. Combined with a dynamic delay strategy, it keeps the interval between disconnection and reconnection actions within 20 ms, preventing circulating-current risks and ensuring transfer reliability. The simulation and experimental results demonstrate that the proposed method outperforms traditional Bayesian classifiers in both time-prediction control accuracy and model robustness, providing a theoretical foundation and an engineering reference for secure action control in distribution networks. Full article
(This article belongs to the Special Issue Complex Process Modeling and Control Based on AI Technology)
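A minimal sketch of the interval estimation idea described above: the Imprecise Dirichlet Model yields a lower and an upper probability for an event from small-sample counts, and an interval-dominance rule commits to a decision only when the intervals do not overlap. The event definition, counts, and prior strength s = 1 below are illustrative assumptions, not the authors' formulation.

```python
# Hedged sketch: IDM event-probability bounds plus an interval-dominance check,
# in the spirit of a credal (NCC-style) decision under small samples.
def idm_event_interval(n_event, n_total, s=1.0):
    """IDM lower/upper probability of an event after n_total observations."""
    return n_event / (n_total + s), (n_event + s) / (n_total + s)

# Toy counts: how often each candidate switching scheme completed the
# disconnection-reconnection cycle within 20 ms in a small trial set.
lo_a, up_a = idm_event_interval(n_event=19, n_total=20)
lo_b, up_b = idm_event_interval(n_event=15, n_total=20)
print(f"scheme A: [{lo_a:.3f}, {up_a:.3f}]  scheme B: [{lo_b:.3f}, {up_b:.3f}]")

# Interval dominance: commit to a scheme only if its lower bound exceeds the
# other's upper bound; otherwise the data are too scarce to decide and the
# dynamic delay strategy would take over.
if lo_a > up_b:
    print("Scheme A dominates.")
elif lo_b > up_a:
    print("Scheme B dominates.")
else:
    print("Indeterminate under the credal set; defer to the delay strategy.")
```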

16 pages, 2952 KB  
Article
Influence of Florfenicol Treatments on Marine-Sediment Microbiomes: A Metagenomic Study of Bacterial Communities in Proximity to Salmon Aquaculture in Southern Chile
by Sergio Lynch, Pamela Thomson, Rodrigo Santibañez and Ruben Avendaño-Herrera
Antibiotics 2025, 14(10), 1016; https://doi.org/10.3390/antibiotics14101016 - 13 Oct 2025
Viewed by 1571
Abstract
Background/Objectives: Metagenomic analyses are an important tool for understanding ecological effects, particularly in sites exposed to antimicrobial treatments. Marine sediments host diverse microbial communities and may serve as reservoirs for microbial resistance. Although it is known that antimicrobials can alter microbial composition, specific impacts on sediments surrounding salmon farms remain poorly understood. This study analyzed bacterial community structure in marine sediments subjected to florfenicol treatment from salmon farms in the Los Lagos Region of southern Chile. Methods: Sediment samples were collected and examined through DNA extraction and PCR amplification of the 16S rRNA gene (V3–V4 region). Sequences were analyzed using a bioinformatics pipeline, and amplicon sequence variants (ASVs) were taxonomically classified with a Naïve Bayesian classifier. The resulting ASV abundances were then used to predict metabolic functions and pathways via PICRUSt2, referencing the MetaCyc database. Results: Significant differences in bacterial phyla were observed between the control farm and two farms treated with florfenicol (17 mg kg⁻¹ body weight per day) for 33 and 20 days, respectively. Farm 1 showed notable differences in phyla such as Bacteroidota, Bdellovibrionota, Crenarchaeota, Deferrisomatota, Desulfobacterota, Fibrobacterota, Firmicutes, and Fusobacteriota, while Farm 2 exhibited differences in the phyla Bdellovibrionota, Calditrichota, Crenarchaeota, Deferrisomatota, Desulfobacterota, Fusobacteriota, Nanoarchaeota, and Nitrospirota. Shannon Index analysis revealed a reduction in alpha diversity in the treated farms. Comparative analysis between the control and the treated farms showed pronounced shifts in the relative abundance of several bacterial phyla, including statistically significant differences in Chloroflexi and Firmicutes. Predicted functional pathways revealed a notable enrichment of L-methionine biosynthesis III in Farm 2, suggesting a shift in sulfur metabolism potentially driven by antimicrobial treatment. Additionally, increased activity in fatty acid oxidation pathways indicates a higher microbial potential for lipid degradation at this site. Conclusions: These findings highlight the considerable influence of florfenicol on sediment microbial communities and reinforce the need for sustainable management strategies to minimize ecological disruption and the spread of antimicrobial resistance. Full article
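For context, the Naïve Bayesian taxonomic classification step mentioned in the Methods is typically a k-mer multinomial Naive Bayes model over 16S reads. The sketch below is a heavily simplified, generic stand-in (toy sequences, k = 8, scikit-learn), not the authors' pipeline.

```python
# Hedged sketch: k-mer multinomial Naive Bayes taxonomic assignment for short
# 16S-like sequences. Sequences, labels, and k = 8 are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_seqs = ["ACGTACGTGGCCAATT" * 4, "TTGGCCAACCGGTTAA" * 4, "ACGTACGTGGCCAATA" * 4]
train_taxa = ["Bacteroidota", "Firmicutes", "Bacteroidota"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(8, 8), lowercase=False),  # 8-mer counts
    MultinomialNB(alpha=1.0),  # Laplace smoothing over unseen k-mers
)
clf.fit(train_seqs, train_taxa)

query_asv = "ACGTACGTGGCCAATT" * 4
print(clf.predict([query_asv]), clf.predict_proba([query_asv]))
```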

24 pages, 1966 KB  
Article
A Hybrid Bayesian Machine Learning Framework for Simultaneous Job Title Classification and Salary Estimation
by Wail Zita, Sami Abou El Faouz, Mohanad Alayedi and Ebrahim E. Elsayed
Symmetry 2025, 17(8), 1261; https://doi.org/10.3390/sym17081261 - 7 Aug 2025
Cited by 1 | Viewed by 1730
Abstract
In today’s fast-paced and evolving job market, salary continues to play a critical role in career decision-making. The ability to accurately classify job titles and predict corresponding salary ranges is increasingly vital for organizations seeking to attract and retain top talent. This paper proposes a novel approach, the Hybrid Bayesian Model (HBM), which combines Bayesian classification with advanced regression techniques to jointly address job title identification and salary prediction. HBM is designed to capture the inherent complexity and variability of real-world job market data. The model was evaluated against established machine learning (ML) algorithms, including Random Forests (RF), Support Vector Machines (SVM), Decision Trees (DT), and multinomial naïve Bayes classifiers. Experimental results show that HBM outperforms these benchmarks, achieving 99.80% accuracy, 99.85% precision, 100% recall, and an F1 score of 98.8%. These findings highlight the potential of hybrid ML frameworks to improve labor market analytics and support data-driven decision-making in global recruitment strategies. Consequently, the suggested HBM algorithm provides high accuracy and handles the dual tasks of job title classification and salary estimation in a symmetric way. It does this by learning from class structures and mirrored decision boundaries in feature space. Full article
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)
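The dual task itself can be illustrated with a simple two-headed baseline: a multinomial Naive Bayes title classifier and a separate salary regressor over the same text features. This is a generic sketch, not the authors' HBM; the toy job ads, TF-IDF features, and Ridge regressor are assumptions.

```python
# Hedged sketch: joint job-title classification and salary regression over
# shared TF-IDF features. Data and model choices are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import Ridge

ads = [
    "python machine learning pipelines data modelling",
    "payroll ledgers reconciliation financial reporting",
    "deep learning research statistics experimentation",
    "accounts payable invoices bookkeeping audits",
]
titles = ["Data Scientist", "Accountant", "Data Scientist", "Accountant"]
salaries = [95_000, 62_000, 101_000, 60_000]

vec = TfidfVectorizer()
X = vec.fit_transform(ads)

title_clf = MultinomialNB().fit(X, titles)   # classification head
salary_reg = Ridge(alpha=1.0).fit(X, salaries)  # regression head

X_new = vec.transform(["statistics python experimentation"])
print(title_clf.predict(X_new), salary_reg.predict(X_new))
```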

16 pages, 662 KB  
Article
Augmenting Naïve Bayes Classifiers with k-Tree Topology
by Fereshteh R. Dastjerdi and Liming Cai
Mathematics 2025, 13(13), 2185; https://doi.org/10.3390/math13132185 - 4 Jul 2025
Viewed by 850
Abstract
The Bayesian network is a directed, acyclic graphical model that can offer a structured description for probabilistic dependencies among random variables. As powerful tools for classification tasks, Bayesian classifiers often require computing joint probability distributions, which can be computationally intractable due to potential full dependencies among feature variables. On the other hand, Naïve Bayes, which presumes zero dependencies among features, trades accuracy for efficiency and often comes with underperformance. As a result, non-zero dependency structures, such as trees, are often used as more feasible probabilistic graph approximations; in particular, Tree Augmented Naïve Bayes (TAN) has been demonstrated to outperform Naïve Bayes and has become a popular choice. For applications where a variable is strongly influenced by multiple other features, TAN has been further extended to the k-dependency Bayesian classifier (KDB), where one feature can depend on up to k other features (for a given k ≥ 2). In such cases, however, the selection of the k parent features for each variable is often made through heuristic search methods (such as sorting), which do not guarantee an optimal approximation of network topology. In this paper, the novel notion of k-tree Augmented Naïve Bayes (k-TAN) is introduced to augment Naïve Bayesian classifiers with k-tree topology as an approximation of Bayesian networks. It is proved that, under the Kullback–Leibler divergence measure, k-tree topology approximation of Bayesian classifiers loses the minimum information with the topology of a maximum spanning k-tree, where the edge weights of the graph are mutual information between random variables conditional upon the class label. In addition, while in general finding a maximum spanning k-tree is NP-hard for fixed k ≥ 2, this work shows that the approximation problem can be solved in time O(n^(k+1)) if the spanning k-tree is also required to retain a given Hamiltonian path in the graph. Therefore, this algorithm can be employed to ensure efficient approximation of Bayesian networks with k-tree augmented Naïve Bayesian classifiers with the guaranteed minimum loss of information. Full article
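A small sketch of the k = 1 special case of this construction (i.e., classic TAN): weight each feature pair by the class-conditional mutual information I(Xi; Xj | C) and take a maximum spanning tree over those weights. The toy data and the use of SciPy's minimum-spanning-tree routine on negated weights are illustrative; the maximum spanning k-tree algorithm with the Hamiltonian-path constraint is not reproduced here.

```python
# Hedged sketch: TAN-style (k = 1) augmenting-edge selection via class-conditional
# mutual information and a maximum spanning tree over those edge weights.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def cond_mutual_info(x, y, c):
    """I(X; Y | C) for small discrete integer-coded arrays."""
    mi = 0.0
    for cv in np.unique(c):
        m = c == cv
        pc = m.mean()
        xs, ys = x[m], y[m]
        for xv in np.unique(xs):
            for yv in np.unique(ys):
                pxy = np.mean((xs == xv) & (ys == yv))
                px, py = np.mean(xs == xv), np.mean(ys == yv)
                if pxy > 0:
                    mi += pc * pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
n = 300
C = rng.integers(0, 2, n)
X0 = (C + rng.choice([0, 1], n, p=[0.85, 0.15])) % 2   # correlated with the class
X1 = (X0 + rng.choice([0, 1], n, p=[0.85, 0.15])) % 2  # correlated with X0
X2 = rng.integers(0, 2, n)                             # independent noise
feats = [X0, X1, X2]

d = len(feats)
W = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        W[i, j] = cond_mutual_info(feats[i], feats[j], C)

# SciPy only ships a *minimum* spanning tree, so negate the weights.
mst = minimum_spanning_tree(-W).toarray()
edges = [(i, j, round(W[i, j], 4)) for i in range(d) for j in range(d) if mst[i, j] != 0]
print("TAN augmenting edges (i, j, I(Xi;Xj|C)):", edges)
```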

31 pages, 2469 KB  
Article
A Dynamic Hidden Markov Model with Real-Time Updates for Multi-Risk Meteorological Forecasting in Offshore Wind Power
by Ruijia Yang, Jiansong Tang, Ryosuke Saga and Zhaoqi Ma
Sustainability 2025, 17(8), 3606; https://doi.org/10.3390/su17083606 - 16 Apr 2025
Cited by 3 | Viewed by 2203
Abstract
Offshore wind farms play a pivotal role in the global transition to clean energy but remain susceptible to diverse meteorological hazards—ranging from highly variable wind speeds and temperature anomalies to severe oceanic disturbances—that can jeopardize both turbine safety and overall power output. Although Hidden Markov Models (HMMs) have a longstanding track record in operational forecasting, this study leverages and extends their capabilities by introducing a dynamic HMM framework tailored specifically for multi-risk offshore wind applications. Building upon historical datasets and expert assessments, the proposed model begins with initial transition and observation probabilities and then refines them adaptively through periodic or event-triggered recalibrations (e.g., Baum–Welch), thus capturing evolving weather patterns in near-real-time. Compared to static Markov chains, naive Bayes classifiers, and RNN (LSTM) baselines, our approach demonstrates notable accuracy gains, with improvements of up to 10% in severe weather conditions across three industrial-scale wind farms. Additionally, the model’s minutes-level computational overhead for parameter updates and state decoding proves feasible for real-time deployment, thereby supporting proactive scheduling and maintenance decisions. While this work focuses on the core dynamic HMM method, future expansions may incorporate hierarchical structures, Bayesian uncertainty quantification, and GAN-based synthetic data to further enhance robustness under high-dimensional measurements and rare, long-tail meteorological events. In sum, the multi-risk forecasting methodology presented here—though built on an established HMM concept—offers a practical, adaptive solution that significantly bolsters safety margins and operational reliability in offshore wind power systems. Full article
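As a rough illustration of the decoding step in such an HMM (not the authors' dynamic framework), the sketch below runs Viterbi decoding over a toy three-state weather-risk chain. States, observation codes, and probabilities are invented, and the periodic Baum–Welch recalibration is indicated only by a comment.

```python
# Hedged sketch: Viterbi decoding of a discrete weather-risk HMM. In the dynamic
# variant described above, A and B would be re-estimated (e.g. via Baum-Welch)
# on a schedule or when an extreme-weather event is flagged.
import numpy as np

states = ["calm", "gusty", "severe"]
pi = np.array([0.7, 0.25, 0.05])                # initial state distribution
A = np.array([[0.85, 0.13, 0.02],               # state transition probabilities
              [0.20, 0.70, 0.10],
              [0.05, 0.35, 0.60]])
B = np.array([[0.80, 0.18, 0.02],               # P(observation | state)
              [0.25, 0.60, 0.15],
              [0.05, 0.40, 0.55]])

def viterbi(obs, pi, A, B):
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)         # previous-state x current-state scores
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                # backtrack the best path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Observation codes: 0 = normal readings, 1 = elevated wind/wave, 2 = extreme.
obs = [0, 0, 1, 1, 2, 2, 1]
print([states[s] for s in viterbi(obs, pi, A, B)])
```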

29 pages, 6331 KB  
Article
Multimodal Affective Communication Analysis: Fusing Speech Emotion and Text Sentiment Using Machine Learning
by Diego Resende Faria, Abraham Itzhak Weinberg and Pedro Paulo Ayrosa
Appl. Sci. 2024, 14(15), 6631; https://doi.org/10.3390/app14156631 - 29 Jul 2024
Cited by 14 | Viewed by 5780
Abstract
Affective communication, encompassing verbal and non-verbal cues, is crucial for understanding human interactions. This study introduces a novel framework for enhancing emotional understanding by fusing speech emotion recognition (SER) and sentiment analysis (SA). We leverage diverse features and both classical and deep learning models, including Gaussian naive Bayes (GNB), support vector machines (SVMs), random forests (RFs), multilayer perceptron (MLP), and a 1D convolutional neural network (1D-CNN), to accurately discern and categorize emotions in speech. We further extract text sentiment from speech-to-text conversion, analyzing it using pre-trained models like bidirectional encoder representations from transformers (BERT), generative pre-trained transformer 2 (GPT-2), and logistic regression (LR). To improve individual model performance for both SER and SA, we employ an extended dynamic Bayesian mixture model (DBMM) ensemble classifier. Our most significant contribution is the development of a novel two-layered DBMM (2L-DBMM) for multimodal fusion. This model effectively integrates speech emotion and text sentiment, enabling the classification of more nuanced, second-level emotional states. Evaluating our framework on the EmoUERJ (Portuguese) and ESD (English) datasets, the extended DBMM achieves accuracy rates of 96% and 98% for SER, 85% and 95% for SA, and 96% and 98% for combined emotion classification using the 2L-DBMM, respectively. Our findings demonstrate the superior performance of the extended DBMM for individual modalities compared to individual classifiers and the 2L-DBMM for merging different modalities, highlighting the value of ensemble methods and multimodal fusion in affective communication analysis. The results underscore the potential of our approach in enhancing emotional understanding with broad applications in fields like mental health assessment, human–robot interaction, and cross-cultural communication. Full article
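The fusion idea can be illustrated with a generic weighted mixture of classifier posteriors, first within each modality and then across modalities. This is only a sketch in the spirit of a two-layer ensemble, not the authors' DBMM/2L-DBMM update rule; the posteriors and weights below are invented.

```python
# Hedged sketch: two-layer weighted fusion of class posteriors, with weights
# derived from (assumed) validation accuracies of the base models.
import numpy as np

def fuse(posteriors, weights):
    """Weighted mixture of class posteriors; one row per base model."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    fused = (w[:, None] * np.asarray(posteriors)).sum(axis=0)
    return fused / fused.sum()

classes = ["neutral", "happy", "angry"]

# Layer 1: SER base classifiers (e.g. GNB, SVM, 1D-CNN) for one utterance.
ser = fuse([[0.2, 0.6, 0.2], [0.3, 0.5, 0.2], [0.1, 0.7, 0.2]], [0.78, 0.82, 0.90])
# Layer 1: SA models (e.g. BERT, GPT-2, LR) over the transcript.
sa = fuse([[0.4, 0.5, 0.1], [0.3, 0.6, 0.1], [0.5, 0.4, 0.1]], [0.85, 0.80, 0.70])

# Layer 2: fuse the two modalities, weighted by per-modality accuracy.
final = fuse([ser, sa], [0.96, 0.85])
print(dict(zip(classes, final.round(3))))
```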

23 pages, 665 KB  
Review
Machine Learning Models and Applications for Early Detection
by Orlando Zapata-Cortes, Martin Darío Arango-Serna, Julian Andres Zapata-Cortes and Jaime Alonso Restrepo-Carmona
Sensors 2024, 24(14), 4678; https://doi.org/10.3390/s24144678 - 18 Jul 2024
Cited by 14 | Viewed by 5067
Abstract
From the various perspectives of machine learning (ML) and the multiple models used in this discipline, there is an approach aimed at training models for the early detection (ED) of anomalies. The early detection of anomalies is crucial in multiple areas of knowledge since identifying and classifying them allows for early decision making and provides a better response to mitigate the negative effects caused by late detection in any system. This article presents a literature review to examine which machine learning models (MLMs) operate with a focus on ED in a multidisciplinary manner and, specifically, how these models work in the field of fraud detection. A variety of models were found, including Logistic Regression (LR), Support Vector Machines (SVMs), Decision Trees (DTs), Random Forests (RFs), naive Bayesian classifiers (NB), K-Nearest Neighbors (KNNs), artificial neural networks (ANNs), and Extreme Gradient Boosting (XGB), among others. It was identified that MLMs operate either as isolated models or in combination, categorized in this article as Single Base Models (SBMs) and Stacking Ensemble Models (SEMs), respectively. MLMs for ED in multiple areas achieved accuracies greater than 80% under SBM implementations and greater than 90% under SEM implementations. In fraud detection, accuracies greater than 90% were reported by the authors. The article concludes that MLMs for ED in multiple applications, including fraud, offer a viable way to identify and classify anomalies robustly, with a high degree of accuracy and precision. MLMs for ED in fraud are useful as they can quickly process large amounts of data to detect and classify suspicious transactions or activities, helping to prevent financial losses. Full article
(This article belongs to the Special Issue AI-Assisted Condition Monitoring and Fault Diagnosis)
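The SBM-versus-SEM distinction discussed in the review can be sketched with scikit-learn: a single Naive Bayes model against a stacking ensemble on a synthetic, imbalanced fraud-like dataset. Models, parameters, and data are illustrative and do not reproduce any reviewed study.

```python
# Hedged sketch: single base model (SBM) vs. stacking ensemble model (SEM)
# on synthetic imbalanced data, compared by cross-validated ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)

sbm = GaussianNB()                                  # single base model
sem = StackingClassifier(                           # stacking ensemble model
    estimators=[("nb", GaussianNB()),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
)

for name, model in [("SBM (NB)", sbm), ("SEM (stacking)", sem)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```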

17 pages, 987 KB  
Article
ACME: A Classification Model for Explaining the Risk of Preeclampsia Based on Bayesian Network Classifiers and a Non-Redundant Feature Selection Approach
by Franklin Parrales-Bravo, Rosangela Caicedo-Quiroz, Elianne Rodríguez-Larraburu and Julio Barzola-Monteses
Informatics 2024, 11(2), 31; https://doi.org/10.3390/informatics11020031 - 17 May 2024
Cited by 16 | Viewed by 3467
Abstract
While preeclampsia is the leading cause of maternal death in Guayas province (Ecuador), its causes have not yet been studied in depth. The objective of this research is to build a Bayesian network classifier to diagnose cases of preeclampsia while facilitating the understanding of the causes that generate this disease. Data for the years 2017 through 2023 were gathered retrospectively from medical histories of patients treated at “IESS Los Ceibos” hospital in Guayaquil, Ecuador. Naïve Bayes (NB), the Chow–Liu Tree-Augmented Naïve Bayes (TANcl), and Semi Naïve Bayes (FSSJ) algorithms were considered for building explainable classification models. A Non-Redundant Feature Selection approach (NoReFS) is proposed to perform the feature selection task. The model trained with TANcl and NoReFS performed best, with an accuracy close to 90%. According to the best model, patients who are above 35 years of age, have a severe vaginal infection, live in a rural area, use tobacco, have a family history of diabetes, and have a personal history of hypertension are at high risk of developing preeclampsia. Full article
(This article belongs to the Section Health Informatics)
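The details of NoReFS are specific to the paper and are not reproduced here; the sketch below is only a generic non-redundant filter in the same spirit: when two features are highly correlated with each other, keep the one more associated with the class and drop the other. The clinical-style columns, toy data, and the 0.9 threshold are assumptions.

```python
# Hedged sketch: a generic correlation-based non-redundant feature filter
# (a stand-in for the idea, not the authors' NoReFS).
import numpy as np
import pandas as pd

def non_redundant_filter(X: pd.DataFrame, y: pd.Series, threshold: float = 0.9):
    relevance = X.apply(lambda col: abs(col.corr(y)))   # association with the class
    corr = X.corr().abs()                               # feature-feature redundancy
    keep = list(X.columns)
    for i, a in enumerate(X.columns):
        for b in X.columns[i + 1:]:
            if a in keep and b in keep and corr.loc[a, b] > threshold:
                keep.remove(a if relevance[a] < relevance[b] else b)
    return keep

rng = np.random.default_rng(1)
age = rng.normal(30, 6, 200)
X = pd.DataFrame({"age": age,
                  "age_months": age * 12 + rng.normal(0, 1, 200),   # redundant copy of age
                  "systolic_bp": rng.normal(120, 15, 200)})
y = pd.Series((0.04 * age + 0.02 * X["systolic_bp"] + rng.normal(0, 1, 200) > 3.9).astype(int))
print(non_redundant_filter(X, y))   # one of the two age variants is dropped
```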

14 pages, 1081 KB  
Article
Ensemble Modeling with a Bayesian Maximal Information Coefficient-Based Model of Bayesian Predictions on Uncertainty Data
by Tisinee Surapunt and Shuliang Wang
Information 2024, 15(4), 228; https://doi.org/10.3390/info15040228 - 18 Apr 2024
Cited by 8 | Viewed by 3801
Abstract
Uncertainty involves unfamiliar circumstances or incomplete information that can be difficult to handle with a single traditional machine learning model. Single models may be limited by inadequate data, an ambiguous model structure, and insufficient learning performance when making predictions. Therefore, ensemble modeling is proposed as a powerful approach for enhancing predictive capability and robustness. This study applies Bayesian prediction to ensemble modeling through the Bayesian Maximal Information Coefficient (BMIC) model, which can encode conditional dependencies between variables and expose the reasoning behind predictions. Because the BMIC makes the knowledge in the model explicit and ready for learning, it was selected as the base model to be integrated with well-known algorithms such as logistic regression, K-nearest neighbors, decision trees, random forests, support vector machines (SVMs), neural networks, naive Bayes, and XGBoost classifiers. The Bayesian neural network (BNN) and the probabilistic Bayesian neural network (PBN) were also considered for comparison as single models. The findings of this study indicate that the ensemble of the BMIC with several traditional algorithms, namely SVM, random forest, neural network, and XGBoost classifiers, reaches 96.3% prediction accuracy. It provides a more reliable model and a versatile approach to support decision-making. Full article

18 pages, 2117 KB  
Article
A Novel Approach for Data Feature Weighting Using Correlation Coefficients and Min–Max Normalization
by Mohammed Shantal, Zalinda Othman and Azuraliza Abu Bakar
Symmetry 2023, 15(12), 2185; https://doi.org/10.3390/sym15122185 - 11 Dec 2023
Cited by 98 | Viewed by 8128
Abstract
In the realm of data analysis and machine learning, achieving an optimal balance of feature importance, known as feature weighting, plays a pivotal role, especially when considering the nuanced interplay between the symmetry of data distribution and the need to assign differential weights to individual features. Avoiding the dominance of large-scale features is also essential in data preparation, which makes choosing an effective normalization approach one of the most challenging aspects of machine learning. In addition to normalization, feature weighting is another strategy to deal with the importance of the different features. One strategy to measure the dependency of features is the correlation coefficient, which indicates the strength of the relationship between features. The integration of normalization with feature weighting in data transformation for classification has not been extensively studied. The goal is to improve the accuracy of classification methods by striking a balance between the normalization step and assigning greater importance to features with a strong relation to the class feature. To achieve this, we combine Min–Max normalization and weight the features by increasing their values based on their correlation coefficients with the class feature. This paper proposes a Correlation Coefficient with Min–Max Weighted (CCMMW) approach, in which each feature is normalized and then scaled according to its correlation with the class feature. Logistic regression, support vector machine, k-nearest neighbor, neural network, and naive Bayesian classifiers were used to evaluate the proposed method on twenty UCI Machine Learning Repository and Kaggle datasets with numerical values. The empirical results showed that the proposed CCMMW significantly improves classification performance with support vector machine, logistic regression, and neural network classifiers in most datasets. Full article
(This article belongs to the Topic Decision-Making and Data Mining for Sustainable Computing)
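A minimal sketch of the transformation as the abstract describes it: Min–Max normalize each feature, then scale it up according to its correlation with the class feature. The exact weighting formula used below (scaled * (1 + |r|)) is an assumption; the paper may use a different mapping.

```python
# Hedged sketch of a CCMMW-style transform: Min-Max normalization followed by
# correlation-based feature weighting. Data and the weighting formula are assumed.
import numpy as np
import pandas as pd
from sklearn.preprocessing import minmax_scale

def ccmmw_transform(X: pd.DataFrame, y: pd.Series) -> pd.DataFrame:
    out = {}
    for col in X.columns:
        r = X[col].corr(y)                      # correlation with the class feature
        scaled = minmax_scale(X[col])           # Min-Max to [0, 1]
        out[col] = scaled * (1.0 + abs(r))      # boost strongly class-related features
    return pd.DataFrame(out)

rng = np.random.default_rng(0)
y = pd.Series(rng.integers(0, 2, 100))
X = pd.DataFrame({"informative": y * 2.0 + rng.normal(0, 0.5, 100),
                  "noise": rng.normal(50, 10, 100)})
Xw = ccmmw_transform(X, y)
print(Xw.max().round(3))   # the informative feature ends up on a wider scale
```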

11 pages, 1491 KB  
Article
Viability of ABO Blood Typing with ATR-FTIR Spectroscopy
by Alfonso Fernández-González, Álvaro J. Obaya, Christian Chimeno-Trinchet, Tania Fontanil and Rosana Badía-Laíño
Appl. Sci. 2023, 13(17), 9650; https://doi.org/10.3390/app13179650 - 25 Aug 2023
Cited by 3 | Viewed by 2319
Abstract
Fourier Transform Infrared Spectroscopy (FTIR) provides valuable biochemical information for biomedical analysis. It aids in identifying cancerous tissues, diagnosing diseases like acute pancreatitis or Alzheimer’s, and has applications in genomics, proteomics, and metabolomics. The combination of FTIR and chemometrics constitutes an approach that shows promise in fields like biology, forensics, food quality control, and plant variety identification. This study aims to explore the feasibility of ATR-FTIR spectroscopy for identifying ABO blood types using spectroscopic tools. We employ various classifying algorithms, including Linear Discriminant Analysis (LDA), the Naïve Bayes Classifier (NBC), Principal Component Analysis (PCA), and combinations of these methods, to detect A and B antigens and determine the ABO blood type. The results show that these algorithms predict the blood type better than random selection, although they do not match the precision of biochemical blood-typing tools. Additionally, our findings suggest that the methodology is more sensitive in identifying B antigens than A antigens. Full article
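A combination of dimensionality reduction and probabilistic classification such as the one described can be sketched as a PCA-then-classifier pipeline; the synthetic spectra, 10 retained components, and preprocessing below are assumptions, not the study's settings.

```python
# Hedged sketch: PCA followed by Naive Bayes or LDA on synthetic FTIR-like spectra,
# compared by cross-validated accuracy.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 30, 400
groups = ["A", "B", "AB", "O"]
X, y = [], []
for g_idx, g in enumerate(groups):
    base = np.sin(np.linspace(0, 6, n_wavenumbers) + 0.3 * g_idx)   # class-specific band shape
    X.append(base + rng.normal(0, 0.4, (n_per_class, n_wavenumbers)))
    y += [g] * n_per_class
X = np.vstack(X)

for name, clf in [("PCA+NBC", GaussianNB()), ("PCA+LDA", LinearDiscriminantAnalysis())]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.2f}")
```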

19 pages, 5478 KB  
Article
Rapid Classification and Quantification of Coal by Using Laser-Induced Breakdown Spectroscopy and Machine Learning
by Yanning Zheng, Qingmei Lu, Anqi Chen, Yulin Liu and Xiaohan Ren
Appl. Sci. 2023, 13(14), 8158; https://doi.org/10.3390/app13148158 - 13 Jul 2023
Cited by 13 | Viewed by 2570
Abstract
Coal is expected to be an important energy resource for some developing countries in the coming decades; thus, the rapid classification and quantification of coal quality has an important impact on the improvement in industrial production and the reduction in pollution emissions. The traditional methods for the proximate analysis of coal are time consuming and labor intensive, and their results lag behind the combustion conditions of coal-fired boilers. However, laser-induced breakdown spectroscopy (LIBS) assisted with machine learning can meet the requirements of rapid detection and multi-element analysis of coal quality. In this work, 100 coal samples from 11 origins were divided into training, test, and prediction sets, and several clustering, classification, and regression models were established for performance analysis in different application scenarios. Clustering models can group coal samples into several clusters using only their spectra; classification models can classify labeled coal samples into different categories; and regression models can give quantitative prediction results for proximate-analysis indicators. Cross-validation was used to evaluate model performance, which helped to select the optimal parameters for each model. The results showed that K-means clustering could effectively divide coal samples into four clusters that were similar within each class but different between classes; naive Bayesian classification could assign coal samples to different origins according to the probability distribution function, with a prediction accuracy of 0.967; and partial least squares regression could reduce the influence of multivariate collinearity on prediction, with root mean square errors of prediction for ash, volatile matter, and fixed carbon of 1.012%, 0.878%, and 1.409%, respectively. The models built in this work provide a reference for selecting machine learning methods for LIBS classification and quantification tasks. Full article
(This article belongs to the Section Energy Science and Technology)
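The three model families compared in the study can be sketched on synthetic LIBS-like spectra: clustering, naive Bayes origin classification, and PLS regression for a proximate-analysis indicator. All data, component counts, and targets below are illustrative assumptions.

```python
# Hedged sketch: K-means clustering, Gaussian NB origin classification, and PLS
# regression for an ash-content target, on synthetic spectra.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error

rng = np.random.default_rng(0)
n, p = 100, 200
origin = rng.integers(0, 4, n)                           # 4 hypothetical coal origins
spectra = rng.normal(0, 1, (n, p)) + origin[:, None] * 0.8
ash = 10 + 2.0 * origin + rng.normal(0, 0.5, n)          # proximate-analysis target (%)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(spectra)
print("cluster sizes:", np.bincount(labels))

Xtr, Xte, ytr, yte, atr, ate = train_test_split(spectra, origin, ash, random_state=0)
print("NB origin accuracy:", accuracy_score(yte, GaussianNB().fit(Xtr, ytr).predict(Xte)))

pls = PLSRegression(n_components=5).fit(Xtr, atr)
rmse = np.sqrt(mean_squared_error(ate, pls.predict(Xte).ravel()))
print(f"PLS RMSE for ash: {rmse:.2f} %")
```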

14 pages, 1786 KB  
Article
Pancreas Rejection in the Artificial Intelligence Era: New Tool for Signal Patients at Risk
by Emanuel Vigia, Luís Ramalhete, Rita Ribeiro, Inês Barros, Beatriz Chumbinho, Edite Filipe, Ana Pena, Luís Bicho, Ana Nobre, Sofia Carrelha, Mafalda Sobral, Jorge Lamelas, João Santos Coelho, Aníbal Ferreira and Hugo Pinto Marques
J. Pers. Med. 2023, 13(7), 1071; https://doi.org/10.3390/jpm13071071 - 29 Jun 2023
Cited by 13 | Viewed by 2460
Abstract
Introduction: Pancreas transplantation is currently the only treatment that can re-establish normal endocrine pancreatic function. Despite all efforts, pancreas allograft survival and rejection remain major clinical problems. The purpose of this study was to identify features that could signal patients at risk of pancreas allograft rejection. Methods: We collected 74 features from 79 patients who underwent simultaneous pancreas–kidney transplantation (SPK) and used two widely applicable classification methods, the Naive Bayesian Classifier and the Support Vector Machine, to build predictive models. We used the area under the receiver operating characteristic curve and classification accuracy to evaluate the predictive performance via leave-one-out cross-validation. Results: Rejection events were identified in 13 SPK patients (17.8%). In the feature selection approach, it was possible to identify 10 features, namely: previous treatment for diabetes mellitus with long-term insulin (U/I/day), type of dialysis (peritoneal dialysis, hemodialysis, or pre-emptive), de novo DSA, vPRA_Pre-Transplant (%), donor blood glucose, pancreas donor risk index (pDRI), recipient height, dialysis time (days), warm ischemia (minutes), and recipient of intensive care (days). The results showed that the Naive Bayes and Support Vector Machine classifiers performed very well, with an AUROC and classification accuracy of 0.97 and 0.87, respectively, in the first model and 0.96 and 0.94 in the second model. Conclusion: Our results indicated that it is feasible to develop successful classifiers for the prediction of graft rejection. The nomogram generated from the Naive Bayesian classifier can be used for rejection probability prediction, thus supporting clinical decision making. Full article
(This article belongs to the Special Issue Personalized Medicine in Organ Transplantation)
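The evaluation protocol described (leave-one-out cross-validation of a Naive Bayes and an SVM classifier, scored by AUROC and accuracy) can be sketched as follows; the data are a synthetic stand-in for the 74-feature, 79-patient SPK cohort, and the feature count is reduced for the example.

```python
# Hedged sketch: leave-one-out cross-validation of NB and SVM classifiers,
# reported as AUROC and accuracy, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

X, y = make_classification(n_samples=79, n_features=10, weights=[0.82, 0.18],
                           random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}
loo = LeaveOneOut()
for name, model in models.items():
    proba = cross_val_predict(model, X, y, cv=loo, method="predict_proba")[:, 1]
    pred = (proba > 0.5).astype(int)
    print(f"{name}: AUROC = {roc_auc_score(y, proba):.2f}, "
          f"accuracy = {accuracy_score(y, pred):.2f}")
```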

21 pages, 4511 KB  
Article
Attention Aware Deep Learning Approaches for an Efficient Stress Classification Model
by Muhammad Zulqarnain, Habib Shah, Rozaida Ghazali, Omar Alqahtani, Rubab Sheikh and Muhammad Asadullah
Brain Sci. 2023, 13(7), 994; https://doi.org/10.3390/brainsci13070994 - 25 Jun 2023
Cited by 7 | Viewed by 3061
Abstract
In today’s world, stress is a major factor in various diseases in modern societies and affects the day-to-day activities of human beings. The measurement of stress is a contributing factor for governments and societies because it impacts the quality of daily life. Stress monitoring systems require an accurate stress classification technique that identifies stress via the reactions of the body as it regulates itself to changes within the environment through mental and emotional responses. Therefore, this research proposes a novel deep learning approach for the stress classification system. In this paper, we present an Enhanced Long Short-Term Memory (E-LSTM) based on the feature attention mechanism that focuses on determining and categorizing the stress polarity using sequential modeling and word-feature seizing. The proposed approach integrates pre-feature attention in E-LSTM to identify the complicated relationships and extract the keywords through an attention layer for stress classification. This research was evaluated using a selected dataset accessed from the sixth Korea National Health and Nutrition Examination Survey conducted from 2013 to 2015 (KNHANES VI) to analyze health-related stress data. The statistical performance of the developed approach was analyzed based on the nine features of stress detection, and we compared its effectiveness with other stress classification approaches. The experimental results showed that the developed approach obtained accuracy, precision, recall, and F1-score of 75.54%, 74.26%, 72.99%, and 74.58%, respectively. The feature attention mechanism-based E-LSTM approach demonstrated superior performance in stress detection classification when compared to other classification methods, including naïve Bayesian, SVM, deep belief network, and standard LSTM. The results of this study demonstrate the efficiency of the proposed approach in accurately classifying stress, particularly for stress monitoring systems, where it is expected to be effective for stress prediction. Full article
(This article belongs to the Special Issue Intelligent Neural Systems for Solving Real Problems)
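A generic analogue of an attention-augmented LSTM classifier can be sketched in PyTorch: an LSTM encodes the input sequence, a learned attention layer weights the timesteps, and the weighted summary is classified. This is not the authors' exact E-LSTM; the layer sizes, sequence length, and nine-feature input are illustrative assumptions.

```python
# Hedged sketch: LSTM encoder + attention over timesteps + linear classifier.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_features=9, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # one attention score per timestep
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, features)
        h, _ = self.lstm(x)                         # (batch, time, hidden)
        scores = self.attn(h).squeeze(-1)           # (batch, time)
        weights = torch.softmax(scores, dim=1)      # attention over timesteps
        context = (weights.unsqueeze(-1) * h).sum(dim=1)   # weighted summary
        return self.head(context)

model = AttentionLSTM()
x = torch.randn(16, 12, 9)                          # 16 subjects, 12 steps, 9 stress features
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (16,)))
loss.backward()
print(logits.shape, float(loss))
```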

18 pages, 2600 KB  
Article
Software Defect Prediction with Bayesian Approaches
by María José Hernández-Molinos, Angel J. Sánchez-García, Rocío Erandi Barrientos-Martínez, Juan Carlos Pérez-Arriaga and Jorge Octavio Ocharán-Hernández
Mathematics 2023, 11(11), 2524; https://doi.org/10.3390/math11112524 - 31 May 2023
Cited by 17 | Viewed by 3815
Abstract
Software defect prediction is an important area in software engineering because it helps developers identify and fix problems before they become costly and hard-to-fix bugs. Early detection of software defects helps save time and money in the software development process and ensures the quality of the final product. This research aims to evaluate three algorithms for building Bayesian Networks to classify whether a project is prone to defects. The choice is motivated by the fact that the most widely used approach in the literature is Naive Bayes, while no works use Bayesian Networks. Thus, K2, Hill Climbing, and TAN are used to construct Bayesian Networks, and three public PROMISE data sets based on McCabe and Halstead complexity metrics are used. The results are compared with the most used approaches in the literature, such as Decision Tree and Random Forest. The results from different performance metrics applied to a cross-validation process show that the classification results are comparable to Decision Tree and Random Forest, with the advantage that the Bayesian algorithms show less variability, which gives software engineers greater robustness in their predictions, since the selection of training and test data does not produce variable results, unlike Decision Tree and Random Forest. Full article
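Bayesian Network structure learning of the kind evaluated here can be sketched with the pgmpy library; the snippet below assumes a recent pgmpy (~0.1.x) API, uses hill climbing with a K2 score only, and replaces the PROMISE McCabe/Halstead metrics with toy discretized columns.

```python
# Hedged sketch, assuming the pgmpy library (~0.1.x API): learn a Bayesian Network
# structure over defect-related metrics with hill climbing and a K2 score, then
# fit its CPDs by maximum likelihood. Columns and data are toy stand-ins.
import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, K2Score, MaximumLikelihoodEstimator
from pgmpy.models import BayesianNetwork

rng = np.random.default_rng(0)
n = 500
loc = rng.integers(0, 3, n)                                 # discretized lines of code
complexity = np.clip(loc + rng.integers(-1, 2, n), 0, 2)    # cyclomatic complexity bin
defective = (complexity + rng.integers(0, 2, n) >= 3).astype(int)
data = pd.DataFrame({"loc": loc, "complexity": complexity, "defective": defective})

dag = HillClimbSearch(data).estimate(scoring_method=K2Score(data))
model = BayesianNetwork(dag.edges())
model.add_nodes_from(data.columns)          # keep isolated variables in the model
model.fit(data, estimator=MaximumLikelihoodEstimator)

print("learned edges:", list(dag.edges()))
for cpd in model.get_cpds():
    print(cpd)
```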
