Search Results (760)

Search Parameters:
Keywords = bayes approach

22 pages, 1718 KiB  
Review
A Review on Risk and Reliability Analysis in Photovoltaic Power Generation
by Ahmad Zaki Abdul Karim, Mohamad Shaiful Osman and Mohd. Khairil Rahmat
Energies 2025, 18(14), 3790; https://doi.org/10.3390/en18143790 - 17 Jul 2025
Abstract
Precise evaluation of risk and reliability is crucial for decision making and predicting the outcome of investment in a photovoltaic power system (PVPS) due to its intermittent source. This paper explores different methodologies for risk evaluation and reliability assessment, which can be categorized into qualitative, quantitative, and hybrid qualitative and quantitative (HQQ) approaches. Qualitative methods include failure mode analysis, graphical analysis, and hazard analysis, while quantitative methods include analytical methods, stochastic methods, Bayes’ theorem, reliability optimization, multi-criteria analysis, and data utilization. HQQ methodology combines table-based and visual analysis methods. Currently, reliability assessment techniques such as mean time between failures (MTBF), system average interruption frequency index (SAIFI), and system average interruption duration index (SAIDI) are commonly used to predict PVPS performance. However, alternative methods such as economical metrics like the levelized cost of energy (LCOE) and net present value (NPV) can also be used. Therefore, a risk and reliability approach should be applied together to improve the accuracy of predicting significant aspects in the photovoltaic industry. Full article
(This article belongs to the Section B: Energy and Environment)
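The economic metrics named in the abstract, LCOE and NPV, reduce to short discounted-sum formulas. A minimal sketch follows; all figures (costs, energy yield, discount rate) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of two economic metrics mentioned above: LCOE and NPV.
# The toy PV project figures below are invented for illustration.

def lcoe(costs, energy, rate):
    """Levelized cost of energy: discounted lifetime cost / discounted lifetime energy."""
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy))
    return disc_cost / disc_energy

def npv(cash_flows, rate):
    """Net present value of a stream of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A 3-year toy PV project: large outlay in year 0, then O&M costs.
costs = [10000, 200, 200]    # currency units per year
energy = [0, 5000, 5000]     # kWh per year
cost_per_kwh = lcoe(costs, energy, 0.05)
project_npv = npv([-10000, 6000, 6000], 0.05)
```

A risk analysis would then propagate uncertainty in the inputs (e.g. yearly energy yield) through these formulas, which is where the stochastic and Bayesian methods surveyed above come in.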

59 pages, 11250 KiB  
Article
Automated Analysis of Vertebral Body Surface Roughness for Adult Age Estimation: Ellipse Fitting and Machine-Learning Approach
by Erhan Kartal and Yasin Etli
Diagnostics 2025, 15(14), 1794; https://doi.org/10.3390/diagnostics15141794 - 16 Jul 2025
Viewed by 65
Abstract
Background/Objectives: Vertebral degenerative features are promising but often subjectively scored indicators for adult age estimation. We evaluated an objective surface roughness metric, the “average distance to the fitted ellipse” score (DS), calculated automatically for every vertebra from C7 to S1 on routine CT images. Methods: CT scans of 176 adults (94 males, 82 females; 21–94 years) were retrospectively analyzed. For each vertebra, the mean orthogonal deviation of the anterior superior endplate from an ideal ellipse was extracted. Sex-specific multiple linear regression served as a baseline; support vector regression (SVR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve-Bayes pseudo-regressor (GNB-R) were tuned with 10-fold cross-validation and evaluated on a 20% hold-out set. Performance was quantified with the standard error of the estimate (SEE). Results: DS values correlated moderately to strongly with age (peak r = 0.60 at L3–L5). Linear regression explained 40% (males) and 47% (females) of age variance (SEE ≈ 11–12 years). Non-parametric learners improved precision: RF achieved an SEE of 8.49 years in males (R2 = 0.47), whereas k-NN attained 10.8 years (R2 = 0.45) in women. Conclusions: Automated analysis of vertebral cortical roughness provides a transparent, observer-independent means of estimating adult age with accuracy approaching that of more complex deep learning pipelines. Streamlining image preparation and validating the approach across diverse populations are the next steps toward forensic adoption. Full article
(This article belongs to the Special Issue New Advances in Forensic Radiology and Imaging)
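The simplest learner evaluated above, k-NN regression scored with the standard error of the estimate (SEE), can be sketched in a few lines. The (roughness, age) pairs below are synthetic stand-ins, not the paper's CT measurements.

```python
import math
# Hypothetical sketch of k-NN age regression evaluated with the standard
# error of the estimate (SEE); data are invented (roughness, age) pairs.

def knn_predict(train, x, k=3):
    """Predict age as the mean age of the k nearest roughness scores."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(age for _, age in nearest) / k

def see(pairs, train, k=3):
    """Standard error of the estimate over (x, true_age) pairs."""
    sq = [(age - knn_predict(train, x, k)) ** 2 for x, age in pairs]
    return math.sqrt(sum(sq) / len(sq))

# Toy data: roughness score grows roughly with age.
train = [(0.10, 25), (0.15, 30), (0.22, 40), (0.30, 50), (0.38, 60), (0.45, 70)]
test = [(0.12, 27), (0.35, 55)]
err = see(test, train)
```

The SEE used in the abstract is exactly this root-mean-squared residual of predicted versus chronological age on held-out cases.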

20 pages, 351 KiB  
Article
Multi-Level Depression Severity Detection with Deep Transformers and Enhanced Machine Learning Techniques
by Nisar Hussain, Amna Qasim, Gull Mehak, Muhammad Zain, Grigori Sidorov, Alexander Gelbukh and Olga Kolesnikova
AI 2025, 6(7), 157; https://doi.org/10.3390/ai6070157 - 15 Jul 2025
Viewed by 231
Abstract
Depression is now one of the most common mental health concerns in the digital era, calling for powerful computational tools for its detection and its level of severity estimation. A multi-level depression severity detection framework in the Reddit social media network is proposed in this study, and posts are classified into four levels: minimum, mild, moderate, and severe. We take a dual approach using classical machine learning (ML) algorithms and recent Transformer-based architectures. For the ML track, we build ten classifiers, including Logistic Regression, SVM, Naive Bayes, Random Forest, XGBoost, Gradient Boosting, K-NN, Decision Tree, AdaBoost, and Extra Trees, with two recently proposed embedding methods, Word2Vec and GloVe embeddings, and we fine-tune them for mental health text classification. Of these, XGBoost yields the highest F1-score of 94.01 using GloVe embeddings. For the deep learning track, we fine-tune ten Transformer models, covering BERT, RoBERTa, XLM-RoBERTa, MentalBERT, BioBERT, RoBERTa-large, DistilBERT, DeBERTa, Longformer, and ALBERT. The highest performance was achieved by the MentalBERT model, with an F1-score of 97.31, followed by RoBERTa (96.27) and RoBERTa-large (96.14). Our results demonstrate that, to the best of the authors’ knowledge, domain-transferred Transformers outperform non-Transformer-based ML methods in capturing subtle linguistic cues indicative of different levels of depression, thereby highlighting their potential for fine-grained mental health monitoring in online settings. Full article
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
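The F1-scores compared above are, for a four-level task like this, typically macro-averaged over the severity classes. A from-scratch sketch on a tiny invented label set (0 = minimum, 1 = mild, 2 = moderate, 3 = severe), not the paper's Reddit data:

```python
# Macro-averaged F1 over four severity classes, computed from scratch on
# hypothetical labels for illustration.

def macro_f1(y_true, y_pred, labels):
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 3, 3, 3]
score = macro_f1(y_true, y_pred, [0, 1, 2, 3])
```

Macro averaging weights each severity level equally, which matters when severe cases are rare in the corpus.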

15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Viewed by 118
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas based on Bayes’ theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. This approach can significantly improve the predicted accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data is incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices. Full article
(This article belongs to the Section Ocean Engineering)
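The core updating idea described above, a Monte Carlo prior on the peak resistance reweighted by the likelihood of incoming monitoring data, can be sketched at toy scale. The prior, the noise level, and the "true" peak value below are illustrative assumptions, not values from the centrifuge database.

```python
import math, random
# Hedged sketch of Bayesian updating via importance weighting: prior Monte
# Carlo samples of peak resistance are reweighted by the likelihood of
# monitoring observations. All numbers are invented for illustration.

random.seed(0)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Prior belief about peak resistance (arbitrary units): N(10, 2), sampled.
prior = [random.gauss(10.0, 2.0) for _ in range(20000)]

# Monitoring data: noisy observations around a true peak of 12 (noise sd 0.5).
data = [11.8, 12.3, 11.9]

# Importance weights: product of observation likelihoods for each sample.
weights = [math.prod(gauss_pdf(d, s, 0.5) for d in data) for s in prior]
posterior_mean = sum(w * s for w, s in zip(weights, prior)) / sum(weights)
```

As more monitoring points arrive, the weighted posterior tightens around the observed peak, which mirrors the paper's finding that the most probable failure zone converges toward the actual punch-through point.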

25 pages, 2297 KiB  
Article
Detecting Fake News in Urdu Language Using Machine Learning, Deep Learning, and Large Language Model-Based Approaches
by Muhammad Shoaib Farooq, Syed Muhammad Asadullah Gilani, Muhammad Faraz Manzoor and Momina Shaheen
Information 2025, 16(7), 595; https://doi.org/10.3390/info16070595 - 10 Jul 2025
Viewed by 178
Abstract
Fake news is false or misleading information that looks like real news and spreads through traditional and social media. It has a big impact on our social lives, especially in politics. In Pakistan, where Urdu is the main language, finding fake news in Urdu is difficult because there are not many effective systems for this. This study aims to solve this problem by creating a detailed process and training models using machine learning, deep learning, and large language models (LLMs). The research uses methods that look at the features of documents and classes to detect fake news in Urdu. Different models were tested, including machine learning models like Naïve Bayes and Support Vector Machine (SVM), as well as deep learning models like Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), which used embedding techniques. The study also used advanced models like BERT and GPT to improve the detection process. These models were first evaluated on the Bend-the-Truth dataset, where CNN achieved an F1 score of 72%, Naïve Bayes scored 78%, and the BERT Transformer achieved the highest F1 score of 79%. To further validate the approach, the models were tested on a more diverse dataset, Ax-to-Grind, where both SVM and LSTM achieved an F1 score of 89%, while BERT outperformed them with an F1 score of 93%. Full article
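One of the baselines above, multinomial Naive Bayes, fits in a short from-scratch sketch. The toy English tokens below stand in for Urdu text; the documents and vocabulary are invented for illustration.

```python
import math
from collections import Counter
# Minimal from-scratch multinomial Naive Bayes for fake/real classification,
# shown on invented tokens standing in for Urdu text.

def train_nb(docs, labels):
    classes = set(labels)
    priors = {c: labels.count(c) / len(labels) for c in classes}
    counts = {c: Counter() for c in classes}
    for doc, lab in zip(docs, labels):
        counts[lab].update(doc)
    vocab = {w for doc in docs for w in doc}
    return priors, counts, vocab

def predict_nb(doc, priors, counts, vocab):
    best, best_lp = None, -math.inf
    for c, prior in priors.items():
        total = sum(counts[c].values())
        lp = math.log(prior)
        for w in doc:
            # Laplace smoothing keeps unseen words from zeroing the posterior.
            lp += math.log((counts[c][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = c, lp
    return best

docs = [["shocking", "miracle", "cure"], ["official", "report", "confirms"],
        ["secret", "miracle", "exposed"], ["minister", "confirms", "policy"]]
labels = ["fake", "real", "fake", "real"]
model = train_nb(docs, labels)
```

The Transformer models in the study replace these bag-of-words counts with contextual representations, which is where the F1 gains come from.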

18 pages, 359 KiB  
Article
On the Decision-Theoretic Foundations and the Asymptotic Bayes Risk of the Region of Practical Equivalence for Testing Interval Hypotheses
by Riko Kelter
Stats 2025, 8(3), 56; https://doi.org/10.3390/stats8030056 - 8 Jul 2025
Viewed by 115
Abstract
Testing interval hypotheses is of huge relevance in the biomedical and cognitive sciences; for example, in clinical trials. Frequentist approaches include the proposal of equivalence tests, which have been used to study if there is a predetermined meaningful treatment effect. In the Bayesian paradigm, two popular approaches exist: The first is the region of practical equivalence (ROPE), which has become increasingly popular in the cognitive sciences. The second is the Bayes factor for interval null hypotheses, which was proposed by Morey et al. One advantage of the ROPE procedure is that, in contrast to the Bayes factor, it is quite robust to the prior specification. However, while the ROPE is conceptually appealing, it lacks a clear decision-theoretic foundation like the Bayes factor. In this paper, a decision-theoretic justification for the ROPE procedure is derived for the first time, which shows that the Bayes risk of a decision rule based on the highest-posterior density interval (HPD) and the ROPE is asymptotically minimized for increasing sample size. To show this, a specific loss function is introduced. This result provides an important decision-theoretic justification for testing the interval hypothesis in the Bayesian approach based on the ROPE and HPD, in particular, when sample size is large. Full article
(This article belongs to the Section Bayesian Methods)
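The decision rule the paper analyzes pairs a highest-posterior-density (HPD) interval with a region of practical equivalence (ROPE). A sketch under assumed inputs (the posterior samples and ROPE bounds below are illustrative, not from the paper):

```python
import random
# Sketch of the ROPE decision rule: compute a 95% HPD interval from posterior
# samples and accept the interval null if the HPD lies inside the ROPE.
# Posterior and ROPE bounds are invented for illustration.

random.seed(1)

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the samples (unimodal case)."""
    xs = sorted(samples)
    n = len(xs)
    k = int(mass * n)
    best = min(range(n - k), key=lambda i: xs[i + k] - xs[i])
    return xs[best], xs[best + k]

def rope_decision(samples, rope, mass=0.95):
    lo, hi = hpd_interval(samples, mass)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept null (effect practically equivalent to zero)"
    if hi < rope[0] or lo > rope[1]:
        return "reject null"
    return "undecided"

# Posterior for an effect size concentrated near zero: N(0.02, 0.03).
posterior = [random.gauss(0.02, 0.03) for _ in range(10000)]
decision = rope_decision(posterior, rope=(-0.1, 0.1))
```

The paper's result is that, under a suitable loss function, decisions of exactly this form asymptotically minimize the Bayes risk as the sample size grows.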

26 pages, 4907 KiB  
Article
A Novel Approach Utilizing Bagging, Histogram Gradient Boosting, and Advanced Feature Selection for Predicting the Onset of Cardiovascular Diseases
by Norma Latif Fitriyani, Muhammad Syafrudin, Nur Chamidah, Marisa Rifada, Hendri Susilo, Dursun Aydin, Syifa Latif Qolbiyani and Seung Won Lee
Mathematics 2025, 13(13), 2194; https://doi.org/10.3390/math13132194 - 4 Jul 2025
Viewed by 226
Abstract
Cardiovascular diseases (CVDs) rank among the leading global causes of mortality, underscoring the necessity for early detection and effective management. This research presents a novel prediction model for CVDs utilizing a bagging algorithm that incorporates histogram gradient boosting as the estimator. This study leverages three preprocessed cardiovascular datasets, employing the Local Outlier Factor technique for outlier removal and the information gain method for feature selection. Through rigorous experimentation, the proposed model demonstrates superior performance compared to conventional machine learning approaches, such as Logistic Regression, Support Vector Classification, Gaussian Naïve Bayes, Multi-Layer Perceptron, k-nearest neighbors, Random Forest, AdaBoost, gradient boosting, and histogram gradient boosting. Evaluation metrics, including precision, recall, F1 score, accuracy, and AUC, yielded impressive results: 93.90%, 98.83%, 96.30%, 96.25%, and 0.9916 for dataset I; 94.17%, 99.05%, 96.54%, 96.48%, and 0.9931 for dataset II; and 89.81%, 82.40%, 85.91%, 86.66%, and 0.9274 for dataset III. The findings indicate that the proposed prediction model has the potential to facilitate early CVD detection, thereby enhancing preventive strategies and improving patient outcomes. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Decision Making)
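The bagging idea above can be sketched with a one-feature decision stump standing in for the paper's histogram-gradient-boosting base estimator: bootstrap the training set, fit one weak learner per resample, and predict by majority vote. Data are synthetic (one risk score, binary label), not the paper's cardiovascular datasets.

```python
import random
# Hedged sketch of bagging with decision stumps as the base learner (the
# paper uses histogram gradient boosting instead). Data are synthetic.

random.seed(2)

def fit_stump(data):
    """Pick the single-feature threshold and sign that minimise training errors."""
    best = (None, float("inf"), 1)
    for t, _ in data:
        for sign in (1, -1):
            err = sum(1 for x, y in data
                      if (1 if sign * (x - t) > 0 else 0) != y)
            if err < best[1]:
                best = (t, err, sign)
    t, _, sign = best
    return lambda x: 1 if sign * (x - t) > 0 else 0

def bagged_predict(stumps, x):
    votes = sum(s(x) for s in stumps)
    return 1 if votes * 2 > len(stumps) else 0

# Labels follow a threshold at risk score x = 0.5.
data = [(random.random(), 0) for _ in range(40)]
data = [(x, 1 if x > 0.5 else 0) for x, _ in data]

stumps = []
for _ in range(25):
    boot = [random.choice(data) for _ in data]   # bootstrap resample
    stumps.append(fit_stump(boot))
```

Averaging many learners trained on resampled data reduces variance, which is the same mechanism the proposed model relies on at full scale.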

23 pages, 5294 KiB  
Article
CMB Parity Asymmetry from Unitary Quantum Gravitational Physics
by Enrique Gaztañaga and K. Sravan Kumar
Symmetry 2025, 17(7), 1056; https://doi.org/10.3390/sym17071056 - 4 Jul 2025
Viewed by 193
Abstract
Longstanding anomalies in the Cosmic Microwave Background (CMB), including the low quadrupole moment and hemispherical power asymmetry, have recently been linked to an underlying parity asymmetry. We show here how this parity asymmetry naturally arises within a quantum framework that explicitly incorporates the construction of a geometric quantum vacuum based on parity (P) and time-reversal (T) transformations. This framework restores unitarity in quantum field theory in curved spacetime (QFTCS). When applied to inflationary quantum fluctuations, this unitary QFTCS formalism predicts parity asymmetry as a natural consequence of cosmic expansion, which inherently breaks time-reversal symmetry. Observational data strongly favor this unitary QFTCS approach, with a Bayes factor, the ratio of marginal likelihoods associated with the model given the data p(M|D), exceeding 650 times that of predictions from the standard inflationary framework. This Bayesian approach contrasts with the standard practice in the CMB community, which evaluates p(D|M), the likelihood of the data under the model, which undermines the importance of low-ℓ physics. Our results, for the first time, provide compelling evidence for the quantum gravitational origins of CMB parity asymmetry on large scales. Full article
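A Bayes factor is a ratio of marginal likelihoods, each obtained by integrating the likelihood against the model's prior. A toy illustration, far simpler than the CMB analysis above: two models for a coin's heads probability, with the data (17 heads in 20 tosses) and both priors invented for illustration.

```python
import math
# Toy Bayes factor: ratio of marginal likelihoods p(D|M) = ∫ p(D|θ) p(θ|M) dθ
# for two coin models, marginalised on a grid. Data and priors are invented.

def binom_lik(theta, heads, n):
    return math.comb(n, heads) * theta ** heads * (1 - theta) ** (n - heads)

def marginal_likelihood(prior_pdf, heads, n, grid=2000):
    # Midpoint-rule integration of likelihood x prior over theta in (0, 1).
    h = 1.0 / grid
    return sum(binom_lik((i + 0.5) * h, heads, n) * prior_pdf((i + 0.5) * h) * h
               for i in range(grid))

def prior_fair(t):   # "almost fair coin": uniform density 10 on [0.45, 0.55]
    return 10.0 if 0.45 <= t <= 0.55 else 0.0

def prior_vague(t):  # vague model: uniform density 1 on (0, 1)
    return 1.0

heads, n = 17, 20
m_fair = marginal_likelihood(prior_fair, heads, n)
m_vague = marginal_likelihood(prior_vague, heads, n)
bayes_factor = m_vague / m_fair   # > 1 favours the vague model for these data
```

Here the vague model is favoured by a factor of roughly 30; the factor of 650 reported above is the same kind of quantity computed for the competing cosmological models.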
(This article belongs to the Special Issue Quantum Gravity and Cosmology: Exploring the Astroparticle Interface)

17 pages, 572 KiB  
Article
Statistical Analysis Under a Random Censoring Scheme with Applications
by Mustafa M. Hasaballah and Mahmoud M. Abdelwahab
Symmetry 2025, 17(7), 1048; https://doi.org/10.3390/sym17071048 - 3 Jul 2025
Viewed by 203
Abstract
The Gumbel Type-II distribution is a widely recognized and frequently utilized lifetime distribution, playing a crucial role in reliability engineering. This paper focuses on the statistical inference of the Gumbel Type-II distribution under a random censoring scheme. From a frequentist perspective, point estimates for the unknown parameters are derived using the maximum likelihood estimation method, and confidence intervals are constructed based on the Fisher information matrix. From a Bayesian perspective, Bayes estimates of the parameters are obtained using the Markov Chain Monte Carlo method, and the average lengths of credible intervals are calculated. The Bayesian inference is performed under both the squared error loss function and the general entropy loss function. Additionally, a numerical simulation is conducted to evaluate the performance of the proposed methods. To demonstrate their practical applicability, a real world example is provided, illustrating the application and development of these inference techniques. In conclusion, the Bayesian method appears to outperform other approaches, although each method offers unique advantages. Full article
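The maximum likelihood step above can be sketched for the Gumbel Type-II distribution, F(x) = exp(-b x^-a): for a fixed shape a, the scale MLE has the closed form b = n / Σ x^-a, so the shape can be found by profiling the log-likelihood over a grid. The data below are simulated (no censoring), so this is a simplified sketch of the uncensored case, not the paper's randomly censored analysis.

```python
import math, random
# Sketch of MLE for the Gumbel Type-II distribution F(x) = exp(-b * x**-a),
# profiling the log-likelihood over a grid of shape values. Simulated data.

random.seed(3)

def sample_gumbel2(a, b, n):
    # Inverse-CDF sampling: x = (-ln U / b) ** (-1/a).
    return [(-math.log(random.random()) / b) ** (-1.0 / a) for _ in range(n)]

def loglik(xs, a, b):
    # log f(x) = log(a*b) - (a+1) log x - b x**-a
    return sum(math.log(a * b) - (a + 1) * math.log(x) - b * x ** -a for x in xs)

def fit_gumbel2(xs, a_grid):
    best = None
    for a in a_grid:
        b = len(xs) / sum(x ** -a for x in xs)     # closed-form scale MLE
        ll = loglik(xs, a, b)
        if best is None or ll > best[2]:
            best = (a, b, ll)
    return best

xs = sample_gumbel2(2.0, 1.5, 4000)
a_hat, b_hat, _ = fit_gumbel2(xs, [1.0 + 0.05 * i for i in range(41)])
```

Random censoring modifies the likelihood (censored observations contribute survival terms), and the Bayesian estimates in the paper replace this point optimization with MCMC over the posterior.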

30 pages, 16041 KiB  
Article
Estimation of Inverted Weibull Competing Risks Model Using Improved Adaptive Progressive Type-II Censoring Plan with Application to Radiobiology Data
by Refah Alotaibi, Mazen Nassar and Ahmed Elshahhat
Symmetry 2025, 17(7), 1044; https://doi.org/10.3390/sym17071044 - 2 Jul 2025
Viewed by 289
Abstract
This study focuses on estimating the unknown parameters and the reliability function of the inverted-Weibull distribution, using an improved adaptive progressive Type-II censoring scheme under a competing risks model. Both classical and Bayesian estimation approaches are explored to offer a thorough analysis. Under the classical approach, maximum likelihood estimators are obtained for the unknown parameters and the reliability function. Approximate confidence intervals are also constructed to assess the uncertainty in the estimates. From a Bayesian standpoint, symmetric Bayes estimates and highest posterior density credible intervals are computed using Markov Chain Monte Carlo sampling, assuming a symmetric squared error loss function. An extensive simulation study is carried out to assess how well the proposed methods perform under different experimental conditions, showing promising accuracy. To demonstrate the practical use of these methods, a real dataset is analyzed, consisting of the survival times of male mice aged 35 to 42 days after being exposed to 300 roentgens of X-ray radiation. The analysis demonstrated that the inverted Weibull distribution is well-suited for modeling the given dataset. Furthermore, the Bayesian estimation method, considering both point estimates and interval estimates, was found to be more effective than the classical approach in estimating the model parameters as well as the reliability function. Full article

19 pages, 914 KiB  
Article
RU-OLD: A Comprehensive Analysis of Offensive Language Detection in Roman Urdu Using Hybrid Machine Learning, Deep Learning, and Transformer Models
by Muhammad Zain, Nisar Hussain, Amna Qasim, Gull Mehak, Fiaz Ahmad, Grigori Sidorov and Alexander Gelbukh
Algorithms 2025, 18(7), 396; https://doi.org/10.3390/a18070396 - 28 Jun 2025
Cited by 1 | Viewed by 319
Abstract
The detection of abusive language in Roman Urdu is important for secure digital interaction. This work investigates machine learning (ML), deep learning (DL), and transformer-based methods for detecting offensive language in Roman Urdu comments collected from YouTube news channels. Extracted features use TF-IDF and Count Vectorizer for unigrams, bigrams, and trigrams. Of all the ML models—Random Forest (RF), Logistic Regression (LR), Support Vector Machine (SVM), and Naïve Bayes (NB)—the best performance was achieved by the SVM. DL models involved evaluating Bi-LSTM and CNN models, where the CNN model outperformed the others. Moreover, transformer variants such as LLaMA 2 and ModernBERT (MBERT) were instantiated and fine-tuned with LoRA (Low-Rank Adaptation) for better efficiency. LoRA adapts large language models (LLMs) by training only low-rank weight updates, keeping fine-tuning computationally cheap. According to the experimental results, LLaMA 2 with LoRA attained the highest F1-score of 96.58%, greatly exceeding the performance of other approaches. To elaborate, LoRA-optimized transformers perform well in capturing detailed subtleties of linguistic nuances, lending themselves well to Roman Urdu offensive language detection. The study compares the performance of conventional and contemporary NLP methods, highlighting the relevance of effective fine-tuning methods. Our findings pave the way for scalable and accurate automated moderation systems for online platforms supporting multiple languages. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)
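The TF-IDF n-gram features described above can be built from scratch in a few lines. The sample romanised tokens below are invented for illustration, not drawn from the study's YouTube corpus.

```python
import math
from collections import Counter
# From-scratch TF-IDF over unigrams and bigrams, on invented tokens.

def ngrams(tokens, n):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(docs, n_range=(1, 2)):
    grams_per_doc = [
        Counter(g for n in range(n_range[0], n_range[1] + 1) for g in ngrams(d, n))
        for d in docs
    ]
    df = Counter(g for grams in grams_per_doc for g in grams)  # document frequency
    n_docs = len(docs)
    return [
        {g: tf * math.log(n_docs / df[g]) for g, tf in grams.items()}
        for grams in grams_per_doc
    ]

docs = [["yeh", "bohat", "bura", "hai"],
        ["yeh", "acha", "hai"],
        ["bura", "kaam", "hai"]]
vectors = tfidf(docs)
```

A term appearing in every document (here "hai") gets weight zero, while distinctive terms and bigrams carry the signal the classifiers learn from.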

16 pages, 2152 KiB  
Article
Vehicle Motion State Recognition Method Based on Hidden Markov Model and Support Vector Machine
by Xiaojun Zou, Weibo Xiang, Jihong Lian, En Song, Chengkai Tang and Yangyang Liu
Symmetry 2025, 17(7), 1011; https://doi.org/10.3390/sym17071011 - 27 Jun 2025
Viewed by 178
Abstract
With the development of intelligent transportation, vehicle motion state recognition has become a crucial method for enhancing the reliability of vehicle navigation and ensuring driving safety. Currently, machine learning is the main approach for recognizing vehicle motion states. The symmetry characteristics of sensor data have also been studied to better recognize motion states. However, the existing approaches face challenges during motion state changes due to indeterminate state boundaries, resulting in reduced recognition accuracy. To address this problem, this paper proposes a vehicle motion state recognition method based on the Hidden Markov Model (HMM) and Support Vector Machine (SVM). Firstly, Kalman filtering is applied to denoise the data of inertial sensors. Then, HMM is employed to capture subtle state transitions, enabling the recognition of complex dynamic state changes. Finally, SVM is utilized to classify motion states. The sensor data were collected in various vehicle motion states, including stationary, straight-line driving, lane changing, and turning. The proposed method is then compared with SVM, KNN (K-Nearest Neighbor), DT (Decision Tree), RF (Random Forest), and NB (Naive Bayes). The results of the experiment show that the proposed method improves the recognition accuracy of motion state transitions in the case of boundary ambiguity and is superior to the existing methods. Full article
(This article belongs to the Special Issue Symmetry and Its Application in Wireless Communication)
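The HMM stage above recovers the most likely hidden state sequence, which is classically done with Viterbi decoding. A sketch with a hypothetical two-state model (cruise vs. turn) and discretised yaw-rate observations; all probabilities are illustrative assumptions.

```python
import math
# Viterbi decoding for a discrete-observation HMM: a toy two-state vehicle
# model (cruise vs. turn). All probabilities are invented for illustration.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path, computed in log space."""
    v = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: v[-1][p] + math.log(trans_p[p][s]))
            col[s] = v[-1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

states = ("cruise", "turn")
start_p = {"cruise": 0.8, "turn": 0.2}
trans_p = {"cruise": {"cruise": 0.9, "turn": 0.1},
           "turn": {"cruise": 0.3, "turn": 0.7}}
# Observations are discretised yaw-rate readings: "low" or "high".
emit_p = {"cruise": {"low": 0.9, "high": 0.1},
          "turn": {"low": 0.2, "high": 0.8}}
path = viterbi(["low", "low", "high", "high", "low"], states, start_p, trans_p, emit_p)
```

The sticky transition probabilities are what smooth over ambiguous boundary frames; the paper then hands the HMM's state estimates to an SVM for final classification.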

16 pages, 5302 KiB  
Article
BREAST-CAD: A Computer-Aided Diagnosis System for Breast Cancer Detection Using Machine Learning
by Riyam M. Masoud, Ramadan Madi Ali Bakir, M. Sabry Saraya and Sarah M. Ayyad
Technologies 2025, 13(7), 268; https://doi.org/10.3390/technologies13070268 - 24 Jun 2025
Viewed by 341
Abstract
This research presents a novel Computer-Aided Diagnosis (CAD) system called BREAST-CAD, developed to support clinicians in breast cancer detection. Our approach follows a three-phase methodology: Initially, a comprehensive literature review between 2000 and 2024 informed the choice of a suitable dataset and the selection of Naive Bayes (NB), K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and Decision Trees (DT) Machine Learning (ML) algorithms. Subsequently, the dataset was preprocessed and the four ML models were trained and validated, with the DT model achieving superior accuracy. We developed a novel, integrated client–server architecture for real-time diagnostic support, an aspect often underexplored in the current CAD literature. In the final phase, the DT model was embedded within a user-friendly client application, empowering clinicians to input patient diagnostic data directly and receive immediate, AI-driven predictions of cancer probability, with results securely transmitted and managed by a dedicated server, facilitating remote access and centralized data storage and ensuring data integrity. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)

21 pages, 1764 KiB  
Article
Machine Learning-Based Predictive Maintenance at Smart Ports Using IoT Sensor Data
by Sheraz Aslam, Alejandro Navarro, Andreas Aristotelous, Eduardo Garro Crevillen, Alvaro Martínez-Romero, Álvaro Martínez-Ceballos, Alessandro Cassera, Kyriacos Orphanides, Herodotos Herodotou and Michalis P. Michaelides
Sensors 2025, 25(13), 3923; https://doi.org/10.3390/s25133923 - 24 Jun 2025
Viewed by 1147
Abstract
Maritime transportation plays a critical role in global containerized cargo logistics, with seaports serving as key nodes in this system. Ports are responsible for container loading and unloading, along with inspection, storage, and timely delivery to the destination, all of which heavily depend on the performance of the container handling equipment (CHE). Inefficient maintenance strategies and unplanned maintenance of the port equipment can lead to operational disruptions, including unexpected delays and long waiting times in the supply chain. Therefore, the maritime industry must adopt intelligent maintenance strategies at the port to optimize operational efficiency and resource utilization. Towards this end, this study presents a machine learning (ML)-based approach for predicting faults in CHE to improve equipment reliability and overall port performance. Firstly, a statistical model was developed to check the status and health of the hydraulic system, as it is crucial for the operation of the machines. Then, several ML models were developed, including artificial neural networks (ANNs), decision trees (DTs), random forest (RF), Extreme Gradient Boosting (XGBoost), and Gaussian Naive Bayes (GNB) to predict inverter over-temperature faults due to fan failures, clogged filters, and other related issues. From the tested models, the ANNs achieved the highest performance in predicting the specific faults with a 98.7% accuracy and 98.0% F1-score. Full article
(This article belongs to the Special Issue Sensors and IoT Technologies for the Smart Industry)
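A statistical health check of the kind described for the hydraulic system can be as simple as flagging readings that drift beyond k standard deviations of a healthy baseline. The baseline temperatures and the threshold k below are assumptions for illustration, not values from the study.

```python
import statistics
# Hypothetical sketch of a simple statistical health check: flag sensor
# readings more than k standard deviations from a healthy baseline.

def health_flags(baseline, readings, k=3.0):
    mu = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return [abs(r - mu) > k * sd for r in readings]

baseline = [62.1, 61.8, 62.4, 62.0, 61.9, 62.2, 62.1, 61.7]  # °C, healthy runs
flags = health_flags(baseline, [62.0, 63.0, 71.5])
```

Checks like this catch gross anomalies cheaply; the ML models in the study go further by predicting specific fault modes (e.g. inverter over-temperature from fan failures) before thresholds are crossed.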

21 pages, 5516 KiB  
Article
Hyperspectral Imaging for Non-Destructive Moisture Prediction in Oat Seeds
by Peng Zhang and Jiangping Liu
Agriculture 2025, 15(13), 1341; https://doi.org/10.3390/agriculture15131341 - 22 Jun 2025
Viewed by 353
Abstract
Oat is a highly nutritious cereal crop, and the moisture content of its seeds plays a vital role in cultivation management, storage preservation, and quality control. To enable efficient and non-destructive prediction of this key quality parameter, this study presents a modeling framework integrating hyperspectral imaging (HSI) technology with a dual-optimization machine learning strategy. Seven spectral preprocessing techniques—standard normal variate (SNV), multiplicative scatter correction (MSC), first derivative (FD), second derivative (SD), and combinations such as SNV + FD, SNV + SD, and SNV + MSC—were systematically evaluated. Among them, SNV combined with FD was identified as the optimal preprocessing scheme, effectively enhancing spectral feature expression. To further refine the predictive model, three feature selection methods—successive projections algorithm (SPA), competitive adaptive reweighted sampling (CARS), and principal component analysis (PCA)—were assessed. PCA exhibited superior performance in information compression and modeling stability. Subsequently, a dual-optimized neural network model, termed Bayes-ASFSSA-BP, was developed by incorporating Bayesian optimization and the Adaptive Spiral Flight Sparrow Search Algorithm (ASFSSA). Bayesian optimization was used for global tuning of network structural parameters, while ASFSSA was applied to fine-tune the initial weights and thresholds, improving convergence efficiency and predictive accuracy. The proposed Bayes-ASFSSA-BP model achieved determination coefficients (R2) of 0.982 and 0.963, and root mean square errors (RMSEs) of 0.173 and 0.188 on the training and test sets, respectively. The corresponding mean absolute error (MAE) on the test set was 0.170, indicating excellent average prediction accuracy. These results significantly outperformed benchmark models such as SSA-BP, ASFSSA-BP, and Bayes-BP. Compared to the conventional BP model, the proposed approach increased the test R2 by 0.046 and reduced the RMSE by 0.157. Moreover, the model produced the narrowest 95% confidence intervals for test set performance (Rp2: [0.961, 0.971]; RMSE: [0.185, 0.193]), demonstrating outstanding robustness and generalization capability. Although the model incurred a slightly higher computational cost (480.9 s), the accuracy gain was deemed worthwhile. In conclusion, the proposed Bayes-ASFSSA-BP framework shows strong potential for accurate and stable non-destructive prediction of oat seed moisture content. This work provides a practical and efficient solution for quality assessment in agricultural products and highlights the promise of integrating Bayesian optimization with ASFSSA in modeling high-dimensional spectral data. Full article
(This article belongs to the Section Digital Agriculture)
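The winning preprocessing combination, SNV followed by a first derivative, is simple to state: SNV scales each spectrum to zero mean and unit variance, and a finite-difference derivative then emphasises slope features. The spectrum below is synthetic, not a real hyperspectral band profile.

```python
import math
# SNV (standard normal variate) followed by a first derivative, the
# preprocessing pair identified as optimal above. Synthetic spectrum.

def snv(spectrum):
    """Scale one spectrum to zero mean and unit (sample) variance."""
    m = sum(spectrum) / len(spectrum)
    sd = math.sqrt(sum((v - m) ** 2 for v in spectrum) / (len(spectrum) - 1))
    return [(v - m) / sd for v in spectrum]

def first_derivative(spectrum):
    """Simple finite difference between adjacent bands."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

raw = [0.52, 0.55, 0.61, 0.70, 0.78, 0.80, 0.79]   # reflectance per band
processed = first_derivative(snv(raw))
```

SNV removes per-sample multiplicative scatter effects before the derivative sharpens band-to-band changes, which is why the combination improves downstream regression.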
