Search Results (167)

Search Parameters:
Keywords = supervised, semi-supervised, and unsupervised learning

23 pages, 338 KB  
Review
Remote Sensing, GIS, and Machine Learning in Water Resources Management for Arid Agricultural Regions: A Review
by Anas B. Rabie, Mohamed Elhag and Ali Subyani
Water 2025, 17(21), 3125; https://doi.org/10.3390/w17213125 - 31 Oct 2025
Viewed by 370
Abstract
Efficient water resource management in arid and semi-arid regions is a critical challenge due to persistent scarcity, climate change, and unsustainable agricultural practices. This review synthesizes recent advances in applying remote sensing (RS), geographic information systems (GIS), and machine learning (ML) to monitor, analyze, and optimize water use in vulnerable agricultural landscapes. RS is evaluated for its capacity to quantify soil moisture, evapotranspiration, vegetation dynamics, and surface water extent. GIS applications are reviewed for hydrological modeling, watershed analysis, irrigation zoning, and multi-criteria decision-making. ML algorithms, including supervised, unsupervised, and deep learning approaches, are assessed for forecasting, classification, and hybrid integration with RS and GIS. Case studies from Central Asia, North Africa, the Middle East, and the United States illustrate successful implementations across various applications. The review also applies the DPSIR (Driving Force–Pressure–State–Impact–Response) framework to connect geospatial analytics with water policy, stakeholder engagement, and resilience planning. Key gaps include data scarcity, limited model interpretability, and equity challenges in tool access. Future directions emphasize explainable AI, cloud-based platforms, real-time modeling, and participatory approaches. By integrating RS, GIS, and ML, this review demonstrates pathways for more transparent, precise, and inclusive water governance in arid agricultural regions. Full article
22 pages, 2618 KB  
Article
Improving Coronary Artery Disease Diagnosis in Cardiac MRI with Self-Supervised Learning
by Usman Khalid, Mehmet Kaya and Reda Alhajj
Diagnostics 2025, 15(20), 2618; https://doi.org/10.3390/diagnostics15202618 - 17 Oct 2025
Viewed by 342
Abstract
Background/Objectives: The excessive dependence on data annotation, the scarcity of labeled data, and the substantial expense of annotation, especially in healthcare, have constrained the efficacy of conventional supervised learning methodologies. Self-supervised learning (SSL) has arisen as a viable option by utilizing unlabeled data via pretext tasks. This paper examines the efficacy of supervised (pseudo-labels) and unsupervised (no pseudo-labels) pretext models in self-supervised learning for the classification of coronary artery disease (CAD) utilizing cardiac MRI data, highlighting performance under data scarcity, out-of-distribution (OOD) conditions, and adversarial attack. Methods: Two datasets, referred to as CAD Cardiac MRI and Ohio State Cardiac MRI Raw Data (OCMR), were utilized to establish three pretext tasks: (i) supervised Gaussian noise addition, (ii) supervised image rotation, and (iii) unsupervised generative reconstruction. These models were evaluated against the Simple Framework for Contrastive Learning (SimCLR), a prevalent unsupervised contrastive learning framework. Performance was assessed under three data reduction scenarios (20%, 50%, 70%), out-of-distribution conditions, and adversarial attacks using FGSM and PGD, alongside other evaluation criteria. Results: The Gaussian noise-based model attained the highest validation accuracy (up to 99.9%) across all data reduction scenarios and proved the most robust to adversarial perturbations, leading on all other measures as well. The rotation-based model was markedly susceptible to attacks and lost accuracy as data were reduced. The generative reconstruction model demonstrated moderate efficacy with minimal performance decline. SimCLR performed strongly under standard conditions but showed inferior robustness relative to the Gaussian noise model.
Conclusions: Carefully crafted self-supervised pretext tasks show promise for cardiac MRI classification, delivering dependable performance and generalizability even with limited data. These initial findings underscore SSL's capacity to produce reliable models for safety-critical healthcare applications and motivate further validation across varied datasets and clinical environments. Full article
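The two supervised pretext tasks above (rotation prediction and Gaussian noise addition) can be illustrated with a minimal sketch. This is not the authors' implementation: images are assumed to be plain lists of pixel rows, and the noise level `sigma` is an arbitrary choice.

```python
import random

def rotate90(img):
    """Rotate a 2D list-of-lists image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def rotation_pretext(img):
    """Supervised pretext task: rotate the image by a random multiple of
    90 degrees and return (rotated_image, rotation_class in {0,1,2,3}).
    The rotation class is a free label derived from the data itself."""
    k = random.randrange(4)
    out = img
    for _ in range(k):
        out = rotate90(out)
    return out, k

def noise_pretext(img, sigma=0.1):
    """Supervised pretext task: corrupt pixels with Gaussian noise; a
    model would then be trained to recover the clean image (the label)."""
    noisy = [[px + random.gauss(0.0, sigma) for px in row] for row in img]
    return noisy, img
```

A model pretrained on either task learns image structure without manual annotation; its encoder is then fine-tuned on the small labeled CAD set.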

28 pages, 7590 KB  
Article
A Two-Stage Machine Learning Framework for Air Quality Prediction in Hamilton, New Zealand
by Noor H. S. Alani, Praneel Chand and Mohammad Al-Rawi
Environments 2025, 12(9), 336; https://doi.org/10.3390/environments12090336 - 20 Sep 2025
Viewed by 1165
Abstract
Air quality significantly affects human health, productivity, and overall well-being. This study applies machine learning techniques to analyse and predict air quality in Hamilton, New Zealand, focusing on particulate matter (PM2.5 and PM10) and environmental factors such as temperature, humidity, wind speed, and wind direction. Data were collected from two monitoring sites (Claudelands and Rotokauri) to explore relationships between variables and evaluate the performance of different predictive models. First, the unsupervised k-means clustering algorithm was used to categorise air quality levels based on data from one or both locations. These cluster labels were then used as target variables in supervised learning models, including random forests, decision trees, support vector machines, and k-nearest neighbours. Model performance was assessed by comparing prediction accuracy for air quality at either Claudelands or Rotokauri. Results show that the random forest (93.6%) and decision tree (91.8%) models outperformed k-nearest neighbours (KNN, 83%) and support vector machine (SVM, 61%) in predicting air quality clusters derived from k-means analysis. The three clusters (very good, good, and moderate) reflected seasonal and urban–semi-urban gradients, while cross-location validation confirmed that models trained at Claudelands generalised effectively to Rotokauri, demonstrating scalability for regional air quality forecasting. These findings highlight the potential of combining clustering with supervised learning to improve air quality predictions. Such methods could support environmental monitoring and inform strategies for mitigating pollution-related health risks in New Zealand cities and beyond. Full article
(This article belongs to the Special Issue Air Pollution in Urban and Industrial Areas III)
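The two-stage design above (unsupervised clustering to create target labels, then supervised prediction) can be sketched in a few lines. This is a toy illustration, not the study's pipeline: a minimal k-means and a nearest-centroid rule stand in for the random forest and the other classifiers.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Stage one: minimal k-means. Returns (centroids, cluster labels)."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid
        labels = [min(range(k), key=lambda j: dist2(p, cents[j])) for p in points]
        # update step: centroid = mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                cents[j] = [sum(c) / len(members) for c in zip(*members)]
    return cents, labels

def predict(cents, p):
    """Stage two stand-in: assign a new observation to the nearest cluster
    centroid. The study instead trains supervised models (random forest,
    decision tree, SVM, KNN) on the cluster labels."""
    return min(range(len(cents)), key=lambda j: dist2(p, cents[j]))
```

Training the stage-two classifier on cluster labels from one site is what allows the cross-location validation the abstract describes (train at Claudelands, predict at Rotokauri).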

33 pages, 8991 KB  
Article
Towards Sustainable Waste Management: Predictive Modelling of Illegal Dumping Risk Zones Using Circular Data Loops and Remote Sensing
by Borut Hojnik, Gregor Horvat, Domen Mongus, Matej Brumen and Rok Kamnik
Sustainability 2025, 17(18), 8280; https://doi.org/10.3390/su17188280 - 15 Sep 2025
Cited by 1 | Viewed by 844
Abstract
Illegal waste dumping poses a severe challenge to sustainable urban and regional development, undermining environmental integrity, public health, and the efficient use of resources. This study contributes to sustainability science by proposing a circular data feedback loop that enables dynamic, scalable, and cost-efficient monitoring and prevention of illegal dumping, aligned with the goals of sustainable waste governance. Historical data from the Slovenian illegal dumping register, UAV-based surveys, and a newly developed application were used to update, monitor, and validate waste site locations. A comprehensive risk model, developed using machine learning methods, was created for the Municipality of Maribor (Slovenia). The modelling approach combined unsupervised and semi-supervised learning techniques, suitable for a positive-unlabeled (PU) dataset structure, where only confirmed illegal waste dumping sites were labeled. The approach demonstrates the feasibility of a circular data feedback loop integrating updated field data and predictive analytics to support waste management authorities and illegal waste dumping prevention. The fundamental characteristic of this approach is that each iteration of the loop improves the prediction of risk areas, providing a high-quality database for targeted UAV overflights and, consequently, for detecting locations of illegally dumped waste (LNOP). At the same time, information on risk areas serves as the primary basis for each field detection of new LNOPs. The proposed model outperforms earlier approaches by addressing smaller and less conspicuous dumping events and by enabling systematic, technology-supported detection and prevention planning. Full article
(This article belongs to the Section Waste and Recycling)
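The positive-unlabeled (PU) setting the authors describe, where only confirmed dumping sites carry labels, can be illustrated with a common two-step heuristic. This is a generic sketch, not the paper's model: "reliable negatives" are chosen by distance from the positive centroid, and the fraction `frac` is an arbitrary choice.

```python
def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    return [sum(c) / len(points) for c in zip(*points)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pu_two_step(positives, unlabeled, frac=0.5):
    """Two-step heuristic for positive-unlabeled data: take the unlabeled
    points farthest from the positive centroid as 'reliable negatives',
    then classify every unlabeled point by its nearer centroid.
    Returns a list of 0/1 risk flags aligned with `unlabeled`."""
    cp = centroid(positives)
    ranked = sorted(unlabeled, key=lambda p: dist2(p, cp), reverse=True)
    reliable_neg = ranked[: max(1, int(frac * len(unlabeled)))]
    cn = centroid(reliable_neg)
    return [1 if dist2(p, cp) < dist2(p, cn) else 0 for p in unlabeled]
```

A real PU pipeline would replace both steps with learned models (for instance, retraining a classifier on the positives plus the reliable negatives), but the label-scarcity logic is the same.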

42 pages, 5040 KB  
Systematic Review
A Systematic Review of Machine Learning Analytic Methods for Aviation Accident Research
by Aziida Nanyonga, Ugur Turhan and Graham Wild
Sci 2025, 7(3), 124; https://doi.org/10.3390/sci7030124 - 4 Sep 2025
Cited by 1 | Viewed by 1530
Abstract
The aviation industry prioritizes safety and has embraced innovative approaches for both reactive and proactive safety measures. Machine learning (ML) has emerged as a useful tool for aviation safety. This systematic literature review explores ML applications for safety within the aviation industry over the past 25 years. Through a comprehensive search on Scopus and backward reference searches via Google Scholar, the 87 most relevant papers were identified. The investigation focused on the application context, ML techniques employed, data sources, and the implications of contextual nuances for safety analysis outcomes. ML techniques have proven effective for post-accident analysis, predictive modelling, and real-time incident detection across diverse aviation scenarios. Supervised, unsupervised, and semi-supervised learning methods, including neural networks, decision trees, support vector machines, and deep learning models, have all been applied to analyzing accidents, identifying patterns, and forecasting potential incidents. Notably, the Aviation Safety Reporting System (ASRS) and National Transportation Safety Board (NTSB) datasets were the most frequently used data sources. Transparency, fairness, and bias mitigation emerge as critical factors that shape the credibility and acceptance of ML-based safety research in aviation. The review revealed seven recommended future research directions: (1) interpretable AI; (2) real-time prediction; (3) hybrid models; (4) handling of unbalanced datasets; (5) privacy and data security; (6) human–machine interfaces for safety professionals; (7) regulatory implications. These directions provide a blueprint for further ML-based aviation safety research. This review underscores the role of ML applications in shaping aviation safety practices, thereby enhancing safety for all stakeholders. It serves as a constructive and cautionary guide for researchers, practitioners, and decision-makers, emphasizing the value of ML when used appropriately to make aviation safety more data-driven and proactive. Full article

26 pages, 389 KB  
Article
Integrating AI with Meta-Language: An Interdisciplinary Framework for Classifying Concepts in Mathematics and Computer Science
by Elena Kramer, Dan Lamberg, Mircea Georgescu and Miri Weiss Cohen
Information 2025, 16(9), 735; https://doi.org/10.3390/info16090735 - 26 Aug 2025
Viewed by 531
Abstract
Providing students with effective learning resources is essential for improving educational outcomes—especially in complex and conceptually diverse fields such as Mathematics and Computer Science. To better understand how these subjects are communicated, this study investigates the linguistic structures embedded in academic texts from selected subfields within both disciplines. In particular, we focus on meta-languages—the linguistic tools used to express definitions, axioms, intuitions, and heuristics within a discipline. The primary objective of this research is to identify which subfields of Mathematics and Computer Science share similar meta-languages. Identifying such correspondences may enable the rephrasing of content from less familiar subfields using styles that students already recognize from more familiar areas, thereby enhancing accessibility and comprehension. To pursue this aim, we compiled text corpora from multiple subfields across both disciplines. We compared their meta-languages using a combination of supervised (Neural Network) and unsupervised (clustering) learning methods. Specifically, we applied several clustering algorithms—K-means, Partitioning around Medoids (PAM), Density-Based Clustering, and Gaussian Mixture Models—to analyze inter-discipline similarities. To validate the resulting classifications, we used XLNet, a deep learning model known for its sensitivity to linguistic patterns. The model achieved an accuracy of 78% and an F1-score of 0.944. Our findings show that subfields can be meaningfully grouped based on meta-language similarity, offering valuable insights for tailoring educational content more effectively. To further verify these groupings and explore their pedagogical relevance, we conducted both quantitative and qualitative research involving student participation. 
This paper presents findings from the qualitative component—namely, a content analysis of semi-structured interviews with software engineering students and lecturers. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

27 pages, 6169 KB  
Article
Application of Semi-Supervised Clustering with Membership Information and Deep Learning in Landslide Susceptibility Assessment
by Hua Xia, Zili Qin, Yuanxin Tong, Yintian Li, Rui Zhang and Hongxia Luo
Land 2025, 14(7), 1472; https://doi.org/10.3390/land14071472 - 15 Jul 2025
Viewed by 531
Abstract
Landslide susceptibility assessment (LSA) plays a crucial role in disaster prevention and mitigation. Traditional random selection of non-landslide samples (labeled as 0) suffers from poor representativeness and high randomness, which may include potential landslide areas and affect the accuracy of LSA. To address this issue, this study proposes a novel Landslide Susceptibility Index-based Semi-supervised Fuzzy C-Means (LSI-SFCM) sampling strategy that incorporates membership degrees. It uses landslide and unlabeled samples to map the landslide membership degree via Semi-supervised Fuzzy C-Means (SFCM). Non-landslide samples are selected from low-membership regions and assigned their membership values as labels. The study developed three models for LSA, a Convolutional Neural Network (CNN), a U-Net, and a Support Vector Machine (SVM), and compared three negative-sample strategies: Random Sampling (RS), SFCM (samples labeled 0), and LSI-SFCM. The results demonstrate that LSI-SFCM effectively enhances the representativeness and diversity of negative samples, improving predictive performance and classification reliability. Deep learning models using LSI-SFCM showed superior predictive capability: the CNN model achieved an area under the receiver operating characteristic curve (AUC) of 95.52% and a prediction rate curve value of 0.859. Furthermore, compared with traditional unsupervised fuzzy C-means (FCM) clustering, SFCM produced a more reasonable distribution of landslide membership degrees, better reflecting the distinction between landslides and non-landslides. This approach enhances the reliability of LSA and provides a scientific basis for disaster prevention and mitigation authorities. Full article
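The membership degrees at the core of SFCM follow the standard fuzzy C-means update, sketched below for fixed cluster centres. This is the generic FCM formula, not the paper's semi-supervised variant, and the fuzzifier m = 2 is only the conventional default.

```python
def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fcm_memberships(points, cents, m=2.0):
    """Fuzzy C-Means membership update: for a point with distances d_i to
    the cluster centres, membership in cluster i is
    1 / sum_k (d_i / d_k)^(2/(m-1)). Memberships over all clusters sum
    to 1, so they can serve as soft labels (in LSI-SFCM, low landslide
    membership marks candidate non-landslide samples)."""
    exp = 2.0 / (m - 1.0)
    out = []
    for p in points:
        d = [max(dist(p, c), 1e-12) for c in cents]  # guard zero distance
        u = [1.0 / sum((d[i] / d[k]) ** exp for k in range(len(cents)))
             for i in range(len(cents))]
        out.append(u)
    return out
```

In the paper's strategy, the membership value itself (rather than a hard 0) is then used as the training label for selected negatives.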

20 pages, 1198 KB  
Article
Semi-Supervised Deep Learning Framework for Predictive Maintenance in Offshore Wind Turbines
by Valerio F. Barnabei, Tullio C. M. Ancora, Giovanni Delibra, Alessandro Corsini and Franco Rispoli
Int. J. Turbomach. Propuls. Power 2025, 10(3), 14; https://doi.org/10.3390/ijtpp10030014 - 4 Jul 2025
Cited by 1 | Viewed by 1044
Abstract
The increasing deployment of wind energy systems, particularly offshore wind farms, necessitates advanced monitoring and maintenance strategies to ensure optimal performance and minimize downtime. Supervisory Control And Data Acquisition (SCADA) systems have become indispensable tools for monitoring the operational health of wind turbines, generating vast quantities of time series data from various sensors. Anomaly detection techniques applied to this data offer the potential to proactively identify deviations from normal behavior, providing early warning signals of potential component failures. Traditional model-based approaches for fault detection often struggle to capture the complexity and non-linear dynamics of wind turbine systems. This has led to a growing interest in data-driven methods, particularly those leveraging machine learning and deep learning, to address anomaly detection in wind energy applications. This study focuses on the development and application of a semi-supervised, multivariate anomaly detection model for horizontal axis wind turbines. The core of this study lies in Bidirectional Long Short-Term Memory (BI-LSTM) networks, specifically a BI-LSTM autoencoder architecture, to analyze time series data from a SCADA system and automatically detect anomalous behavior that could indicate potential component failures. Moreover, the approach is reinforced by the integration of the Isolation Forest algorithm, which operates in an unsupervised manner to further refine normal behavior by identifying and excluding additional anomalous points in the training set, beyond those already labeled by the data provider. The research utilizes a real-world dataset provided by EDP Renewables, encompassing two years of comprehensive SCADA records collected from a single offshore wind turbine operating in the Gulf of Guinea. 
Furthermore, the dataset contains logs of failure events and alarms triggered by the SCADA system across a wide range of subsystems. The paper proposes a multi-modal anomaly detection framework orchestrating an unsupervised module (the Isolation Forest) with a semi-supervised one (the BI-LSTM autoencoder). The results highlight the efficacy of the BI-LSTM autoencoder in identifying anomalies within the SCADA data that correlate strongly in time with logged warnings and the actual failure events. The model's performance is evaluated using standard machine learning metrics (precision, recall, F1 score, and accuracy), all of which demonstrate favorable results. Further analysis uses Cumulative Sum (CUSUM) control charts to examine the identified anomalies' behaviour, particularly their persistence and timing leading up to the failures. Full article
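The CUSUM control chart mentioned at the end can be sketched generically. The reference value `k` and decision threshold `h` below are hypothetical, as the paper's chart settings are not stated here, and the input is assumed to be a scalar per-timestep score such as a reconstruction error.

```python
def cusum_upper(xs, target, k, h):
    """One-sided (upper) CUSUM chart:
        S_t = max(0, S_{t-1} + x_t - target - k)
    An alarm is raised the first time S_t exceeds the decision value h.
    Returns (list of S_t values, index of first alarm or None)."""
    s, path, alarm = 0.0, [], None
    for t, x in enumerate(xs):
        s = max(0.0, s + x - target - k)
        path.append(s)
        if alarm is None and s > h:
            alarm = t
    return path, alarm
```

Because the statistic accumulates small exceedances over time, it highlights persistent drifts in the anomaly score, which is exactly the persistence-before-failure behaviour the authors examine.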

33 pages, 6831 KB  
Review
Machine Learning and Artificial Intelligence Techniques in Smart Grids Stability Analysis: A Review
by Arman Fathollahi
Energies 2025, 18(13), 3431; https://doi.org/10.3390/en18133431 - 30 Jun 2025
Cited by 2 | Viewed by 4667
Abstract
The incorporation of renewable energy sources in power grids has necessitated innovative solutions for effective energy management. Smart grids have emerged as transformative systems which integrate consumer, generator, and dual-role entities to deliver secure, sustainable, and economical electricity supplies. This review explores the important role of artificial intelligence and machine learning approaches in managing the evolving stability characteristics of smart grids. The work starts with a discussion of the smart grid's dynamic structures and then transitions into an overview of machine learning approaches, exploring various algorithms and their applications to enhance smart grid operations. A comprehensive analysis of frameworks illustrates how machine learning and artificial intelligence solve issues related to distributed energy supplies, load management, and contingency planning. The review includes general pseudocode and schematic architectures of artificial intelligence and machine learning methods, categorized into supervised, semi-supervised, unsupervised, and reinforcement learning. It covers support vector machines, decision trees, artificial neural networks, extreme learning machines, and probabilistic graphical models, as well as reinforcement strategies such as dynamic programming, Monte Carlo methods, temporal difference learning, and deep Q-networks. The examination extends to stability assessment, voltage and frequency regulation, and fault detection methods, highlighting their applications in extending smart grid operational boundaries. The review underlines the broad array of machine learning algorithms available and emphasizes the integration of reinforcement learning as a pivotal enhancement to intelligent decision-making within smart grid environments.
As a resource, this review offers insights for researchers, practitioners, and policymakers by providing a roadmap for leveraging intelligent technologies in smart grid control and stability analysis. Full article
(This article belongs to the Special Issue Advances in Power Converters and Microgrids)
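Of the reinforcement methods the review catalogues, temporal-difference learning is the easiest to show concretely. The sketch below is a generic tabular Q-learning loop on a toy chain environment, not any grid application from the review; all parameter values are illustrative.

```python
import random

def q_learning(n_states=4, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a toy chain: states 0..n-1, actions
    {0: left, 1: right}, reward 1 for reaching the rightmost state.
    Illustrates the temporal-difference update
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

Deep Q-networks replace the table `Q` with a neural network, but the same TD update drives learning.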

27 pages, 7068 KB  
Article
Semi-Supervised Fault Diagnosis Method for Hydraulic Pumps Based on Data Augmentation Consistency Regularization
by Siyuan Liu, Jixiong Yin, Zhengming Zhang, Yongqiang Zhang, Chao Ai and Wanlu Jiang
Machines 2025, 13(7), 557; https://doi.org/10.3390/machines13070557 - 26 Jun 2025
Cited by 1 | Viewed by 626
Abstract
Due to the scarcity of labeled samples, practical engineering application of deep learning-based hydraulic pump fault diagnosis methods is extremely challenging. This study proposes a semi-supervised learning method based on data augmentation consistency regularization (DACR) to address the lack of labeled data for diagnostic models. It uses augmented data obtained from an improved symplectic geometry modal decomposition method as additional perturbations, expanding the feature space of the limited labeled samples under different operating conditions of the pump. A high-confidence label prediction process is formulated through a threshold determination strategy to estimate the potential label distribution of unlabeled samples. Consistency regularization losses are introduced for labeled and unlabeled data to regularize model training, reducing the classifier's sensitivity to the additional perturbations. The supervised loss term ensures that predictions for the augmented labeled samples are consistent with the true labels, while the unsupervised loss term minimizes the difference between the prediction distributions of different augmented versions of unlabeled samples. Finally, the proposed method is combined with a Kolmogorov–Arnold Network (KAN). Comparative experiments on data from two models of hydraulic pumps verify the superior recognition performance of this method at low label rates. Full article
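The two ingredients described above, a confidence threshold for pseudo-labels and a consistency loss between augmented views, can be sketched generically. This is not the DACR implementation: probabilities are plain lists, the threshold is an arbitrary value, and a mean squared difference stands in for whatever divergence the paper uses.

```python
def pseudo_labels(probs, threshold=0.95):
    """Confidence thresholding as used in consistency-regularization
    methods: keep a pseudo-label for an unlabeled sample only when the
    model's top predicted probability exceeds the threshold.
    `probs` is a list of per-class probability vectors; returns a list
    of (sample index, predicted class) pairs for retained samples."""
    kept = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf > threshold:
            kept.append((i, p.index(conf)))
    return kept

def consistency_loss(p_weak, p_strong):
    """Unsupervised consistency term: mean squared difference between
    predictions on two differently augmented views of the same samples."""
    return sum(sum((a - b) ** 2 for a, b in zip(pw, ps))
               for pw, ps in zip(p_weak, p_strong)) / len(p_weak)
```

Only the confidently pseudo-labeled samples contribute to the unsupervised loss, which keeps noisy guesses from dominating training at low label rates.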

24 pages, 1617 KB  
Article
Destructive Creation of New Invasive Technologies: Generative Artificial Intelligence Behaviour
by Mario Coccia
Technologies 2025, 13(7), 261; https://doi.org/10.3390/technologies13070261 - 20 Jun 2025
Cited by 1 | Viewed by 926
Abstract
This study proposes a new concept that explains a source of technological change: the invasive behaviour of general purpose technologies, which break into scientific and technological ecosystems with accelerated diffusion of new products and processes that destroy the usage value of all units previously used. This study highlights the dynamics of the invasive destruction of new path-breaking technologies in driving innovative activity. Invasive technologies conquer the scientific, technological, and business spaces of alternative technologies by introducing manifold radical innovations that support technological, economic, and social change. The proposed theoretical framework is verified empirically in new technologies of neural network architectures, comparing transformer technology (a deep learning architecture having unsupervised and semi-supervised algorithms that create new content and mimic human ability, supporting Generative Artificial Intelligence) to Long Short-Term Memory (LSTM) and Recurrent Neural Networks (RNNs). Statistical evidence here, based on patent analyses, reveals that the exponential growth rate of transformer technology over the five-year period 2020–2024 is 45.91%, more than double that of the alternative technologies LSTM (21.17%) and RNN (18.15%). Moreover, the proposed invasive rate in technological space is very high for transformer technology, at 2.2%, compared with 1.39% for LSTM and 1.22% for RNN over 2020–2024. The invasive behaviour of drastic technologies is a new approach that can explain one of the major causes of global technological change, and this examination significantly contributes to our understanding of the current dynamics in the technological evolution of Artificial Intelligence, which has high industrial impacts on the progress of human society. Full article
(This article belongs to the Section Information and Communication Technologies)

49 pages, 2038 KB  
Review
A Review of Non-Fully Supervised Deep Learning for Medical Image Segmentation
by Xinyue Zhang, Jianfeng Wang, Jinqiao Wei, Xinyu Yuan and Ming Wu
Information 2025, 16(6), 433; https://doi.org/10.3390/info16060433 - 24 May 2025
Cited by 2 | Viewed by 3148
Abstract
Medical image segmentation, a critical task in medical image analysis, aims to precisely delineate regions of interest (ROIs) such as organs, lesions, and cells, and is crucial for applications including computer-aided diagnosis, surgical planning, radiation therapy, and pathological analysis. While fully supervised deep learning methods have demonstrated remarkable performance in this domain, their reliance on large-scale, pixel-level annotated datasets—a significant label scarcity challenge—severely hinders their widespread deployment in clinical settings. Addressing this limitation, this review focuses on non-fully supervised learning paradigms, systematically investigating the application of semi-supervised, weakly supervised, and unsupervised learning techniques for medical image segmentation. We delve into the theoretical foundations, core advantages, typical application scenarios, and representative algorithmic implementations associated with each paradigm. Furthermore, this paper compiles and critically reviews commonly utilized benchmark datasets within the field. Finally, we discuss future research directions and challenges, offering insights for advancing the field and reducing dependence on extensive annotation. Full article
(This article belongs to the Section Biomedical Information and Health)

17 pages, 6015 KB  
Article
Process Monitoring of One-Shot Drilling of Al/CFRP Aeronautical Stacks Using the 1DCAE-GMM Framework
by Giulio Mattera, Maria Grazia Marchesano, Alessandra Caggiano, Guido Guizzi and Luigi Nele
Electronics 2025, 14(9), 1777; https://doi.org/10.3390/electronics14091777 - 27 Apr 2025
Cited by 2 | Viewed by 841
Abstract
This study explores advanced process monitoring for one-shot drilling of aeronautical stacks made of aluminium 2024 and carbon fibre-reinforced polymer (CFRP) laminates using a 4.8 mm diameter drilling tool and unsupervised machine learning techniques. An experimental campaign is conducted to collect thrust force and torque signals at a 10 kHz sampling rate during the drilling process. These signals are employed for real-time process monitoring, focusing on material change detection and anomaly identification, where anomalies are defined as holes that fail to meet predefined quality criteria. An innovative approach based on unsupervised learning is proposed to enable automatic material change identification, signal segmentation, feature extraction, and hole quality assessment. Specifically, a semi-supervised approach based on a Gaussian Mixture Model (GMM) and 1D Convolutional AutoEncoder (1D-CAE) is employed to detect deviations from normal drilling conditions. The proposed method is benchmarked against state-of-the-art supervised techniques, including logistic regression (LR) and Support Vector Machines (SVMs). Results show that these traditional models struggle with class imbalance, leading to overfitting and limited generalisation, as reflected by F1 scores of 0.78 and 0.75 for LR and SVM, respectively. In contrast, the proposed semi-supervised approach improves anomaly detection, achieving an F1 score of 0.87 by more effectively identifying poor-quality holes. This study demonstrates the potential of deep learning-based semi-supervised methods for intelligent process monitoring, enabling adaptive control in the drilling process of hybrid stacks and detecting anomalous holes. While the proposed approach effectively handles small and imbalanced datasets, further research into generative AI could raise the F1 score above 0.90 and support deployment in real industrial environments. 
Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Intelligent Manufacturing)
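
The core 1D-CAE idea of scoring holes by how badly a model trained only on normal signals reconstructs them can be caricatured with a linear autoencoder (the top principal components of the normal signals) and a percentile threshold on reconstruction error; the synthetic force signals, the 2-dimensional latent space, and the 99th-percentile rule are assumptions of this sketch, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal "force signals": random-phase sinusoids; anomalies carry an extra harmonic
t = np.linspace(0, 1, 64)
normal = np.array([np.sin(2 * np.pi * t + p) for p in rng.uniform(0, 2 * np.pi, 200)])
normal += rng.normal(0, 0.05, normal.shape)
anomal = np.sin(2 * np.pi * t) + 1.5 * np.sin(2 * np.pi * 9 * t)  # out-of-family component
anomal = anomal + rng.normal(0, 0.05, (20, 64))

# "Encoder": top-2 principal components of the normal signals (a linear autoencoder)
mean = normal.mean(0)
U, S, Vt = np.linalg.svd(normal - mean, full_matrices=False)
code = Vt[:2]

def recon_error(x):
    z = (x - mean) @ code.T        # encode into the 2-dim latent space
    xhat = z @ code + mean         # decode back to signal space
    return np.linalg.norm(x - xhat, axis=-1)

# Threshold chosen from the normal data only, e.g. its 99th-percentile error
thr = np.percentile(recon_error(normal), 99)
flags = recon_error(anomal) > thr  # True = hole flagged as anomalous
```

Because the anomalous harmonic lies outside the span learned from normal drilling, its reconstruction error far exceeds the threshold; the paper's GMM plays the role this percentile rule plays here, modeling the error distribution instead of fixing one cut-off.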

28 pages, 37690 KB  
Article
Surface-Related Multiple Suppression Based on Field-Parameter-Guided Semi-Supervised Learning for Marine Data
by Jiao Qi, Siyuan Cao, Zhiyong Wang, Yankai Xu and Qiqi Zhang
J. Mar. Sci. Eng. 2025, 13(5), 862; https://doi.org/10.3390/jmse13050862 - 25 Apr 2025
Viewed by 635
Abstract
Surface-related multiple suppression is a critical step in seismic data processing, while traditional adaptive matching subtraction methods often distort primaries, resulting in either the leakage of primaries or the residue of surface-related multiples. To address these challenges, we propose a field-parameter-guided semi-supervised learning (FPSSL) method to more effectively eliminate surface-related multiples. Field parameters refer to the time–space coordinate information derived from the seismic acquisition system, including offsets, trace spaces, and sampling intervals. These parameters reveal the relative positional relationships of seismic data in the time–space domain. The FPSSL framework comprises a supervised network module (SNM) and an unsupervised network module (USNM). The input and output data of the SNM are a small sample of full wavefield data and the weights of a polynomial function, respectively. A linear weighted sum method is employed to represent the SNM outputs (weights), the full wavefield data, and field parameters as a polynomial function of the primaries, which is matched with adaptive subtraction label data. The trained SNM generates preliminary estimates of the primaries and multiples with improved lateral continuity from full wavefield data, both of which are used as inputs to the USNM. The USNM is essentially an optimization operator that refines the underlying nonlinear mapping relationship between primaries and full wavefield data using the local wavefield feature loss function, thereby obtaining more accurate prediction results with respect to primaries. Examples from synthetic data and real marine data demonstrate that the FPSSL method surpasses the traditional L1-norm adaptive subtraction method in suppressing multiples, significantly reducing the leakage of primaries and the residuals of surface-related multiples in the estimated demultiple results. 
Its effectiveness and efficiency are further verified on two synthetic datasets and one real marine dataset. Full article
(This article belongs to the Section Ocean Engineering)
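
For context, the classical baseline that FPSSL improves on, adaptive matching subtraction, can be sketched as a short least-squares matching filter fitted to a predicted multiple model and subtracted from the recorded trace; the Ricker wavelets, event positions, and filter length here are invented for this toy trace, not taken from the paper.

```python
import numpy as np

def ricker(n, center, width):
    # Ricker (Mexican-hat) wavelet centered at sample `center`
    x = (np.arange(n) - center) / width
    return (1 - 2 * x**2) * np.exp(-(x**2))

n = 512
primary = ricker(n, 120, 8)            # primary reflection
multiple = 0.6 * ricker(n, 320, 8)     # surface-related multiple: delayed, weaker
data = primary + multiple              # recorded trace (full wavefield)
model = ricker(n, 322, 8)              # predicted multiple: slightly mistimed, unit amplitude

# Adaptive subtraction: fit a short least-squares matching filter f so that the
# shifted copies of the model approximate the multiples in the data, then subtract.
L = 7
cols = np.stack([np.roll(model, k - L // 2) for k in range(L)], axis=1)
f, *_ = np.linalg.lstsq(cols, data, rcond=None)
primaries_est = data - cols @ f

err_before = np.linalg.norm(data - primary)           # leaving the multiple in
err_after = np.linalg.norm(primaries_est - primary)   # after adaptive subtraction
```

When primaries and multiples overlap in time and space, the same least-squares fit also removes primary energy, which is the leakage problem the FPSSL networks are designed to avoid; in this toy the events are well separated, so the filter corrects both the timing and amplitude errors of the model.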

25 pages, 5307 KB  
Article
A Transformer–VAE Approach for Detecting Ship Trajectory Anomalies in Cross-Sea Bridge Areas
by Jiawei Hou, Hongzhu Zhou, Manel Grifoll, Yusheng Zhou, Jiao Liu, Yun Ye and Pengjun Zheng
J. Mar. Sci. Eng. 2025, 13(5), 849; https://doi.org/10.3390/jmse13050849 - 25 Apr 2025
Viewed by 1661
Abstract
Abnormal ship navigation behaviors in cross-sea bridge waters pose significant threats to maritime safety, creating a critical need for accurate anomaly detection methods. Ship AIS trajectory data contain complex temporal features but often lack explicit labels. Most existing anomaly detection methods heavily rely on labeled or semi-supervised data, thus limiting their applicability in scenarios involving completely unlabeled ship trajectory data. Furthermore, these methods struggle to capture long-term temporal dependencies inherent in trajectory data. To address these limitations, this study proposes an unsupervised trajectory anomaly detection model combining a transformer architecture with a variational autoencoder (transformer–VAE). By training on large volumes of unlabeled normal trajectory data, the transformer–VAE employs a multi-head self-attention mechanism to model both local and global temporal relationships within the latent feature space. This approach significantly enhances the model’s ability to learn and reconstruct normal trajectory patterns, with reconstruction errors serving as the criterion for anomaly detection. Experimental results show that the transformer–VAE outperforms conventional VAE and LSTM–VAE in reconstruction accuracy and achieves better detection balance and robustness compared to LSTM–VAE and transformer–GAN in anomaly detection. The model effectively identifies abnormal behaviors such as sudden changes in speed, heading, and trajectory deviation under fully unsupervised conditions. Preliminary experiments using the peaks-over-threshold (POT) method validate the feasibility of dynamic thresholding, enhancing the model’s adaptability in complex maritime environments. Overall, the proposed approach enables early identification and proactive warning of potential risks, contributing to improved maritime traffic safety. Full article
(This article belongs to the Section Ocean Engineering)
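
The peaks-over-threshold dynamic thresholding the authors test can be sketched with a method-of-moments generalized Pareto fit to the excesses of reconstruction errors over a high initial quantile; the synthetic heavy-tailed error distribution and both quantile levels are assumptions of this sketch, not the paper's configuration.

```python
import numpy as np

# Stand-in reconstruction errors with a heavy right tail (deterministic quantile
# grid of a Pareto-type law; real errors would come from the VAE decoder).
n = 5000
p = (np.arange(n) + 0.5) / n
errors = (1 - p) ** (-1.0 / 3.0) - 1.0

# Peaks-over-threshold: fit a generalized Pareto distribution (GPD) to the
# excesses above a high initial threshold u, via the method of moments.
u = np.quantile(errors, 0.95)
excess = errors[errors > u] - u
m, s2 = excess.mean(), excess.var()
xi = 0.5 * (1 - m * m / s2)          # GPD shape estimate
sigma = 0.5 * m * (1 + m * m / s2)   # GPD scale estimate

# Dynamic alarm threshold z_q for a target exceedance probability q
q = 1e-3
nu = len(excess)
z_q = u + (sigma / xi) * ((q * n / nu) ** (-xi) - 1)
alarm_rate = (errors > z_q).mean()
```

The appeal over a fixed percentile is that z_q extrapolates beyond the observed errors from the fitted tail, so the alarm threshold adapts as the error distribution drifts in changing maritime conditions.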
