Search Results (2,863)

Search Parameters:
Keywords = machine learning and image processing

17 pages, 3978 KB  
Article
Experimental Investigations of Oxidation Formation During Pulsed Laser Surface Structuring on Stainless Steel AISI 304
by Tuğrul Özel and Faik Derya Ince
Metals 2026, 16(2), 224; https://doi.org/10.3390/met16020224 - 15 Feb 2026
Abstract
Laser surface texturing (LST) structures or laser-induced periodic surface structures (LIPSS) are typically created using laser pulses with durations ranging from femtoseconds to nanoseconds. However, nanosecond pulsed lasers, as cost-effective and more productive alternatives, can also be used to generate LST structures on stainless steel (SS) surfaces, making these structures more suitable for industrial applications. In this study, pulsed laser processing is employed to create LST structures on SS (AISI 304), with varying pulse and accumulated fluences, effective pulse counts, and scan parameters, such as pulse-to-pulse distance (pitch) and hatch spacing between scanning lines. A methodology for calculating oxidation density on processed AISI 304 surfaces is presented. Oxidation density, defined as the ratio of the oxidized area to the total processed area, is determined as a function of accumulated fluence, laser power, pulse-to-pulse distance, and hatch spacing. Optical images of the surfaces are analyzed, and oxidation regions are identified using machine learning techniques. The images are converted to grayscale, and machine learning algorithms are applied to classify the images into oxidation and non-oxidation regions based on pixel intensity values. This approach identifies the optimal threshold for separating the two regions by maximizing inter-class variance. Experimental modeling using response surface methodology is applied to experimentally generated data. Optimization algorithms are then employed to determine the process parameters that maximize pulsed laser irradiation performance while minimizing surface oxidation and processing time. This paper also presents a novel method for characterizing oxidation density using image segmentation and machine learning. The results provide a comprehensive understanding of the process and offer optimized models, contributing valuable insights for practical applications.
(This article belongs to the Special Issue Surface Treatments and Coating of Metallic Materials (2nd Edition))
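The thresholding step this abstract describes (picking the gray-level cut that maximizes inter-class variance) is the classic Otsu criterion. Below is a minimal sketch of computing oxidation density that way with OpenCV; the file name and the assumption that oxidized pixels form the darker class are illustrative, not taken from the paper.

```python
# A minimal sketch, assuming grayscale micrographs where oxidized regions
# appear darker; the image path and polarity are hypothetical.
import cv2
import numpy as np

img = cv2.imread("processed_surface.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method selects the threshold that maximizes inter-class variance,
# matching the criterion described in the abstract.
thresh, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Oxidation density = oxidized area / total processed area.
# Here oxidized pixels are assumed to fall below the threshold (value 0).
oxidized_pixels = np.count_nonzero(binary == 0)
oxidation_density = oxidized_pixels / binary.size
print(f"Otsu threshold: {thresh:.1f}, oxidation density: {oxidation_density:.3f}")
```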

30 pages, 12009 KB  
Article
Comparison of CNN-Based Image Classification Approaches for Implementation of Low-Cost Multispectral Arcing Detection
by Elizabeth Piersall and Peter Fuhr
Sensors 2026, 26(4), 1268; https://doi.org/10.3390/s26041268 - 15 Feb 2026
Abstract
Camera-based sensing has benefited in recent years from developments in machine learning data processing methods, as well as improved data collection options such as sensors mounted on Unmanned Aerial Vehicles (UAVs). However, cost considerations, both for the initial purchase of sensors as well as updates, maintenance, or potential replacement if damaged, can limit adoption of more expensive sensing options for some applications. To evaluate more affordable options with less expensive, more available, and more easily replaceable hardware, we examine machine learning-based image classification on custom datasets, combining deep learning-based image classifiers with ensemble models for sensor fusion. Using the same models for each camera to reduce technical overhead, we showed that, given a representative training dataset, camera-based detection of electrical arcing can be successful. We also use multiple validation datasets, based on conditions expected to be of varying difficulty, to evaluate custom data. These results show that ensemble models of different data sources can mitigate risks from gaps in training data, though the system will be less redundant in those cases unless other precautions are taken. We found that, with good-quality custom datasets, data fusion models can be used without design specialization for the specific cameras employed, allowing less specialized, more accessible equipment to serve as multispectral camera components. This approach can provide an alternative to expensive sensing equipment for applications in which lower-cost or more easily replaceable sensing equipment is desirable.
(This article belongs to the Section Sensing and Imaging)
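As a rough illustration of the sensor-fusion ensembling described above, the sketch below averages per-camera class probabilities before taking a decision; the camera names, array shapes, and values are hypothetical, not the paper's models.

```python
# A minimal late-fusion sketch: each camera has its own classifier
# producing per-class probabilities, and the ensemble averages them.
import numpy as np

def ensemble_predict(prob_maps: list[np.ndarray]) -> np.ndarray:
    """Average per-camera class probabilities and take the argmax.

    prob_maps: list of (n_samples, n_classes) softmax outputs, one per camera.
    """
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return fused.argmax(axis=1)

# Two hypothetical cameras, four samples, arcing / no-arcing classes.
visible = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
infrared = np.array([[0.8, 0.2], [0.4, 0.6], [0.1, 0.9], [0.2, 0.8]])
print(ensemble_predict([visible, infrared]))  # fused class indices
```
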
22 pages, 34398 KB  
Article
Quantifying Bilberry Counts and Densities: A Comparative Assessment of Segmentation and Object Detection Models from Drone and Camera Imagery
by Susanna Hyyppä, Josef Taher, Harri Kaartinen, Teemu Hakala, Kirsi Karila, Leena Matikainen, Marjut Turtiainen, Antero Kukko and Juha Hyyppä
Forests 2026, 17(2), 253; https://doi.org/10.3390/f17020253 - 13 Feb 2026
Abstract
Nordic forest management is increasingly emphasizing multi-functional goals, expanding beyond timber production towards non-wood forest products such as wild berries. Wild berry yield maps are based on sample plot data combined with meteorological, remote sensing, and geoinformation data. Automating sample plot data processing is crucial, as manual collection is labor-intensive, time-consuming, and complicated by short berry seasons and fluctuating yields. This study compares two methods for automatic bilberry detection and counting: a deep learning detector (YOLO) and a machine learning model using the segment anything model (SAM) followed by random forest classification (SAM-RF). Both system camera and drone imagery were evaluated as input data. YOLOv8 clearly outperformed SAM-RF in berry detection, achieving an R2 of 0.98 and an RMSE of 3.8 berries when evaluated against annotated system camera images, compared to an R2 of 0.80 for SAM-RF. System camera imagery consistently produced higher accuracy than drone imagery due to higher image clarity and more optimal viewing angles, with YOLOv8 achieving an R2 of 0.95 against field counts, compared to 0.81 for drone images. The results also indicate that the primary error source in berry counting arises from the fact that many berries are not visible in the captured images. The results from the data analysis support the use of the developed technologies in yield modeling and even in implementing future ‘follow-me’ drone berry assistants.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
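The R2 and RMSE figures quoted above compare predicted berry counts with reference counts. A minimal sketch of those two metrics, on made-up counts rather than the study's data:

```python
import numpy as np

def r2_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    residual = y_true - y_pred
    rmse = float(np.sqrt(np.mean(residual ** 2)))
    ss_res = float(np.sum(residual ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse

true_counts = np.array([12, 35, 8, 50, 27], dtype=float)  # annotated berries
pred_counts = np.array([14, 33, 7, 52, 25], dtype=float)  # detector output
r2, rmse = r2_rmse(true_counts, pred_counts)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.2f} berries")
```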

23 pages, 2606 KB  
Article
A Proof-of-Concept Framework Integrating ML-Based MRI Segmentation with FEM for Transfemoral Residual Limb Modelling
by Ryota Sayama, Yukio Agarie, Hironori Suda, Hiroshi Otsuka, Kengo Ohnishi, Shinichiro Kon, Akihiko Hanahusa, Motoki Takagi and Shinichiro Yamamoto
Prosthesis 2026, 8(2), 16; https://doi.org/10.3390/prosthesis8020016 - 13 Feb 2026
Abstract
Background: Accurate evaluation of pressure distribution at the socket–limb interface is essential for improving prosthetic fit and comfort in transfemoral amputees. This study aimed to develop a proof-of-concept framework that integrates machine learning–based segmentation with the finite element method (FEM) to explore the feasibility of an initial workflow for residual-limb analysis during socket application. Methods: MRI data from a transfemoral amputee were processed using a custom image segmentation algorithm to extract adipose tissue, femur, and ischium, achieving high F-measure scores. The segmented tissues were reconstructed into 3D models, refined through outlier removal and surface smoothing, and used for FEM simulations in LS-DYNA. Pressure values were extracted at nine sensor locations and compared with experimental measurements to provide a preliminary qualitative assessment of model behaviour. Results: The results showed consistent polarity between measured and simulated values across all points. Moderate correspondence was observed at eight low-pressure locations, whereas a substantial discrepancy occurred at the ischial tuberosity (IS), the primary load-bearing site. This discrepancy likely reflects the combined influence of geometric deviation in the reconstructed ischium and the non-physiological medial boundary condition required to prevent unrealistic tissue displacement. This limitation indicates that the current formulation does not support reliable quantitative interpretation at clinically critical locations. Conclusions: Overall, the proposed framework provides an initial demonstration of the methodological feasibility of combining automated anatomical modeling with FEM for exploratory pressure evaluation, indicating that such an integrated pipeline may serve as a useful foundation for future development. While extensive refinement and validation are required before any quantitative or clinically meaningful application is possible, this work represents an early step toward more advanced computational investigations of transfemoral socket–limb interaction.
(This article belongs to the Special Issue Finite Element Analysis in Prosthesis and Orthosis Research)
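The segmentation quality mentioned above is scored with the F-measure. A minimal sketch of a pixelwise F-measure between a predicted and a reference binary mask; the toy arrays below stand in for real MRI segmentations.

```python
import numpy as np

def f_measure(pred: np.ndarray, ref: np.ndarray) -> float:
    """Pixelwise F-measure (harmonic mean of precision and recall)."""
    tp = np.logical_and(pred, ref).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(ref.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]], dtype=bool)  # model mask
ref  = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)  # reference
print(f"F-measure: {f_measure(pred, ref):.3f}")
```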

19 pages, 6128 KB  
Article
Ionospheric Schumann Resonance Signal Image Recognition Model and Its Application to the Yangbi Earthquake
by Kexin Zhu, Zhong Li, Jianping Huang, Kexin Pan, Bo Hao and Yuanjing Zhang
Atmosphere 2026, 17(2), 193; https://doi.org/10.3390/atmos17020193 - 12 Feb 2026
Abstract
The Schumann resonance (SR) signal has attracted much attention as a potential earthquake precursor indicator. To enable rapid identification of these signals from massive volumes of China Seismo-Electromagnetic Satellite (CSES) data, this paper presents a machine learning-based image recognition algorithm. First, the Ultra-Low Frequency (ULF) band power spectrum data of the ionospheric electric field were standardized to enhance the visual contrast of the signal and generate a spectrogram. A small-image dataset with standardized image size and labeled positive and negative samples was constructed by cropping the original images. High-dimensional features of each image were extracted using the deep convolutional neural network VGG16, combined with the support vector machine (SVM) algorithm to classify whether the high-dimensional data contain SR signals. A sliding-window recognition algorithm was designed to process large-format power spectrum images. The results showed that this VGG16-SVM hybrid model achieved an accuracy of 95.00% on the independent small-image test set, superior to both pure SVM and pure VGG16 models. On the large-format image prediction set, the overall accuracy of the model was 81.48%, and the SR physical properties of the recognized signals were verified through frequency statistics. The hybrid model was applied to SR detection and recognition for the Yangbi earthquake in Yunnan, China, and achieved ideal results. This indicates that the proposed VGG16-SVM hybrid model can quickly and effectively identify SR signals in CSES data, which has important practical value for automated electromagnetic signal analysis in seismic research.
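A minimal sketch of the VGG16-feature-plus-SVM pipeline the abstract describes, assuming 224×224 normalized spectrogram crops; the torchvision/scikit-learn pairing and every name below are illustrative assumptions, not the authors' code.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Truncate VGG16 at the penultimate layer to get 4096-d feature vectors.
vgg = models.vgg16(weights=None)  # pass weights="IMAGENET1K_V1" for pretrained
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

def extract_features(batch: torch.Tensor) -> torch.Tensor:
    # batch: (N, 3, 224, 224) normalized spectrogram crops
    with torch.no_grad():
        return vgg(batch)  # (N, 4096) high-dimensional features

# Hypothetical labeled crops: eight positive/negative spectrogram windows.
crops = torch.randn(8, 3, 224, 224)
labels = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = window contains an SR signal

svm = SVC(kernel="rbf")
svm.fit(extract_features(crops).numpy(), labels)
```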

20 pages, 5744 KB  
Article
FibroidX: Vision Transformer-Powered Prognosis and Recurrence Prediction for Uterine Fibroids Using Ultrasound Images
by Fatma M. Talaat, Yathreb Bayan Mohamed, Amira Abdulrahman, Mohamed Salem and Mohamed Shehata
Cancers 2026, 18(4), 605; https://doi.org/10.3390/cancers18040605 - 12 Feb 2026
Abstract
Background/Objectives: Uterine fibroids (UFs) are a common gynecological issue that can have a major effect on women’s reproductive health and quality of life. For personalized treatment planning and a reduction in long-term consequences, early fibroid prognosis and recurrence prediction are essential. In this context, prognosis refers to anticipated symptom progression and treatment response, while recurrence prediction estimates the likelihood of regrowth after interventions such as myomectomy or uterine artery embolization (UAE), or of new fibroid formation during follow-up. Conventional techniques for predicting the prognosis and recurrence of UFs depend on imaging, clinical evaluations, and statistical models; nevertheless, they frequently have limited accuracy and are subjective. Methods: Therefore, we introduce FibroidX, which utilizes vision transformers and self-attention mechanisms to improve prediction accuracy, automate feature extraction, and offer customized risk evaluations. Prognosis encompasses overall disease progression, symptom severity, and response to therapy, whereas recurrence prediction focuses on post-treatment regrowth or new fibroid formation. Results: The dataset comprises 1990 ultrasound images split into training and test sets (80/20). With an accuracy of 98.4%, the proposed model outperformed baseline models such as Model A (92.3%) and Model B (94.1%). The precision and recall values, 97.8% and 96.9%, respectively, ensured that a high proportion of cases was correctly predicted. The model’s balanced precision-recall trade-off is highlighted by its F1-score of 97.3%, and its exceptional class distinction is confirmed by its AUC-ROC score of 0.99. Conclusions: The model was suitable for real-time applications, with an average inference time of 0.02 s per sample. The proposed method showed its effectiveness and reliability in prediction tasks, achieving a 15% increase in accuracy and a 12% reduction in the false positive rate compared to traditional machine learning techniques.
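The evaluation metrics quoted above (accuracy, precision, recall, F1, AUC-ROC) are standard and reproducible with scikit-learn. A minimal sketch on toy labels and scores, not the study's data:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                    # hypothetical labels
y_prob = [0.9, 0.2, 0.8, 0.7, 0.4, 0.95, 0.1, 0.3]   # model scores
y_pred = [int(p >= 0.5) for p in y_prob]             # thresholded decisions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_prob))
```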

23 pages, 2557 KB  
Article
MECFN: A Multi-Modal Temporal Fusion Network for Valve Opening Prediction in Fluororubber Material Level Control
by Weicheng Yan, Kaiping Yuan, Han Hu, Minghui Liu, Haigang Gong, Xiaomin Wang and Guantao Zhang
Electronics 2026, 15(4), 783; https://doi.org/10.3390/electronics15040783 - 12 Feb 2026
Abstract
During fluororubber production, strong material agitation and agglomeration induce severe dynamic fluctuations, irregular surface morphology, and pronounced variations in apparent material level. Under such operating conditions, conventional single-modality monitoring approaches—such as point-based height sensors or manual visual inspection—often fail to reliably capture the true process state. This information deficiency leads to inaccurate valve opening adjustment and degrades material level control performance. To address this issue, valve opening prediction is formulated as a data-driven, control-oriented regression task for material level regulation, and an end-to-end multimodal temporal regression framework, termed MECFN (Multi-Modal Enhanced Cross-Fusion Network), is proposed. The model performs deep fusion of visual image sequences and height sensor signals. A customized Multi-Feature Extraction (MFE) module is designed to enhance visual feature representation under complex surface conditions, while two independent Transformer encoders are employed to capture long-range temporal dependencies within each modality. Furthermore, a context-aware cross-attention mechanism is introduced to enable effective interaction and adaptive fusion between heterogeneous modalities. Experimental validation on a real-world industrial fluororubber production dataset demonstrates that MECFN consistently outperforms traditional machine learning approaches and single-modality deep learning models in valve opening prediction. Quantitative results show that MECFN achieves a mean absolute error of 2.36, a root mean squared error of 3.73, and an R2 of 0.92. These results indicate that the proposed framework provides a robust and practical data-driven solution for supporting valve control and achieving stable material level regulation in industrial production environments.
(This article belongs to the Special Issue AI for Industry)
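As a rough sketch of cross-attention fusion between two modality streams in the spirit of MECFN, the module below lets visual tokens attend to sensor tokens and regresses a scalar valve opening; the dimensions and wiring are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 1)  # regress valve opening

    def forward(self, visual: torch.Tensor, sensor: torch.Tensor) -> torch.Tensor:
        # visual: (B, Tv, dim) image-sequence tokens from one encoder
        # sensor: (B, Ts, dim) height-signal tokens from a second encoder
        fused, _ = self.attn(query=visual, key=sensor, value=sensor)
        fused = self.norm(fused + visual)     # residual connection
        return self.head(fused.mean(dim=1))   # pooled regression output

model = CrossModalFusion()
pred = model(torch.randn(2, 16, 128), torch.randn(2, 64, 128))
print(pred.shape)  # torch.Size([2, 1])
```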

23 pages, 1791 KB  
Review
Artificial Intelligence in Veterinary Education: Preparing the Workforce for Clinical Applications in Diagnostics and Animal Health
by Esteban Pérez-García, Ana S. Ramírez, Miguel Ángel Quintana-Suárez, Magnolia M. Conde-Felipe, Conrado Carrascosa, Inmaculada Morales, Juan Alberto Corbera, Esther SanJuan and Jose Raduan Jaber
Vet. Sci. 2026, 13(2), 181; https://doi.org/10.3390/vetsci13020181 - 12 Feb 2026
Abstract
Artificial intelligence (AI), including machine learning (ML) and deep learning (DL), is rapidly transforming clinical veterinary practice by enhancing diagnostics, disease surveillance and decision support processes across animal health domains. The safe and effective clinical deployment of these technologies, however, depends critically on the preparedness of the veterinary workforce, positioning veterinary education as a strategic enabler of translational adoption. This narrative review examines the integration of AI within veterinary education as a foundational step toward its responsible application in clinical practice. We synthesize current evidence on AI-driven tools relevant to veterinary curricula, including generative and multimodal large language models, intelligent tutoring systems, virtual and augmented reality platforms and AI-based decision support tools applied to imaging, epidemiology, parasitology, food safety and animal health. Particular attention is given to how the structured educational use of AI mirrors real-world clinical workflows and supports the development of competencies essential for clinical translation, such as data interpretation, uncertainty management, ethical reasoning and professional accountability. The review further addresses ethical, regulatory and cognitive considerations associated with AI adoption, including algorithmic bias, data privacy, equity of access and the risks of overreliance, emphasizing their direct implications for diagnostic reliability and animal welfare. By framing veterinary education as a controlled and reflective environment for AI engagement, this article highlights how pedagogically grounded training can facilitate safer clinical deployment, foster interdisciplinary collaboration and align technological innovation with professional standards in veterinary medicine.

22 pages, 2084 KB  
Article
Estimating Fibrosity Scores of Plant-Based Meat Products from Images: A Deep Neural Network Approach
by Abdullah Aljishi, Shirin Sheikhizadeh, Sanjoy Das and Sajid Alavi
Foods 2026, 15(4), 665; https://doi.org/10.3390/foods15040665 - 12 Feb 2026
Abstract
This paper proposes a deep neural network to estimate fibrosity scores of plant-based meat products from images. Images of varying fibrous microstructures were collected for this purpose and subjected to spatial preprocessing and data enhancement. Their corresponding fibrosity scores were provided by two human experts. These data were used to train the network and to analyze its performance. Various statistical performance metrics were applied to evaluate the accuracy of the trained network’s estimated scores. It was found that the network performed significantly better when trained separately with the fibrosity scores of each individual subject than with their combined scores, indicating that it was able to capture nuanced aspects of a subject’s perception. Another study was directed at the explainability of the network’s estimates. Using standard software, a set of synthetic images of varying shapes and sizes was created as inputs to the network. Visual inspection of the output scores indicated that its estimates were influenced only by those features (i.e., food matrices and air cells) that were directly relevant to fibrosity, and not by extraneous factors.
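To make the setup concrete, here is a minimal sketch of a network that regresses a scalar fibrosity score from an image; the layer sizes and structure are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class FibrosityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 32-d descriptor
        )
        self.regressor = nn.Linear(32, 1)  # scalar fibrosity score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x).flatten(1))

model = FibrosityRegressor()
scores = model(torch.randn(4, 3, 128, 128))  # four hypothetical images
print(scores.squeeze(1))
```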

42 pages, 3053 KB  
Review
A Comprehensive Review of Deepfake Detection Techniques: From Traditional Machine Learning to Advanced Deep Learning Architectures
by Ahmad Raza, Abdul Basit, Asjad Amin, Zeeshan Ahmad Arfeen, Muhammad I. Masud, Umar Fayyaz and Touqeer Ahmed Jumani
AI 2026, 7(2), 68; https://doi.org/10.3390/ai7020068 - 11 Feb 2026
Abstract
Deepfake technology poses unprecedented threats to the authenticity of digital media, and demand is high for reliable detection systems. This systematic review analyzes deepfake detection methods spanning deep learning approaches, machine learning methods, and classical image processing methods from 2018 to 2025, with a specific focus on the trade-offs between accuracy, computing efficiency, and cross-dataset generalization. Through extensive analysis of peer-reviewed studies using three benchmark datasets (FaceForensics++, DFDC, Celeb-DF), we surface findings that call some of the field’s prevailing assumptions into question. Our analysis produces three important results that reshape the understanding of detection abilities and limitations. First, Transformer-based architectures show significantly better cross-dataset generalization (11.33% performance decline) than CNN-based architectures (more than 15% decline), at the cost of 3–5× more computation. Second, deep learning is not uniformly superior: traditional machine learning methods (in our case, Random Forest) achieve comparable performance (99.64% accuracy on DFDC) with dramatically lower computing needs, opening prospects for resource-constrained deployment scenarios. Most critically, we demonstrate systematic performance deterioration (10–15% on average) across all methodological classes and provide empirical support that current detection systems largely learn dataset-specific compression artifacts rather than generalizable deepfake characteristics. These results highlight the importance of moving from an accuracy-focused evaluation approach toward more comprehensive evaluations that balance generalization capability, computational feasibility, and practical deployment constraints, directing future research toward detection systems that can be deployed in practical applications.
(This article belongs to the Section Medical & Healthcare AI)
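The generalization figures above are relative declines from in-dataset to cross-dataset accuracy. A minimal sketch of that computation, on illustrative numbers rather than the review's data:

```python
def generalization_decline(in_dataset_acc: float, cross_dataset_acc: float) -> float:
    """Relative performance decline, in percent."""
    return 100.0 * (in_dataset_acc - cross_dataset_acc) / in_dataset_acc

# e.g. a detector at 0.97 accuracy in-dataset falling to 0.86 cross-dataset
print(f"{generalization_decline(0.97, 0.86):.2f}% decline")  # ~11.34%
```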

17 pages, 3118 KB  
Data Descriptor
CryoVirusDB: An Annotated Dataset for AI-Based Virus Particle Identification in Cryo-EM Micrographs
by Rajan Gyawali, Ashwin Dhakal, Liguo Wang and Jianlin Cheng
Viruses 2026, 18(2), 224; https://doi.org/10.3390/v18020224 - 11 Feb 2026
Abstract
With the advancements in instrumentation, image processing algorithms, and computational capabilities, single-particle cryo-electron microscopy (cryo-EM) has achieved atomic resolution in determining the 3D structures of viruses. The virus structures play a crucial role in studying their biological function and advancing the development of antiviral vaccines and treatments. Despite the effectiveness of artificial intelligence (AI) in general image processing, its development for identifying and extracting virus particles from cryo-EM micrographs has been hindered by the lack of manually labeled high-quality datasets. To fill the gap, we introduce CryoVirusDB, a labeled dataset containing the coordinates of expert-picked virus particles in cryo-EM micrographs. CryoVirusDB comprises 9941 micrographs from nine datasets representing seven distinct non-enveloped viruses exhibiting icosahedral or pseudo-icosahedral symmetry, along with coordinates of 339,398 labeled virus particles. It can be used to train and test AI and machine learning (e.g., deep learning) methods to accurately identify virus particles in cryo-EM micrographs for building atomic 3D structural models for viruses.
(This article belongs to the Special Issue Microscopy Methods for Virus Research)
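A dataset of micrographs plus particle coordinates is typically consumed by cropping fixed-size training patches around each labeled center. A minimal sketch under assumed array and coordinate conventions, not CryoVirusDB's actual file format:

```python
import numpy as np

def crop_particles(micrograph: np.ndarray, coords: list[tuple[int, int]],
                   box: int = 128) -> list[np.ndarray]:
    """Cut box x box patches centered on each (x, y) particle coordinate."""
    half, patches = box // 2, []
    h, w = micrograph.shape
    for x, y in coords:
        if half <= x < w - half and half <= y < h - half:  # skip edge hits
            patches.append(micrograph[y - half:y + half, x - half:x + half])
    return patches

micrograph = np.random.rand(1024, 1024).astype(np.float32)  # stand-in image
coords = [(200, 300), (512, 512), (10, 10)]                 # labeled centers
print(len(crop_particles(micrograph, coords)))              # -> 2 usable crops
```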

28 pages, 867 KB  
Review
Recent Advances in Deep Learning for SAR Images: Overview of Methods, Challenges, and Future Directions
by Eno Peter, Li-Minn Ang, Kah Phooi Seng and Sanjeev Srivastava
Sensors 2026, 26(4), 1143; https://doi.org/10.3390/s26041143 - 10 Feb 2026
Abstract
The analysis of Synthetic Aperture Radar (SAR) imagery is essential to modern remote sensing, with applications in disaster management, agricultural monitoring, and military surveillance. A significant challenge is that the complex and noisy nature of SAR data severely limits the performance of traditional machine learning (TML) methods, leading to high error rates. In contrast, deep learning (DL) has recently proven highly effective at addressing these limitations. This study provides a comprehensive review of recent DL advances applied to SAR image despeckling, segmentation, classification, and detection. It evaluates widely adopted models, examines the potential of underutilized ones like GANs and GNNs, and compiles available datasets to support researchers. This review concludes by outlining key challenges and proposing future research directions to guide continued progress in SAR image analysis.

17 pages, 318 KB  
Entry
Artificial Intelligence and the Transformation of the Media System
by Georgiana Camelia Stănescu
Encyclopedia 2026, 6(2), 45; https://doi.org/10.3390/encyclopedia6020045 - 10 Feb 2026
Definition
Artificial intelligence is increasingly being used in all branches of the media system and has transformed the way specialists in this field work in recent years. Currently, applications of artificial intelligence are used across a range of processes involved in the production, editing, distribution, and consumption of media content. These include technologies such as generative chatbots, automated transcription, writing, translation, and editing tools, as well as applications for image and video creation. All of these types of applications have taken over a significant portion of the traditional activities carried out by media professionals. From a technological point of view, these uses primarily rely on machine learning, natural language processing, and computer vision techniques, complemented by generative models that automatically analyze, generate, and interpret text, sound, and images. Although these technologies contribute to increased efficiency, faster work, and reduced operating costs, they also pose significant risks, particularly regarding the spread of false information. From a theoretical perspective, artificial intelligence goes beyond the status of a technological tool, being conceptualized as a communicational actor that actively intervenes in the generation, structuring, and circulation of messages, influencing the relationships between producers, content, and audiences in the current media environment.
(This article belongs to the Collection Encyclopedia of Social Sciences)

13 pages, 1716 KB  
Article
Estimation of the Length at First Maturity of the Swimming Crab (Portunus trituberculatus) in the Yellow Sea of Korea Using Machine Learning
by Jaehyung Kim, Daehyeon Kwon and Soojeong Lee
J. Mar. Sci. Eng. 2026, 14(4), 335; https://doi.org/10.3390/jmse14040335 - 9 Feb 2026
Abstract
Swimming crab (Portunus trituberculatus) is a commercially valuable species in the Yellow Sea, where recent fluctuations in resource levels have raised concerns about sustainable management. This study aimed to improve the estimation of the carapace length at 50% maturity (L50) using machine learning techniques, providing a more consistent and reproducible framework for visual maturity classification by standardizing image-based decision processes. Using image augmentation (e.g., rotation, flipping, brightness adjustment), Hue–Saturation–Value (HSV) color segmentation, and algorithms such as Extreme Gradient Boosting (XGB), Support Vector Machine (SVM), Random Forest (RF), and ensemble models, we classified the maturity of female crabs based on gonad color features. Model performance was evaluated using accuracy, AUC, and the true skill statistic (TSS), with the ensemble model showing the highest predictive capability. The machine learning-based L50 was estimated at 64.63 mm (±1.73 mm), yielding a narrower uncertainty range than the visually derived L50 of 65.47 mm (±2.89 mm) under the same macroscopic labeling framework. These results suggest that machine learning-assisted maturity classification can enhance the precision and operational consistency of maturity estimation under a standardized framework, while biological accuracy cannot be confirmed in the absence of an independent reference, such as histological validation.
(This article belongs to the Section Marine Biology)
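L50 is the length at which the fitted probability of maturity reaches 50%: with a logistic model P(mature) = 1 / (1 + exp(-(b0 + b1·L))), it follows that L50 = -b0/b1. A minimal sketch on synthetic lengths and maturity calls (not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

lengths = np.array([50, 55, 58, 60, 62, 64, 66, 68, 70, 75], dtype=float)
mature  = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])  # hypothetical calls

# Large C weakens regularization so the fit approximates an unpenalized ogive.
fit = LogisticRegression(C=1e6).fit(lengths.reshape(-1, 1), mature)
b0, b1 = fit.intercept_[0], fit.coef_[0][0]
print(f"L50 = {-b0 / b1:.2f} mm")  # carapace length at 50% predicted maturity
```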

20 pages, 1202 KB  
Perspective
The Innovative Potential of Artificial Intelligence Applied to Patient Registries to Implement Clinical Guidelines
by Sebastiano Gangemi, Alessandro Allegra, Mario Di Gioacchino, Luca Gammeri, Irene Cacciola and Giorgio Walter Canonica
Mach. Learn. Knowl. Extr. 2026, 8(2), 38; https://doi.org/10.3390/make8020038 - 7 Feb 2026
Abstract
Guidelines provide specific recommendations based on the best available medical knowledge, summarizing and balancing the advantages and disadvantages of various diagnostic and treatment options. Currently, consensus methods are the best and most common practices in creating clinical guidelines, even though these approaches have several limitations. However, the rapid pace of biomedical innovation and the growing availability of real-world data (RWD) from clinical registries (containing data like clinical outcomes, treatment variables, imaging, and laboratory results) call for a complementary paradigm in which recommendations are continuously stress-tested against high-quality, interoperable data and auditable artificial intelligence (AI) pipelines. AI, based on information retrieved from patient registries, can optimize the process of creating guidelines. In fact, AI can analyze large volumes of data, ensuring essential tasks such as correct feature identification, prediction, classification, and pattern recognition of all information. In this work, we propose a four-phase lifecycle, comprising data curation, causal analysis and estimation, objective validation, and real-time updates, complemented by governance and machine learning operations (MLOps). A comparative analysis with consensus-only methods, a pilot protocol, and a compliance checklist are provided. We believe that the use of AI will be a valuable support in drafting clinical guidelines to complement expert consensus and ensure continuous updates to standards, providing a higher level of evidence. The integration of AI with high-quality patient registries has the potential to substantially modernize guideline development, enabling continuously updated, data-driven recommendations.
