Search Results (18,819)

Search Parameters:
Keywords = training dataset

19 pages, 934 KiB  
Article
STID-Mixer: A Lightweight Spatio-Temporal Modeling Framework for AIS-Based Vessel Trajectory Prediction
by Leiyu Wang, Jian Zhang, Guangyin Jin and Xinyu Dong
Eng 2025, 6(8), 184; https://doi.org/10.3390/eng6080184 - 3 Aug 2025
Abstract
The Automatic Identification System (AIS) has become a key data source for ship behavior monitoring and maritime traffic management, widely used in trajectory prediction and anomaly detection. However, AIS data suffer from issues such as spatial sparsity, heterogeneous features, variable message formats, and irregular sampling intervals, while vessel trajectories are characterized by strong spatial–temporal dependencies. These factors pose significant challenges for efficient and accurate modeling. To address this issue, we propose a lightweight vessel trajectory prediction framework that integrates Spatial–Temporal Identity encoding with an MLP-Mixer architecture. The framework discretizes spatial and temporal features into structured IDs and uses dual MLP modules to model temporal dependencies and feature interactions without relying on convolution or attention mechanisms. Experiments on a large-scale real-world AIS dataset demonstrate that the proposed STID-Mixer achieves superior accuracy, training efficiency, and generalization capability compared to representative baseline models. The method offers a compact and deployable solution for large-scale maritime trajectory modeling. Full article
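The core of the abstract above is an MLP-Mixer applied to discretized trajectory features: one MLP mixes across time steps, another across feature channels, with no convolution or attention. A minimal NumPy sketch of the two mixing steps (all shapes, weights, and names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """Two-layer MLP with ReLU, applied along the last axis."""
    return np.maximum(x @ w1, 0.0) @ w2

# Toy trajectory: T time steps, D features per step (positions plus
# discretized spatial/temporal ID embeddings, as the abstract describes).
T, D, H = 12, 16, 32
x = rng.normal(size=(T, D))

# Token-mixing MLP: operates across the T time steps (transpose trick).
w1t, w2t = rng.normal(size=(T, H)), rng.normal(size=(H, T))
x = x + mlp(x.T, w1t, w2t).T      # residual connection over the time axis

# Channel-mixing MLP: models interactions among the D features.
w1c, w2c = rng.normal(size=(D, H)), rng.normal(size=(H, D))
x = x + mlp(x, w1c, w2c)          # residual connection over the feature axis

print(x.shape)  # (12, 16)
```

A full model would stack several such blocks and end with a regression head over future positions; this sketch only shows the mixing mechanism.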
23 pages, 20324 KiB  
Article
Hyperparameter Tuning of Artificial Neural Network-Based Machine Learning to Optimize Number of Hidden Layers and Neurons in Metal Forming
by Ebrahim Seidi, Farnaz Kaviari and Scott F. Miller
J. Manuf. Mater. Process. 2025, 9(8), 260; https://doi.org/10.3390/jmmp9080260 - 3 Aug 2025
Abstract
Cold rolling is widely recognized as a key industrial process for enhancing the mechanical properties of materials, particularly hardness, through strain hardening. Despite its importance, accurately predicting the final hardness remains a challenge due to the inherently nonlinear nature of the deformation. While several studies have employed artificial neural networks to predict mechanical properties, architectural parameters still need to be investigated to understand their effects on network behavior and model performance, ultimately supporting the design of more effective architectures. This study investigates hyperparameter tuning in artificial neural networks trained using Resilient Backpropagation by evaluating the impact of varying the number of hidden layers and neurons on the prediction accuracy of hardness in 70-30 brass specimens subjected to cold rolling. A dataset of 1000 input–output pairs, containing dimensional and hardness measurements from multiple rolling passes, was used to train and evaluate 819 artificial neural network architectures, each with a different configuration of 1 to 3 hidden layers and 4 to 12 neurons per layer. Each configuration was tested over 50 runs to reduce the influence of randomness and enhance result consistency. Increasing the network depth from one to two hidden layers improved predictive performance. Architectures with two hidden layers achieved better performance metrics, faster convergence, and lower variation than single-layer networks. Introducing a third hidden layer did not yield meaningful improvements over two-hidden-layer architectures in terms of performance metrics. While the top three-layer model converged in fewer epochs, it required more computational time due to the increased model complexity and larger number of weights. Full article
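The 819 architectures follow directly from the stated search space: with 9 neuron choices (4 to 12) per layer, there are 9 one-layer, 81 two-layer, and 729 three-layer configurations. A short enumeration confirming the count:

```python
from itertools import product

# Enumerate every architecture with 1-3 hidden layers and 4-12 neurons
# per layer, as described in the abstract.
neuron_choices = range(4, 13)          # 4..12 inclusive -> 9 options
architectures = [
    layers
    for depth in (1, 2, 3)
    for layers in product(neuron_choices, repeat=depth)
]

print(len(architectures))  # 9 + 81 + 729 = 819
```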
12 pages, 369 KiB  
Article
A Novel Deep Learning Model for Predicting Colorectal Anastomotic Leakage: A Pioneer Multicenter Transatlantic Study
by Miguel Mascarenhas, Francisco Mendes, Filipa Fonseca, Eduardo Carvalho, Andre Santos, Daniela Cavadas, Guilherme Barbosa, Antonio Pinto da Costa, Miguel Martins, Abdullah Bunaiyan, Maísa Vasconcelos, Marley Ribeiro Feitosa, Shay Willoughby, Shakil Ahmed, Muhammad Ahsan Javed, Nilza Ramião, Guilherme Macedo and Manuel Limbert
J. Clin. Med. 2025, 14(15), 5462; https://doi.org/10.3390/jcm14155462 - 3 Aug 2025
Abstract
Background/Objectives: Colorectal anastomotic leak (CAL) is one of the most severe postoperative complications in colorectal surgery, impacting patient morbidity and mortality. Current risk assessment methods rely on clinical and intraoperative factors, but no real-time predictive tool exists. This study aimed to develop an artificial intelligence model based on intraoperative laparoscopic recording of the anastomosis for CAL prediction. Methods: A convolutional neural network (CNN) was trained with annotated frames from colorectal surgery videos across three international high-volume centers (Instituto Português de Oncologia de Lisboa, Hospital das Clínicas de Ribeirão Preto, and Royal Liverpool University Hospital). The dataset included a total of 5356 frames from 26 patients, 2007 with CAL and 3349 showing normal anastomosis. Four CNN architectures (EfficientNetB0, EfficientNetB7, ResNet50, and MobileNetV2) were tested. The models’ performance was evaluated using their sensitivity, specificity, accuracy, and area under the receiver operating characteristic (AUROC) curve. Heatmaps were generated to identify key image regions influencing predictions. Results: The best-performing model achieved an accuracy of 99.6%, AUROC of 99.6%, sensitivity of 99.2%, specificity of 100.0%, PPV of 100.0%, and NPV of 98.9%. The model reliably identified CAL-positive frames and provided visual explanations through heatmaps. Conclusions: To our knowledge, this is the first AI model developed to predict CAL using intraoperative video analysis. Its accuracy suggests the potential to redefine surgical decision-making by providing real-time risk assessment. Further refinement with a larger dataset and diverse surgical techniques could enable intraoperative interventions to prevent CAL before it occurs, marking a paradigm shift in colorectal surgery. Full article
(This article belongs to the Special Issue Updates in Digestive Diseases and Endoscopy)
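The reported sensitivity, specificity, PPV, and NPV are standard confusion-matrix ratios. A small helper showing how they are derived (the counts below are toy values, not the study's confusion matrix):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics as reported in the abstract."""
    return {
        "sensitivity": tp / (tp + fn),          # recall on CAL-positive frames
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Toy counts for illustration only.
m = diagnostic_metrics(tp=995, fp=0, tn=990, fn=8)
print(round(m["sensitivity"], 3), m["specificity"])  # 0.992 1.0
```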
20 pages, 6595 KiB  
Article
Fine-Tuning Models for Histopathological Classification of Colorectal Cancer
by Houda Saif ALGhafri and Chia S. Lim
Diagnostics 2025, 15(15), 1947; https://doi.org/10.3390/diagnostics15151947 - 3 Aug 2025
Abstract
Background/Objectives: This study aims to design and evaluate transfer learning strategies that fine-tune multiple pre-trained convolutional neural network architectures based on their characteristics to improve the accuracy and generalizability of colorectal cancer histopathological image classification. Methods: The application of transfer learning with pre-trained models on specialized and multiple datasets is proposed, where the proposed models, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep, are algorithmically fine-tuned at varying depths to improve the performance of colorectal cancer classification. These models were applied to datasets of 10,613 images from public and private repositories, external sources, and unseen data. To validate the models’ decision-making and improve transparency, we integrated Grad-CAM to provide visual explanations that influence classification decisions. Results and Conclusions: On average across all datasets, CRCHistoDense, CRCHistoIncep, and CRCHistoXcep achieved test accuracies of 99.34%, 99.48%, and 99.45%, respectively, highlighting the effectiveness of fine-tuning in improving classification performance and generalization. Statistical methods, including paired t-tests, ANOVA, and the Kruskal–Wallis test, confirmed significant improvements in the proposed methods’ performance, with p-values below 0.05. These findings demonstrate that fine-tuning based on the characteristics of each CNN architecture enhances colorectal cancer classification in histopathology, thereby improving the diagnostic potential of deep learning models. Full article
19 pages, 1247 KiB  
Article
Improving News Retrieval with a Learnable Alignment Module for Multimodal Text–Image Matching
by Rui Song, Jiwei Tian, Peican Zhu and Bin Chen
Electronics 2025, 14(15), 3098; https://doi.org/10.3390/electronics14153098 - 3 Aug 2025
Abstract
With the diversification of information retrieval methods, news retrieval tasks have gradually evolved towards multimodal retrieval. Existing methods often encounter issues such as inaccurate alignment and unstable feature matching when handling cross-modal data like text and images, limiting retrieval performance. To address this, this paper proposes an innovative multimodal news retrieval method by introducing the Learnable Alignment Module (LAM), which establishes a learnable alignment relationship between text and images to improve the accuracy and stability of cross-modal retrieval. Specifically, the LAM, through trainable label embeddings (TLEs), enables the text encoder to dynamically adjust category information during training, thereby enhancing the alignment capability of text and images in the shared embedding space. Additionally, we propose three key alignment strategies: logits calibration, parameter consistency, and semantic feature matching, to further optimize the model’s multimodal learning ability. Extensive experiments conducted on four public datasets—Visual News, MMED, N24News, and EDIS—demonstrate that the proposed method outperforms existing state-of-the-art approaches in both text and image retrieval tasks. Notably, the method achieves significant improvements in low-recall scenarios (R@1): for text retrieval, R@1 reaches 47.34, 44.94, 16.47, and 19.23, respectively; for image retrieval, R@1 achieves 40.30, 38.49, 9.86, and 17.95, validating the effectiveness and robustness of the proposed method in multimodal news retrieval. Full article
(This article belongs to the Topic Graph Neural Networks and Learning Systems)
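R@1 in the abstract above is recall-at-rank-1: the fraction of queries whose correct cross-modal match is ranked first by similarity. A minimal NumPy sketch, assuming ground-truth matches lie on the diagonal of the query-gallery similarity matrix:

```python
import numpy as np

def recall_at_k(sim, k=1):
    """sim[i, j]: similarity of query i to gallery item j; the correct
    match for query i is assumed to be gallery item i (diagonal truth)."""
    topk = np.argsort(-sim, axis=1)[:, :k]                  # top-k per query
    hits = (topk == np.arange(len(sim))[:, None]).any(axis=1)
    return hits.mean()

sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.8, 0.4],
                [0.6, 0.5, 0.1]])   # query 2 ranks the wrong item first
print(recall_at_k(sim, k=1))        # 2 of 3 queries rank their match first
```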
24 pages, 1451 KiB  
Article
Significance of Time-Series Consistency in Evaluating Machine Learning Models for Gap-Filling Multi-Level Very Tall Tower Data
by Changhyoun Park
Mach. Learn. Knowl. Extr. 2025, 7(3), 76; https://doi.org/10.3390/make7030076 - 3 Aug 2025
Abstract
Machine learning modeling is a valuable tool for gap-filling or prediction, and its performance is typically evaluated using standard metrics. To enable more precise assessments for time-series data, this study emphasizes the importance of considering time-series consistency, which can be evaluated through amplitude—specifically, the interquartile range and the lower bound of the band in gap-filled time series. To test this hypothesis, a gap-filling technique was applied using long-term (~6 years) high-frequency flux and meteorological data collected at four different levels (1.5, 60, 140, and 300 m above sea level) on a ~300 m tall flux tower. This study focused on turbulent kinetic energy among several variables, which is important for estimating sensible and latent heat fluxes and net ecosystem exchange. Five ensemble machine learning algorithms were selected and trained on three different datasets. Among several modeling scenarios, the stacking model with a dataset combined with derivative data produced the best metrics for predicting turbulent kinetic energy. Although the metrics before and after gap-filling reported fewer differences among the scenarios, large distortions were found in the consistency of the time series in terms of amplitude. These findings underscore the importance of evaluating time-series consistency alongside traditional metrics, not only to accurately assess modeling performance but also to ensure reliability in downstream applications such as forecasting, climate modeling, and energy estimation. Full article
(This article belongs to the Section Data)
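The amplitude check the abstract argues for can be as simple as comparing interquartile ranges before and after gap-filling: a filler that regresses toward the mean can score well on pointwise metrics while visibly shrinking the band. An illustrative sketch with synthetic data:

```python
import numpy as np

def iqr_amplitude(series):
    """Interquartile range, used here as a simple amplitude proxy for
    checking the time-series consistency of gap-filled data."""
    q1, q3 = np.percentile(series, [25, 75])
    return q3 - q1

rng = np.random.default_rng(1)
observed = rng.normal(0.0, 1.0, 500)        # toy turbulent-kinetic-energy-like series
# A gap-filler that regresses to the mean shrinks the band's amplitude
# even when pointwise error metrics look acceptable.
overly_smooth_fill = observed * 0.3
print(iqr_amplitude(observed) > iqr_amplitude(overly_smooth_fill))  # True
```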
12 pages, 1329 KiB  
Article
Steady-State Visual-Evoked-Potential–Driven Quadrotor Control Using a Deep Residual CNN for Short-Time Signal Classification
by Jiannan Chen, Chenju Yang, Rao Wei, Changchun Hua, Dianrui Mu and Fuchun Sun
Sensors 2025, 25(15), 4779; https://doi.org/10.3390/s25154779 - 3 Aug 2025
Abstract
In this paper, we study the classification problem of short-time-window steady-state visual evoked potentials (SSVEPs) and propose a novel deep convolutional network named EEGResNet based on the idea of residual connection to further improve the classification performance. Since the frequency-domain features extracted from short-time-window signals are difficult to distinguish, the EEGResNet starts from the filter bank (FB)-based feature extraction module in the time domain. The FB designed in this paper is composed of four sixth-order Butterworth filters with different bandpass ranges, and the four bandwidths are 19–50 Hz, 14–38 Hz, 9–26 Hz, and 3–14 Hz, respectively. Then, the extracted four feature tensors with the same shape are directly aggregated together. Furthermore, the aggregated features are further learned by a six-layer convolutional neural network with residual connections. Finally, the network output is generated through an adaptive fully connected layer. To prove the effectiveness and superiority of our designed EEGResNet, necessary experiments and comparisons are conducted over two large public datasets. To further verify the application potential of the trained network, a virtual simulation of brain computer interface (BCI) based quadrotor control is presented through V-REP. Full article
(This article belongs to the Special Issue Intelligent Sensor Systems in Unmanned Aerial Vehicles)
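The filter-bank front end is fully specified in the abstract: four sixth-order Butterworth band-pass filters at 19–50 Hz, 14–38 Hz, 9–26 Hz, and 3–14 Hz. A sketch with SciPy (the 250 Hz sampling rate and the array shapes are assumptions, not from the paper):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed EEG sampling rate in Hz (not stated in the abstract)

# The four sixth-order Butterworth band-pass ranges given in the abstract.
BANDS = [(19, 50), (14, 38), (9, 26), (3, 14)]
SOS_BANK = [butter(6, band, btype="bandpass", fs=FS, output="sos")
            for band in BANDS]

def filter_bank(eeg):
    """Return one filtered copy of the signal per sub-band, stacked on a
    new leading axis, mirroring the FB feature-extraction module."""
    return np.stack([sosfiltfilt(sos, eeg, axis=-1) for sos in SOS_BANK])

x = np.random.default_rng(0).normal(size=(8, 2 * FS))  # 8 channels, 2 s
print(filter_bank(x).shape)  # (4, 8, 500)
```

The four same-shape outputs can then be aggregated and fed to the convolutional layers, as the abstract describes.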
10 pages, 1055 KiB  
Article
Artificial Intelligence and Hysteroscopy: A Multicentric Study on Automated Classification of Pleomorphic Lesions
by Miguel Mascarenhas, Carla Peixoto, Ricardo Freire, Joao Cavaco Gomes, Pedro Cardoso, Inês Castro, Miguel Martins, Francisco Mendes, Joana Mota, Maria João Almeida, Fabiana Silva, Luis Gutierres, Bruno Mendes, João Ferreira, Teresa Mascarenhas and Rosa Zulmira
Cancers 2025, 17(15), 2559; https://doi.org/10.3390/cancers17152559 - 3 Aug 2025
Abstract
Background/Objectives: The integration of artificial intelligence (AI) in medical imaging is rapidly advancing, yet its application in gynecologic use remains limited. This proof-of-concept study presents the development and validation of a convolutional neural network (CNN) designed to automatically detect and classify endometrial polyps. Methods: A multicenter dataset (n = 3) comprising 65 hysteroscopies was used, yielding 33,239 frames and 37,512 annotated objects. Still frames were extracted from full-length videos and annotated for the presence of histologically confirmed polyps. A YOLOv1-based object detection model was used with a 70–20–10 split for training, validation, and testing. Primary performance metrics included recall, precision, and mean average precision at an intersection over union (IoU) ≥ 0.50 (mAP50). Frame-level classification metrics were also computed to evaluate clinical applicability. Results: The model achieved a recall of 0.96 and precision of 0.95 for polyp detection, with a mAP50 of 0.98. At the frame level, mean recall was 0.75, precision 0.98, and F1 score 0.82, confirming high detection and classification performance. Conclusions: This study presents a CNN trained on multicenter, real-world data that detects and classifies polyps simultaneously with high diagnostic and localization performance, supported by explainable AI features that enhance its clinical integration and technological readiness. Although currently limited to binary classification, this study demonstrates the feasibility and potential of AI to reduce diagnostic subjectivity and inter-observer variability in hysteroscopy. Future work will focus on expanding the model’s capabilities to classify a broader range of endometrial pathologies, enhance generalizability, and validate performance in real-time clinical settings. Full article
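mAP50 counts a detection as correct when its intersection over union with a ground-truth box is at least 0.50. The underlying IoU computation, for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes; detections
    with IoU >= 0.50 against a ground-truth box count toward mAP50."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150, about 0.333
```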
18 pages, 5178 KiB  
Article
Quantification of Suspended Sediment Concentration Using Laboratory Experimental Data and Machine Learning Model
by Sathvik Reddy Nookala, Jennifer G. Duan, Kun Qi, Jason Pacheco and Sen He
Water 2025, 17(15), 2301; https://doi.org/10.3390/w17152301 - 2 Aug 2025
Abstract
Monitoring sediment concentration in water bodies is crucial for assessing water quality, ecosystems, and environmental health. However, physical sampling and sensor-based approaches are labor-intensive and unsuitable for large-scale, continuous monitoring. This study employs machine learning models to estimate suspended sediment concentration (SSC) using images captured under natural-light (RGB) and near-infrared (NIR) conditions. A controlled dataset of approximately 1300 images with SSC values ranging from 1000 mg/L to 150,000 mg/L was developed, incorporating temperature, time of image capture, and solar irradiance as additional features. Random forest regression and gradient boosting regression were trained on mean RGB values, red reflectance, time of capture, and temperature for natural-light images, achieving up to 72.96% accuracy within a 30% relative error. In contrast, NIR images leveraged gray-level co-occurrence matrix texture features and temperature, reaching 83.08% accuracy. Comparative analysis showed that ensemble models outperformed deep learning models like Convolutional Neural Networks and Multi-Layer Perceptrons, which struggled with high-dimensional feature extraction. These findings suggest that machine learning models combined with RGB and NIR imagery offer a scalable, non-invasive, and cost-effective approach to sediment monitoring in support of water quality assessment and environmental management. Full article
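A gray-level co-occurrence matrix, as used for the NIR texture features above, tallies how often pairs of gray levels co-occur at a fixed pixel offset; statistics such as contrast are then read off it. A compact NumPy sketch (toy image, single offset, values illustrative):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    dr, dc = offset
    a = img[: img.shape[0] - dr, : img.shape[1] - dc]   # reference pixels
    b = img[dr:, dc:]                                   # offset neighbours
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)             # count co-occurrences
    return m / m.sum()

def contrast(p):
    """GLCM contrast: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(contrast(p), 3))  # 7/12, about 0.583
```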
15 pages, 1361 KiB  
Article
Radiomics with Clinical Data and [18F]FDG-PET for Differentiating Between Infected and Non-Infected Intracavitary Vascular (Endo)Grafts: A Proof-of-Concept Study
by Gijs D. van Praagh, Francine Vos, Stijn Legtenberg, Marjan Wouthuyzen-Bakker, Ilse J. E. Kouijzer, Erik H. J. G. Aarntzen, Jean-Paul P. M. de Vries, Riemer H. J. A. Slart, Lejla Alic, Bhanu Sinha and Ben R. Saleem
Diagnostics 2025, 15(15), 1944; https://doi.org/10.3390/diagnostics15151944 - 2 Aug 2025
Abstract
Objective: We evaluated the feasibility of a machine-learning (ML) model based on clinical features and radiomics from [18F]FDG PET/CT images to differentiate between infected and non-infected intracavitary vascular grafts and endografts (iVGEI). Methods: Three ML models were developed: one based on pre-treatment criteria to diagnose a vascular graft infection (“MAGIC-light features”), another using radiomics features from diagnostic [18F]FDG-PET scans, and a third combining both datasets. The training set included 92 patients (72 iVGEI-positive, 20 iVGEI-negative), and the external test set included 20 iVGEI-positive and 12 iVGEI-negative patients. The abdominal aorta and iliac arteries in the PET/CT scans were automatically segmented using SEQUOIA and TotalSegmentator and manually adjusted, extracting 96 radiomics features. The best-performing models for the MAGIC-light features and PET-radiomics features were selected from 343 unique models. Most relevant features were combined to test three final models using ROC analysis, accuracy, sensitivity, and specificity. Results: The combined model achieved the highest AUC in the test set (mean ± SD: 0.91 ± 0.02) compared with the MAGIC-light-only model (0.85 ± 0.06) and the PET-radiomics model (0.73 ± 0.03). The combined model also achieved a higher accuracy (0.91 vs. 0.82) than the diagnosis based on all the MAGIC criteria and a comparable sensitivity and specificity (0.70 and 1.00 vs. 0.76 and 0.92, respectively) while providing diagnostic information at the initial presentation. The AUC for the combined model was significantly higher than the PET-radiomics model (p = 0.02 in the bootstrap test), while other comparisons were not statistically significant. Conclusions: This study demonstrated the potential of ML models in supporting diagnostic decision making for iVGEI. A combined model using pre-treatment clinical features and PET-radiomics features showed high diagnostic performance and specificity, potentially reducing overtreatment and enhancing patient outcomes. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Radiomics in Medical Diagnosis)
30 pages, 3080 KiB  
Article
Unsupervised Multimodal Community Detection Algorithm in Complex Network Based on Fractal Iteration
by Hui Deng, Yanchao Huang, Jian Wang, Yanmei Hu and Biao Cai
Fractal Fract. 2025, 9(8), 507; https://doi.org/10.3390/fractalfract9080507 - 2 Aug 2025
Abstract
Community detection in complex networks plays a pivotal role in modern scientific research, including in social network analysis and protein structure analysis. Traditional community detection methods face challenges in integrating heterogeneous multi-source information, capturing global semantic relationships, and adapting to dynamic network evolution. This paper proposes a novel unsupervised multimodal community detection algorithm (UMM) based on fractal iteration. The core idea is to design a dual-channel encoder that comprehensively considers node semantic features and network topological structures. Initially, node representation vectors are derived from structural information (using feature vectors when available, or singular value decomposition to obtain feature vectors for nodes without attributes). Subsequently, a parameter-free graph convolutional encoder (PFGC) is developed based on fractal iteration principles to extract high-order semantic representations from structural encodings without requiring any training process. Furthermore, a semantic–structural dual-channel encoder (DC-SSE) is designed, which integrates semantic encodings—reduced in dimensionality via UMAP—with structural features extracted by PFGC to obtain the final node embeddings. These embeddings are then clustered using the K-means algorithm to achieve community partitioning. Experimental results demonstrate that the UMM outperforms existing methods on multiple real-world network datasets. Full article
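The parameter-free encoder idea, iterating a fixed normalized neighborhood-averaging operator instead of trained layers, can be sketched in a few lines of NumPy. This is a generic stand-in under that assumption, not the paper's actual PFGC:

```python
import numpy as np

def propagate(adj, feats, hops=3):
    """Training-free propagation: repeatedly mix each node's features
    with its neighbours' using symmetric normalization and self-loops."""
    a = adj + np.eye(len(adj))                 # add self-loops
    d = a.sum(axis=1)
    a_norm = a / np.sqrt(np.outer(d, d))       # D^-1/2 (A + I) D^-1/2
    for _ in range(hops):                      # iterate, no weights learned
        feats = a_norm @ feats
    return feats

adj = np.array([[0, 1, 0, 0],                  # toy 4-node path graph
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], float)
x = np.eye(4)                                  # one-hot node features
print(propagate(adj, x, hops=2).shape)  # (4, 4)
```

The resulting embeddings would then be concatenated with the dimensionality-reduced semantic encodings and clustered with K-means, per the abstract.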
14 pages, 841 KiB  
Article
Enhanced Deep Learning for Robust Stress Classification in Sows from Facial Images
by Syed U. Yunas, Ajmal Shahbaz, Emma M. Baxter, Mark F. Hansen, Melvyn L. Smith and Lyndon N. Smith
Agriculture 2025, 15(15), 1675; https://doi.org/10.3390/agriculture15151675 - 2 Aug 2025
Abstract
Stress in pigs poses significant challenges to animal welfare and productivity in modern pig farming, contributing to increased antimicrobial use and the rise of antimicrobial resistance (AMR). This study involves stress classification in pregnant sows by exploring five deep learning models: ConvNeXt, EfficientNet_V2, MobileNet_V3, RegNet, and Vision Transformer (ViT). These models are used for stress detection from facial images, leveraging an expanded dataset. A facial image dataset of sows was collected at Scotland’s Rural College (SRUC) and the images were categorized into primiparous Low-Stress (LS) and High-Stress (HS) groups based on expert behavioural assessments and cortisol level analysis. The selected deep learning models were then trained on this enriched dataset and their performance was evaluated using cross-validation on unseen data. The Vision Transformer (ViT) model outperformed the others across the dataset of annotated facial images, achieving an average accuracy of 0.75, an F1 score of 0.78 for high-stress detection, and consistent batch-level performance (up to 0.88 F1 score). These findings highlight the efficacy of transformer-based models for automated stress detection in sows, supporting early intervention strategies to enhance welfare, optimize productivity, and mitigate AMR risks in livestock production. Full article
22 pages, 2498 KiB  
Article
SceEmoNet: A Sentiment Analysis Model with Scene Construction Capability
by Yi Liang, Dongfang Han, Zhenzhen He, Bo Kong and Shuanglin Wen
Appl. Sci. 2025, 15(15), 8588; https://doi.org/10.3390/app15158588 - 2 Aug 2025
Abstract
How do humans analyze the sentiments embedded in text? When attempting to analyze a text, humans construct a “scene” in their minds through imagination based on the text, generating a vague image. They then synthesize the text and the mental image to derive the final analysis result. However, current sentiment analysis models lack such imagination; they can only analyze based on existing information in the text, which limits their classification accuracy. To address this issue, we propose the SceEmoNet model. This model endows text classification models with imagination through Stable diffusion, enabling the model to generate corresponding visual scenes from input text, thus introducing a new modality of visual information. We then use the Contrastive Language-Image Pre-training (CLIP) model, a multimodal feature extraction model, to extract aligned features from different modalities, preventing significant feature differences caused by data heterogeneity. Finally, we fuse information from different modalities using late fusion to obtain the final classification result. Experiments on six datasets with different classification tasks show improvements of 9.57%, 3.87%, 3.63%, 3.14%, 0.77%, and 0.28%, respectively. Additionally, we set up experiments to deeply analyze the model’s advantages and limitations, providing a new technical path for follow-up research. Full article
(This article belongs to the Special Issue Advanced Technologies and Applications of Emotion Recognition)
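Late fusion, as used in SceEmoNet's final step, classifies each modality independently and combines the class probabilities afterwards. A minimal sketch using a weighted average of per-modality softmax outputs (the logits and the equal weighting are toy values, not from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(text_logits, image_logits, w=0.5):
    """Late fusion: each modality is scored independently, then the
    class probability vectors are averaged."""
    return w * softmax(text_logits) + (1 - w) * softmax(image_logits)

text = np.array([2.0, 0.5, 0.1])    # toy logits from the text branch
image = np.array([0.3, 1.8, 0.2])   # toy logits from the generated scene
probs = late_fusion(text, image)
print(probs.argmax(), round(probs.sum(), 6))  # class 0 wins; probs sum to 1
```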
18 pages, 6891 KiB  
Article
Physics-Based Data Augmentation Enables Accurate Machine Learning Prediction of Melt Pool Geometry
by Siqi Liu, Ruina Li, Jiayi Zhou, Chaoyuan Dai, Jingui Yu and Qiaoxin Zhang
Appl. Sci. 2025, 15(15), 8587; https://doi.org/10.3390/app15158587 (registering DOI) - 2 Aug 2025
Abstract
Accurate melt pool geometry prediction is essential for ensuring quality and reliability in Laser Powder Bed Fusion (L-PBF). However, small experimental datasets and limited physical interpretability often restrict the effectiveness of traditional machine learning (ML) models. This study proposes a hybrid framework that integrates an explicit thermal model with ML algorithms to improve prediction under sparse data conditions. The explicit model, calibrated for variable penetration depth and absorptivity, generates synthetic melt pool data, augmenting 36 experimental samples across the conduction, transition, and keyhole regimes for 316L stainless steel. Three ML methods, namely the Multilayer Perceptron (MLP), Random Forest, and XGBoost, are trained using fivefold cross-validation. The hybrid approach significantly improves prediction accuracy, especially in the unstable transition region (D/W ≈ 0.5–1.2), where morphological fluctuations hinder experimental sampling. The best-performing model (MLP) achieves R² > 0.98, with notable reductions in MAE and RMSE. The results highlight the benefit of incorporating physically consistent, nonlinearly distributed synthetic data to enhance generalization and robustness. This physics-augmented learning strategy not only demonstrates scientific novelty by integrating mechanistic modeling into data-driven learning, but also provides a scalable solution for intelligent process optimization, in situ monitoring, and digital twin development in metal additive manufacturing.
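The augmentation idea, using a cheap explicit model to densify a small experimental dataset before training an ML regressor, can be sketched like this. The thermal scaling law, its coefficients, and the process-parameter ranges below are illustrative placeholders, not the paper's calibrated 316L model:

```python
import numpy as np

rng = np.random.default_rng(0)

def explicit_melt_pool_model(power, speed, absorptivity=0.4):
    # Hypothetical calibrated thermal model: melt pool depth and width
    # grow with absorbed energy per unit track length (illustrative
    # scaling only, not the paper's calibrated equations).
    energy = absorptivity * power / np.sqrt(speed)
    depth = 0.8 * energy           # toy coefficient, arbitrary units
    width = 1.6 * energy ** 0.9    # toy coefficient, arbitrary units
    return depth, width

# 36 "experimental" samples (placeholder random process parameters).
exp_power = rng.uniform(100, 400, size=36)   # laser power, W
exp_speed = rng.uniform(0.2, 1.5, size=36)   # scan speed, m/s
exp_depth, _ = explicit_melt_pool_model(exp_power, exp_speed)

# Dense synthetic grid from the explicit model to augment training data,
# covering regions where experiments are sparse.
grid_p, grid_v = np.meshgrid(np.linspace(100, 400, 20),
                             np.linspace(0.2, 1.5, 20))
syn_depth, _ = explicit_melt_pool_model(grid_p.ravel(), grid_v.ravel())

# Combined training set: 36 experimental + 400 synthetic samples.
X = np.column_stack([np.concatenate([exp_power, grid_p.ravel()]),
                     np.concatenate([exp_speed, grid_v.ravel()])])
y = np.concatenate([exp_depth, syn_depth])
```

An MLP (or Random Forest / XGBoost) regressor would then be fit on `(X, y)` under fivefold cross-validation; the synthetic points supply physically consistent coverage of the unstable transition region that is hard to sample experimentally.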
17 pages, 3061 KiB  
Article
Model-Agnostic Meta-Learning in Predicting Tunneling-Induced Surface Ground Deformation
by Wei He, Guan-Bin Chen, Wenlian Qian, Wen-Li Chen, Liang Tang and Xiangxun Kong
Symmetry 2025, 17(8), 1220; https://doi.org/10.3390/sym17081220 (registering DOI) - 2 Aug 2025
Abstract
The present investigation reports the field measurement and prediction of tunneling-induced surface ground settlement on Tianjin Metro Line 7, China. The cross-section of a metro tunnel exhibits circular symmetry, making it suitable for tunneling with a circular shield machine. The ground surface may deform during the tunneling stage, yet in the early stage of tunneling only a few measurement data points can be collected. To obtain a more usable prediction model, two kinds of neural networks based on the model-agnostic meta-learning (MAML) scheme are presented. One is a combination of the Back-Propagation Neural Network (BPNN) and the MAML model, named MAML-BPNN; the other is a combination of the MAML model and the Long Short-Term Memory (LSTM) model, named MAML-LSTM. The MAML-BPNN and MAML-LSTM prediction models are successfully trained on several measurement datasets. The results show that both models possess good prediction ability for tunneling-induced surface ground settlement, and that, in terms of the coefficient of determination, MAML-LSTM outperforms MAML-BPNN by 0.1.
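The MAML training loop, adapting on each task's support set and then updating the meta-initialization from query-set performance, can be sketched in its first-order approximation (FOMAML, which drops the second-order gradient of full MAML). The linear model, the toy "settlement trend" tasks, and the learning rates below are assumptions for illustration; the paper's base learners are BPNN and LSTM networks:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fomaml(tasks, inner_lr=0.05, outer_lr=0.1, steps=200):
    # First-order MAML: adapt a copy of the meta-weights on each task's
    # support set, then apply the gradient of the adapted weights
    # (evaluated on the query set) directly to the meta-weights.
    w = np.zeros(2)
    for _ in range(steps):
        meta_grad = np.zeros_like(w)
        for Xs, ys, Xq, yq in tasks:
            w_task = w - inner_lr * loss_grad(w, Xs, ys)  # inner adaptation
            meta_grad += loss_grad(w_task, Xq, yq)        # first-order outer grad
        w -= outer_lr * meta_grad / len(tasks)
    return w

def make_task(w_true, n=20):
    # Toy task: each "tunnel section" has its own linear settlement trend.
    X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
    y = X @ w_true + 0.01 * rng.standard_normal(n)
    return X[:10], y[:10], X[10:], y[10:]   # support / query split

tasks = [make_task(np.array([a, b]))
         for a, b in [(1.0, 2.0), (1.2, 1.8), (0.9, 2.2)]]
w_meta = fomaml(tasks)
```

The meta-initialization `w_meta` lands near a point from which one inner gradient step fits any of the related tasks, which is what lets the scheme cope with the few measurements available in the early tunneling stage.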