Search Results (273)

Search Parameters:
Keywords = limited labeled scenarios

24 pages, 3507 KiB  
Article
A Semi-Supervised Wildfire Image Segmentation Network with Multi-Scale Structural Fusion and Pixel-Level Contrastive Consistency
by Yong Sun, Wei Wei, Jia Guo, Haifeng Lin and Yiqing Xu
Fire 2025, 8(8), 313; https://doi.org/10.3390/fire8080313 - 7 Aug 2025
Abstract
The increasing frequency and intensity of wildfires pose serious threats to ecosystems, property, and human safety worldwide. Accurate semantic segmentation of wildfire images is essential for real-time fire monitoring, spread prediction, and disaster response. However, existing deep learning methods heavily rely on large volumes of pixel-level annotated data, which are difficult and costly to obtain in real-world wildfire scenarios due to complex environments and urgent time constraints. To address this challenge, we propose a semi-supervised wildfire image segmentation framework that enhances segmentation performance under limited annotation conditions by integrating multi-scale structural information fusion and pixel-level contrastive consistency learning. Specifically, a Lagrange Interpolation Module (LIM) is designed to construct structured interpolation representations between multi-scale feature maps during the decoding stage, enabling effective fusion of spatial details and semantic information, and improving the model’s ability to capture flame boundaries and complex textures. Meanwhile, a Pixel Contrast Consistency (PCC) mechanism is introduced to establish pixel-level semantic constraints between CutMix and Flip augmented views, guiding the model to learn consistent intra-class and discriminative inter-class feature representations, thereby reducing the reliance on large labeled datasets. Extensive experiments on two public wildfire image datasets, Flame and D-Fire, demonstrate that our method consistently outperforms other approaches under various annotation ratios. For example, with only half of the labeled data, our model achieves 5.0% and 6.4% mIoU improvements on the Flame and D-Fire datasets, respectively, compared to the baseline. This work provides technical support for efficient wildfire perception and response in practical applications. Full article
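The flip-consistency idea described above can be sketched numerically. This is an illustrative stand-in (plain NumPy, mean-squared difference between per-pixel probability maps), not the authors' PCC implementation; all array names and shapes are hypothetical:

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_a, logits_b_flipped):
    """Mean squared difference between per-pixel class probabilities of two
    augmented views; the flipped view is un-flipped first so that
    corresponding pixels line up."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b_flipped[:, ::-1, :])   # undo the horizontal flip
    return float(np.mean((p_a - p_b) ** 2))

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 2))               # H x W x classes
flipped = logits[:, ::-1, :].copy()               # prediction from a flipped input
assert consistency_loss(logits, flipped) < 1e-12  # identical after un-flipping
```

The key detail is undoing the augmentation before comparing, so the consistency constraint acts on corresponding pixels rather than on raw outputs.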

28 pages, 48169 KiB  
Article
Advancing Self-Supervised Learning for Building Change Detection and Damage Assessment: Unified Denoising Autoencoder and Contrastive Learning Framework
by Songxi Yang, Bo Peng, Tang Sui, Meiliu Wu and Qunying Huang
Remote Sens. 2025, 17(15), 2717; https://doi.org/10.3390/rs17152717 - 6 Aug 2025
Abstract
Building change detection and building damage assessment are two essential tasks in post-disaster analysis. Building change detection focuses on identifying changed building areas between bi-temporal images, while building damage assessment involves segmenting all buildings and classifying their damage severity. These tasks play a critical role in disaster response and urban development monitoring. Although supervised learning has significantly advanced building change detection and damage assessment, its reliance on large labeled datasets remains a major limitation. In contrast, self-supervised learning enables the extraction of meaningful data representations without explicit training labels. To address this challenge, we propose a self-supervised learning approach that unifies denoising autoencoders and contrastive learning, enabling effective data representation for building change detection and damage assessment. The proposed architecture integrates a dual denoising autoencoder with a Vision Transformer backbone and contrastive learning strategy, complemented by a Feature Pyramid Network-ResNet dual decoder and an Edge Guidance Module. This design enhances multi-scale feature extraction and enables edge-aware segmentation for accurate predictions. Extensive experiments were conducted on five public datasets, including xBD, LEVIR, LEVIR+, SYSU, and WHU, to evaluate the performance and generalization capabilities of the model. The results demonstrate that the proposed Denoising AutoEncoder-enhanced Dual-Fusion Network (DAEDFN) approach achieves competitive performance compared with fully supervised methods. On the xBD dataset, the largest dataset for building damage assessment, our proposed method achieves an F1 score of 0.892 for building segmentation, outperforming state-of-the-art methods. For building damage severity classification, the model achieves an F1 score of 0.632. 
On the building change detection datasets, the proposed method achieves F1 scores of 0.837 (LEVIR), 0.817 (LEVIR+), 0.768 (SYSU), and 0.876 (WHU), demonstrating model generalization across diverse scenarios. Despite these promising results, challenges remain in complex urban environments, small-scale changes, and fine-grained boundary detection. These findings highlight the potential of self-supervised learning in building change detection and damage assessment tasks. Full article
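The contrastive-learning ingredient above commonly reduces to an InfoNCE-style objective over paired embeddings. A minimal NumPy sketch of that loss, not the paper's code:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss over a batch of paired embeddings: row i of z1 should
    match row i of z2 (the positive) and repel every other row."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                  # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))         # positives on the diagonal

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 64))
w = rng.normal(size=(8, 64))
assert info_nce(z, z) < 0.5             # aligned views: loss near its minimum
assert info_nce(z, z) < info_nce(z, w)  # misaligned views score worse
```

In the denoising-autoencoder setting, `z1` and `z2` would be encoder outputs for two corrupted views of the same bi-temporal patch.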

24 pages, 1313 KiB  
Review
Data Augmentation and Knowledge Transfer-Based Fault Detection and Diagnosis in Internet of Things-Based Solar Insecticidal Lamps: A Survey
by Zhengjie Wang, Xing Yang, Tongjie Li, Lei Shu, Kailiang Li and Xiaoyuan Jing
Electronics 2025, 14(15), 3113; https://doi.org/10.3390/electronics14153113 - 5 Aug 2025
Abstract
Internet of Things (IoT)-based solar insecticidal lamps (SIL-IoTs) offer an eco-friendly alternative by merging solar energy harvesting with intelligent sensing, advancing sustainable smart agriculture. However, SIL-IoTs encounter practical challenges, e.g., hardware aging, electromagnetic interference, and abnormal data patterns. Therefore, developing an effective fault detection and diagnosis (FDD) system is essential. In this survey, we systematically identify and address the core challenges of implementing FDD of SIL-IoTs. Firstly, the fuzzy boundaries of sample features lead to complex feature interactions that increase the difficulty of accurate FDD. Secondly, the category imbalance in the fault samples limits the generalizability of the FDD models. Thirdly, models trained on single scenarios struggle to adapt to diverse and dynamic field conditions. To overcome these challenges, we propose a multi-level solution by discussing and merging existing FDD methods: (1) a data augmentation strategy can be adopted to improve model performance on small-sample datasets; (2) federated learning (FL) can be employed to enhance adaptability to heterogeneous environments, while transfer learning (TL) addresses data scarcity; and (3) deep learning techniques can be used to reduce dependence on labeled data; these methods provide a robust framework for intelligent and adaptive FDD of SIL-IoTs, supporting long-term reliability of IoT devices in smart agriculture. Full article
(This article belongs to the Collection Electronics for Agriculture)

31 pages, 4141 KiB  
Article
Automated Quality Control of Candle Jars via Anomaly Detection Using OCSVM and CNN-Based Feature Extraction
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Mathematics 2025, 13(15), 2507; https://doi.org/10.3390/math13152507 - 4 Aug 2025
Abstract
Automated quality control plays a critical role in modern industries, particularly in environments that handle large volumes of packaged products requiring fast, accurate, and consistent inspections. This work presents an anomaly detection system for candle jars commonly used in industrial and commercial applications, where obtaining labeled defective samples is challenging. Two anomaly detection strategies are explored: (1) a baseline model using convolutional neural networks (CNNs) as an end-to-end classifier and (2) a hybrid approach where features extracted by CNNs are fed into one-class classification (OCC) algorithms, including One-Class SVM (OCSVM), One-Class Isolation Forest (OCIF), One-Class Local Outlier Factor (OCLOF), One-Class Elliptic Envelope (OCEE), One-Class Autoencoder (OCAutoencoder), and Support Vector Data Description (SVDD). Both strategies are trained primarily on non-defective samples, with only a limited number of anomalous examples used for evaluation. Experimental results show that both the pure CNN model and the hybrid methods achieve excellent classification performance. The end-to-end CNN reached 100% accuracy, precision, recall, F1-score, and AUC. The best-performing hybrid model, CNN-based feature extraction followed by OCIF, also achieved 100% across all evaluation metrics, confirming the effectiveness and robustness of the proposed approach. Other OCC algorithms consistently delivered strong results, with all metrics above 95%, indicating solid generalization from predominantly normal data. This approach demonstrates strong potential for quality inspection tasks in scenarios with scarce defective data. Its ability to generalize effectively from mostly normal samples makes it a practical and valuable solution for real-world industrial inspection systems. Future work will focus on optimizing real-time inference and exploring advanced feature extraction techniques to further enhance detection performance. Full article

19 pages, 1247 KiB  
Article
Improving News Retrieval with a Learnable Alignment Module for Multimodal Text–Image Matching
by Rui Song, Jiwei Tian, Peican Zhu and Bin Chen
Electronics 2025, 14(15), 3098; https://doi.org/10.3390/electronics14153098 - 3 Aug 2025
Abstract
With the diversification of information retrieval methods, news retrieval tasks have gradually evolved towards multimodal retrieval. Existing methods often encounter issues such as inaccurate alignment and unstable feature matching when handling cross-modal data like text and images, limiting retrieval performance. To address this, this paper proposes an innovative multimodal news retrieval method by introducing the Learnable Alignment Module (LAM), which establishes a learnable alignment relationship between text and images to improve the accuracy and stability of cross-modal retrieval. Specifically, the LAM, through trainable label embeddings (TLEs), enables the text encoder to dynamically adjust category information during training, thereby enhancing the alignment capability of text and images in the shared embedding space. Additionally, we propose three key alignment strategies: logits calibration, parameter consistency, and semantic feature matching, to further optimize the model’s multimodal learning ability. Extensive experiments conducted on four public datasets—Visual News, MMED, N24News, and EDIS—demonstrate that the proposed method outperforms existing state-of-the-art approaches in both text and image retrieval tasks. Notably, the method achieves significant improvements in low-recall scenarios (R@1): for text retrieval, R@1 reaches 47.34, 44.94, 16.47, and 19.23, respectively; for image retrieval, R@1 achieves 40.30, 38.49, 9.86, and 17.95, validating the effectiveness and robustness of the proposed method in multimodal news retrieval. Full article
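The R@1 figures reported above measure how often a query's own paired item is the single nearest neighbour in the shared embedding space. A NumPy sketch of that metric with synthetic embeddings (not the paper's encoders):

```python
import numpy as np

def recall_at_1(text_emb, image_emb):
    """Text-to-image R@1 in a shared embedding space: the fraction of
    queries whose own paired image is the nearest neighbour by cosine."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sims = t @ v.T
    return float(np.mean(sims.argmax(axis=1) == np.arange(len(t))))

rng = np.random.default_rng(3)
images = rng.normal(size=(50, 32))
texts = images + 0.05 * rng.normal(size=(50, 32))  # well-aligned pairs
assert recall_at_1(texts, images) == 1.0
```

A learnable alignment module aims to push real text and image encodings toward this well-aligned regime, where the diagonal of the similarity matrix dominates.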
(This article belongs to the Topic Graph Neural Networks and Learning Systems)

23 pages, 13529 KiB  
Article
A Self-Supervised Contrastive Framework for Specific Emitter Identification with Limited Labeled Data
by Jiaqi Wang, Lishu Guo, Pengfei Liu, Peng Shang, Xiaochun Lu and Hang Zhao
Remote Sens. 2025, 17(15), 2659; https://doi.org/10.3390/rs17152659 - 1 Aug 2025
Abstract
Specific Emitter Identification (SEI) is a specialized technique for identifying different emitters by analyzing the unique characteristics embedded in received signals, known as Radio Frequency Fingerprints (RFFs), and SEI plays a crucial role in civilian applications. Recently, various SEI methods based on deep learning have been proposed. However, in real-world scenarios, the scarcity of accurately labeled data poses a significant challenge to these methods, which typically rely on large-scale supervised training. To address this issue, we propose a novel SEI framework based on self-supervised contrastive learning. Our approach comprises two stages: an unsupervised pretraining phase that uses contrastive loss to learn discriminative RFF representations from unlabeled data, and a supervised fine-tuning stage regularized through virtual adversarial training (VAT) to improve generalization under limited labels. This framework enables effective feature learning while mitigating overfitting. To validate the effectiveness of the proposed method, we collected real-world satellite navigation signals using a 40-meter antenna and conducted extensive experiments. The results demonstrate that our approach achieves outstanding SEI performance, significantly outperforming several mainstream SEI methods, thereby highlighting the practical potential of contrastive self-supervised learning in satellite transmitter identification. Full article

18 pages, 9470 KiB  
Article
DCS-ST for Classification of Breast Cancer Histopathology Images with Limited Annotations
by Suxing Liu and Byungwon Min
Appl. Sci. 2025, 15(15), 8457; https://doi.org/10.3390/app15158457 - 30 Jul 2025
Abstract
Accurate classification of breast cancer histopathology images is critical for early diagnosis and treatment planning. Yet, conventional deep learning models face significant challenges under limited annotation scenarios due to their reliance on large-scale labeled datasets. To address this, we propose Dynamic Cross-Scale Swin Transformer (DCS-ST), a robust and efficient framework tailored for histopathology image classification with scarce annotations. Specifically, DCS-ST integrates a dynamic window predictor and a cross-scale attention module to enhance multi-scale feature representation and interaction while employing a semi-supervised learning strategy based on pseudo-labeling and denoising to exploit unlabeled data effectively. This design enables the model to adaptively attend to diverse tissue structures and pathological patterns while maintaining classification stability. Extensive experiments on three public datasets—BreakHis, Mini-DDSM, and ICIAR2018—demonstrate that DCS-ST consistently outperforms existing state-of-the-art methods across various magnifications and classification tasks, achieving superior quantitative results and reliable visual classification. Furthermore, empirical evaluations validate its strong generalization capability and practical potential for real-world weakly-supervised medical image analysis. Full article
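The pseudo-labeling loop described above (train on the labeled slice, keep only confident predictions on unlabeled data, retrain) can be sketched with a toy nearest-centroid classifier; everything here is a hypothetical stand-in for DCS-ST:

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    # one centroid per class: the mean of its (pseudo-)labeled points
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_proba(X, centroids):
    # softmax over negative squared distances: a toy stand-in classifier
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    e = np.exp(-(d - d.min(axis=1, keepdims=True)))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(4)
# two well-separated clusters; only 5 points per class carry labels
X0 = rng.normal(loc=-2, size=(100, 2))
X1 = rng.normal(loc=2, size=(100, 2))
X_lab = np.vstack([X0[:5], X1[:5]])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([X0[5:], X1[5:]])

centroids = fit_centroids(X_lab, y_lab, 2)
proba = predict_proba(X_unl, centroids)
conf_mask = proba.max(axis=1) > 0.95       # keep only confident pseudo-labels
pseudo_y = proba.argmax(axis=1)[conf_mask]

# retrain on labeled plus confidently pseudo-labeled data
centroids = fit_centroids(np.vstack([X_lab, X_unl[conf_mask]]),
                          np.concatenate([y_lab, pseudo_y]), 2)
acc = (predict_proba(np.vstack([X0, X1]), centroids).argmax(axis=1)
       == np.array([0] * 100 + [1] * 100)).mean()
assert acc > 0.95
```

The confidence threshold is what keeps noisy pseudo-labels (the "denoising" concern above) from contaminating the retraining set.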

18 pages, 516 KiB  
Article
A Nested Named Entity Recognition Model Robust in Few-Shot Learning Environments Using Label Description Information
by Hyunsun Hwang, Youngjun Jung, Changki Lee and Wooyoung Go
Appl. Sci. 2025, 15(15), 8255; https://doi.org/10.3390/app15158255 - 24 Jul 2025
Abstract
Nested named entity recognition (NER) is a task that identifies hierarchically structured entities, where one entity can contain other entities within its span. This study introduces a nested NER model for few-shot learning environments, addressing the difficulty of building extensive datasets for general named entities. We enhance the Biaffine nested NER model by modifying its output layer to incorporate label semantic information through a novel label description embedding (LDE) approach, improving performance with limited training data. Our method replaces the traditional biaffine classifier with a label attention mechanism that leverages comprehensive natural language descriptions of entity types, encoded using BERT to capture rich semantic relationships between labels and input spans. We conducted comprehensive experiments on four benchmark datasets: GENIA (nested NER), ACE 2004 (nested NER), ACE 2005 (nested NER), and CoNLL 2003 English (flat NER). Performance was evaluated across multiple few-shot scenarios (1-shot, 5-shot, 10-shot, and 20-shot) using F1-measure as the primary metric, with five different random seeds to ensure robust evaluation. We compared our approach against strong baselines including BERT-LSTM-CRF with nested tags, the original Biaffine model, and recent few-shot NER methods (FewNER, FIT, LPNER, SpanNER). Results demonstrate significant improvements across all few-shot scenarios. On GENIA, our LDE model achieves 45.07% F1 in five-shot learning compared to 30.74% for the baseline Biaffine model (46.4% relative improvement). On ACE 2005, we obtain 44.24% vs. 32.38% F1 in five-shot scenarios (36.6% relative improvement). The model shows consistent gains in 10-shot (57.19% vs. 49.50% on ACE 2005) and 20-shot settings (64.50% vs. 58.21% on ACE 2005). Ablation studies confirm that semantic information from label descriptions is the key factor enabling robust few-shot performance. 
Transfer learning experiments demonstrate the model’s ability to leverage knowledge from related domains. Our findings suggest that incorporating label semantic information can substantially enhance NER models in low-resource settings, opening new possibilities for applying NER in specialized domains or languages with limited annotated data. Full article
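The label-attention idea above (score a span representation against embeddings of natural-language label descriptions) can be sketched as follows. Random vectors stand in for the BERT encodings, so all names and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 64
# Hypothetical label-description embeddings; the paper encodes natural-
# language descriptions with BERT, random vectors stand in here.
label_desc = {"PERSON": rng.normal(size=dim),
              "DNA": rng.normal(size=dim),
              "NONE": rng.normal(size=dim)}
names = list(label_desc)
L = np.stack([label_desc[n] for n in names])    # (labels, dim)

def classify_span(span_vec):
    """Label attention: dot-product scores between a span representation
    and each label-description embedding, softmax-normalised."""
    scores = L @ span_vec
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return names[int(probs.argmax())], probs

# a span representation close to the "DNA" description wins that label
span = label_desc["DNA"] + 0.1 * rng.normal(size=dim)
pred, probs = classify_span(span)
assert pred == "DNA"
```

Because the classifier head is just similarity to description embeddings, a new entity type only needs a written description, which is what makes the scheme attractive in few-shot settings.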
(This article belongs to the Special Issue Applications of Natural Language Processing to Data Science)

16 pages, 123395 KiB  
Article
Semi-Supervised Image-Dehazing Network Based on a Trusted Library
by Wan Li and Chenyang Chang
Electronics 2025, 14(15), 2956; https://doi.org/10.3390/electronics14152956 - 24 Jul 2025
Abstract
In the field of image dehazing, many deep learning-based methods have demonstrated promising results. However, these methods often neglect crucial frequency-domain information and rely heavily on labeled datasets, which limits their applicability to real-world hazy images. To address these issues, we propose a semi-supervised image-dehazing network based on a trusted library (WTS-Net). We construct a dual-branch wavelet transform network (DBWT-Net). It fuses high- and low-frequency features via a frequency-mixing module and enhances global context through attention mechanisms. Building on DBWT-Net, we embed this backbone in a teacher–student model to reduce reliance on labeled data. To enhance the reliability of the teacher network, we introduce a trusted library guided by NR-IQA. In addition, we employ a two-stage training strategy for the network. Experiments show that WTS-Net achieves superior generalization and robustness in both synthetic and real-world dehazing scenarios. Full article
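The frequency split that the dual-branch wavelet network relies on can be shown with a single-level 2D Haar transform; this is a generic sketch, not the WTS-Net code:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: split an even-sized grayscale image
    into a low-frequency band (LL) and high-frequency bands (LH, HL, HH)."""
    a, b = img[0::2, :], img[1::2, :]
    lo, hi = (a + b) / 2, (a - b) / 2                        # rows
    ll, lh = (lo[:, 0::2] + lo[:, 1::2]) / 2, (lo[:, 0::2] - lo[:, 1::2]) / 2
    hl, hh = (hi[:, 0::2] + hi[:, 1::2]) / 2, (hi[:, 0::2] - hi[:, 1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
assert ll.shape == (2, 2)

# a constant image has all its energy in the low-frequency band
c = np.full((4, 4), 7.0)
ll_c, lh_c, hl_c, hh_c = haar2d(c)
assert np.allclose(ll_c, 7.0) and np.allclose(hh_c, 0.0)
```

Haze mostly perturbs the low-frequency band while edges live in the high-frequency bands, which is the motivation for processing them in separate branches and re-mixing them.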

23 pages, 10648 KiB  
Article
Meta-Learning-Integrated Neural Architecture Search for Few-Shot Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Haisong Chen and Minhui Wang
Electronics 2025, 14(15), 2952; https://doi.org/10.3390/electronics14152952 - 24 Jul 2025
Abstract
To address the limited number of labeled samples available in practical classification scenarios, and the overfitting and insufficient generalization that Few-Shot Learning (FSL) suffers from in hyperspectral image classification (HSIC), this paper designs and implements a neural architecture search (NAS) method for few-shot HSI classification that incorporates meta-learning. First, a multi-source domain learning framework integrates heterogeneous natural images and homogeneous remote sensing images to broaden the information available for few-shot learning, so that the final network improves its generalization under limited labeled samples by learning similarities across data sources. Second, by constructing precise and robust search spaces and deploying different units at different locations, the method improves the classification accuracy and transfer robustness of the final network. The approach fully exploits the spatial texture information and rich category information of multi-source data, and transfers the learned meta-knowledge through the search-space design to the optimal architecture for HSIC, achieving classification with limited samples. Experimental results show that the proposed method achieves an overall accuracy (OA) of 98.57%, 78.39%, and 98.74% on the Pavia Center, Indian Pines, and WHU-Hi-LongKou datasets, respectively. Full article

22 pages, 9071 KiB  
Article
Integrating UAV-Based RGB Imagery with Semi-Supervised Learning for Tree Species Identification in Heterogeneous Forests
by Bingru Hou, Chenfeng Lin, Mengyuan Chen, Mostafa M. Gouda, Yunpeng Zhao, Yuefeng Chen, Fei Liu and Xuping Feng
Remote Sens. 2025, 17(15), 2541; https://doi.org/10.3390/rs17152541 - 22 Jul 2025
Abstract
The integration of unmanned aerial vehicle (UAV) remote sensing and deep learning has emerged as a highly effective strategy for inventorying forest resources. However, the spatiotemporal variability of forest environments and the scarcity of annotated data hinder the performance of conventional supervised deep-learning models. To overcome these challenges, this study has developed efficient tree (ET), a semi-supervised tree detector designed for forest scenes. ET employed an enhanced YOLO model (YOLO-Tree) as a base detector and incorporated a teacher–student semi-supervised learning (SSL) framework based on pseudo-labeling, effectively leveraging abundant unlabeled data to bolster model robustness. The results revealed that SSL significantly improved outcomes in scenarios with sparse labeled data, specifically when the annotation proportion was below 50%. Additionally, employing overlapping cropping as a data augmentation strategy mitigated instability during semi-supervised training under conditions of limited sample size. Notably, introducing unlabeled data from external sites enhances the accuracy and cross-site generalization of models trained on diverse datasets, achieving impressive results with F1, mAP50, and mAP50-95 scores of 0.979, 0.992, and 0.871, respectively. In conclusion, this study highlights the potential of combining UAV-based RGB imagery with SSL to advance tree species identification in heterogeneous forests. Full article
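Teacher-student SSL frameworks of this kind commonly stabilise the teacher, whose pseudo-labels supervise the student, with an exponential moving average (EMA) of the student's weights. A minimal sketch of that update; the EMA detail is a common design choice, not confirmed for ET:

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    """Teacher weights follow an exponential moving average of the
    student's, the usual stabiliser in teacher-student SSL."""
    return {k: decay * teacher[k] + (1 - decay) * student[k] for k in teacher}

teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(500):
    teacher = ema_update(teacher, student)

# the teacher converges toward the (here fixed) student weights
assert np.all(teacher["w"] > 0.99)
```

Because the teacher changes slowly, its pseudo-labels on unlabeled imagery drift less between iterations, which is exactly the instability that the overlapping-cropping augmentation above also targets.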
(This article belongs to the Special Issue Remote Sensing-Assisted Forest Inventory Planning)

15 pages, 2481 KiB  
Review
Transfer Learning for Induction Motor Health Monitoring: A Brief Review
by Prashant Kumar
Energies 2025, 18(14), 3823; https://doi.org/10.3390/en18143823 - 18 Jul 2025
Abstract
With advancements in computational resources, artificial intelligence has gained significant attention in motor health monitoring. These sophisticated deep learning algorithms have been widely used for induction motor health monitoring due to their autonomous feature extraction abilities and end-to-end learning capabilities. However, in real-world scenarios, challenges such as limited labeled data and diverse operating conditions have led to the application of transfer learning for motor health monitoring. Transfer learning utilizes pretrained models to address new tasks with limited labeled data. Recent advancements in this domain have significantly improved fault diagnosis, condition monitoring, and the predictive maintenance of induction motors. This study reviews state-of-the-art transfer learning techniques, including domain adaptation, fine-tuning, and feature-based transfer for induction motor health monitoring. The key methodologies are analyzed, highlighting their contributions to improving fault detection, diagnosis, and prognosis in industrial applications. Additionally, emerging trends and future research directions are discussed to guide further advancements in this rapidly evolving field. Full article

46 pages, 8887 KiB  
Article
One-Class Anomaly Detection for Industrial Applications: A Comparative Survey and Experimental Study
by Davide Paolini, Pierpaolo Dini, Ettore Soldaini and Sergio Saponara
Computers 2025, 14(7), 281; https://doi.org/10.3390/computers14070281 - 16 Jul 2025
Abstract
This article aims to evaluate the runtime effectiveness of various one-class classification (OCC) techniques for anomaly detection in an industrial scenario reproduced in a laboratory setting. To address the limitations posed by restricted access to proprietary data, the study explores OCC methods that learn solely from legitimate network traffic, without requiring labeled malicious samples. After analyzing major publicly available datasets, such as KDD Cup 1999 and TON-IoT, as well as the most widely used OCC techniques, a lightweight and modular intrusion detection system (IDS) was developed in Python. The system was tested in real time on an experimental platform based on Raspberry Pi, within a simulated client–server environment using the NFSv4 protocol over TCP/UDP. Several OCC models were compared, including One-Class SVM, Autoencoder, VAE, and Isolation Forest. The results showed strong performance in terms of detection accuracy and low latency, with the best outcomes achieved using the UNSW-NB15 dataset. The article concludes with a discussion of additional strategies to enhance the runtime analysis of these algorithms, offering insights into potential future applications and improvement directions. Full article
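A minimal version of the comparison above (one-class models trained on legitimate data only, then asked to flag unseen anomalies) using scikit-learn, with synthetic feature vectors in place of real network-traffic features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# Synthetic stand-in for "legitimate traffic only" training data: benign
# flow features form one cluster; attack flows appear only at test time.
rng = np.random.default_rng(7)
benign = rng.normal(0.0, 1.0, size=(500, 6))
attack = rng.normal(6.0, 1.0, size=(20, 6))

results = {}
for model in (OneClassSVM(nu=0.05, gamma="scale"),
              IsolationForest(contamination=0.05, random_state=0)):
    model.fit(benign)                            # no labeled attacks needed
    results[type(model).__name__] = (model.predict(attack) == -1).mean()

# both detectors flag the far-away attack cluster as anomalous (-1)
assert all(rate > 0.9 for rate in results.values())
```

This is the appeal of OCC for the proprietary-data problem the article raises: the detectors never see a labeled malicious sample during training.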

21 pages, 4101 KiB  
Article
A Physics-Informed Neural Network Solution for Rheological Modeling of Cement Slurries
by Huaixiao Yan, Jiannan Ding and Chengcheng Tao
Fluids 2025, 10(7), 184; https://doi.org/10.3390/fluids10070184 - 13 Jul 2025
Abstract
Understanding the rheological properties of fresh cement slurries is essential to maintain optimal pumpability, achieve dependable zonal isolation, and preserve long-term well integrity in oil and gas cementing operations and the 3D printing cement and concrete industry. However, accurately and efficiently modeling the rheological behavior of cement slurries remains challenging due to the complex fluid properties of fresh cement slurries, which exhibit non-Newtonian and thixotropic behavior. Traditional numerical solvers typically require mesh generation and intensive computation, making them less practical for data-scarce, high-dimensional problems. In this study, a physics-informed neural network (PINN)-based framework is developed to solve the governing equations of steady-state cement slurry flow in a tilted channel. The slurry is modeled as a non-Newtonian fluid with viscosity dependent on both the shear rate and particle volume fraction. The PINN-based approach incorporates physical laws into the loss function, offering mesh-free solutions with strong generalization ability. The results show that PINNs accurately capture the trend of velocity and volume fraction profiles under varying material and flow parameters. Compared to conventional solvers, the PINN solution offers a more efficient and flexible alternative for modeling complex rheological behavior in data-limited scenarios. These findings demonstrate the potential of PINNs as a robust tool for cement slurry rheological modeling, particularly in scenarios where traditional solvers are impractical. Future work will focus on enhancing model precision through hybrid learning strategies that incorporate labeled data, potentially enabling real-time predictive modeling for field applications. Full article
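The core PINN idea of minimising the PDE residual at collocation points, with no mesh, can be miniaturised with a polynomial basis and linear least squares. This toy steady-channel problem u''(y) = -G with u(0) = u(1) = 0 illustrates the mechanism only; the paper trains a neural network on a far richer non-Newtonian model:

```python
import numpy as np

# Physics-informed least squares for steady channel flow u''(y) = -G,
# u(0) = u(1) = 0, over a basis that already satisfies the boundary
# conditions: phi1 = y - y^2 and phi2 = y^2 - y^3.
G = 4.0
y = np.linspace(0, 1, 21)                       # collocation points
basis_dd = np.stack([-2 * np.ones_like(y),      # (y - y^2)''
                     2 - 6 * y])                # (y^2 - y^3)''

# minimise the PDE residual  sum_i (u''(y_i) + G)^2  over the coefficients
coef, *_ = np.linalg.lstsq(basis_dd.T, -G * np.ones_like(y), rcond=None)
u = coef[0] * (y - y**2) + coef[1] * (y**2 - y**3)

# analytic solution is u(y) = (G/2) y (1 - y); peak value G/8 at y = 0.5
assert abs(u[10] - G / 8) < 1e-8
```

Replacing the polynomial basis with a network and the least-squares solve with gradient descent on the residual loss gives the standard PINN formulation the abstract refers to.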
(This article belongs to the Special Issue Advances in Computational Mechanics of Non-Newtonian Fluids)

20 pages, 3147 KiB  
Article
Crossed Wavelet Convolution Network for Few-Shot Defect Detection of Industrial Chips
by Zonghai Sun, Yiyu Lin, Yan Li and Zihan Lin
Sensors 2025, 25(14), 4377; https://doi.org/10.3390/s25144377 - 13 Jul 2025
Abstract
In resistive polymer humidity sensors, the quality of the resistor chips directly affects performance. Detecting chip defects remains challenging due to the scarcity of defective samples, which limits traditional supervised-learning methods that require abundant labeled data. While few-shot learning (FSL) shows promise for industrial defect detection, existing approaches struggle with mixed-scene conditions (e.g., daytime and night-vision scenes). In this work, we propose a crossed wavelet convolution network (CWCN), comprising a dual-pipeline crossed wavelet convolution training framework (DPCWC) and a loss calculation module named ProSL. Our method innovatively applies wavelet transform convolution and prototype learning to industrial defect detection, effectively fusing feature information from multiple scenarios and improving detection performance. Experiments across various few-shot tasks on chip datasets illustrate the better detection quality of CWCN, with an improvement in mAP ranging from 2.76% to 16.43% over other FSL methods. In addition, experiments on the open-source NEU-DET dataset further validate the proposed method. Full article
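Prototype learning, as used above, classifies a query by the nearest class mean of a few support embeddings. A 2-way 5-shot NumPy sketch with synthetic "chip" embeddings, not the CWCN pipeline:

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Prototype learning: each class is represented by the mean of its
    few support embeddings."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def predict(query_x, protos):
    # queries take the label of the nearest prototype (squared distance)
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(8)
# 2-way 5-shot toy episode: "normal" vs "defect" chip embeddings
support = np.vstack([rng.normal(0, 0.3, (5, 4)), rng.normal(2, 0.3, (5, 4))])
labels = np.array([0] * 5 + [1] * 5)
query = np.vstack([rng.normal(0, 0.3, (20, 4)), rng.normal(2, 0.3, (20, 4))])

protos = prototypes(support, labels, 2)
acc = (predict(query, protos) == np.array([0] * 20 + [1] * 20)).mean()
assert acc == 1.0
```

With only five examples per class, a class mean is a far lower-variance classifier than a fitted decision boundary, which is why prototype-based heads dominate few-shot defect detection.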
(This article belongs to the Section Sensing and Imaging)
