Search Results (78)

Search Parameters:
Keywords = adaptive re-labeling

19 pages, 28897 KiB  
Article
MetaRes-DMT-AS: A Meta-Learning Approach for Few-Shot Fault Diagnosis in Elevator Systems
by Hongming Hu, Shengying Yang, Yulai Zhang, Jianfeng Wu, Liang He and Jingsheng Lei
Sensors 2025, 25(15), 4611; https://doi.org/10.3390/s25154611 - 25 Jul 2025
Viewed by 149
Abstract
Recent advancements in deep learning have spurred significant research interest in fault diagnosis for elevator systems. However, conventional approaches typically require substantial labeled datasets that are often impractical to obtain in real-world industrial environments. This limitation poses a fundamental challenge for developing robust diagnostic models capable of performing reliably under data-scarce conditions. To address this critical gap, we propose MetaRes-DMT-AS (Meta-ResNet with Dynamic Meta-Training and Adaptive Scheduling), a novel meta-learning framework for few-shot fault diagnosis. Our methodology employs Gramian Angular Fields to transform 1D raw sensor data into 2D image representations, followed by episodic task construction through stochastic sampling. During meta-training, the system acquires transferable prior knowledge through optimized parameter initialization, while an adaptive scheduling module dynamically configures support/query sets. Subsequent regularization via prototype networks ensures stable feature extraction. Comprehensive validation using the Case Western Reserve University bearing dataset and proprietary elevator acceleration data demonstrates the framework’s superiority: MetaRes-DMT-AS achieves state-of-the-art few-shot classification performance, surpassing benchmark models by 0.94–1.78% in overall accuracy. For critical few-shot fault categories—particularly emergency stops and severe vibrations—the method delivers significant accuracy improvements of 3–16% and 17–29%, respectively. Full article
(This article belongs to the Special Issue Signal Processing and Sensing Technologies for Fault Diagnosis)
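A note on the preprocessing step above: the Gramian Angular Field (GAF) encoding that turns 1D sensor traces into 2D images is a standard construction and can be sketched in a few lines of NumPy. This is a minimal illustration only; the window length, image resolution, and episodic sampling used in MetaRes-DMT-AS are not specified in the listing, so the values below are assumptions.

```python
import numpy as np

def gramian_angular_field(x, kind="summation"):
    """Convert a 1D signal into a 2D Gramian Angular Field image.

    kind="summation" gives GASF (cos(phi_i + phi_j));
    kind="difference" gives GADF (sin(phi_i - phi_j)).
    """
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so arccos is defined.
    x_min, x_max = x.min(), x.max()
    x_scaled = 2.0 * (x - x_min) / (x_max - x_min + 1e-12) - 1.0
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    phi = np.arccos(x_scaled)                       # angular encoding
    if kind == "summation":
        return np.cos(phi[:, None] + phi[None, :])  # GASF
    return np.sin(phi[:, None] - phi[None, :])      # GADF

# Example: a noisy vibration-like signal -> 128x128 image
signal = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * np.random.randn(128)
image = gramian_angular_field(signal)
print(image.shape)  # (128, 128)
```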

35 pages, 3157 KiB  
Article
Federated Unlearning Framework for Digital Twin–Based Aviation Health Monitoring Under Sensor Drift and Data Corruption
by Igor Kabashkin
Electronics 2025, 14(15), 2968; https://doi.org/10.3390/electronics14152968 - 24 Jul 2025
Viewed by 181
Abstract
Ensuring data integrity and adaptability in aircraft health monitoring (AHM) is vital for safety-critical aviation systems. Traditional digital twin (DT) and federated learning (FL) frameworks, while effective in enabling distributed, privacy-preserving fault detection, lack mechanisms to remove the influence of corrupted or adversarial data once these have been integrated into global models. This paper proposes a novel FL–DT–FU framework that combines digital twin-based subsystem modeling, federated learning for collaborative training, and federated unlearning (FU) to support the post hoc correction of compromised model contributions. The architecture enables real-time monitoring through local DTs, secure model aggregation via FL, and targeted rollback using gradient subtraction, re-aggregation, or constrained retraining. A comprehensive simulation environment is developed to assess the impact of sensor drift, label noise, and adversarial updates across a federated fleet of aircraft. The experimental results demonstrate that FU methods restore up to 95% of model accuracy degraded by data corruption, significantly reducing false negative rates in early fault detection. The proposed system further supports auditability through cryptographic logging, aligning with aviation regulatory standards. This study establishes federated unlearning as a critical enabler for resilient, correctable, and trustworthy AI in next-generation AHM systems. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
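The "targeted rollback using gradient subtraction" mentioned above can be illustrated with a toy FedAvg example: the corrupted client's weighted contribution is subtracted from the aggregate and the remainder is renormalized. This is a minimal sketch under the assumption of weighted-average aggregation; the paper's full FL–DT–FU pipeline (digital twins, re-aggregation, constrained retraining, cryptographic logging) is not reproduced here.

```python
import numpy as np

def fedavg(updates, weights):
    """Weighted average of client model updates (dicts of numpy arrays)."""
    total = sum(weights)
    return {k: sum(w * u[k] for w, u in zip(weights, updates)) / total
            for k in updates[0]}

def unlearn_client(global_update, updates, weights, bad_idx):
    """Remove one client's contribution from an aggregated update by
    subtracting its weighted term and renormalizing over the remaining clients."""
    total = sum(weights)
    kept = total - weights[bad_idx]
    return {k: (global_update[k] * total - weights[bad_idx] * updates[bad_idx][k]) / kept
            for k in global_update}

# Toy example with three clients, one of which sent a corrupted update
rng = np.random.default_rng(0)
updates = [{"w": rng.normal(size=4)} for _ in range(3)]
weights = [100, 80, 120]          # e.g., local sample counts
agg = fedavg(updates, weights)
clean = unlearn_client(agg, updates, weights, bad_idx=2)
print(np.allclose(clean["w"], fedavg(updates[:2], weights[:2])["w"]))  # True
```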

14 pages, 1509 KiB  
Article
A Multi-Modal Deep Learning Approach for Predicting Eligibility for Adaptive Radiation Therapy in Nasopharyngeal Carcinoma Patients
by Zhichun Li, Zihan Li, Sai Kit Lam, Xiang Wang, Peilin Wang, Liming Song, Francis Kar-Ho Lee, Celia Wai-Yi Yip, Jing Cai and Tian Li
Cancers 2025, 17(14), 2350; https://doi.org/10.3390/cancers17142350 - 15 Jul 2025
Viewed by 270
Abstract
Background: Adaptive radiation therapy (ART) can improve prognosis for nasopharyngeal carcinoma (NPC) patients. However, the inter-individual variability in anatomical changes, along with the resulting extension of treatment duration and increased workload for the radiologists, makes the selection of eligible patients a persistent challenge in clinical practice. The purpose of this study was to predict eligible ART candidates prior to radiation therapy (RT) for NPC patients using a classification neural network. By leveraging the fusion of medical imaging and clinical data, this method aimed to save time and resources in clinical workflows and improve treatment efficiency. Methods: We collected retrospective data from 305 NPC patients who received RT at Hong Kong Queen Elizabeth Hospital. Each patient sample included pre-treatment computed tomographic (CT) images, T1-weighted magnetic resonance imaging (MRI) data, and T2-weighted MRI images, along with clinical data. We developed and trained a novel multi-modal classification neural network that combines ResNet-50, cross-attention, multi-scale features, and clinical data for multi-modal fusion. The patients were categorized into two labels based on their re-plan status: patients who received ART during RT treatment, as determined by the radiation oncologist, and those who did not. Results: The experimental results demonstrated that the proposed multi-modal deep prediction model outperformed other commonly used deep learning networks, achieving an area under the curve (AUC) of 0.9070. These results indicated the ability of the model to accurately classify and predict ART eligibility for NPC patients. Conclusions: The proposed method showed good performance in predicting ART eligibility among NPC patients, highlighting its potential to enhance clinical decision-making, optimize treatment efficiency, and support more personalized cancer care. Full article
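As a rough picture of the multi-modal fusion described above, the sketch below combines a ResNet-50 image branch with a small MLP over tabular clinical data and concatenates the two for binary ART-eligibility classification. It is a much-simplified stand-in: the paper fuses three imaging modalities with cross-attention and multi-scale features, none of which are detailed in the listing, and the layer sizes here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ARTEligibilityNet(nn.Module):
    """Image branch (ResNet-50 features) fused with tabular clinical data
    for a binary ART-eligibility prediction."""
    def __init__(self, n_clinical, n_classes=2):
        super().__init__()
        backbone = resnet50(weights=None)       # load pretrained weights in practice
        backbone.fc = nn.Identity()             # expose the 2048-d pooled features
        self.image_encoder = backbone
        self.clinical_encoder = nn.Sequential(
            nn.Linear(n_clinical, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 64, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, n_classes))

    def forward(self, image, clinical):
        img_feat = self.image_encoder(image)         # (B, 2048)
        clin_feat = self.clinical_encoder(clinical)  # (B, 64)
        return self.classifier(torch.cat([img_feat, clin_feat], dim=1))

model = ARTEligibilityNet(n_clinical=10)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 2])
```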

25 pages, 1669 KiB  
Article
Zero-Shot Infrared Domain Adaptation for Pedestrian Re-Identification via Deep Learning
by Xu Zhang, Yinghui Liu, Liangchen Guo and Huadong Sun
Electronics 2025, 14(14), 2784; https://doi.org/10.3390/electronics14142784 - 10 Jul 2025
Viewed by 220
Abstract
In computer vision, the performance of detectors trained under optimal lighting conditions is significantly impaired when applied to infrared domains due to the scarcity of labeled infrared target domain data and the inherent degradation in infrared image quality. Progress in cross-domain pedestrian re-identification is likewise hindered by the lack of labeled infrared image data. To address the degradation of pedestrian recognition in infrared environments, we propose an integrated framework for zero-shot infrared domain adaptation. Specifically, an advanced reflectance representation learning module and an exchange–re-decomposition–coherence process are employed to learn illumination invariance and to enhance the model’s effectiveness, respectively. Additionally, the CLIP (Contrastive Language–Image Pretraining) image encoder and DINO (Distillation with No Labels) are fused for feature extraction, improving model performance under infrared conditions and enhancing its generalization capability. To further improve performance, we introduce the Non-Local Attention (NLA) module, the Instance-based Weighted Part Attention (IWPA) module, and the Multi-head Self-Attention module. The NLA module captures global feature dependencies, particularly long-range feature relationships, effectively mitigating issues such as blurred or missing image information in feature degradation scenarios. The IWPA module focuses on localized regions to enhance model accuracy in complex backgrounds and unevenly lit scenes. Meanwhile, the Multi-head Self-Attention module captures long-range dependencies between cross-modal features, further strengthening environmental understanding and scene modeling. The key innovation of this work lies in the combination and application of existing techniques to a new domain, overcoming the challenges posed by vision in infrared environments. Experimental results on the SYSU-MM01 dataset show Rank-1 Accuracy (Rank-1) and mean Average Precision (mAP) of 37.97% and 37.25% under the single-shot setting, and 34.96% and 34.14% under the multi-shot setting. Full article
(This article belongs to the Special Issue Deep Learning in Image Processing and Computer Vision)
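Of the attention modules listed above, the Non-Local Attention block is the most standard; a conventional embedded-Gaussian non-local block is sketched below. The paper's exact NLA, IWPA, and Multi-head Self-Attention configurations are not given in the listing, so the channel count and reduction factor are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalAttention(nn.Module):
    """Embedded-Gaussian non-local block: every spatial position attends to
    every other position, capturing long-range feature dependencies."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = F.softmax(q @ k, dim=-1)                # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

block = NonLocalAttention(channels=64)
print(block(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```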

30 pages, 5474 KiB  
Article
WHU-RS19 ABZSL: An Attribute-Based Dataset for Remote Sensing Image Understanding
by Mattia Balestra, Marina Paolanti and Roberto Pierdicca
Remote Sens. 2025, 17(14), 2384; https://doi.org/10.3390/rs17142384 - 10 Jul 2025
Viewed by 270
Abstract
The advancement of artificial intelligence (AI) in remote sensing (RS) increasingly depends on datasets that offer rich and structured supervision beyond traditional scene-level labels. Although existing benchmarks for aerial scene classification have facilitated progress in this area, their reliance on single-class annotations restricts their application to more flexible, interpretable and generalisable learning frameworks. In this study, we introduce WHU-RS19 ABZSL: an attribute-based extension of the widely adopted WHU-RS19 dataset. This new version comprises 1005 high-resolution aerial images across 19 scene categories, each annotated with a vector of 38 features. These cover objects (e.g., roads and trees), geometric patterns (e.g., lines and curves) and dominant colours (e.g., green and blue), and are defined through expert-guided annotation protocols. To demonstrate the value of the dataset, we conduct baseline experiments using deep learning models that had been adapted for multi-label classification—ResNet18, VGG16, InceptionV3, EfficientNet and ViT-B/16—designed to capture the semantic complexity characteristic of real-world aerial scenes. The results, which are measured in terms of macro F1-score, range from 0.7385 for ResNet18 to 0.7608 for EfficientNet-B0. In particular, EfficientNet-B0 and ViT-B/16 are the top performers in terms of the overall macro F1-score and consistency across attributes, while all models show a consistent decline in performance for infrequent or visually ambiguous categories. This confirms that it is feasible to accurately predict semantic attributes in complex scenes. By enriching a standard benchmark with detailed, image-level semantic supervision, WHU-RS19 ABZSL supports a variety of downstream applications, including multi-label classification, explainable AI, semantic retrieval, and attribute-based ZSL. It thus provides a reusable, compact resource for advancing the semantic understanding of remote sensing and multimodal AI. Full article
(This article belongs to the Special Issue Remote Sensing Datasets and 3D Visualization of Geospatial Big Data)
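The baseline setup described above, multi-label attribute prediction evaluated with macro F1, follows a familiar recipe: a CNN backbone with one sigmoid output per attribute trained with binary cross-entropy. A minimal sketch with ResNet18 and the dataset's 38 attributes is shown below; the training schedule and augmentations are not specified in the listing.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.metrics import f1_score

N_ATTRIBUTES = 38  # objects, geometric patterns, dominant colours

# Multi-label head: one independent sigmoid output per attribute.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, N_ATTRIBUTES)
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4, N_ATTRIBUTES)).float()

logits = model(images)
loss = criterion(logits, targets)
loss.backward()

# Macro F1 treats every attribute equally, which is why infrequent or
# visually ambiguous attributes drag the reported scores down.
preds = (torch.sigmoid(logits).detach() > 0.5).int().numpy()
print(f1_score(targets.int().numpy(), preds, average="macro", zero_division=0))
```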

15 pages, 20250 KiB  
Article
Transferring Face Recognition Techniques to Entomology: An ArcFace and ResNet Approach for Improving Dragonfly Classification
by Zhong Li, Shaoyan Pu, Jingsheng Lu, Ruibin Song, Haomiao Zhang, Xuemei Lu and Yanan Wang
Appl. Sci. 2025, 15(13), 7598; https://doi.org/10.3390/app15137598 - 7 Jul 2025
Viewed by 316
Abstract
Dragonfly classification is crucial for biodiversity conservation. Traditional taxonomic approaches require extensive training and experience, limiting their efficiency. Computer vision offers promising solutions for dragonfly taxonomy. In this study, we adapt the face recognition algorithms for the classification of dragonfly species, achieving efficient recognition of categories with extremely small differences between classes. Meanwhile, this method can also reclassify data that were incorrectly labeled. The model is mainly built based on the classic face recognition algorithm (ResNet50+ArcFace), and ResNet50 is used as the comparison algorithm for model performance. Three datasets with different inter-class data distributions were constructed based on two dragonfly image data sources: Data1, Data2 and Data3. Ultimately, our model achieved Top1 accuracy rates of 94.3%, 85.7%, and 90.2% on the three datasets, surpassing ResNet50 by 0.6, 1.5, and 1.6 percentage points, respectively. Under the confidence thresholds of 0.7, 0.8, 0.9, and 0.95, the Top1 accuracy rates on the three datasets were 96.0%, 97.4%, 98.7%, and 99.2%, respectively. In conclusion, our research provides a novel approach for species classification. Furthermore, it can calculate the similarity between classes while predicting categories, thereby offering the potential to provide technical support for biological research on the similarity between species. Full article
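The ResNet50+ArcFace recipe mentioned above hinges on the additive angular margin head; a standard ArcFace head is sketched below. The embedding dimension, class count, and scale/margin values are assumptions, as the listing does not give the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceHead(nn.Module):
    """Additive angular margin head: logits are s * cos(theta + m) for the
    true class and s * cos(theta) otherwise."""
    def __init__(self, in_features, n_classes, s=30.0, m=0.50):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(n_classes, in_features))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cos)
        return self.s * logits

head = ArcFaceHead(in_features=512, n_classes=120)   # e.g., 120 dragonfly species
emb = torch.randn(8, 512)
labels = torch.randint(0, 120, (8,))
loss = F.cross_entropy(head(emb, labels), labels)
print(loss.item())
```

Because the head produces calibrated cosine similarities, the same features can also be compared across classes, which is how similarity between species can be reported alongside the predicted category.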

33 pages, 3352 KiB  
Article
Optimization Strategy for Underwater Target Recognition Based on Multi-Domain Feature Fusion and Deep Learning
by Yanyang Lu, Lichao Ding, Ming Chen, Danping Shi, Guohao Xie, Yuxin Zhang, Hongyan Jiang and Zhe Chen
J. Mar. Sci. Eng. 2025, 13(7), 1311; https://doi.org/10.3390/jmse13071311 - 7 Jul 2025
Viewed by 364
Abstract
Underwater sonar target recognition is crucial in fields such as national defense, navigation, and environmental monitoring, but it is hampered by the complex characteristics of ship-radiated noise, imbalanced data distribution, non-stationarity, and the limitations of existing methods. This paper proposes the MultiFuseNet-AID network to address these challenges. The network comprises the TriFusion block module, a novel lightweight attention residual network (NLARN), a long- and short-term attention (LSTA) module, and the Mamba module. The TriFusion block processes the original, differential, and cumulative signals in parallel and fuses features such as MFCC, CQT, and Fbank to achieve deep multi-domain feature fusion, thereby enhancing signal representation. The NLARN is built on the ResNet architecture with an embedded SE attention mechanism; combined with the LSTA and Mamba modules, it captures long-sequence dependencies with O(N) complexity, enabling lightweight long-sequence modeling. Feature fusion, together with the layer normalization and residual connections of the Mamba module, further improves adaptability in complex scenarios with imbalanced data and strong noise. On the DeepShip and ShipsEar datasets, the model reached recognition rates of 98.39% and 99.77%, respectively, with substantially fewer parameters and floating-point operations than classical models, and it showed good stability and generalization across different sample label ratios. The results indicate that MultiFuseNet-AID effectively overcomes the bottlenecks of existing approaches, although adaptability to extreme underwater environments, training efficiency, and deployment on ultra-small devices remain open issues. It offers a new direction for underwater sonar target recognition. Full article
(This article belongs to the Section Ocean Engineering)
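The multi-domain front end described above (MFCC, CQT, and Fbank features) can be approximated with librosa as shown below, stacking the three feature maps as channels of a single input. This ignores the TriFusion handling of original/differential/cumulative signals, and the bin counts and frame length are assumptions.

```python
import numpy as np
import librosa

def multi_domain_features(y, sr=16000, n_frames=128):
    """Extract MFCC, CQT and log-Mel (Fbank) features from one waveform and
    stack them as channels of a single input tensor."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=64)
    cqt = np.abs(librosa.cqt(y=y, sr=sr, n_bins=64, bins_per_octave=12))
    fbank = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))

    feats = []
    for f in (mfcc, cqt, fbank):
        f = librosa.util.fix_length(f, size=n_frames, axis=1)  # same time length
        f = (f - f.mean()) / (f.std() + 1e-8)                  # per-feature z-score
        feats.append(f)
    return np.stack(feats)   # (3, 64, n_frames): one "image" per feature domain

y = np.random.randn(16000 * 2).astype(np.float32)   # 2 s of placeholder ship noise
print(multi_domain_features(y).shape)                # (3, 64, 128)
```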

43 pages, 6844 KiB  
Article
CORE-ReID V2: Advancing the Domain Adaptation for Object Re-Identification with Optimized Training and Ensemble Fusion
by Trinh Quoc Nguyen, Oky Dicky Ardiansyah Prima, Syahid Al Irfan, Hindriyanto Dwi Purnomo and Radius Tanone
AI Sens. 2025, 1(1), 4; https://doi.org/10.3390/aisens1010004 - 4 Jul 2025
Viewed by 547
Abstract
This study presents CORE-ReID V2, an enhanced framework built upon CORE-ReID V1. The new framework extends its predecessor by addressing unsupervised domain adaptation (UDA) challenges in person ReID and vehicle ReID, with further applicability to object ReID. During pre-training, CycleGAN is employed to synthesize diverse data, bridging image characteristic gaps across different domains. In the fine-tuning, an advanced ensemble fusion mechanism, consisting of the Efficient Channel Attention Block (ECAB) and the Simplified Efficient Channel Attention Block (SECAB), enhances both local and global feature representations while reducing ambiguity in pseudo-labels for target samples. Experimental results on widely used UDA person ReID and vehicle ReID datasets demonstrate that the proposed framework outperforms state-of-the-art methods, achieving top performance in mean average precision (mAP) and Rank-k Accuracy (Top-1, Top-5, Top-10). Moreover, the framework supports lightweight backbones such as ResNet18 and ResNet34, ensuring both scalability and efficiency. Our work not only pushes the boundaries of UDA-based object ReID but also provides a solid foundation for further research and advancements in this domain. Full article
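The Efficient Channel Attention Block referenced above is not specified in the listing; the sketch below shows a standard ECA-Net style channel attention (global pooling followed by a 1D convolution across channels) as a plausible stand-in, not the paper's actual ECAB/SECAB design.

```python
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    """ECA-style channel attention: global average pooling followed by a
    1D convolution across channels (no dimensionality reduction)."""
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # squeeze: (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # cross-channel interaction
        w = self.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # re-weight feature channels

eca = EfficientChannelAttention()
print(eca(torch.randn(2, 256, 16, 16)).shape)  # torch.Size([2, 256, 16, 16])
```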

24 pages, 1270 KiB  
Article
Addressing Industry Adaptation Resistance in Combating Brand Deception: AI-Powered Technology vs. Revenue Sharing
by Peng Liu
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 154; https://doi.org/10.3390/jtaer20030154 - 1 Jul 2025
Viewed by 332
Abstract
This paper studies a supply chain comprising a supplier, a third-party remanufacturer (TPR), and a retailer. The retailer sells both genuine and remanufactured products (i.e., Model O). Leveraging information advantages, the retailer may engage in brand deception by mislabeling remanufactured products as genuine to obtain extra profits (i.e., Model BD). AI-powered anti-counterfeiting technologies (AIT) (i.e., Model BA) and revenue-sharing contracts (i.e., Model C) are considered countermeasures. The findings reveal that (1) brand deception reduces (increases) sales of genuine (remanufactured) products, prompting the supplier (TPR) to lower (raise) wholesale prices. The asymmetric profit erosion effect highlights the gradual erosion of profits for the supplier, retailer, and TPR under brand deception. (2) The bi-interval adaptation effect indicates that AIT is particularly effective in industries with low adaptation resistance. When both the relabeling rate and industry adaptation resistance are low (high), Model BA (Model O) achieves a triple win. (3) Finally, when the industry adaptation resistance is low, AIT can significantly improve total profits, consumer surplus (CS), and social welfare (SW). Compared to Model BD, revenue-sharing offers slight advantages in CS but notable disadvantages in SW. Full article
(This article belongs to the Section e-Commerce Analytics)

19 pages, 4258 KiB  
Article
Detection and Geolocation of Peat Fires Using Thermal Infrared Cameras on Drones
by Temitope Sam-Odusina, Petrisly Perkasa, Carl Chalmers, Paul Fergus, Steven N. Longmore and Serge A. Wich
Drones 2025, 9(7), 459; https://doi.org/10.3390/drones9070459 - 25 Jun 2025
Viewed by 701
Abstract
Peat fires are a major hazard to human and animal health and can negatively impact livelihoods. Once peat fires start to burn, they are difficult to extinguish and can continue to burn for months, destroying biomass and contributing to carbon emissions globally. In areas with limited accessibility and periods of thick haze and fog, these fires are difficult to detect, localize, and tackle. To address this problem, thermal infrared cameras mounted on drones can provide a potential solution since they allow large areas to be surveyed relatively quickly and can detect thermal radiation from fires above and below the peat surface. This paper describes a deep learning pipeline that detects and segments peat fires in thermal images. Controlled peat fires were constructed under varying environmental conditions and thermal images were taken to form a dataset for our pipeline. A semi-automated approach was adopted to label images using Otsu’s adaptive thresholding technique, which significantly reduces the required effort often needed to tag objects in images. The proposed method uses a pre-trained ResNet-50 model as a backbone (encoder) for feature extraction and is augmented with a set of up-sampling layers and skip connections, like the UNet architecture. The experimental results show that the model can achieve an IOU score of 87.6% on an unseen test set of thermal images containing peat fires. In comparison, a MobileNetV2 model trained under the same experimental conditions achieved an IOU score of 57.9%. In addition, the model is robust to false positives, which is indicated by a precision equal to 94.2%. To demonstrate its practical utility, the model was also tested on real peat wildfires, and the results are promising, as indicated by a high IOU score of 90%. Finally, a geolocation algorithm is presented to identify the GNSS location of these fires once they are detected in an image to aid fire-fighting responses. The proposed scheme was built using a web-based platform that performs offline detection and allows peat fires to be geolocated. Full article
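The semi-automated labeling step described above can be sketched with scikit-image's Otsu threshold: hot pixels above a data-driven intensity threshold become the fire mask. This is a minimal illustration with synthetic values; the paper's pipeline also involves manual review and the ResNet-50/U-Net style segmentation model itself.

```python
import numpy as np
from skimage.filters import threshold_otsu

def fire_mask_from_thermal(thermal):
    """Generate a rough fire/no-fire segmentation label from a thermal frame
    using Otsu's threshold on pixel intensity."""
    t = threshold_otsu(thermal)          # data-driven intensity threshold
    mask = thermal > t                   # hot pixels above the threshold
    return mask.astype(np.uint8)

# Synthetic thermal frame: warm background plus one hot spot
frame = np.random.normal(90, 5, (240, 320))
frame[100:120, 150:180] += 80            # simulated peat-fire hotspot
label = fire_mask_from_thermal(frame)
print(label.sum(), label.shape)          # hot-spot pixel count, (240, 320)
```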

17 pages, 13673 KiB  
Article
Improving Doppler Radar Precipitation Prediction with Citizen Science Rain Gauges and Deep Learning
by Marshall Rosenhoover, John Rushing, John Beck, Kelsey White and Sara Graves
Sensors 2025, 25(12), 3719; https://doi.org/10.3390/s25123719 - 13 Jun 2025
Viewed by 491
Abstract
Accurate, real-time estimation of rainfall from Doppler radars remains a challenging problem, particularly over complex terrain where vertical beam sampling, atmospheric effects, and radar quality limitations introduce significant biases. In this work, we leverage citizen science rain gauge observations to develop a deep learning framework that corrects biases in radar-derived surface precipitation rates at high temporal resolution. A key step in our approach is the construction of piecewise-linear rainfall accumulation functions, which align gauge measurements with radar estimates and allow for the generation of high-quality instantaneous rain rate labels from rain gauge observations. After validating gauges through a two-stage temporal and spatial consistency filter, we train an adapted ResNet-101 model to classify rainfall intensity from sequences of surface precipitation rate estimates. Our model substantially improves precipitation classification accuracy relative to NOAA’s operational radar products within observed spatial regions, achieving large gains in precision, recall, and F1 score. While generalization to completely unseen regions remains more challenging, particularly for higher-intensity rainfall, modest improvements over baseline radar estimates are still observed in low-intensity rainfall. These results highlight how combining citizen science data with physically informed accumulation fitting and deep learning can meaningfully improve real-time radar-based rainfall estimation and support operational forecasting in complex environments. Full article
(This article belongs to the Section Radar Sensors)
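The piecewise-linear accumulation fitting described above amounts to interpolating cumulative gauge rainfall onto radar scan times and differentiating to obtain rain-rate labels. A minimal NumPy sketch follows; the gauge intervals and values are illustrative, and the two-stage gauge quality filter is omitted.

```python
import numpy as np

def instantaneous_rates(gauge_times_h, gauge_accum_mm, radar_times_h):
    """Fit a piecewise-linear accumulation curve through gauge readings and
    differentiate it onto the radar scan times to get rain-rate labels (mm/h)."""
    accum_at_radar = np.interp(radar_times_h, gauge_times_h, gauge_accum_mm)
    dt = np.diff(radar_times_h)                # hours between radar scans
    return np.diff(accum_at_radar) / dt        # slope of the accumulation curve

# Gauge reports cumulative rainfall every 15 min; radar scans every ~2 min.
gauge_times = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
gauge_accum = np.array([0.0, 1.2, 4.0, 4.5, 4.5])   # mm
radar_times = np.arange(0.0, 1.0001, 2 / 60)
print(instantaneous_rates(gauge_times, gauge_accum, radar_times).round(2))
```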

23 pages, 2937 KiB  
Article
Domain-Specific Knowledge Graph for Quality Engineering of Continuous Casting: Joint Extraction-Based Construction and Adversarial Training Enhanced Alignment
by Xiaojun Wu, Yue She, Xinyi Wang, Hao Lu and Qi Gao
Appl. Sci. 2025, 15(10), 5674; https://doi.org/10.3390/app15105674 - 19 May 2025
Cited by 1 | Viewed by 383
Abstract
The intelligent development of continuous casting quality engineering is an essential step for the efficient production of high-quality billets. However, many quality defects require strong expertise to handle. In order to reduce reliance on expert experience and improve the intelligent management of billet quality knowledge, we focus on constructing a Domain-Specific Knowledge Graph (DSKG) for the quality engineering of continuous casting. To jointly extract billet-quality-defect entities and relations, we propose a Self-Attention Partition and Recombination Model (SAPRM). SAPRM divides domain-specific sentences into three parts: entity-related, relation-related, and shared features, which serve the Named Entity Recognition (NER) and Relation Extraction (RE) tasks. Furthermore, for issues of entity ambiguity and repetition in triples, we propose a semi-supervised incremental learning method for knowledge alignment, leveraging adversarial training to enhance alignment performance. In the knowledge extraction experiments, our model achieved NER and RE precision of 86.7% and 79.48%, respectively; RE precision improved by 20.83% over the sequence-labeling baseline. Additionally, in the knowledge alignment part, the precision of our model reached 99.29%, representing a 1.42% improvement over baseline methods. Consequently, the proposed model with the partition mechanism can effectively extract domain knowledge, and the semi-supervised method can take advantage of unlabeled triples. Our method adapts to the domain's features and constructs a high-quality knowledge graph for the quality engineering of continuous casting, providing an efficient solution for billet defect issues. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
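The partition-and-recombination idea above, splitting a shared self-attention encoding into entity-related, relation-related, and shared features for the NER and RE heads, is sketched very loosely below. The encoder depth, tag and relation counts, and the sentence-level pooling for RE are all hypothetical; the listing does not describe SAPRM's actual recombination mechanism.

```python
import torch
import torch.nn as nn

class PartitionedJointExtractor(nn.Module):
    """Crude illustration of partitioning a shared self-attention encoding into
    entity-related, relation-related and shared feature slices, feeding a
    token-level NER head and a sentence-level RE head."""
    def __init__(self, d_model=96, n_ner_tags=9, n_relations=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.part = d_model // 3                      # size of each feature slice
        self.ner_head = nn.Linear(2 * self.part, n_ner_tags)
        self.re_head = nn.Linear(2 * self.part, n_relations)

    def forward(self, token_embeddings):              # (B, T, d_model)
        h = self.encoder(token_embeddings)
        ent, rel, shared = torch.split(h, self.part, dim=-1)
        ner_logits = self.ner_head(torch.cat([ent, shared], dim=-1))            # per token
        re_logits = self.re_head(torch.cat([rel, shared], dim=-1).mean(dim=1))  # per sentence
        return ner_logits, re_logits

model = PartitionedJointExtractor()
ner_logits, rel_logits = model(torch.randn(2, 20, 96))
print(ner_logits.shape, rel_logits.shape)   # torch.Size([2, 20, 9]) torch.Size([2, 12])
```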

23 pages, 4404 KiB  
Article
A Fault Diagnosis Framework for Pressurized Water Reactor Nuclear Power Plants Based on an Improved Deep Subdomain Adaptation Network
by Zhaohui Liu, Enhong Hu and Hua Liu
Energies 2025, 18(9), 2334; https://doi.org/10.3390/en18092334 - 3 May 2025
Viewed by 464
Abstract
Fault diagnosis in pressurized water reactor nuclear power plants faces the challenges of limited labeled data and severe class imbalance, particularly under Design Basis Accident (DBA) conditions. To address these issues, this study proposes a novel framework integrating three key stages: (1) feature selection via a signed directed graph to identify key parameters within datasets; (2) temporal feature encoding using Gramian Angular Difference Field (GADF) imaging; and (3) an improved Deep Subdomain Adaptation Network (DSAN) using weighted Focal Loss and confidence-based pseudo-label calibration. The improved DSAN uses the Hadamard product to achieve feature fusion of ResNet-50 outputs from multiple GADF images, and then aligns both global and class-wise subdomains. Experimental results show that, on the transfer task from the NPPAD source set to the PcTran-simulated AP-1000 target set across five DBA scenarios, the framework raises the overall accuracy from 72.5% to 80.5%, increases macro-F1 to 0.75 and AUC-ROC to 0.84, and improves average minority-class recall to 74.5%, outperforming the original DSAN and four baselines by explicitly prioritizing minority-class samples and mitigating pseudo-label noise. However, our evaluation is confined to simulated data, and validating the framework on actual plant operational logs will be addressed in future work. Full article
(This article belongs to the Section B4: Nuclear Energy)
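The weighted Focal Loss used in the improved DSAN is a standard component and can be sketched as below; the per-class weights and the focusing parameter gamma are illustrative, and the GADF feature fusion and subdomain alignment losses are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFocalLoss(nn.Module):
    """Focal loss with per-class weights: down-weights easy, well-classified
    samples and up-weights minority fault classes."""
    def __init__(self, class_weights, gamma=2.0):
        super().__init__()
        self.register_buffer("alpha", torch.as_tensor(class_weights, dtype=torch.float))
        self.gamma = gamma

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=-1)
        log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
        pt = log_pt.exp()
        alpha_t = self.alpha[targets]
        return (-alpha_t * (1 - pt) ** self.gamma * log_pt).mean()

# Five DBA scenarios; rarer classes get larger weights (values are illustrative).
criterion = WeightedFocalLoss(class_weights=[0.5, 1.0, 1.0, 2.0, 2.5])
logits = torch.randn(8, 5, requires_grad=True)
targets = torch.randint(0, 5, (8,))
loss = criterion(logits, targets)
loss.backward()
print(loss.item())
```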

20 pages, 2423 KiB  
Article
Symmetry-Guided Prototype Alignment and Entropy Consistency for Multi-Source Pedestrian ReID in Power Grids: A Domain Adaptation Framework
by Jia He, Lei Zhang, Xiaofeng Zhang, Tong Xu, Kejun Wang, Pengsheng Li and Xia Liu
Symmetry 2025, 17(5), 672; https://doi.org/10.3390/sym17050672 - 28 Apr 2025
Viewed by 407
Abstract
This study proposes a multi-source unsupervised domain adaptation framework for person re-identification (ReID), addressing cross-domain feature discrepancies and label scarcity in electric power field operations. Inspired by symmetry principles in feature space optimization, the framework integrates (1) a Reverse Attention-based Feature Fusion (RAFF) module aligning cross-domain features using symmetry-guided prototype interactions that enforce bidirectional style-invariant representations and (2) a Self-Correcting Pseudo-Label Loss (SCPL) dynamically adjusting confidence thresholds using entropy symmetry constraints to balance source-target domain knowledge transfer. Experiments demonstrate 92.1% rank-1 accuracy on power industry benchmarks, outperforming DDAG and MTL by 9.5%, with validation confirming robustness in operational deployments. The symmetric design principles significantly enhance model adaptability to inherent symmetry breaking caused by heterogeneous power grid environments. Full article
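The Self-Correcting Pseudo-Label Loss above is not described in detail in the listing; the sketch below shows only the generic ingredient of confidence-thresholded pseudo-label selection, with an entropy-based threshold adjustment that is purely an assumption rather than the paper's rule.

```python
import torch
import torch.nn.functional as F

def select_pseudo_labels(logits, base_threshold=0.8):
    """Keep only confident target-domain predictions as pseudo-labels, relaxing
    the threshold when the batch's average prediction entropy is low."""
    probs = F.softmax(logits, dim=-1)
    conf, labels = probs.max(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    max_entropy = torch.log(torch.tensor(float(probs.size(-1))))
    threshold = base_threshold - 0.2 * (1 - entropy / max_entropy)  # illustrative rule
    keep = conf >= threshold
    return labels[keep], keep

logits = torch.randn(16, 6) * 3          # unlabeled target-domain batch, 6 identities
pseudo, mask = select_pseudo_labels(logits)
print(pseudo.shape, mask.float().mean().item())
```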

17 pages, 1801 KiB  
Article
Addressing Asymmetry in Contrastive Learning: LLM-Driven Sentence Embeddings with Ranking and Label Smoothing
by Yan Huang, Shaoben Zhu, Wei Liu, Jiayi Wang and Xinheng Wei
Symmetry 2025, 17(5), 646; https://doi.org/10.3390/sym17050646 - 25 Apr 2025
Viewed by 909
Abstract
Unsupervised sentence embedding, vital for numerous NLP tasks, struggles with the inherent asymmetry of semantic relationships within contrastive learning (CL). This paper proposes Label Smoothing-based Ranking Negative Sampling (LS-RNS), a novel framework that directly tackles the semantic asymmetry between anchor and negative samples in CL. LS-RNS utilizes a Large Language Model (LLM) to assess fine-grained asymmetric similarity scores between sentences, constructing a ranking-aware negative sampling strategy combined with adaptive label smoothing. This design encourages the model to learn more effectively from informative negatives that are semantically closer to the anchor, leading to asymmetry-aware sentence embeddings. Experiments on standard Semantic Textual Similarity (STS) benchmarks (STS12–STS16, STS-B, SICK-R) show that LS-RNS achieves state-of-the-art performance. We adopt Spearman’s rank correlation coefficient as the primary evaluation metric for semantic similarity tasks, and we use classification accuracy for downstream and transfer tasks. LS-RNS achieves 79.87 on STS tasks with BERT-base (vs. 76.25 for SimCSE, +3.62) and 80.41 with RoBERTa-base (vs. 79.18 for DiffCSE). On transfer tasks, it attains 88.82 (BERT) and 87.68 (RoBERTa), consistently outperforming PromptBERT and SNCSE. On STL-10, LS-RNS improves SimCLR top-one accuracy from 79.50% to 80.52% with ResNet-18 and from 68.91% to 72.19% with VGG-16, even enabling a shallow ResNet-18 to surpass a deeper ResNet-34 baseline. These results confirm the modality-agnostic effectiveness of LS-RNS and its potential to redefine contrastive learning objectives by modeling semantic asymmetry, rather than relying solely on encoder depth or pre-training objectives. Full article
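The core of LS-RNS, as summarized above, is a contrastive objective whose target distribution is smoothed over ranked negatives rather than being one-hot. A minimal sketch is given below; the temperature, smoothing weight, and the mapping from LLM similarity scores to smoothing weights are assumptions.

```python
import torch
import torch.nn.functional as F

def label_smoothed_infonce(anchor, positive, negatives, neg_sim_scores,
                           temperature=0.05, eps=0.1):
    """Contrastive loss whose target distribution spreads eps of the mass over
    negatives in proportion to (LLM-provided) similarity scores, instead of a
    hard one-hot target on the positive."""
    cand = torch.cat([positive.unsqueeze(1), negatives], dim=1)        # (B, 1+K, D)
    logits = F.cosine_similarity(anchor.unsqueeze(1), cand, dim=-1) / temperature

    neg_weights = F.softmax(neg_sim_scores, dim=-1)                    # rank-aware smoothing
    targets = torch.cat([torch.full_like(neg_weights[:, :1], 1 - eps),
                         eps * neg_weights], dim=1)                    # (B, 1+K)
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=1).mean()

B, K, D = 4, 8, 768                       # batch, negatives per anchor, embedding dim
anchor, positive = torch.randn(B, D), torch.randn(B, D)
negatives = torch.randn(B, K, D)
neg_sim_scores = torch.rand(B, K)         # stand-in for LLM similarity judgements
print(label_smoothed_infonce(anchor, positive, negatives, neg_sim_scores).item())
```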
