Article

Client-Attentive Personalized Federated Learning for AR-Assisted Information Push in Power Emergency Maintenance

1 Information & Communication Company of State Grid Ningxia Electric Power Co., Ltd., Yinchuan 750001, China
2 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Information 2025, 16(12), 1097; https://doi.org/10.3390/info16121097
Submission received: 5 November 2025 / Revised: 29 November 2025 / Accepted: 3 December 2025 / Published: 11 December 2025

Abstract

The integration of AI into power emergency maintenance faces a critical dilemma: centralized training compromises privacy, while standard Federated Learning (FL) struggles with the statistical heterogeneity (Non-IID) of industrial data. Traditional aggregation algorithms (e.g., FedAvg) treat clients solely based on sample size, failing to distinguish between critical fault data and redundant normal operational data. To address this theoretical gap, this paper proposes a Client-Attentive Personalized Federated Learning (PFAA) framework. Unlike conventional approaches, PFAA introduces a semantic-aware attention mechanism driven by “Device Health Fingerprints.” This mechanism dynamically quantifies the contribution of each client not just by data volume, but by the quality and physical relevance of their model updates relative to the global optimization objective. We implement this algorithm within a collaborative cloud-edge-end architecture to enable privacy-preserving, AR-assisted fault diagnosis. Extensive simulations demonstrate that PFAA effectively mitigates model divergence caused by data heterogeneity, achieving superior convergence speed and decision accuracy compared to rule-based and standard FL baselines.

1. Introduction

As power systems modernize, the application of Augmented Reality (AR) in the maintenance of power systems is increasingly vital, particularly in emergency scenarios that demand real-time, accurate guidance [1]. Traditional maintenance, relying on manuals and experience, struggles with the complexities of modern systems, leading to information lag, inaccuracies, and security vulnerabilities [2]. This deficiency stems primarily from the static nature of physical manuals, which are cumbersome to query during time-sensitive repairs, and the reliance on subjective human experience, which often fails to adapt to the evolving complexities and non-standardized fault patterns of modern smart grids. Integrating advanced Artificial Intelligence (AI) into these systems is pivotal for modernizing grid operations; however, traditional centralized AI models demand the aggregation of vast, sensitive data, creating significant privacy and security risks [3,4,5].
To address these privacy concerns, Federated Learning (FL) has emerged as a promising paradigm, enabling collaborative training without sharing raw data [4]. However, applying distributed intelligence to power emergency maintenance faces a fundamental theoretical bottleneck: statistical heterogeneity (Non-IID data). In real-world grids, fault samples are extremely sparse and unevenly distributed compared to abundant normal operation data. Standard FL algorithms, such as FedAvg [6], employ a static aggregation strategy based solely on dataset size. This creates a “dominance of the majority” effect, where the global model overfits to clients with massive amounts of normal data while ignoring clients holding critical but small-sample fault patterns. Existing AR maintenance systems often overlook this contextual imbalance, resulting in models that fail to capture rare but fatal equipment failures [7]. Therefore, a critical scientific gap exists: how can an aggregation mechanism be constructed that autonomously identifies and prioritizes clients holding high-value fault knowledge, without accessing their raw private data?
To bridge this gap, this paper proposes a novel framework centered around Client-Attentive Personalized Federated Learning (PFAA). Unlike traditional approaches, our system introduces a semantic-aware attention mechanism that leverages “Device Health Fingerprints” to dynamically weigh client contributions based on their physical relevance rather than just data volume. We integrate this algorithm into a “cloud-edge-end” architecture, ensuring that the privacy-preserving global model can be translated into precise, spatially-aware visual guidance for on-site personnel.
The main contributions of our work are summarized as follows:
  • Algorithmic Contribution: We propose the Client-Attentive Personalized Federated Learning (PFAA) algorithm tailored for highly heterogeneous industrial environments. By designing a novel Attention Network, PFAA moves beyond sample-size-based aggregation. It dynamically calculates attention scores by analyzing the correlation between local updates and the global optimization direction, theoretically reducing the weight divergence caused by Non-IID data distributions.
  • Feature Representation Innovation: We introduce “Device Health Fingerprints” as a privacy-preserving semantic embedding. Unlike traditional FL which only transmits gradients, our method utilizes these lightweight fingerprints as auxiliary metadata to guide the server’s attention mechanism. This ensures that clients capturing rare fault patterns are assigned higher aggregation weights, improving the global model’s sensitivity to critical failures.
  • System Integration & Validation: We implement a complete “cloud-edge-end” collaborative framework specifically for AR-assisted information push. This architecture validates the PFAA algorithm in a realistic industrial workflow, demonstrating significant improvements in task completion rates and decision adoption compared to rule-based and standard FL baselines.
The remainder of this paper is organized as follows: Section 2 reviews related work in personalized federated learning and AR-assisted industrial applications. Section 3 details our proposed framework, including the system architecture and the core PFAA algorithm. Section 4 presents the experimental setup, results, and a comprehensive analysis. Finally, Section 5 concludes the paper and discusses promising directions for future work.

2. Related Work

2.1. Applications of AR in Industrial Maintenance

AR technology is significantly transforming numerous industrial sectors by merging digital information with the physical world, especially amidst the wave of industrial digitalization. Its core function in industrial maintenance lies in enhancing user perception, simplifying task execution processes, and improving spatial awareness by overlaying critical data, operational instructions, or virtual objects onto the user’s view of the real environment [7].
AR technology demonstrates multifaceted key benefits in industrial maintenance and aims to address challenges commonly faced by the industry. Research indicates that AR systems can effectively improve work efficiency, reduce operational errors, and enhance the safety of maintenance tasks. For instance, the Fast Augmented Reality Authoring (FARA) method reduced task time by an average of 34.7% and the error rate by 68.6% in tests with over 30 participants, compared to traditional paper manuals [8]. AR technology is dedicated to solving common difficulties in industrial practice, such as field workers struggling to access sufficient information, inadequate or improper worker training, a disconnect between planned solutions and actual applications, and poor communication among relevant personnel during maintenance [9]. A significant application of AR is providing support to inexperienced workers, enabling them to perform complex maintenance tasks through guided instructions, remote expert assistance, and interactive 3D models [1]. This application mode, often referred to as “remote assistance” or “collaborative maintenance,” connects on-site technicians with remote experts and is particularly valuable for non-standardized or complex repair scenarios [7]. Furthermore, AR technology is especially beneficial for work tasks involving significant manual labor, which are very common in maintenance and assembly operations [8].
Despite these advances, most existing AR systems rely on centralized data processing, creating a bottleneck for real-time analysis of sensitive data and failing to deliver truly personalized, data-driven guidance. This gap motivates our integration of decentralized learning with AR.

2.2. Federated Learning and Its Applications in Industrial Fields

Federated Learning (FL), as an emerging machine learning paradigm, is centered on distributed model training while protecting data privacy.
The most fundamental principle of FL is Data Localization and Decentralization. This means that raw training data remains on local devices (clients) or within local institutional boundaries (e.g., different power plants, different power companies) and is never transmitted to a central server or directly shared with other clients [4]. This decentralized data storage method is key to its privacy-preserving characteristics [10]. Although data is decentralized, multiple clients can collaboratively train a shared global machine learning model through Collaborative Model Training.
Zhang et al. [11] proposed a blockchain-based FL method to address data privacy issues in equipment fault detection in the Industrial Internet of Things (IIoT). They designed the Centroid Distance Weighted Federated Averaging (CDW_FedAvg) algorithm to handle data heterogeneity and utilized blockchain and smart contracts to ensure data integrity and provide incentives. In smart building environments, FL has also been used for anomaly detection, by training models to identify anomalies related to equipment faults, which is similar in principle to fault diagnosis for industrial equipment [12]. Bukhari et al. [13] proposed an asynchronous FL method to enhance cybersecurity in Edge Industrial IoT (Edge IIoT) networks. They introduced an advanced deep hybrid learning model that combines convolutional neural networks (CNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) networks, with the objective of effectively detecting cyberattacks and ensuring data privacy.
Despite its advantages, standard FL, particularly the Federated Averaging (FedAvg) algorithm, often struggles with statistical heterogeneity (Non-IID data) common in real-world industrial settings. In the context of power maintenance, this implies that data collected by different terminals is not independently and identically distributed; for instance, some devices may record frequent specific faults (e.g., overheating) due to their location or load, while others record mostly normal data, creating highly skewed distributions that degrade the performance of standard aggregation algorithms. In FedAvg, client updates are weighted solely in proportion to their data sample size, which can degrade the global model’s performance when client data distributions vary significantly [14]. This highlights the need for a more sophisticated aggregation mechanism. To address this challenge, our proposed PFAA algorithm intelligently assesses and weighs client updates.

2.3. Power Maintenance Information Push Technology

Historically, information push methods in the field of power maintenance have evolved from manual to preliminary computerization, but these traditional methods have proven insufficient in dealing with the complexity of modern power systems. In the early stages of power maintenance, information transfer and distribution largely relied on manual methods. Paper work orders (often accompanied by drawings), operation logs, and direct verbal communication between personnel were the primary communication channels [15]. The advancement of information and communication technology (ICT) gradually began to change the way information was disseminated within institutions such as internal information repositories of large power enterprises [15].
With the development of information technology, information push methods in the field of power maintenance have gradually evolved from traditional modes towards more intelligent directions, where rule-based and data mining methods play an important role.
Rule-based information push systems operate on “IF-THEN” logic, with rules derived from either expert knowledge or learned from data through machine learning. Their primary advantage, especially in critical sectors like power, is their inherent transparency and interpretability, which builds trust in high-risk environments. These systems are applied in power maintenance for tasks like alerting, decision support, cybersecurity, and predictive maintenance by learning from historical data [16]. However, they face challenges: knowledge-driven rules are difficult to maintain in dynamic settings, while data-driven rules are sensitive to data quality and concept drift, and complex rules may lack transparency [17]. Razgon et al. [18] proposed a novel rule learning method called “Relaxed Separate-and-Conquer” (RSC) as an improvement to the standard Separate-and-Conquer (SeCo) method, aiming to address the fragmentation problem in predictive maintenance (PdM). This method, by not requiring the removal of covered rows but instead requiring new rules to cover at least one uncovered instance, demonstrated higher accuracy and more concise rule sets than decision trees (DT) and SeCo learners in real industrial case studies.

3. Information Push Method

To address the complexities of emergency power maintenance, this paper introduces an innovative method combining AR and FL for accurate and secure information push. This section details the method’s core components: a “cloud-edge-end” architecture for data processing and AR interaction (as shown in Figure 1), and the PFAA algorithm. PFAA enhances the federated model, using device health fingerprints and AR to deliver precise maintenance guidance.

3.1. System Architecture

This section describes in detail the “cloud-edge-end” three-layer collaborative system architecture designed for the precise push of power emergency maintenance information, as shown in Figure 1. By clarifying the functional positioning and collaboration mechanism of each layer, this architecture aims to fully utilize the computing, storage, and communication capabilities of each layer to achieve efficient data processing, effective protection of user privacy, and precise push of maintenance information.

3.1.1. AR Terminal Layer

The AR terminal layer acts as the system’s front-end, using smart wearables like glasses or tablets for direct interaction with maintenance personnel. As illustrated in Figure 2, the AR interface overlays critical information onto the physical equipment. For example, during a fault diagnosis task, it can highlight suspected components, display real-time sensor readings (e.g., temperature, voltage), and present step-by-step procedural guidance with 3D arrows and text instructions. This layer integrates several functional modules: sensors collect real-time equipment and environmental data, while a lightweight local unit processes the data and performs an initial screening of health fingerprints. Data security is handled by an encryption module. A SLAM module provides high-precision positioning, which is essential for accurately anchoring AR information. The AR rendering engine then superimposes cloud-pushed decisions and guidance onto the real scene with low latency. Finally, a multi-modal module processes user interactions like gestures and voice. This layer is key to situational awareness and guidance, prioritizing real-time response and user experience.

3.1.2. Edge Layer

The edge layer, an optional intermediary deployed near the site, connects AR terminals and the cloud. It preprocesses and aggregates data like health fingerprints from AR devices to reduce cloud upload volume and network pressure. Leveraging its computing power, it runs local fault diagnosis models for rapid analysis and anomaly detection, enabling quick responses and reducing cloud load. The edge layer also caches models for local inference, crucial for poor network conditions or low-latency needs. Functioning as a data relay and security gateway, it manages data transfer and access control. This layer significantly enhances system processing and response speed, making it ideal for emergencies with limited bandwidth or high real-time requirements.

3.1.3. Cloud Layer

The cloud server layer is the system’s intelligent core, handling centralized data storage, complex analysis, global model management, and decision-making. A critical component is the PFAA algorithm. The key modules within this layer are: the FL Server, which uses PFAA to process encrypted client updates, optimize the global fault diagnosis model, and distribute personalized updates; the Cloud Knowledge Base and Historical Case Library, which manage vast device data, fault cases, and maintenance resources; the Anomaly Detection and Decision-making Module, which uses health fingerprints and the PFAA model for precise diagnosis, urgency assessment, and generation of optimal maintenance solutions; a Model Management Module for versioning and secure distribution; and a Data Storage and Analysis Module for system optimization and risk prediction.

3.1.4. Inter-Layer Coordination and Security Mechanism

The effectiveness of the “Cloud-Edge-Device” architecture hinges on tight collaboration and secure data transmission. An information control loop is formed as AR terminals handle sensing and interaction, the edge layer provides rapid preprocessing, and the cloud conducts deep analysis and model training. To ensure security, the system uses end-to-end encryption, privatized health fingerprint generation, and access controls, with FL further protecting privacy by exchanging model parameters instead of raw data. This integrated architecture leverages AR, edge, and cloud capabilities to offer robust, flexible, and secure support for power emergency maintenance.

3.2. Client-Attentive Personalized Federated Learning

This section will provide a detailed introduction to the PFAA Algorithm proposed in this paper, which is key to achieving efficient and accurate FL. We will first elaborate on the overall description of the PFAA algorithm, and then detail the PFAA-based intelligent fault diagnosis and decision-making process.

3.2.1. Algorithm Description

At the core of our federated learning scheme is the PFAA algorithm. Unlike traditional FedAvg [6], which treats all client updates equally (weighted only by sample size), PFAA introduces an intelligent aggregation mechanism at the server level. This mechanism dynamically evaluates and weights each client’s contribution, aiming to mitigate the negative impact of Non-IID data and accelerate model convergence. The server’s role in the PFAA process involves the following key steps:
First, the server receives and preprocesses client updates. These updates include encrypted local model parameters (gradients) $w_k^t$ and potentially auxiliary metadata $\text{meta}_k^t$ from edge layers, such as health fingerprint statistics $F_{\text{health}}$. Preprocessing involves decryption and validation.
Secondly, the core innovation of PFAA lies in the dynamic calculation of client-level attention weights. To achieve this, the server employs a small, lightweight Attention Network. It is crucial to distinguish this from conventional attention mechanisms that operate on input data features within a model. Here, the Attention Network’s purpose is to learn the importance of each client’s entire model update in the context of global model optimization. It analyzes the relevance of each client’s uploaded model update to the current global model $M_{\text{global}}^t$ (with parameters $w_{\text{global}}^t$), assessing its potential contribution to improving the global model’s performance, or determining the uniqueness and novelty of the data it represents. The server first constructs a feature vector $f_k^t$ for each client $k$ in round $t$, which integrates the client’s update information, metadata, and the global model state:
$f_k^t = \Phi\left(w_k^t, w_{\text{global}}^t, \text{meta}_k^t\right)$
where $\Phi$ represents the feature extraction process, which may include calculating the difference between the client model and the global model ($w_k^t - w_{\text{global}}^t$), the cosine similarity of parameters, or processing metadata.
Then, the attention network $A_{\theta_{\text{att}}}$ can be a parameterized function, such as a small Multilayer Perceptron (MLP), whose parameters $\theta_{\text{att}}$ can be optimized through meta-learning or other mechanisms, aiming to learn how to effectively evaluate and distinguish the value of different client updates. The attention score $\alpha_k^t$ is a positive value that dynamically reflects the relative importance of client $k$’s update for optimizing the global model, as perceived by the server in the current FL round. For example, client updates that are more aligned with the global model’s optimization objective, or that provide more diverse or higher-quality data information, might be assigned higher attention weights.
Next, the server performs attention-weighted aggregation. It utilizes the attention scores $\alpha_k^t$ calculated in the previous step to perform weighted aggregation of each client’s model updates, thereby updating the global fault diagnosis model and obtaining new global model parameters $w_{\text{global}}^{t+1}$. This aggregation process can be represented by the following formula:
$w_{\text{global}}^{t+1} = \sum_{k=1}^{K} \frac{\alpha_k^t \cdot n_k^t}{\sum_{j=1}^{K} \alpha_j^t \cdot n_j^t}\, w_k^t$
where $K$ is the total number of clients participating in this training round, and $n_k^t$ represents the number of data samples used by client $k$ in this training round. This attention-based weighting mechanism ensures that high-quality, highly relevant client updates have a greater impact on the global model, thereby effectively improving the model’s generalization ability and convergence speed in complex and heterogeneous data environments. This is particularly crucial for addressing the Non-IID data problems common in power maintenance, helping to mitigate model performance degradation caused by data heterogeneity.
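To make the server-side computation concrete, the snippet below gives a minimal PyTorch-style sketch of the feature extraction $\Phi$, the attention network, and the attention-weighted aggregation formula above. The specific features (cosine similarity to the global parameters, update magnitude, one fingerprint statistic), the two-layer MLP, and all names in the code are simplifying assumptions for illustration, not the authors’ exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionNet(nn.Module):
    """Small MLP A_theta_att mapping each client feature vector f_k^t to a positive score."""
    def __init__(self, feat_dim: int = 3, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats):                          # feats: (K, feat_dim)
        return F.softplus(self.net(feats)).squeeze(-1)  # alpha_k^t > 0

def client_features(w_clients, w_global, meta):
    """Phi: simplified per-client features (cosine similarity to the global
    parameters, update magnitude, and one health-fingerprint statistic)."""
    rows = []
    for w_k, m_k in zip(w_clients, meta):
        delta = w_k - w_global                          # w_k^t - w_global^t
        cos = F.cosine_similarity(w_k, w_global, dim=0)
        rows.append(torch.stack([cos, delta.norm(), torch.tensor(float(m_k))]))
    return torch.stack(rows)                            # (K, 3)

def pfaa_aggregate(w_clients, w_global, meta, n_samples, att_net):
    """w_global^{t+1} = sum_k [alpha_k^t * n_k^t / sum_j alpha_j^t * n_j^t] * w_k^t."""
    feats = client_features(w_clients, w_global, meta)
    alpha = att_net(feats)                              # attention scores
    n = torch.tensor(n_samples, dtype=torch.float32)
    weights = (alpha * n) / (alpha * n).sum()           # normalized aggregation weights
    return (weights.unsqueeze(1) * torch.stack(w_clients)).sum(dim=0)

# Toy usage: 3 clients with flattened 8-dimensional model parameters
w_global = torch.zeros(8)
w_clients = [w_global + 0.1 * torch.randn(8) for _ in range(3)]
meta = [0.0, 0.3, 0.9]        # e.g., local fault rate derived from health fingerprints
new_global = pfaa_aggregate(w_clients, w_global, meta, [500, 120, 40], AttentionNet())
```

As in the aggregation formula, each attention score acts only through the normalized product $\alpha_k^t \cdot n_k^t$, so a client holding few but distinctive fault samples can still receive a large effective weight.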
Finally, the server is responsible for global model distribution and personalized support. The updated global model $M_{\text{global}}^{t+1}$ (with parameters $w_{\text{global}}^{t+1}$) first undergoes compression and is then distributed back to the edge layers through a secure communication channel. These distributed models serve as the base models for the next round of local training. Furthermore, the server can leverage client-specific information or the distribution of attention weights $\alpha_k^t$ obtained during attention analysis to provide personalized model adjustment suggestions for specific types of client groups, or even individual clients. This can be represented as a personalization function $P$, which generates a personalized model $M_{\text{pers},k}^{t+1}$ based on the new global model, the client’s attention score, and its unique profile information $\text{profile}_k$:
$M_{\text{pers},k}^{t+1} = P\left(M_{\text{global}}^{t+1}, \alpha_k^t, \text{profile}_k;\ \theta_P\right)$
where $\theta_P$ denotes the parameters of the personalization function. This personalized support helps terminal devices perform more effective local model fine-tuning, ultimately achieving more accurate information push and AR guidance that better meets actual maintenance needs.
Through the PFAA algorithm, the system not only protects the data privacy of all participating parties under the FL framework, but also intelligently integrates knowledge from different maintenance terminals and scenarios and dynamically optimizes the learning process of the global model. This enhances overall performance and robustness in complex and dynamic power emergency maintenance environments, providing stronger backend model support for accurate and efficient information push.

3.2.2. Intelligent Fault Diagnosis and Decision-Making Based on PFAA

Upon receiving device health fingerprints uploaded from edge nodes, the cloud server initiates an intelligent fault diagnosis and decision-making process based on the PFAA-optimized global model $M_{\text{global}}$. The core of this process is a three-level verification mechanism to ensure the accuracy of the diagnosis and the rationality of the decisions. Firstly, the system performs a historical case comparison, matching the current device’s health fingerprint $F_{\text{health}}$ against the health fingerprints of historical fault cases stored in the cloud knowledge base. It employs the cosine similarity formula to quantify the degree of similarity, thereby rapidly identifying known and similar fault patterns:
$\text{Similarity} = \frac{\sum_{i=1}^{128} A_i \times B_i}{\sqrt{\sum_{i=1}^{128} A_i^2} \times \sqrt{\sum_{i=1}^{128} B_i^2}}$
where $A$ and $B$ denote the 128-dimensional health fingerprint vectors of the current device and a historical case, respectively.
Next, context adaptation verification is performed. This step comprehensively considers current on-site environmental data $E_{\text{env}}$, such as temperature, humidity, and weather conditions, as well as the topological relationship of the faulty equipment within the entire power system and possible associated alarm information. Through this contextual information, the system can correct preliminary diagnostic results, eliminate potential false positives caused by environmental factors or system interdependencies, or further refine the root cause of the fault. Finally, the system utilizes the global fault diagnosis model $M_{\text{global}}$, trained and continuously optimized by the PFAA algorithm, to perform deep inference on the health fingerprints. Drawing on a wider range of data and more complex pattern recognition capabilities, the model conducts an in-depth analysis of the health fingerprints and outputs more precise diagnostic results $R_{\text{global}}$, including fault type, possible causes, development trends, and severity.
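As an illustration of the first verification level, the sketch below matches a 128-dimensional health fingerprint against a historical case library using the cosine similarity above; the case-library layout and all names are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity = sum(A_i * B_i) / (sqrt(sum A_i^2) * sqrt(sum B_i^2))."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_historical_cases(fingerprint, case_library, top_k=3):
    """Rank historical fault cases by similarity to the current 128-d fingerprint.
    case_library: list of (case_id, fingerprint_vector, fault_label) tuples,
    a hypothetical layout of the cloud knowledge base."""
    scored = [(cid, label, cosine_similarity(fingerprint, vec))
              for cid, vec, label in case_library]
    return sorted(scored, key=lambda x: x[2], reverse=True)[:top_k]

# Example: current fingerprint vs. a toy two-case library
fp = np.random.rand(128)
library = [("case-017", np.random.rand(128), "winding overheating"),
           ("case-042", np.random.rand(128), "partial discharge")]
print(match_historical_cases(fp, library, top_k=1))
```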
After obtaining accurate diagnostic results, the system performs a fault urgency assessment. This assessment comprehensively considers the potential impact of the diagnosed fault on the equipment itself and the entire power system ($S_{\text{impact}}$), the current actual load conditions of the equipment ($S_{\text{load}}$), and the influence of on-site environmental factors ($S_{\text{env}}$) on the fault’s development speed and the difficulty of its resolution. The urgency level is dynamically calculated using the following grading formula:
$\text{Urgency} = w_1 S_{\text{impact}} + w_2 S_{\text{load}} + w_3 S_{\text{env}}$
where $w_1$, $w_2$, and $w_3$ are preset or dynamically adjusted weight coefficients used to compute the comprehensive urgency level of the fault. Finally, based on the fault diagnosis results ($R_{\text{global}}$), the assessed fault urgency ($\text{Urgency}$), the Standard Operating Procedures (SOPs) and historical successful resolution cases stored in the cloud knowledge base, as well as the skill level of maintenance personnel ($D_{\text{skill}}$) and the actual adaptability of the on-site environment ($S_{\text{env}}$), the system generates optimal maintenance decision information ($\text{Info}_{\text{decision}}$) and detailed resolution plans ($\text{Plan}_{\text{action}}$). To ensure the quality and applicability of the recommended solutions, the system adopts the following scoring model:
$\text{Score} = c_1 R_{\text{history}} + c_2 E_{\text{env}} + c_3 \left(1 - D_{\text{skill}}\right)$
where $c_1$, $c_2$, and $c_3$ are weighting coefficients. Using this score, the system ranks the candidate handling solutions and pushes the one with the highest score.
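The urgency and scoring formulas can be combined into a simple ranking routine, sketched below; the weight values and candidate-plan fields are illustrative placeholders, not the system’s tuned coefficients.

```python
def urgency(s_impact, s_load, s_env, w=(0.5, 0.3, 0.2)):
    """Urgency = w1*S_impact + w2*S_load + w3*S_env (placeholder weights)."""
    w1, w2, w3 = w
    return w1 * s_impact + w2 * s_load + w3 * s_env

def solution_score(r_history, e_env, d_skill, c=(0.5, 0.3, 0.2)):
    """Score = c1*R_history + c2*E_env + c3*(1 - D_skill) (placeholder weights)."""
    c1, c2, c3 = c
    return c1 * r_history + c2 * e_env + c3 * (1.0 - d_skill)

def select_plan(candidates):
    """Pick the highest-scoring plan; candidates use hypothetical keys
    'plan', 'r_history', 'e_env', 'd_skill'."""
    return max(candidates,
               key=lambda p: solution_score(p["r_history"], p["e_env"], p["d_skill"]))

plans = [{"plan": "replace cooling fan", "r_history": 0.9, "e_env": 0.7, "d_skill": 0.4},
         {"plan": "reduce load and monitor", "r_history": 0.6, "e_env": 0.9, "d_skill": 0.1}]
print(urgency(0.8, 0.6, 0.4), select_plan(plans)["plan"])
```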

3.3. Collaborative Workflow and Precise Information Push Methods

The core of the precise push method for power emergency maintenance information proposed in this paper, which is based on AR and FL, lies in constructing an efficient, secure, and intelligent multi-level collaborative workflow to achieve precise, timely, and privacy-preserving information push for on-site maintenance personnel. The specific process and algorithm steps are summarized in Algorithm 1.
This collaborative workflow enhances on-site perception and interaction through AR technology, utilizes device health fingerprints for precise state representation, and continuously improves the accuracy and personalization of cloud-based intelligent decision-making while protecting privacy via an innovative PFAA FL algorithm, ultimately achieving efficient and precise information support for power emergency maintenance tasks.
This method, through the aforementioned processes and algorithms, achieves “precision” in information push across space, content, and time. Its core advantages and innovations lie in: privacy-enhanced device health fingerprints, cloud-based intelligent decision-making based on the PFAA algorithm, and precise spatial mapping from decision information to AR guidance. AR terminals, edge layers, and cloud servers work collaboratively, ensuring the efficiency and safety of emergency maintenance through secure and efficient information interaction.
Algorithm 1: Precise Information Push Algorithm for Power Emergency Maintenance
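As a rough illustration of the workflow summarized in Algorithm 1, the runnable sketch below traces a single push cycle through the three layers; every function, data layout, and threshold is a hypothetical stand-in rather than the exact procedure of Algorithm 1.

```python
import numpy as np

# --- hypothetical stand-ins for the three layers (all names are placeholders) ---
def collect_sensor_data():
    """AR terminal: gather multimodal readings (toy stand-in values)."""
    return {"vibration": 0.5, "temp": 75.6, "gas_h2": 10.0}

def build_fingerprint(raw):
    """Compress readings into a lightweight health-fingerprint vector."""
    return np.array(list(raw.values()), dtype=float)

def edge_screen(fingerprint, temp_limit=70.0):
    """Edge layer: quick anomaly screening; forward to the cloud only if abnormal."""
    return fingerprint[1] > temp_limit     # index 1 = temperature in this toy layout

def cloud_diagnose(fingerprint):
    """Cloud layer: stand-in for PFAA global-model inference and decision generation."""
    return {"fault": "winding overheating", "urgency": 0.8,
            "plan": "initiate emergency cooling, then inspect winding"}

def ar_render(decision):
    """AR terminal: overlay the pushed decision onto the physical equipment."""
    print(f"[AR overlay] {decision['fault']} (urgency {decision['urgency']}): {decision['plan']}")

# one illustrative push cycle
fp = build_fingerprint(collect_sensor_data())
if edge_screen(fp):
    ar_render(cloud_diagnose(fp))
```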

4. Experiment and Result Analysis

4.1. Experimental Setup

The experimental evaluation is grounded in a hybrid dataset strategy combining High-Fidelity Industrial Simulations with de-identified historical maintenance logs. Given the strict safety protocols and operational stability requirements of critical power infrastructure, acquiring large-scale labeled data for extreme failure modes (e.g., core overheating or high-voltage short circuits) from physical grids is practically infeasible. To address this, we synthesized a rigorous dataset designed to accurately reflect the complexity of real-world environments.
This dataset integrates multi-source heterogeneous sensor streams—including vibration, temperature, and dissolved gas analysis—to construct the “Device Health Fingerprints” required by our PFAA algorithm. To ensure the reliability of our validation, the dataset design explicitly incorporates Statistical Heterogeneity (Non-IID) and environmental interference. We simulated normal operation alongside various common and rare faults, while systematically injecting stochastic sensor noise, missing value patterns, and severe class imbalances to mirror actual industrial constraints. For instance, a representative subset of the simulated transformer fault data features is detailed in Table 1.
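A simplified sketch of this kind of synthetic stream is shown below; the field names follow Table 1, while the distributions, fault ratio, and missing-value rate are illustrative assumptions rather than the actual generation parameters.

```python
import numpy as np
import pandas as pd

def simulate_transformer_stream(n=1000, fault_ratio=0.02, missing_rate=0.01, seed=7):
    """Toy transformer telemetry with rare faults, sensor noise, and missing values."""
    rng = np.random.default_rng(seed)
    fault = rng.random(n) < fault_ratio                          # severe class imbalance
    temp = rng.normal(60, 5, n) + fault * rng.normal(30, 5, n)   # overheating shifts temperature
    h2 = rng.normal(10, 2, n) + fault * rng.normal(40, 10, n)    # dissolved-gas rise under fault
    vib = np.abs(rng.normal(0.5, 0.1, n))
    df = pd.DataFrame({"Temperature_winding": temp,
                       "Gas_in_oil_h2": h2,
                       "Vibration_x": vib,
                       "Fault_type": np.where(fault, "overheating", "normal")})
    # stochastic missing values mimic real sensor dropouts
    df.loc[rng.random(n) < missing_rate, "Gas_in_oil_h2"] = np.nan
    return df

print(simulate_transformer_stream()["Fault_type"].value_counts())
```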
Regarding the setup of the experimental platform and environment, the specific arrangements are as follows:
  • Overall Scenario Setting: The experiment will be conducted in a simulated power emergency maintenance scenario. This scenario includes typical power equipment such as transformers and switchgear, as well as sensor networks, reproducing the three-tier system architecture proposed in this paper.
  • AR Terminal Layer Simulation: For hardware, devices functionally comparable to mainstream AR glasses will be used to simulate data collection, local model execution, and information overlay display. For software, the Android system will be used, and Unity (C#) and Unreal Engine (C++) will be utilized for AR application development.
  • Edge Layer Simulation: For hardware, distributed nodes will be built using edge computing devices such as NVIDIA Jetson series development boards, deployed near the simulated site to achieve low-latency data interaction. For software, the edge layer will run Linux (Ubuntu), using Python (Version 3.11.6) and libraries like Pandas and NumPy for data processing, and PyTorch (Version 2.1.1) for model inference.
  • Cloud Server Simulation: For hardware, high-performance cloud server instances (AWS EC2) will be adopted, configured with multi-core CPUs, high-performance GPUs, and ample memory and storage. The software stack will include Linux (Ubuntu) operating systems, deep learning frameworks such as PyTorch for training global models, and databases like MySQL for storing various types of data.
In summary, such a detailed set of hardware and software configurations, combined with precise network simulation settings, aims to comprehensively and accurately measure the actual operational efficiency and potential performance bottlenecks of the AR and FL-based precise information push method for power emergency maintenance proposed in this paper, across different system levels and under varying network conditions.
Regarding the setting of evaluation metrics, this paper will primarily focus on verifying the performance improvement of the proposed method in two core dimensions, specifically as follows:
  • Maintenance Decision and Guidance Effectiveness: This dimension focuses on the actual application effectiveness of system-generated maintenance decision information and AR guidance.
    Task Completion Rate ($TCR$): Measures the proportion of maintenance personnel who successfully complete their pre-scheduled maintenance tasks with the assistance of AR guidance. Its calculation formula is
    $TCR = \frac{N_{SCT}}{N_{TTA}}$
    where $N_{SCT}$ represents the number of successfully completed tasks, and $N_{TTA}$ represents the total number of tasks attempted.
    Decision Adoption Rate ($DAR$): Measures the proportion of operational decisions or solutions recommended by the system that are actually adopted and executed by operations personnel. Its calculation formula is
    $DAR = \frac{N_{AD}}{N_{RD}}$
    where $N_{AD}$ represents the number of adopted decisions, and $N_{RD}$ represents the total number of decisions recommended by the system.
  • Information push latency: For this, we precisely measure the total time elapsed from the moment the AR terminal begins collecting device operating status data ($t_{start}$) until the final AR guidance information is successfully displayed on the AR terminal screen ($t_{end}$). This total time covers the entire information processing and transmission chain. Its calculation formula can be expressed as
    $T_{delay} = t_{end} - t_{start}$
    Furthermore, this total delay can be decomposed into the sum of the delays of each key link:
    $T_{delay} = T_{collect} + T_{lprocess} + T_{upload} + T_{cprocess} + T_{download} + T_{render}$
    where $T_{collect}$ represents data collection time, $T_{lprocess}$ represents AR terminal local preliminary processing time, $T_{upload}$ represents data upload time, $T_{cprocess}$ represents cloud server decision-making time, $T_{download}$ represents the time for decision information to be transmitted back to the AR terminal, and $T_{render}$ represents the time for the AR device to complete rendering and present to the user. A minimal computational sketch of these metrics is provided after this list.
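The following snippet is a minimal computational sketch of the three evaluation metrics defined above; all input values are illustrative placeholders, not measured results.

```python
def task_completion_rate(n_sct: int, n_tta: int) -> float:
    """TCR = N_SCT / N_TTA."""
    return n_sct / n_tta

def decision_adoption_rate(n_ad: int, n_rd: int) -> float:
    """DAR = N_AD / N_RD."""
    return n_ad / n_rd

def total_push_delay(stages_ms: dict) -> float:
    """T_delay = T_collect + T_lprocess + T_upload + T_cprocess + T_download + T_render."""
    return sum(stages_ms[s] for s in
               ("collect", "lprocess", "upload", "cprocess", "download", "render"))

# Illustrative placeholder inputs (not measured results)
print(task_completion_rate(22, 25),
      decision_adoption_rate(18, 20),
      total_push_delay({"collect": 20, "lprocess": 35, "upload": 30,
                        "cprocess": 75, "download": 25, "render": 20}))
```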

4.2. Results Analysis

This subsection will elaborate on the experimental results, aiming to objectively evaluate the performance of the proposed method for precise information pushing in power emergency maintenance based on AR and FL across various predefined metrics. The analysis will focus on a horizontal comparison between this method and selected baseline methods, demonstrating its actual effects in improving the effectiveness of maintenance decision-making and guidance, and reducing information pushing latency through specific data and charts.

4.2.1. Analysis of the Effectiveness of Maintenance Decisions and Guidance

The Task Completion Rate ($TCR = N_{SCT}/N_{TTA}$), as the core metric for measuring the effectiveness of AR guidance, was evaluated through meticulously designed simulation scenarios for power emergency maintenance. These scenarios cover a range of typical emergency situations. The 8 specific typical scenarios designed are as follows:
  • Emergency cooling and cause investigation for transformer overheating.
  • Emergency handling of circuit breaker failure after line protection activation.
  • Abnormal handling of online monitoring data for unusual noise and partial discharge inside high-voltage switchgear.
  • Inspection and preliminary localization of cable fault points within underground cable trenches (or tunnels).
  • Fault location and isolation of feeder ground faults in station DC systems.
  • System status verification and resetting after protection device maloperation.
  • Troubleshooting for emergency lighting system startup failure.
  • Abnormal parameter alarm handling and setpoint verification during the initial operation phase of new equipment.
For each of the above scenarios, simulated agents performed the tasks using this method and the comparison methods, respectively. The experiment recorded the number of times the predetermined maintenance objectives were successfully completed in each scenario under each method. To more intuitively demonstrate the performance differences between the methods, these data were used to plot a grouped bar chart, with the maintenance scenario number on the horizontal axis and the task completion rate on the vertical axis, allowing a clear comparison of this method against the two comparative methods in each simulated scenario.
Figure 3 shows this comparison result. As can be seen from the figure, in all 8 simulated scenarios, the task completion rate of this method is significantly higher than both the “rule-based push method [19]” and the “context-model-based knowledge push method [20]”. Specifically, the task completion rate of this method reached its highest at 95% in scenario 3 and its lowest at 85% in scenario 6, but even at its lowest point, it was significantly superior to the other two methods. For example, in scenario 2, the completion rate of this method was 88%, while the rule-based method and the context-model-based method were 65% and 78%, respectively. This preliminarily indicates that this method has a clear advantage in improving the execution success rate of complex emergency maintenance tasks.
The significant advantage of this method’s task completion rate benefits from the synergistic effect of AR technology and FL. AR, through precise 3D model overlays and step-by-step guidance, intuitively presents complex instructions, reducing reliance on traditional drawings, avoiding errors in understanding and positioning, and standardizing operations through real-time safety alerts. Meanwhile, the PFAA algorithm, while protecting privacy, utilizes distributed data to train more accurate fault diagnosis and decision models, ensuring that the pushed solutions are based on a deep understanding of equipment status and fault modes, highly consistent with on-site conditions, and effectively guide correct operations. The combination of the two reduces the cognitive burden on maintenance personnel, allowing them to focus more on accurate execution, thereby comprehensively improving the quality and efficiency of emergency task completion.
This significant improvement in task completion rate is directly attributable to the synergy between our PFAA-driven backend and the AR-assisted front-end. The intuitive, spatially-aware guidance provided by the AR interface (as conceptualized in Figure 2) significantly reduces the cognitive load on personnel. Instead of interpreting complex manuals, they receive clear, actionable steps overlaid on the real equipment. This, combined with the high accuracy of the backend decisions generated by the PFAA model, ensures that operators are not only told what to do, but are also shown precisely where and how to do it, thus minimizing errors and boosting the completion rate.
The evaluation of the decision adoption rate ($DAR$) is likewise based on the aforementioned 8 simulated maintenance scenarios. In each scenario, when the simulated agent uses this method, the rule-based push method, or the context-model-based knowledge push method, the system recommends corresponding maintenance decisions or treatment plans. The experiment recorded, for each method, the number of recommended decisions ultimately adopted and executed by the simulated agent, as well as the total number of decisions recommended by the system. The decision adoption rates are presented in a bar chart that compares the performance of the three methods in each scenario.
Figure 4 intuitively illustrates the decision adoption rates of various methods across different scenarios. According to the data shown in Figure 4, the decision schemes recommended by this method achieved the highest adoption rates in all scenarios, with an average adoption rate of 90.5%. In contrast, the “rule-based pushing method” had an average adoption rate of 59.88%, and the “context-model-based knowledge pushing method” had an average adoption rate of 76.63%. Particularly in scenarios requiring precise information and complex judgments, such as Scenario 3 (high-voltage switchgear discharge handling, with this method’s adoption rate at 95%) and Scenario 4 (cable fault localization, with this method’s adoption rate at 92%), the advantage of this method’s decision adoption rate is more pronounced, significantly higher than the rule-based method (70% and 53%, respectively) and the context-model-based method (83% and 75%, respectively). This strongly indicates that the decisions generated by this method are more aligned with actual needs and operators’ judgment logic, making them more easily accepted and executed.

4.2.2. Information Push Delay Analysis

The experimental results are shown in Figure 5. This chart compares the end-to-end information push delay of the proposed method, the rule-based push method, and the context model-based knowledge push method under four different network conditions (normal, high latency, low bandwidth, high packet loss rate). It can be clearly observed from the figure that the proposed method consistently exhibits the lowest end-to-end information push delay under all tested network conditions. For instance, under normal network conditions, the average delay of the proposed method is 166 ms, which is significantly lower than that of the rule-based push method (232 ms) and the context model-based knowledge push method (184 ms). Under high-latency network conditions, although the delay of the proposed method increases to 252 ms, it still outperforms the other two methods (285 ms and 256 ms, respectively). Similarly, under low bandwidth and high packet loss rate conditions, the proposed method demonstrates optimal performance with delays of 201 ms and 303 ms, respectively, while the comparative methods show higher delay values. These results preliminarily indicate that the proposed method can maintain relatively low information push delay in different network environments, demonstrating good network adaptability and efficiency advantages.
The advantages of this method in terms of information push delay and its stability under different network conditions are mainly due to its system architecture design and the synergistic application of key technologies. The introduction of edge computing nodes offloads cloud computing pressure, reduces the amount of transmitted data and cloud computing load, thereby shortening decision time. The FL mechanism, by performing localized model training at the data source, avoids the transmission of massive raw data to the central server, which significantly reduces data transmission latency while protecting privacy. Furthermore, standardized lightweight device health fingerprint data is small in volume, facilitating rapid upload and analysis. The AR terminal’s local SLAM and rendering engine can also quickly process received decision information and model data, ensuring the timeliness of information presentation. In comparison, traditional rule-based methods may be time-consuming due to querying large rule bases, and context-model-based methods may incur higher latency in complex understanding and reasoning.
To gain a deeper understanding of the specific stages where latency occurs, the experiment decomposed the total information push latency under normal network conditions, as shown in Figure 6. This figure presents the time consumption of the proposed method and the two comparative methods across six key stages: data collection ($T_{collect}$), AR terminal local preliminary processing ($T_{lprocess}$), data upload ($T_{upload}$), cloud server analysis and decision-making ($T_{cprocess}$), decision information feedback ($T_{download}$), and AR device rendering and presentation ($T_{render}$).
According to the experimental results in Figure 6, this method demonstrates latency optimization in multiple key stages. In the data collection phase, this method takes 24 ms, which is higher than the context-model-based method (13 ms) and the rule-based method (16 ms). This is primarily because this method collects more comprehensive multimodal data (e.g., visual, infrared, vibration) to construct more detailed device health fingerprints, ensuring the accuracy of subsequent analysis and decision-making; some initial data validation and formatting operations are also integrated into this stage to ensure data quality. Although this incurs a slight time overhead, it lays the foundation for subsequent high-quality decisions. In the AR terminal local processing stage, however, this method takes 33 ms, a significant improvement over the context-model-based method (45 ms) and the rule-based method (59 ms), which may be attributed to the application of lightweight local models. For data upload, this method takes 32 ms, also outperforming the other two methods (49 ms and 60 ms, respectively), demonstrating the advantage of lightweight data representations such as health fingerprints. In the most critical cloud server analysis and decision-making stage, this method takes 78 ms, significantly lower than the 122 ms of the rule-based method and the 102 ms of the context-model-based method. This fully reflects the efficiency of the FL model and the rapid decision-making capability of the three-level verification mechanism. In the decision information feedback and AR device rendering stages, this method takes 27 ms and 21 ms, respectively, showing mixed performance relative to the other methods (download: 44 ms/41 ms; rendering: 18 ms/20 ms), but the overall differences are small, indicating that these stages are not the primary bottlenecks.
Overall, the key bottlenecks affecting total latency are primarily concentrated in the cloud processing and data upload stages. This method, by leveraging key technologies such as edge computing, FL, and lightweight device health fingerprints, specifically optimizes these bottleneck links, thereby achieving the lowest total latency.

4.3. Algorithmic Performance Evaluation

To rigorously validate the effectiveness of the proposed Client-Attentive Personalized Federated Learning (PFAA) algorithm, we conducted a separate series of experiments focusing on model convergence and diagnostic accuracy. Unlike the system-level evaluation in Section 4.2, this section isolates the algorithmic component to compare PFAA against state-of-the-art federated learning baselines under severe Non-IID data distributions.

4.3.1. Comparison with Baselines

We benchmark PFAA against three well-established algorithms: FedAvg [6] (the standard baseline), FedProx [21] (designed for heterogeneity), and FedPer [22] (designed for personalization). To simulate a realistic power grid scenario, we generated a Non-IID partition using a Dirichlet distribution ($\alpha = 0.5$), ensuring that fault classes are unevenly distributed among clients.
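Dirichlet-based label partitioning is a standard construction for simulating Non-IID clients; the sketch below shows one such partition with $\alpha = 0.5$, where the class and client counts are illustrative rather than the paper’s exact configuration.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients so that each class's share follows
    a Dirichlet(alpha) distribution, yielding a skewed (Non-IID) partition."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_idx, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_idx

# Example: 5 fault classes, heavily imbalanced toward class 0 ("normal")
labels = np.concatenate([np.zeros(900), np.repeat([1, 2, 3, 4], 25)])
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5)
print([len(p) for p in parts])
```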
As illustrated in Figure 7, PFAA demonstrates superior convergence speed and final accuracy compared to the baselines. Specifically, PFAA achieves a final diagnostic accuracy of 94.2%, significantly outperforming FedProx (89.5%) and FedAvg (83.1%). This performance gap indicates that the proposed attention mechanism effectively mitigates the “dominance of the majority” effect, preventing the global model from being biased towards clients with abundant healthy data—a pervasive issue in standard FedAvg. In terms of convergence efficiency, PFAA reaches the 90% accuracy threshold within just 35 communication rounds, whereas FedProx requires 52 rounds, and FedAvg fails to reach this benchmark within the imposed communication budget. Such rapid convergence implies lower communication overhead, which is critical for maintaining system responsiveness in bandwidth-constrained industrial networks.

4.3.2. Ablation Study

To verify the specific contribution of the “Client-Attentive” mechanism driven by Device Health Fingerprints, we performed an ablation study comparing three variants:
  • Baseline (FedAvg): Standard aggregation based on sample size ($n_k$).
  • Static-Weighted: Aggregation using fixed weights derived from initial device types without dynamic attention.
  • PFAA (Ours): Dynamic, fingerprint-aware attention aggregation.
Table 2 presents the quantitative results of the ablation study. The Baseline (FedAvg) method, which relies solely on dataset size for aggregation, yields the lowest performance with an accuracy of 83.1%, primarily because the global model becomes biased towards majority classes in the Non-IID setting. By introducing device metadata (Health Fingerprints) with fixed weights, the Static-Weighted variant improves accuracy to 88.4%, demonstrating the distinct value of incorporating semantic context. Most significantly, our PFAA method, which further incorporates the dynamic attention mechanism, achieves the highest accuracy of 94.2%. This substantial gain of 5.8 percentage points over the static approach confirms that the attention network successfully learns to adaptively prioritize high-quality updates during the training process, thereby validating the core algorithmic contribution of this work.

5. Conclusions

This paper introduced Client-Attentive Personalized Federated Learning (PFAA), a novel algorithm that mitigates data heterogeneity in decentralized settings through an intelligent attention mechanism. We successfully implemented PFAA within a three-tier “cloud-edge-end” framework for AR-assisted information push in power emergency maintenance. Experimental evaluations demonstrate that our PFAA-driven approach significantly outperforms traditional methods in task completion rates, decision adoption, and information push latency, thereby enhancing operational efficiency, accuracy, and safety while preserving data privacy.
Future work will focus on three key areas: First, we will enhance the “device health fingerprint” construction using advanced deep learning techniques for more granular state representation. Second, the PFAA algorithm itself will be optimized for greater computational efficiency and personalization. Finally, large-scale, real-world deployment is planned to validate the system’s robustness and scalability for mission-critical applications.

Author Contributions

Conceptualization, C.Y. and T.Z.; methodology, X.L. and T.Z.; validation, J.W.; formal analysis, S.S.; investigation, C.Y.; resources, T.Z.; data curation, X.L.; writing—original draft preparation, C.Y.; writing—review and editing, T.Z.; visualization, X.L.; supervision, Z.L. and S.S.; funding acquisition, C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by State Grid Ningxia Electric Power Co., Ltd. Science and Technology Project “Research and Application of Key Technologies for Wearable AR Smart Maintenance for Emergency Electric Power Communication” (5229XT240002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors declare that the data supporting the findings of this study are available in the article and can be found in Section 4.2.

Conflicts of Interest

Authors Cong Ye, Xiao Li, Zile Lei, and Jianlei Wang were employed by Information and Communication Company of State Grid Ningxia Electric Power Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Sujanthi, S. Augmented reality for industrial maintenance using deep learning techniques—A review. In Proceedings of the 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), Bengaluru, India, 4–6 January 2024; pp. 1629–1635. [Google Scholar]
  2. Kadry, S. On the evolution of information systems. Syst. Theory Perspect. Appl. Dev. 2014, 1, 197–208. [Google Scholar]
  3. Porawagamage, G.; Dharmapala, K.; Chaves, J.S.; Villegas, D.; Rajapakse, A. A review of machine learning applications in power system protection and emergency control: Opportunities, challenges, and future directions. Front. Smart Grids 2024, 3, 1371153. [Google Scholar] [CrossRef]
  4. Nasim, M.D.; Soshi, F.T.J.; Biswas, P.; Ferdous, A.S.; Rashid, A.; Biswas, A.; Gupta, K.D. Principles and Components of Federated Learning Architectures. arXiv 2025, arXiv:2502.05273. [Google Scholar] [CrossRef]
  5. Benes, K.J.; Porterfield, J.E.; Yang, C. AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy; US Department of Energy: Washington, DC, USA, 2024.
  6. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.y. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv 2016, arXiv:1602.05629. [Google Scholar]
  7. Breitkreuz, D.; Müller, M.; Stegelmeyer, D.; Mishra, R. Augmented reality remote maintenance in industry: A systematic literature review. In Proceedings of the International Conference on Extended Reality, Lecce, Italy, 6–8 July 2022; pp. 287–305. [Google Scholar]
  8. Zhu, J.; Ong, S.K.; Nee, A.Y. A context-aware augmented reality system to assist the maintenance operators. Int. J. Interact. Des. Manuf. 2014, 8, 293–304. [Google Scholar] [CrossRef]
  9. Reljić, V.; Milenković, I.; Dudić, S.; Šulc, J.; Bajči, B. Augmented reality applications in industry 4.0 environment. Appl. Sci. 2021, 11, 5592. [Google Scholar] [CrossRef]
  10. Zhou, J.; Zhang, S.; Lu, Q.; Dai, W.; Chen, M.; Liu, X.; Pirttikangas, S.; Shi, Y.; Zhang, W.; Herrera-Viedma, E. A survey on federated learning and its applications for accelerating industrial internet of things. arXiv 2021, arXiv:2104.10501. [Google Scholar] [CrossRef]
  11. Zhang, W.; Lu, Q.; Yu, Q.; Li, Z.; Liu, Y.; Lo, S.K.; Chen, S.; Xu, X.; Zhu, L. Blockchain-based federated learning for device failure detection in industrial IoT. IEEE Internet Things J. 2020, 8, 5926–5937. [Google Scholar] [CrossRef]
  12. Berkani, M.R.A.; Chouchane, A.; Himeur, Y.; Ouamane, A.; Miniaoui, S.; Atalla, S.; Mansoor, W.; Al-Ahmad, H. Advances in federated learning: Applications and challenges in smart building environments and beyond. Computers 2025, 14, 124. [Google Scholar] [CrossRef]
  13. Bukhari, S.M.S.; Zafar, M.H.; Abou Houran, M.; Qadir, Z.; Moosavi, S.K.R.; Sanfilippo, F. Enhancing cybersecurity in Edge IIoT networks: An asynchronous federated learning approach with a deep hybrid detection model. Internet Things 2024, 27, 101252. [Google Scholar] [CrossRef]
  14. Li, R.; Wang, H.; Lu, Q.; Yan, J.; Ji, S.; Ma, Y. Research on medical image classification based on improved fedavg algorithm. Tsinghua Sci. Technol. 2025, 30, 2243–2258. [Google Scholar] [CrossRef]
  15. Shonhe, L. A literature review of information dissemination techniques in the 21st century era. Libr. Philos. Pract. 2017, 1731. [Google Scholar]
  16. Lou, P.; Lu, G.; Jiang, X.; Xiao, Z.; Hu, J.; Yan, J. Cyber intrusion detection through association rule mining on multi-source logs. Appl. Intell. 2021, 51, 4043–4057. [Google Scholar] [CrossRef]
  17. Sarker, I.H.; Janicke, H.; Ferrag, M.A.; Abuadbba, A. Multi-aspect rule-based AI: Methods, taxonomy, challenges and directions towards automation, intelligence and transparent cybersecurity modeling for critical infrastructures. Internet Things 2024, 25, 101110. [Google Scholar] [CrossRef]
  18. Razgon, M.; Mousavi, A. Relaxed rule-based learning for automated predictive maintenance: Proof of concept. Algorithms 2020, 13, 219. [Google Scholar] [CrossRef]
  19. Wenhao, Z. Push Content Decision Method for On-Site Operation and Maintenance of Power Communication Network; Beijing University of Posts and Telecommunications: Beijing, China, 2019. [Google Scholar]
  20. Li, R.; Gao, G.; Liang, Y.; Zhang, X.; Liao, Y. An AR based edge maintenance architecture and maintenance knowledge push algorithm for communication networks. In Proceedings of the 4th International Conference on Big Data and Computing, Guangzhou, China, 10–12 May 2019; pp. 165–168. [Google Scholar]
  21. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
  22. Arivazhagan, M.G.; Aggarwal, V.; Singh, A.K.; Choudhary, S. Federated learning with personalization layers. arXiv 2019, arXiv:1912.00818. [Google Scholar] [CrossRef]
Figure 1. System Architecture Diagram.
Figure 2. Conceptual visualization of the AR-assisted guidance interface for power emergency maintenance.
Figure 3. Comparison Chart of Task Completion Rates for Various Methods Across Different Maintenance Scenarios.
Figure 4. Comparison of Decision Adoption Rates of Various Methods in Different Maintenance Scenarios.
Figure 5. Comparison of End-to-End Message Push Latency Under Different Network Conditions.
Figure 6. Comparison of Delay Decomposition of Different Methods Across Various Stages.
Figure 7. Test Accuracy vs. Communication Rounds.
Table 1. Example Data Fields for Equipment Monitoring.

Field Name | Instance Data | Description
Timestamp | "2025-05-02 10:00:00" | Data record timestamp
Device_id | "TX-001" | Unique device identifier
Voltage_phase_a | 220.5 V | Phase A voltage
Current_phase_a | 15.2 A | Phase A current
Temperature_winding | 75.6 °C | Winding temperature
Oil_level | 95.2% | Oil level
Vibration_x | 0.5 mm/s | X-axis vibration
Gas_in_oil_h2 | 10 ppm | Hydrogen (H2) content in dissolved gas in oil
Fault_type | Normal, winding short circuit, etc. | Fault type (e.g., normal, winding short circuit, low oil level, overheating)
Maintenance | "2024-09-15: Routine check, …" | Maintenance log (including detailed operations)
Table 2. Ablation Study Results: Impact of Attention Mechanism and Metadata.

Method | Component | Accuracy (%) | Precision (%) | Recall (%)
Baseline (FedAvg) | Sample Size Only | 83.1 | 81.2 | 79.5
Static-Weighted | + Metadata (Fixed) | 88.4 | 86.7 | 85.3
PFAA (Ours) | + Dynamic Attention | 94.2 | 93.8 | 94.5