Article

A Sustainability-Aware Federated Graph Attention Framework for Supply Chain Process Modeling

by Vasileios Alexiadis 1, Maria Drakaki 1,* and Panagiotis Tzionas 2
1 Department of Science and Technology, University Center of International Programmes of Studies, International Hellenic University, 14th km Thessaloniki-N. Moudania, GR-57001 Thermi, Greece
2 Department of Industrial Engineering and Management, International Hellenic University, P.O. Box 141, GR-57400 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Processes 2026, 14(5), 781; https://doi.org/10.3390/pr14050781
Submission received: 27 January 2026 / Revised: 18 February 2026 / Accepted: 21 February 2026 / Published: 27 February 2026

Abstract

Modern supply chains operate as highly interconnected networks characterized by decentralization, data silos, and increasing sustainability constraints. Although Graph Neural Networks (GNNs) have demonstrated strong capability in modeling relational dependencies in such systems, their deployment is often restricted by limited inter-organizational data sharing. Federated learning (FL) enables collaborative model training without exposing proprietary data; however, existing federated approaches rarely integrate graph structure and sustainability objectives within a unified framework. This study proposes a Sustainability-Aware Federated Graph Attention Network (FedGAT) for decentralized supply chain process modeling. The framework combines Graph Attention Networks with federated optimization and introduces an emission-weighted attention modulation mechanism that embeds environmental considerations directly into the message-passing process. A multi-tier synthetic supply chain benchmark is constructed to evaluate the approach under realistic governance and data-locality constraints. Experiments are conducted across multiple random seeds, graph scales (up to 500 nodes), and client partition settings. Results show that while centralized graph learning achieves the lowest prediction error, the proposed sustainability-aware federated model maintains statistically indistinguishable predictive performance compared to standard federated baselines (paired sign test p = 1.000), while systematically reducing attention allocated to high-emission transport links. A structured label sensitivity analysis confirms that performance gains are not attributable to circular label construction. Furthermore, a λ-ablation study demonstrates a smooth and controllable trade-off between predictive accuracy and sustainability alignment through a single governance parameter. 
These findings establish the feasibility of privacy-preserving, sustainability-modulated graph learning for decentralized supply chain analytics and provide a principled foundation for environmentally aligned AI deployment in multi-enterprise networks.

1. Introduction

Foundational work in industrial systems modeling established that production, warehousing, and supply chain operations are dynamic, interdependent, and event-driven, motivating formal modeling and intelligent control. Agent-based systems combined with Colored Petri Nets (CPNs) were shown to capture this complexity effectively, improving coordination in warehouse order picking and resource allocation under demand uncertainty [1]. These approaches were extended to multi-echelon supply chains, where hierarchical CPNs enabled the analysis of inventory propagation, order variability, and disruption effects across tiers [2]. Empirical studies further demonstrated that inventory inaccuracy significantly degrades service levels and amplifies upstream instability, highlighting the need for system-wide coordination [3]. Reinforcement learning integrated with timed CPNs enabled adaptive manufacturing scheduling under stochastic disturbances, reinforcing the view of supply chains as networked decision systems with cascading effects [4,5].
Building on this trajectory, modern supply chains are increasingly modeled as large-scale networks of suppliers, manufacturers, distributors, and retailers, where efficiency, resilience, and sustainability are critical. AI-based process modeling has therefore become central to supply chain optimization. In this context, Graph Neural Networks (GNNs) are particularly suitable, as supply chains can be naturally represented as graphs with entities as nodes and material, informational, or financial dependencies as edges [6]. Empirical evidence shows that GNN-based models outperform conventional statistical and deep learning baselines on tasks such as forecasting and classification, often achieving substantial improvements by learning non-linear structural dependencies and propagation effects [6].
Attention-based graph models further enhance supply chain learning. Graph Attention Networks (GATs) assign non-uniform weights to neighboring nodes during aggregation, enabling models to emphasize critical supplier–customer relationships while suppressing less relevant interactions [7]. Beyond predictive gains, attention mechanisms improve interpretability by revealing influential dependencies, and recent GAT-based resilience models demonstrate strong disruption–classification performance while exposing structural vulnerabilities through learned attention patterns [8].
Despite these advances, real-world deployment is constrained by data decentralization and confidentiality. Supply chain data are distributed across organizations and are rarely shareable due to commercial, contractual, and regulatory restrictions, rendering centralized GNN training impractical in multi-enterprise settings [9]. Federated learning (FL) addresses these constraints by enabling collaborative model training without exchanging raw data. The foundational Federated Averaging approach demonstrated that high-quality neural models can be trained across decentralized, non-IID data while retaining data locally [10]. In industrial ecosystems, FL enables data-limited manufacturers and logistics partners to benefit from collective learning under confidentiality constraints [11].
A further driver shaping supply chain analytics is the need to integrate sustainability objectives. Supply chains contribute significantly to greenhouse gas emissions and resource use, motivating sustainability-aware AI that incorporates metrics such as carbon emissions or energy intensity directly into learning and decision-making. AI-driven logistics and operational optimization have been associated with measurable environmental benefits, including substantial CO2 reductions [12]. In adjacent domains, federated learning combined with control and optimization—such as reinforcement learning in energy systems—has achieved simultaneous cost and emissions reductions through decentralized coordination [13]. Nonetheless, most supply chain ML systems remain predominantly single-objective, and explicit sustainability integration during training is still uncommon, particularly under decentralization constraints.
Consequently, important gaps remain. Centralized GNNs struggle with data silos and governance, while conventional FL is typically designed for tabular data and does not naturally accommodate graph-structured dependencies spanning organizational boundaries. Only recently has federated graph learning been applied to supply chain visibility, risk modeling, and demand–inventory prediction, including early attempts to integrate sustainability considerations [14,15]. These studies demonstrate feasibility but also underscore the absence of a unified framework jointly addressing decentralization, graph-based interdependence modeling, and sustainability-aware optimization.
Despite recent advances in federated graph learning, existing approaches remain insufficient for sustainability-critical supply chain modeling for three primary reasons. First, most federated GNN frameworks optimize predictive objectives without incorporating an explicit and controllable sustainability bias within the message-passing mechanism. Second, they provide limited governance-oriented interpretability, offering no structured means to regulate the trade-off between predictive accuracy and environmental alignment. Third, sustainability signals are typically treated as auxiliary features rather than as structural modifiers of relational influence.
To address these limitations, we propose a federated sustainability-aware graph attention framework in which transport-related environmental signals directly modulate attention weights during aggregation. A tunable parameter λ enables explicit control over the strength of sustainability penalization, thereby introducing a governance-relevant trade-off between predictive performance and environmental alignment. This transforms sustainability from a passive feature into an active structural constraint within decentralized graph learning.
The remainder of the article is organized as follows. Section 2 reviews related work on GNNs, federated learning, sustainability-aware AI, and federated graph learning. Section 3 presents the methodology and system architecture. Section 4 reports experimental results. Section 5 discusses implications and limitations, and Section 6 concludes with directions for future research.

2. Related Work

To contextualize our proposed framework, we survey the relevant literature in four key areas. First, we discuss how Graph Neural Networks have been applied in supply chain modeling and optimization, highlighting their benefits over traditional methods. Next, we review federated learning approaches in industrial domains, focusing on how privacy-preserving collaboration has been enabled in manufacturing and supply chain scenarios. Third, we examine research on sustainability-aware AI methods that integrate environmental objectives into model training or decision-making, including multi-objective optimization techniques. Finally, we cover the relatively few works that have attempted to combine GNNs with federated learning, noting their achievements and limitations.

2.1. Graph Neural Networks in Supply Chain Management

Supply chains are inherently graph-structured systems, with nodes representing firms or facilities and edges encoding material, contractual, transportation, or information dependencies. This structure makes Graph Neural Networks (GNNs) particularly suitable, as they learn over non-Euclidean data by propagating information across network topology. Unlike independent-sample models, GNNs capture higher-order interdependencies and cascading effects, enabling them to model how local disruptions propagate across multiple supply chain tiers through message passing [6].
Empirical evidence consistently shows that GNNs outperform classical time-series and conventional machine learning approaches in supply chain tasks such as demand forecasting, inventory control, risk assessment, anomaly detection, and network reconstruction. Wasi et al. report performance gains typically in the range of 10–30%, attributable to GNNs’ ability to exploit shared structural context (e.g., common upstream suppliers) and enable cross-entity generalization [6]. Industrial studies further confirm these advantages, particularly under disruption scenarios where structural dependencies dominate system behavior, as demonstrated in automotive supply networks by Gupta et al. [16].
Beyond accuracy, GNNs provide structural interpretability, which is increasingly critical for decision support, risk governance, and regulatory compliance. Graph Attention Networks (GATs) enhance this capability by learning adaptive attention weights over neighbors, allowing models to emphasize influential entities and relationships. Attention-based graph models have been shown to improve both predictive performance and interpretability in large-scale supply networks, disruption prediction, and manufacturing resilience applications [17,18,19].
GNNs are also effective for uncovering latent or incomplete supply chain structure. Prior work formulates supply chain visibility as a link-prediction problem, showing that GNNs can infer hidden supplier relationships and indirect dependencies absent from enterprise records [20]. Extensions that integrate graph embeddings with heterogeneous data sources, such as textual risk signals, further improve systemic risk identification in ICT and critical infrastructure supply chains [21].
Despite these advances, most GNN-based supply chain studies assume access to a centralized and fully observable supply chain graph. In practice, this assumption is often invalid due to confidentiality, data-protection regulation (e.g., GDPR), contractual restrictions, and competition law, which collectively limit cross-enterprise data sharing. Additionally, existing GNN applications predominantly optimize predictive or operational performance, while sustainability objectives—such as emissions intensity or energy use—are typically treated as ex-post evaluation criteria rather than embedded learning targets.
These limitations motivate decentralized graph learning approaches that operate under partial visibility and organizational boundaries, while supporting multi-objective optimization aligned with emerging sustainability and regulatory requirements. This gap is addressed in subsequent sections through the integration of federated learning, graph attention mechanisms, and sustainability-aware modeling objectives.

2.2. Federated Learning in Industrial and Supply Chain Settings

Federated learning (FL), introduced by McMahan et al. through Federated Averaging, enables the effective training of deep neural networks across decentralized and non-IID data without sharing raw data [10]. Subsequent surveys identify FL as particularly suitable for cross-silo industrial environments characterized by strong heterogeneity in scale, data quality, and processes [22]. In Industrial IoT settings, hierarchical and two-stage aggregation strategies improve convergence stability and reduce communication overhead, supporting factory-scale deployment [23].
Within Industry 4.0 and Industry 5.0 frameworks, FL is increasingly regarded as a core enabler of privacy-preserving collaborative intelligence. Surveys in smart manufacturing and product lifecycle management demonstrate its applicability to quality control, predictive maintenance, and process optimization while maintaining data locality [24]. Empirical studies further show that FL enables data-limited manufacturers to achieve performance comparable to centralized learning without exposing proprietary data, as demonstrated in additive manufacturing condition monitoring [25]. These properties align with emerging trustworthy-AI principles emphasizing privacy, auditability, and human-centric governance [26].
In supply chain management, FL applications remain limited but are expanding. Nguyen et al. propose a federated framework for delivery-delay prediction across textile suppliers, demonstrating feasible privacy-preserving collaboration and improved generalization relative to single-organization baselines [27]. The same study shows that federated models can rival or exceed centralized performance by mitigating overfitting across heterogeneous participants [27]. Related work extends FL to cross-border logistics, enabling early-warning systems for disruption risk while respecting jurisdiction-specific data-sovereignty constraints [28]. Comparable federated demand-forecasting architectures in retail and agri-food supply chains rely on encrypted model updates and align with emerging sectoral data-governance frameworks [29,30].
Despite its promise, cross-silo FL introduces substantial system and governance challenges, including coordination across autonomous organizations, heterogeneous compute resources and data schemas, and exposure to inference attacks. Secure aggregation and cryptographic mechanisms are therefore required in adversarial or semi-honest settings [22]. Tang et al. address these challenges by integrating FL with Graph Neural Networks for privacy-preserving supply chain data sharing [31]. At the organizational level, FL deployments must also comply with data-protection, trade-secret, and competition regulations, which may restrict even indirect information leakage.
Overall, FL is increasingly recognized as a cornerstone technology for collaborative industrial AI, yet supply chain adoption remains largely exploratory. Existing applications predominantly rely on conventional neural architectures and tabular representations, despite the inherently relational structure of supply chains. This gap motivates the integration of FL with graph-based and attention-driven models to enable privacy-preserving, structure-aware learning across multi-enterprise supply networks.

2.3. Sustainability-Aware Artificial Intelligence

As sustainability targets intensify, AI research increasingly distinguishes between AI-for-sustainability, which seeks to reduce emissions, energy use, and resource intensity in operational systems, and “green” AI, which focuses on minimizing the environmental footprint of AI itself through efficient training, inference, and reporting. This work primarily addresses the former by embedding environmental objectives directly into AI models for supply chain process management, while accounting for computational and communication costs.
In supply chain management, sustainability has become a central operational concern, as supply chain activities contribute significantly to greenhouse gas emissions and resource use, requiring the joint optimization of environmental, cost, and service objectives [32]. Accordingly, indicators such as carbon footprint, energy intensity, waste, and circularity are increasingly treated as decision variables rather than ex-post reporting metrics. AI methods are particularly suited to this setting, as high-dimensional operational data and complex constraints limit analytical optimization. Empirical evidence shows that AI-driven analytics can improve green supply chain process integration and environmental performance when aligned with operational governance [32], while systematic reviews identify logistics optimization, demand–inventory coordination, and risk-aware planning as the most mature application areas, where emissions and energy use are tightly coupled to operational decisions [33].
A core methodological principle in sustainability-aware AI is treating environmental performance as a first-class learning objective. This includes integrating environmental impacts into loss functions, enforcing emissions constraints, or learning policies that explicitly trade off operational and environmental KPIs under uncertainty. In supply chains, Abushaega et al. formalize this approach through a multi-objective framework combining federated learning and Graph Neural Networks to jointly optimize operational and environmental objectives [15]. Related work in energy systems demonstrates that decentralized learning can yield simultaneous cost and emissions reductions, including empirically observed savings under federated reinforcement learning for building energy management [13]. Additional studies show that federated sequence models support sustainability optimization beyond centralized deployments [34,35].
Sustainability-aware AI also encompasses measurement and accountability of AI’s own environmental footprint. The literature emphasizes standardized green evaluation metrics for comparability across models and deployments. Borraccia et al. propose hybrid metrics for assessing energy–carbon impacts of AI pipelines [36], while bibliometric analyses document rapid growth in sustainability-oriented machine learning research alongside a persistent gap between methodological advances and deployment-level impact [37,38]. Organizational studies further indicate that effective AI-for-sustainability outcomes depend on governance structures and integration into decision workflows, not solely on model design [39].
Finally, sustainability benefits from AI in supply chains are not guaranteed and may be offset by rebound effects, misaligned incentives, or increased computational burden. Practitioner reports often claim large emissions reductions from AI-driven logistics and planning interventions [40], but the research literature stresses rigorous evaluation, transparent baselines, and robust measurement to distinguish genuine environmental improvements from confounded operational effects [36,41]. While the feasibility of sustainability-aware AI is well supported, most supply chain ML systems still prioritize operational accuracy, treating sustainability as an external constraint. The novelty of our approach lies in explicitly encoding sustainability signals within a decentralized federated graph learning framework, enabling balanced optimization of operational and environmental KPIs under realistic data-governance constraints.

2.4. Combining Graph Neural Networks and Federated Learning

Bringing together GNNs and FL poses both conceptual and system-level challenges, and only a limited body of work has explored this intersection. Standard FL formulations implicitly assume that data points are independent and identically distributed across clients. Graph-structured data violates these assumptions in two fundamental ways: (i) observations (nodes and edges) are relationally coupled and (ii) the graph topology itself may be partitioned across organizational boundaries. In federated settings, this implies that informative dependencies can span clients, while each participant only observes a local subgraph. These properties complicate gradient aggregation, convergence, and representation learning. Despite these difficulties, early studies demonstrate that federated optimization can be extended to graph-based models, giving rise to the paradigm of Federated Graph Neural Networks (FedGNN).
The earliest line of work has been driven by privacy-sensitive applications such as recommender systems and financial networks. Wu et al. introduced FedGNN for privacy-preserving recommendation, training GNNs over decentralized user–item interaction graphs while applying local privacy mechanisms to gradients and structural signals [35]. Their results show that a federated GNN can achieve recommendation quality comparable to centralized baselines, while significantly reducing information leakage under realistic threat models. This finding is important because it establishes that graph-based representation learning remains viable under strict data-locality constraints, even when the global graph never materializes.
Subsequent work extends federated graph learning to spatio-temporal domains. Meng et al. propose a cross-node federated GNN for traffic forecasting, where each client manages a regional subgraph and the global model captures dependencies that span geographic boundaries [35]. Their framework improves forecasting accuracy relative to purely local models and introduces mechanisms for mitigating non-IID graph distributions. Complementary research addresses vertical graph partitioning, in which different clients hold disjoint feature sets or edge views for the same nodes. Chen et al. propose a vertically federated GNN (VFGNN) that enables privacy-preserving node classification when features, edges, and labels are distributed across parties [36]. These works collectively demonstrate that both horizontal and vertical graph partitioning can be accommodated within federated optimization, albeit with additional architectural and cryptographic complexity.
Our framework distinguishes itself by introducing a Graph Attention Network within a federated learning architecture tailored to multi-enterprise supply chains. Attention is particularly valuable in federated graph settings because local subgraphs differ in topology, density, and informational relevance. An attention-based aggregator can dynamically weight inter-node and inter-client dependencies, allowing the global model to adapt to heterogeneous structural contexts more flexibly than fixed-weight GCNs. Moreover, we explicitly integrate sustainability objectives into the federated GNN training process—an element largely absent from the existing FedGNN literature, which typically prioritizes predictive performance and privacy guarantees alone. While Abushaega et al. take an important step toward multi-objective federated graph learning for supply chains [15], their architecture does not exploit attention mechanisms and does not explicitly address the interpretability and cross-client structural asymmetries inherent in real-world networks.
As summarized in Table 1, prior work on supply chain sustainability primarily operates either within centralized graph learning frameworks [8,14,17,20,21] or federated learning settings without sustainability-aware aggregation [9,10,24,31]. Recent AI-driven decarbonization efforts [12,32,33] focus on predictive optimization but lack federated graph structures. Although Federated Graph Neural Networks have been proposed for privacy-preserving supply chain data sharing [9,31], they do not introduce explicit environmental bias within the attention mechanism.
The proposed framework uniquely integrates federated graph learning with sustainability-modulated attention. By embedding environmental signals directly into message-passing weights and introducing λ as an explicit governance parameter, the method enables controllable trade-offs between predictive performance and environmental alignment in decentralized supply chain networks.

3. Methodology and Experimental Setup

This section describes the experimental design used to evaluate the proposed sustainability-aware federated graph attention framework. The setup is deliberately constructed to reflect key structural and data-governance characteristics of real industrial supply chains, while remaining fully reproducible and analytically controlled.
The proposed model is trained through an iterative sustainability-aware federated optimization process that preserves data locality at each client while enabling coordinated global learning. Unlike conventional federated graph training, sustainability modulation is embedded directly within the local attention mechanism, and the server additionally aggregates privacy-preserving alignment diagnostics to monitor environmental bias during training. Communication proceeds through repeated rounds between the central server and distributed clients, where model parameters are exchanged but no raw graph data or labels are shared. Algorithm 1 presents the complete governance-aware federated training workflow.
Algorithm 1 Sustainability-Aware Federated Graph Attention Training
Input: Initial global parameters W_0; rounds T; clients k = 1, …, K, each holding a local subgraph G_k = (V_k, E_k) with node features and labels; edge emission attributes e_ij; sustainability control parameter λ (fixed or scheduled); local epochs E; aggregation weights w_k (e.g., proportional to |V_k| or to the number of local training nodes).
Output: Final global parameters W_T.
Initialize (Server):
The server initializes model parameters W_0, selects a sustainability policy λ (fixed λ = λ_0 or schedule λ_t), and broadcasts (W_0, λ_1) to all clients.
For each communication round t = 1, …, T:
(a)
Client-side sustainability-aware local training (in parallel for each client k):
Receive (W_{t−1}, λ_t).
Train the GAT on the local subgraph G_k for E epochs using sustainability-modulated attention.
For each incoming edge j → i, attention coefficients are reweighted by an emission penalty after the softmax:
α_{j→i} ← α_{j→i} · exp(−λ_t ẽ_{j→i}) / Σ_{m ∈ N_in(i)} α_{m→i} · exp(−λ_t ẽ_{m→i}),
where ẽ_{j→i} denotes normalized transport emissions and λ_t governs the sustainability strength.
Compute the local model update ΔW_k^t (or the updated weights W_k^t).
Compute local alignment diagnostics (no raw data shared), e.g.:
ρ_k^t = Spearman(ẽ_{j→i}, α_{j→i}) over local edges;
m_k^t = fraction of attention mass allocated to high-emission edges (top quantile).
Send (ΔW_k^t, ρ_k^t, m_k^t, |V_k|) to the server.
(b)
Server-side aggregation with sustainability monitoring:
Aggregate model updates:
W_t ← W_{t−1} + Σ_{k=1}^{K} w_k ΔW_k^t, where w_k = |V_k| / Σ_l |V_l|.
Aggregate diagnostics:
ρ̄_t = Σ_{k=1}^{K} w_k ρ_k^t,  m̄_t = Σ_{k=1}^{K} w_k m_k^t.
Governance step (optional): update λ according to a predefined policy (not tuned post hoc), e.g., a monotonic schedule or a target-alignment rule:
λ_{t+1} ← PolicyUpdate(λ_t, ρ̄_t, m̄_t).
(If the policy is fixed, set λ_{t+1} = λ_t.)
(c)
Broadcast:
The server broadcasts (W_t, λ_{t+1}) to all clients.
Return: final global model parameters W_T.
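The emission-penalty reweighting in step (a) can be sketched in a few lines. The following is an illustrative NumPy sketch, not the authors' implementation: the function name is ours, and we assume the attention coefficients arrive already softmax-normalized over the incoming edges of a node, as stated in Algorithm 1.

```python
import numpy as np

def emission_reweight(alpha, e_norm, lam):
    """Reweight softmax attention coefficients by an emission penalty.

    alpha  : (n,) attention over the incoming edges of node i (sums to 1)
    e_norm : (n,) normalized transport emissions for the same edges
    lam    : sustainability strength lambda_t (lam = 0 recovers standard GAT)
    """
    penalized = alpha * np.exp(-lam * e_norm)  # down-weight high-emission edges
    return penalized / penalized.sum()         # renormalize to a distribution

# Example: three incoming edges; the last one has the highest emissions.
alpha = np.array([0.5, 0.3, 0.2])
e_norm = np.array([0.1, 0.2, 0.9])
alpha_prime = emission_reweight(alpha, e_norm, lam=1.0)
```

With λ = 0 the exponential factors are all 1 and the original attention distribution is recovered, which is what makes λ a single, interpretable governance knob.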

3.1. Synthetic Supply Chain Simulation

We evaluate the proposed Federated GAT on a realistic synthetic multi-tier supply chain case study. The supply network (illustrated schematically in Figure 1) is modeled as a directed graph G = (V, E) with multiple echelons including suppliers, manufacturers, distribution centers, and retailers. Each node represents an entity (e.g., a factory, warehouse, or retail outlet) and each directed edge represents a supply relationship or material flow between entities. Nodes are assigned to tiers by operational role (upstream suppliers in Tier 1, intermediate manufacturers in Tier 2, distribution centers in Tier 3, and retailers in Tier 4), creating a layered structure analogous to a consumer goods supply chain.
To ensure structural transparency, we report descriptive statistics of the generated graphs. For the baseline configuration (N = 28 nodes, 41 edges), the mean in-degree and out-degree distributions, as well as transport-emissions statistics, are summarized within the main text. Client partition statistics further quantify intra- and inter-client edge cuts, clarifying the degree of structural fragmentation introduced by federated partitioning. These statistics demonstrate that the synthetic network preserves tiered supply chain structure while enabling the controlled evaluation of decentralized learning effects.
In the implementation, tier sizes are explicitly defined as 10 suppliers (S1–S10), 6 manufacturers (M1–M6), 4 distribution centers (D1–D4), and 8 retailers (R1–R8) for a total of N = 28 nodes. Directed edges are generated only between adjacent tiers (S→M, M→D, D→R), producing a multi-tier dependency structure that naturally yields converging flows (multiple suppliers feeding a manufacturer) and diverging flows (a distributor feeding multiple retailers). Concretely, the stochastic edge generator draws for each source node an out-degree:
k_s = max(1, round(𝒩(μ_k, 1))),
then connects to a uniformly sampled subset of downstream nodes. This construction yields sparse, tier-constrained connectivity consistent with hierarchical supply networks.
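The tier-constrained edge generator described above can be sketched as follows. This is a hedged reconstruction: the tier sizes and seed come from the text, but the out-degree mean `mu` is a single illustrative value standing in for the (unspecified) tier-specific means μ_k.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, mirroring SEED = 42

# Tier sizes from the text: 10 suppliers, 6 manufacturers,
# 4 distribution centers, 8 retailers (N = 28 nodes).
tiers = {1: [f"S{i}" for i in range(1, 11)],
         2: [f"M{i}" for i in range(1, 7)],
         3: [f"D{i}" for i in range(1, 5)],
         4: [f"R{i}" for i in range(1, 9)]}

def generate_edges(mu=2.0):
    """Stochastic, tier-constrained edge generation (illustrative sketch).

    For each source node, draw an out-degree k_s = max(1, round(N(mu, 1)))
    and connect to a uniformly sampled subset of the next tier (S->M, M->D, D->R).
    """
    edges = []
    for t in (1, 2, 3):
        downstream = tiers[t + 1]
        for src in tiers[t]:
            k = max(1, int(round(rng.normal(mu, 1.0))))
            k = min(k, len(downstream))  # cannot exceed the downstream tier size
            for dst in rng.choice(downstream, size=k, replace=False):
                edges.append((src, str(dst)))
    return edges
```

Because edges are drawn only between adjacent tiers, the construction cannot create skip-tier or intra-tier links, which is exactly the sparse hierarchical structure the text describes.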
To ensure that downstream tiers are not disconnected from upstream supply, we enforce a weak connectivity constraint, where for every node v in tiers 2–4, at least one inbound edge must exist. Formally, if
deg_in(v) = 0,
a random inbound edge (u, v) is added from the upstream tier. This prevents isolated nodes and ensures that message passing is well-defined for most nodes in the node-regression task.
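The connectivity-repair step can be sketched as a small post-processing pass. This is a self-contained illustrative sketch under our own naming; the upstream source for each repair edge is chosen uniformly, which the text implies but does not state explicitly.

```python
import random

def repair_connectivity(tiers, edges, seed=42):
    """Ensure every node in tiers 2-4 has at least one inbound edge.

    tiers : dict mapping tier index -> list of node names
    edges : list of (src, dst) directed edges between adjacent tiers
    Returns a new edge list with repair edges appended.
    """
    rng = random.Random(seed)
    edges = list(edges)
    has_inbound = {dst for _, dst in edges}
    for t in (2, 3, 4):
        for v in tiers[t]:
            if v not in has_inbound:          # deg_in(v) == 0
                u = rng.choice(tiers[t - 1])  # random upstream source
                edges.append((u, v))
                has_inbound.add(v)
    return edges
```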
Each node v V is assigned operational and sustainability proxy attributes drawn from tier-specific Gaussian distributions, namely capacity, CO2 intensity, and energy intensity. These distributions are explicitly encoded inducing systematic heterogeneity across tiers (e.g., manufacturers have higher mean carbon intensity than retailers) while maintaining stochastic variability under a fixed random seed, SEED = 42 . Each edge u , v E is parameterized by flow and distance (also positive truncated normal draws), and a deterministic transport-emissions proxy:
\mathrm{transport\_emissions}_{uv} = \mathrm{flow}_{uv} \cdot \mathrm{distance}_{uv} \cdot 10^{-3}.
This proxy is used consistently both for feature construction and for sustainability-aware attention modulation.
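A minimal sketch of this attribute assignment follows; the tier-specific means in `TIER_PARAMS` are illustrative assumptions, since the paper does not list the exact Gaussian parameters, but the emissions proxy matches the deterministic formula above:

```python
import random

random.seed(42)

def trunc_gauss(mu, sigma, lo=0.01):
    """Positive truncated normal draw (resample until strictly above lo)."""
    x = random.gauss(mu, sigma)
    while x <= lo:
        x = random.gauss(mu, sigma)
    return x

# Illustrative (capacity, CO2 intensity, energy intensity) means per tier;
# manufacturers get a higher mean carbon intensity than retailers, as stated
TIER_PARAMS = {
    "supplier":     (100.0, 0.4, 0.5),
    "manufacturer": (80.0, 0.8, 0.9),
    "distributor":  (120.0, 0.5, 0.6),
    "retailer":     (60.0, 0.3, 0.4),
}

def node_attrs(tier):
    cap_mu, co2_mu, en_mu = TIER_PARAMS[tier]
    return {"capacity": trunc_gauss(cap_mu, 10.0),
            "co2_intensity": trunc_gauss(co2_mu, 0.1),
            "energy_intensity": trunc_gauss(en_mu, 0.1)}

def edge_attrs():
    flow = trunc_gauss(50.0, 15.0)
    distance = trunc_gauss(300.0, 100.0)
    # Deterministic proxy: emissions = flow * distance * 1e-3
    return {"flow": flow, "distance": distance,
            "transport_emissions": flow * distance * 1e-3}

e = edge_attrs()
```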
To simulate federated data partitioning, we divided the global supply chain graph into subgraphs held by distinct client organizations. Figure 2 illustrates this partitioning: each company (or region) in the supply chain acts as a federated client that observes only its portion of the graph. In the implementation, 4 federated clients are created.
Table 2 quantifies the federated partitioning illustrated in Figure 2 for the baseline graph (N = 28). Clients contain between 6 and 8 nodes, indicating moderate size imbalance. Intra-client connectivity is sparse (0–3 edges per client), whereas the number of cross-client (cut) edges is substantially larger (16–19 per client). This confirms that most relational dependencies span across client boundaries. Consequently, each client observes only a highly fragmented local subgraph, and global supply chain dependencies cannot be directly reconstructed during local message passing. This validates the structural decentralization assumed in the federated learning setup.
Client assignment is performed independently within each tier using a seeded shuffle followed by round-robin allocation:
\mathrm{client}(v_i) = (i \bmod K) + 1.
This produces tier-balanced client memberships but non-identical local structures. Edges that cut across clients represent inter-company links (e.g., a supplier shipping to a manufacturer in a different organization)—these links exist in the overall graph but no single client can exploit them during local training. This setup reflects real-world data silos: each enterprise has visibility into its local operations and direct partners, but not the full end-to-end supply chain.
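The seeded shuffle followed by round-robin allocation can be sketched as follows, using the tier definitions from Section 3.1; because the round-robin index is deterministic within each tier, the resulting client sizes (6 to 8 nodes) are independent of the shuffle order:

```python
import random
from collections import Counter

random.seed(42)
K = 4  # number of federated clients

tiers = [[f"S{i}" for i in range(1, 11)],
         [f"M{i}" for i in range(1, 7)],
         [f"D{i}" for i in range(1, 5)],
         [f"R{i}" for i in range(1, 9)]]

client_of = {}
for tier in tiers:
    nodes = tier[:]
    random.shuffle(nodes)            # seeded shuffle within the tier
    for i, v in enumerate(nodes):
        client_of[v] = (i % K) + 1   # round-robin: client(v_i) = (i mod K) + 1

sizes = Counter(client_of.values())  # tier-balanced membership counts
```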
Critically, the federated training loop enforces strict locality: each client trains on its induced subgraph consisting only of nodes it owns and intra-client edges. Formally, client k observes
G_k = (V_k, E_k), \qquad E_k = \{(u, v) \in E : u \in V_k \wedge v \in V_k\}.
This is implemented by filtering the global edge list. Hence, cross-client dependencies exist globally but are missing locally, creating a non-IID, partial-observability regime.
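The strict-locality filter amounts to keeping only edges whose both endpoints belong to the same client; a toy two-client example makes the loss of cross-client dependencies explicit:

```python
def induced_subgraph(edges, client_of, k):
    """Client k keeps only the nodes it owns and strictly intra-client edges."""
    nodes = {v for v, c in client_of.items() if c == k}
    intra = [(u, v) for (u, v) in edges if u in nodes and v in nodes]
    return nodes, intra

# Toy example: one intra-client edge, one cross-client edge
edges = [("S1", "M1"), ("S2", "M1")]
client_of = {"S1": 1, "M1": 1, "S2": 2}

nodes1, intra1 = induced_subgraph(edges, client_of, 1)
nodes2, intra2 = induced_subgraph(edges, client_of, 2)
```

Client 1 retains ("S1", "M1"), while the cross-client link ("S2", "M1") exists globally but is invisible to both clients during local message passing, which is exactly the partial-observability regime described above.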

3.2. Data Generation and Sustainability Labeling

We generate synthetic operational data on the supply chain graph to train and evaluate the models. Each node is assigned a feature vector representing its operational state and sustainability profile, and each edge carries logistics attributes, including transport emissions. In the code, the node-level feature vector is explicitly defined as a 5-dimensional descriptor:
x_v = \left[\mathrm{capacity}_v,\ \mathrm{CO_2\_intensity}_v,\ \mathrm{energy\_intensity}_v,\ \mathrm{total\_in\_flow}_v,\ \mathrm{total\_in\_emis}_v\right] \in \mathbb{R}^5,
where the last two components summarize the upstream dependency and upstream logistics footprint:
\mathrm{total\_in\_flow}_v = \sum_{u:(u,v)\in E} \mathrm{flow}_{uv}, \qquad \mathrm{total\_in\_emis}_v = \sum_{u:(u,v)\in E} \mathrm{transport\_emissions}_{uv}.
This construction makes the prediction problem structurally dependent: the feature vector is not purely local, but contains aggregated information from incoming edges, which graph-based learners can exploit more naturally than independent baselines.
The feature matrix $X \in \mathbb{R}^{N \times 5}$ is standardized using dataset-level z-scoring,
X_{\mathrm{norm}} = \frac{X - \mu}{\sigma + 10^{-9}},
and converted into the tensor X t for training on CPU/GPU depending on availability.
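A plain-Python sketch of the 5-dimensional feature construction and the dataset-level z-scoring follows; the toy attribute values are illustrative, and the per-column statistics mirror the standardization formula above:

```python
import math

def node_features(v, nattrs, edges, eattrs):
    """5-dim descriptor: local attributes plus inbound-edge aggregates."""
    inbound = [e for e in edges if e[1] == v]
    return [nattrs[v]["capacity"],
            nattrs[v]["co2_intensity"],
            nattrs[v]["energy_intensity"],
            sum(eattrs[e]["flow"] for e in inbound),
            sum(eattrs[e]["transport_emissions"] for e in inbound)]

def zscore(X, eps=1e-9):
    """Dataset-level standardization, column by column: (x - mu) / (sd + eps)."""
    n, d = len(X), len(X[0])
    out = [row[:] for row in X]
    for j in range(d):
        col = [X[i][j] for i in range(n)]
        mu = sum(col) / n
        sd = math.sqrt(sum((x - mu) ** 2 for x in col) / n)
        for i in range(n):
            out[i][j] = (X[i][j] - mu) / (sd + eps)
    return out

# Toy example: manufacturer M1 with two inbound supplier edges
nattrs = {"M1": {"capacity": 80.0, "co2_intensity": 0.8, "energy_intensity": 0.9}}
eattrs = {("S1", "M1"): {"flow": 40.0, "transport_emissions": 12.0},
          ("S2", "M1"): {"flow": 60.0, "transport_emissions": 18.0}}
x = node_features("M1", nattrs, list(eattrs), eattrs)

Xn = zscore([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
```

The last two feature components aggregate over incoming edges, which is what makes the target structurally dependent on the graph, as noted above.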
Using this data, we formulate a synthetic target variable at each node called the “stress score.” This node-level stress score is designed as a composite outcome that increases under capacity scarcity, upstream dependency, and carbon-intensive logistics. In the implementation, the pre-normalized stress score y ˜ v is constructed as a weighted combination of interpretable drivers:
\tilde{y}_v = 0.45 \cdot \frac{1}{\mathrm{capacity}_v + \varepsilon} + 0.20 \cdot \frac{\mathrm{total\_in\_flow}_v}{\max_{u \in V} \mathrm{total\_in\_flow}_u + \varepsilon} + 0.25 \cdot \frac{\mathrm{total\_in\_emis}_v}{\max_{u \in V} \mathrm{total\_in\_emis}_u + \varepsilon} + 0.10 \cdot \frac{\mathrm{CO_2\_intensity}_v}{\max_{u \in V} \mathrm{CO_2\_intensity}_u + \varepsilon} + 0.05 \cdot \frac{\mathrm{energy\_intensity}_v}{\max_{u \in V} \mathrm{energy\_intensity}_u + \varepsilon},
with ε used only for numerical stability. The value is min–max scaled to [0, 1] and perturbed with Gaussian noise of standard deviation σ = 0.05:
y_v = \mathrm{clip}\left(\mathrm{minmax}(\tilde{y}_v) + \eta,\ 0,\ 1\right), \qquad \eta \sim \mathcal{N}(0, \sigma^2).
By construction, this target is influenced by both operational dynamics (capacity, inbound dependency) and environmental factors (carbon/energy intensity and transport emissions), aligning with the objective of sustainability-aware modeling.
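The label construction can be sketched as follows. Interpreting the unsubscripted denominators in the stress formula as dataset-level maxima is our reading of the normalization (consistent with the min–max scaling used elsewhere) and is flagged in the comments:

```python
import random

random.seed(42)
EPS = 1e-9

def raw_stress(v, feats):
    """Weighted combination of the five drivers (pre-normalization).
    Assumption: the dataset-level normalizers are the per-driver maxima."""
    f = feats[v]
    mx = lambda key: max(feats[u][key] for u in feats)
    return (0.45 * 1.0 / (f["capacity"] + EPS)
            + 0.20 * f["total_in_flow"] / (mx("total_in_flow") + EPS)
            + 0.25 * f["total_in_emis"] / (mx("total_in_emis") + EPS)
            + 0.10 * f["co2_intensity"] / (mx("co2_intensity") + EPS)
            + 0.05 * f["energy_intensity"] / (mx("energy_intensity") + EPS))

def labels(feats, noise_std=0.05):
    """Min-max scale the raw stress to [0, 1], add N(0, 0.05^2) noise, clip."""
    y = {v: raw_stress(v, feats) for v in feats}
    lo, hi = min(y.values()), max(y.values())
    return {v: min(1.0, max(0.0, (y[v] - lo) / (hi - lo + EPS)
                            + random.gauss(0.0, noise_std)))
            for v in y}

# Toy two-node example with hypothetical driver values
feats = {"A": {"capacity": 50.0, "total_in_flow": 10.0, "total_in_emis": 3.0,
               "co2_intensity": 0.4, "energy_intensity": 0.5},
         "B": {"capacity": 120.0, "total_in_flow": 40.0, "total_in_emis": 9.0,
               "co2_intensity": 0.8, "energy_intensity": 0.6}}
y = labels(feats)
```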
We split the dataset into training, validation, and test sets by randomly partitioning the node set (not multiple temporal snapshots). Specifically, a random permutation of node indices is drawn and masks are created using train ratio = 0.7 and validation ratio = 0.15, with the test set comprising the remaining nodes. This yields a transductive node-regression setting where the graph structure and all nodes are present during training, but only training nodes contribute to the supervised loss, while validation and test nodes are used strictly for evaluation.
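A minimal sketch of this 70/15/15 transductive node split, using a seeded permutation of node indices:

```python
import random

random.seed(42)

def split_masks(n, train_ratio=0.7, val_ratio=0.15):
    """Random node-level split; all nodes stay in the graph, but only
    training nodes contribute to the supervised loss."""
    perm = list(range(n))
    random.shuffle(perm)
    n_tr = int(train_ratio * n)
    n_va = int(val_ratio * n)
    train = set(perm[:n_tr])
    val = set(perm[n_tr:n_tr + n_va])
    test = set(perm[n_tr + n_va:])
    return train, val, test

tr, va, te = split_masks(28)   # baseline graph size N = 28
```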

3.3. Model Configurations and Training Procedure

We compare three model configurations in our experiments, corresponding to baseline approaches and our proposed method.
  • Centralized GAT: A Graph Attention Network model that has access to the entire supply chain graph during training. It represents an ideal pooled-data scenario and provides an upper-bound reference under full observability. The implemented GAT uses a single attention layer with 32 hidden dimensions and 2 heads. Message passing is performed over incoming neighborhoods. For head h , node projections are computed as
h_v^h = W^h x_v,
and attention logits for an incoming edge $(u, v)$ are computed using the concatenated projected states and the edge features $e_{uv} \in \mathbb{R}^2$ (flow and transport emissions, min–max normalized over global edges):
e_{uv}^h = \mathrm{LeakyReLU}\left(a_h^{\top}\left[h_v^h \,\|\, h_u^h \,\|\, e_{uv}\right]\right).
Normalization is performed by softmax over $u \in \mathcal{N}_{\mathrm{in}}(v)$:
\alpha_{uv}^h = \frac{\exp(e_{uv}^h)}{\sum_{u' \in \mathcal{N}_{\mathrm{in}}(v)} \exp(e_{u'v}^h)}.
The head outputs are aggregated by weighted neighbor sums and concatenated before a linear readout.
2. Federated GAT (FedAvg): A standard federated version of the GAT model trained with synchronous Federated Averaging. Each client trains on its local induced subgraph $G_k = (V_k, E_k)$ for 10 local epochs over 40 rounds. After each round, client models are aggregated using node-count weights:
\theta^{t+1} = \sum_{k=1}^{K} \frac{|V_k|}{\sum_{j=1}^{K} |V_j|}\, \theta_k^{t}.
This baseline isolates the impact of partial observability and non-IID graph partitioning without additional sustainability mechanisms.
3. Federated Sustainability-Aware GAT: Our proposed model, which extends the Federated GAT with a sustainability-aware attention mechanism. In the implementation, sustainability is incorporated directly into the attention coefficients. After computing $\alpha_{uv}^h$ by softmax, the model applies an exponential penalty based on the normalized transport emissions $\mathrm{emis\_norm}_{uv} \in [0, 1]$ (the second edge-feature component),
\alpha_{uv}^h \leftarrow \alpha_{uv}^h \cdot \exp\left(-\lambda \cdot \mathrm{emis\_norm}_{uv}\right),
followed by re-normalization over the incoming neighborhood,
\alpha_{uv}^h \leftarrow \frac{\alpha_{uv}^h}{\sum_{u' \in \mathcal{N}_{\mathrm{in}}(v)} \alpha_{u'v}^h + 10^{-9}}.
The supervised objective is defined as the mean squared error for node stress prediction, i.e.,
\mathcal{L}_k = \frac{1}{|T_k|} \sum_{v \in T_k} \left(\hat{y}_v - y_v\right)^2,
where $T_k \subseteq V_k$ denotes the client's training nodes, defined by the global train mask restricted to $V_k$.
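To make the attention pipeline concrete, the following single-head, plain-Python sketch computes the edge-feature-aware softmax attention and then applies the emission penalty and renormalization of the sustainability-aware variant (Equations (15) and (16)). The projected states, edge features, and attention vector are hypothetical toy values, not trained parameters:

```python
import math

def leaky_relu(x, slope=0.2):
    return x if x >= 0 else slope * x

def attention(v, edges, h, e_feat, a):
    """Softmax-normalized attention over the incoming neighborhood of v.
    h[u]  : projected state W^h x_u (list of floats)
    e_feat: 2-dim normalized edge features [flow, emissions] per edge
    a     : attention vector matching the concatenation [h_v ; h_u ; e_uv]"""
    logits = {}
    for (u, w) in edges:
        if w != v:
            continue
        z = h[v] + h[u] + e_feat[(u, v)]          # list concatenation
        logits[u] = leaky_relu(sum(ai * zi for ai, zi in zip(a, z)))
    m = max(logits.values())                      # numerically stable softmax
    exps = {u: math.exp(l - m) for u, l in logits.items()}
    s = sum(exps.values())
    return {u: x / s for u, x in exps.items()}

def modulate(alpha, emis_norm, lam=3.0, eps=1e-9):
    """Emission penalty exp(-lam * emis) followed by renormalization."""
    mod = {u: a * math.exp(-lam * emis_norm[u]) for u, a in alpha.items()}
    s = sum(mod.values()) + eps
    return {u: x / s for u, x in mod.items()}

# Toy neighborhood: two inbound edges into manufacturer "M1"
edges = [("S1", "M1"), ("S2", "M1")]
h = {"M1": [0.5, -0.1], "S1": [1.0, 0.2], "S2": [-0.3, 0.4]}
e_feat = {("S1", "M1"): [0.7, 0.2], ("S2", "M1"): [0.3, 0.9]}
a = [0.1, 0.3, 0.2, -0.1, 0.4, 0.25]

alpha = attention("M1", edges, h, e_feat, a)
emis = {u: e_feat[(u, "M1")][1] for u in alpha}   # second edge feature
alpha_s = modulate(alpha, emis, lam=3.0)          # lambda = 3, as in the paper
```

After modulation, the high-emission link ("S2", "M1") loses attention mass to the low-emission link, while the coefficients still sum to one over the incoming neighborhood: the soft inductive bias described above.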
All models were implemented in Python 3.11.9 using PyTorch 2.9.0 and trained with the Adam optimizer using learning rate LR $= 10^{-3}$ and weight decay WD $= 10^{-4}$. Centralized models are trained for 250 epochs. Federated models are trained for 40 rounds with 10 local epochs per round, yielding 400 local epochs per client in total, while global validation RMSE is monitored each round by evaluating the aggregated model on the full graph and the global validation mask.
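The node-count-weighted FedAvg aggregation performed at the end of each round can be sketched independently of the model architecture; parameters are represented here as plain dictionaries of float lists for illustration:

```python
def fedavg(client_params, client_sizes):
    """Node-count-weighted average of client parameter dicts:
    theta^{t+1} = sum_k (|V_k| / sum_j |V_j|) * theta_k^t."""
    total = sum(client_sizes)
    agg = {}
    for name in client_params[0]:
        agg[name] = [
            sum(w * p[name][i] for w, p in zip(client_sizes, client_params)) / total
            for i in range(len(client_params[0][name]))
        ]
    return agg

# Two clients with |V_1| = 6 and |V_2| = 8 nodes: weights 6/14 and 8/14
p1 = {"w": [1.0, 0.0]}
p2 = {"w": [0.0, 7.0]}
g = fedavg([p1, p2], [6, 8])
```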

3.4. Evaluation Metrics

We evaluate model performance on the held-out test nodes using standard regression metrics, root mean squared error (RMSE) and mean absolute error (MAE), between the predicted and true stress scores. For a test set $T \subseteq V$, these are:
\mathrm{RMSE} = \sqrt{\frac{1}{|T|} \sum_{v \in T} (\hat{y}_v - y_v)^2}, \qquad \mathrm{MAE} = \frac{1}{|T|} \sum_{v \in T} |\hat{y}_v - y_v|.
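Both metrics reduce to a few lines of code; a minimal sketch on hypothetical predictions:

```python
import math

def rmse(y_pred, y_true):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(y_pred, y_true))
                     / len(y_true))

def mae(y_pred, y_true):
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

# Hypothetical stress predictions on three test nodes
y_hat = [0.2, 0.5, 0.9]
y = [0.0, 0.5, 0.5]
```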
In addition to the prediction error, a key evaluation in our study is the attention–emission relationship. Since the GAT explicitly outputs per-edge attention values, we can quantify sustainability alignment by pairing each learned attention coefficient with its corresponding normalized edge emission. Concretely, the head-averaged attention coefficient for each edge is computed as:
\bar{\alpha}_{uv} = \frac{1}{H} \sum_{h=1}^{H} \alpha_{uv}^h.
With the corresponding emissions given by $\mathrm{emis\_norm}_{uv}$, we compute rank-based associations (e.g., Spearman correlation) between $\bar{\alpha}_{uv}$ and $\mathrm{emis\_norm}_{uv}$, together with complementary diagnostics such as the fraction of attention mass assigned to the highest-emission quantile of incoming edges. These measures are defined here at the setup level; empirical values and comparative analyses are reported in the Results section.
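Both diagnostics can be computed directly from the per-edge pairs; a stdlib-only sketch with a hypothetical quantile threshold of q = 0.75 for the "highest-emission" cut (the paper's exact quantile is not specified):

```python
def _ranks(xs):
    """Average ranks, with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def top_quantile_mass(alpha, emis, q=0.75):
    """Fraction of attention mass on the highest-emission quantile of edges."""
    thresh = sorted(emis)[int(q * len(emis))]
    hot = sum(a for a, e in zip(alpha, emis) if e >= thresh)
    return hot / sum(alpha)

# Hypothetical edge-level pairs: less attention on high-emission edges
emis = [0.1, 0.2, 0.8, 0.9]
alpha = [0.4, 0.3, 0.2, 0.1]
```

A strongly negative Spearman value and a small top-quantile mass both indicate sustainability-aligned attention, matching the diagnostics reported in the λ-ablation.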

4. Results

4.1. Predictive Performance Comparison

Table 3 reports the predictive performance of the evaluated graph-based models on the held-out test node subset T V , using root mean squared error (RMSE) and mean absolute error (MAE). Reported values correspond to the mean and standard deviation over five independent random seeds, with model selection performed using validation RMSE. For centralized training, validation performance is tracked per epoch, while for federated training it is tracked per communication round.
The centralized GAT achieves the lowest prediction error across both metrics (RMSE = 0.2133 ± 0.1141, MAE = 0.1873 ± 0.1063), reflecting its unrestricted access to the full supply chain graph. In this setting, the model can exploit complete neighborhood structure and edge attributes across all tiers. Given that the synthetic node-level stress signal depends on both node-local features and aggregated upstream quantities (e.g., total inbound flow and transport emissions), centralized message passing enables coherent propagation of relational information throughout the network.
The predictive advantage of centralized graph learning can be attributed to two complementary architectural properties. First, relational aggregation over incoming neighborhoods, N i n v , smooths node representations by incorporating upstream information, mitigating label noise through structural correlation. Second, edge-aware attention weighting allows differential emphasis on upstream connections based on both node embeddings and normalized edge attributes, enabling the model to identify structurally influential pathways for downstream stress propagation.
Both federated graph attention models exhibit higher prediction error than the centralized baseline, highlighting the intrinsic challenges of decentralized graph learning under strict locality constraints. The standard Federated GAT (FedAvg) attains RMSE = 0.2606 ± 0.1124 and MAE = 0.2235 ± 0.1059. In this setting, each client trains exclusively on its induced subgraph and cross-client edges are unavailable during message passing. Consequently, global relational dependencies spanning multiple clients cannot be directly exploited during local updates. The global model obtained via weighted FedAvg reconciles fragmented client views only indirectly through parameter aggregation, leading to predictable performance degradation relative to centralized training.
The Federated Sustainability-Aware GAT achieves RMSE = 0.2593 ± 0.1131 and MAE = 0.2195 ± 0.1075, statistically indistinguishable from the standard federated baseline. A paired sign test on RMSE across seeds yields p = 1.000 , indicating no statistically significant difference between the two federated variants. Similar behavior is observed for MAE. These results demonstrate that embedding sustainability-aware modulation directly within the attention mechanism does not degrade federated predictive performance.
Importantly, sustainability is not introduced via an auxiliary loss term but is embedded directly into the attention mechanism through an emission-aware bias applied after softmax normalization (Equations (15) and (16)). With λ = 3 , high-emission transport edges are systematically down-weighted during message passing. While this mechanism imposes an explicit inductive bias, it preserves predictive accuracy relative to conventional federated graph learning.
From an applied perspective, the results highlight a structured trade-off landscape. Centralized graph learning represents a best-case scenario for predictive accuracy but is often infeasible due to data-sharing and privacy constraints. Standard federated learning preserves decentralization but incurs performance loss when global relational dependencies are critical. The Sustainability-Aware Federated GAT occupies an intermediate position: it remains fully decentralized while embedding environmental preferences directly into relational inference, achieving comparable predictive performance alongside structured sustainability alignment examined in the subsequent λ-ablation analysis.
Before examining training dynamics and sustainability–accuracy trade-offs, we assess whether the sustainability-aware formulation introduces any circularity or label-dependent bias.

4.2. Construct Validity and Label Sensitivity Analysis

A potential methodological concern in synthetic experimental settings is circularity: if sustainability-related variables contribute directly to the target label, a sustainability-aware model may appear to perform well simply because its inductive bias aligns with the label construction. In such a case, improvements in predictive performance could reflect label leakage rather than genuine modeling advantages. To address this concern rigorously, we conduct a structured label sensitivity analysis designed to decouple sustainability-aware attention from the specific stress formulation used in the baseline experiments.
Specifically, we evaluate five distinct label-generation regimes by systematically varying the contribution of sustainability-related components in the synthetic stress definition:
  • Full sustainability weighting: Original configuration.
  • Reduced sustainability weighting: Sustainability coefficients scaled by 0.5.
  • No sustainability contribution: All sustainability-related terms removed.
  • Emissions-only contribution: Only transport emissions included.
  • Node-level sustainability only: Only node carbon intensity and energy intensity included.
For each regime, the complete multi-seed federated evaluation protocol is repeated. This ensures that any observed behavior reflects model robustness rather than incidental alignment between attention bias and label structure.
The most critical test is the No sustainability contribution regime, where all sustainability variables are removed from the stress definition. In this setting, the standard Federated GAT achieves RMSE = 0.2758, while the Sustainability-Aware Federated GAT achieves RMSE = 0.2740 (Δ = −0.00177). The negligible difference indicates that the sustainability-aware attention mechanism does not rely on label-specific environmental components for predictive performance. Rather, its inductive bias remains stable even when sustainability plays no role in the target variable.
Across all five regimes, predictive differences between the two federated variants remain statistically insignificant. A paired sign test on RMSE across seeds yields p = 1.0 in each configuration, and similar patterns are observed for MAE. These results demonstrate that the sustainability-aware formulation does not artificially inflate performance through circular label construction.
Importantly, while predictive accuracy remains statistically comparable, the sustainability-aware model continues to exhibit systematically stronger environmental alignment in attention weights (as shown in the λ-ablation analysis). This confirms that the model’s environmental behavior arises from structured attention modulation rather than from overfitting to label components.
Taken together, the label sensitivity study provides strong evidence of construct validity. The sustainability-aware attention mechanism functions as an inductive bias embedded within decentralized message passing, not as a proxy for explicit label encoding. Consequently, concerns of circular performance inflation are empirically mitigated.

4.3. Training Dynamics and Convergence

Figure 3 presents the validation RMSE trajectories of the centralized and federated graph attention models, averaged over multiple random seeds, with shaded regions indicating one standard deviation. To ensure comparability across training paradigms, all curves are reported over a unified horizontal axis of 40 evaluation steps. For federated models, each step corresponds to one communication round, while for centralized training the validation trajectory is resampled from the full 250-epoch optimization history. The horizontal axis therefore represents aligned reporting checkpoints rather than equivalent optimization time.
The centralized GAT exhibits smooth and stable convergence, with a consistent reduction in validation RMSE and comparatively narrow variability across seeds. This behavior reflects the advantages of centralized graph learning, where full access to the global supply chain topology enables uninterrupted message passing across all tiers. At each optimization step, gradients are computed using complete neighborhood information and edge attributes, yielding low-variance updates and reliable convergence.
Both federated graph attention models converge more slowly and display wider uncertainty bands. This behavior is characteristic of decentralized graph learning under non-identically distributed (non-IID) subgraph partitions. Each client optimizes a local objective defined on its induced subgraph $G_k = (V_k, E_k)$, where neighborhood structure, edge-attribute distributions, and structural roles within the supply chain differ across clients. These heterogeneous local optima are periodically reconciled through weighted parameter aggregation, which can introduce non-smooth global parameter updates and contributes to the increased variance observed in federated training.
The Sustainability-Aware Federated GAT, by construction, reshapes the message-passing process through an emission-aware attention bias that penalizes high-emission transport edges prior to renormalization. Consequently, information propagation is progressively concentrated on lower-emission pathways, leading to a more selective and structured flow of information during training. From an optimization standpoint, this mechanism introduces a controlled inductive bias that affects convergence speed but enhances consistency in how sustainability considerations are incorporated into the learned representations. In federated settings, where each client observes only a fragment of the global graph, unconstrained attention can overemphasize locally predictive but environmentally undesirable edges. The sustainability-aware formulation counteracts this tendency by steering optimization toward solutions that balance predictive performance with environmental alignment.
The modest elevation in validation RMSE relative to the standard federated baseline should therefore be interpreted as the explicit cost of embedding sustainability constraints directly into the inference pathway, rather than as evidence of instability or inefficient training. Notably, the overlap of uncertainty bands in Figure 3 indicates that the sustainability-aware model remains stable across random initializations and does not exhibit pathological convergence behavior.
Overall, Figure 3 demonstrates that sustainability-aware federated graph learning converges reliably under strict locality constraints, while embedding environmental priorities directly into the message-passing mechanism. This positions the proposed model as a principled extension of federated graph learning for sustainability-critical supply chain applications, where predictive accuracy must be balanced against environmentally informed decision-making.

4.4. Sustainability-Aware Attention Patterns

A key objective of the proposed Sustainability-Aware Federated GAT is to reduce reliance on carbon-intensive transport links during message passing. This behavior can be directly examined by analyzing the learned attention coefficients extracted during evaluation. For each directed edge $(u, v)$, attention weights are averaged across heads to obtain a single scalar $\bar{\alpha}_{uv}$.
Figure 4 shows the relationship between normalized transport emissions and mean attention weights for the standard Federated GAT ( λ = 0 ) and the sustainability-aware variant ( λ = 3 ). In the unconstrained model, attention weights are widely dispersed across the full emissions range, with no systematic dependence on emissions. Several high-emission edges receive moderate or high attention, reflecting their predictive utility under the synthetic stress generation mechanism, which explicitly incorporates inbound emissions.
In contrast, the sustainability-aware model exhibits a clear shift in attention allocation. High-emission edges are systematically down-weighted, while low- and medium-emission edges retain greater attention mass. This pattern is induced by the multiplicative emissions penalty applied within the attention mechanism, followed by neighborhood renormalization. The resulting behavior reflects a soft inductive bias: carbon-intensive links are discouraged but not entirely suppressed, allowing predictive relevance to partially offset the sustainability penalty when necessary.
From an interpretability standpoint, the learned attention distributions provide graph-native explanations of information flow. The observed reduction in attention assigned to high-emission edges confirms that sustainability considerations are embedded directly into the inference pathway, yielding representations that balance predictive performance with environmentally informed decision-making.

4.5. Ablation Study: Impact of Sustainability Weight (λ)

To systematically assess how the strength of sustainability bias influences the behavior of the proposed attention mechanism, we conduct a λ-ablation study over a predefined and fixed grid,
\lambda \in \{0,\ 0.25,\ 0.5,\ 1,\ 2,\ 3,\ 5,\ 8\},
with all configurations evaluated across multiple random seeds. The λ values are specified a priori and are not tuned post hoc, ensuring a transparent and reproducible evaluation protocol. While predictive performance is summarized separately, this subsection focuses specifically on how sustainability alignment evolves as a function of λ, isolating the effect of the attention penalty on message-passing behavior. Figure 5 reports two complementary sustainability alignment diagnostics as functions of λ, shown as mean ± standard deviation over seeds:
(i) the Spearman rank correlation between normalized transport emissions and learned attention weights; and
(ii) the fraction of incoming attention mass allocated to high-emission edges (top emission quantile).
Both metrics exhibit smooth and monotonic trends as λ increases. The Spearman correlation becomes progressively more negative, indicating that attention weights are increasingly de-emphasized on high-emission edges. In parallel, the share of attention mass assigned to carbon-intensive links decreases steadily. These consistent patterns confirm that the proposed exponential attention penalty induces a systematic and stable reorientation of message passing away from high-emission transport links, rather than producing noisy or irregular effects.
Notably, the variability across random seeds decreases at higher λ values, suggesting that stronger sustainability bias leads to more reproducible alignment behavior under federated training conditions. This indicates that the mechanism functions as a stable inductive bias, rather than introducing additional instability into the learning process.
Figure 6 summarizes the ablation results by plotting predictive error against sustainability alignment, yielding a trade-off frontier across λ values. Each point corresponds to a distinct operating regime of the Sustainability-Aware Federated GAT, with sustainability alignment measured as the negative Spearman correlation between emissions and attention (higher values indicate stronger de-emphasis of high-emission edges).
The resulting frontier reveals a clear and structured trade-off. As λ increases, sustainability alignment improves monotonically, while predictive error increases gradually. Importantly, the degradation in accuracy is smooth rather than abrupt, indicating that the attention penalty suppresses high-emission links in a soft and continuous manner rather than eliminating large portions of the graph structure.
Intermediate λ values (approximately λ ≈ 2–3) occupy favorable operating regimes near the knee of the trade-off curve. In this range, substantial reductions in emission–attention coupling are achieved while predictive degradation remains limited, representing practical compromises between environmental alignment and predictive fidelity.
Overall, this analysis reframes λ not as a hyperparameter to be optimized solely for accuracy, but as a policy-relevant control variable. Adjusting λ directly governs how strongly sustainability considerations shape the model’s inference pathway, enabling controlled, interpretable, and reproducible trade-offs in sustainability-critical federated supply chain settings.

4.6. Scalability and Communication Analysis

To evaluate scalability, we extend the experiments to larger synthetic supply chain graphs (N ∈ {28, 100, 500}) and vary the number of federated clients (K ∈ {2, 4, 8}). Results for representative configurations are shown in Table 4.
Runtime increases with graph size, as expected from attention-based aggregation. The sustainability-aware model introduces moderate computational overhead (≈15–20%) but remains stable at larger scales.
Communication overhead is determined by the model size (517 parameters; 2068 bytes in float32). Per-round communication scales linearly with K: 8272 bytes (K = 2), 16,544 bytes (K = 4), and 33,088 bytes (K = 8). Across 40 rounds, total communication remains at approximately 1.32 MB in the largest configuration, and the sustainability-aware formulation introduces no additional transmission cost.
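The reported figures are consistent with each of the K clients both downloading and uploading the full float32 model each round, an interpretation we flag as an assumption in the sketch below:

```python
PARAMS = 517                 # global model size reported in the paper
BYTES = PARAMS * 4           # float32: 4 bytes per parameter

def per_round_bytes(k):
    """Assumption: each round, every client downloads and uploads the
    full model, giving 2 * K * model_size bytes per round."""
    return 2 * k * BYTES

total_40_rounds_k8 = 40 * per_round_bytes(8)   # worst case in the paper
```

Evaluating this reproduces the reported table values (8272, 16,544, and 33,088 bytes per round) and a 40-round worst-case total of 1,323,520 bytes, i.e., about 1.32 MB.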

5. Discussion

This study examined whether sustainability-aware relational learning can be embedded within a federated graph learning framework without compromising predictive reliability under strict data-locality constraints. The findings yield four principal insights.
First, the performance hierarchy confirms expected structural properties of decentralized graph learning. The centralized GAT provides an upper-bound benchmark (RMSE = 0.2133 ± 0.1141), reflecting full access to global topology. Under federated optimization, performance degrades to RMSE = 0.2606 ± 0.1124, quantifying the structural cost of decentralization and cross-client edge fragmentation. This degradation is consistent with partial observability rather than model instability.
Second, embedding sustainability directly into the attention mechanism does not degrade predictive performance relative to the standard federated baseline. The Sustainability-Aware Federated GAT achieves RMSE = 0.2593 ± 0.1131, and a paired sign test across multiple random seeds yields p = 1.000. This indicates no statistically significant predictive deterioration due to the sustainability-aware modulation.
Third, structured label-sensitivity analysis mitigates concerns of circularity. When sustainability components are entirely removed from the synthetic stress definition (“no_sust” regime), the sustainability-aware model remains statistically indistinguishable from the baseline (ΔRMSE ≈ −0.0018). This confirms that environmental modulation functions as a structural inductive bias within message passing rather than exploiting label construction.
Fourth, λ-ablation experiments demonstrate a smooth and controllable trade-off between predictive accuracy and environmental alignment. As λ increases, attention allocation to high-emission links systematically decreases while prediction error rises only moderately. This monotonic behavior suggests that sustainability modulation acts as a stable structural regularizer. Interpreting λ as a governance parameter enables the mapping of different parameter intervals to distinct regulatory or corporate sustainability strategies, strengthening managerial relevance.
Scalability experiments further demonstrate predictable computational behavior. Increasing graph size from 28 nodes to 500 nodes increases federated training time from 2.64 s to 49.52 s (K = 2), while sustainability modulation introduces only modest additional overhead and no extra communication complexity. These results indicate computational feasibility for medium-scale decentralized networks.
A dedicated methodological clarification is necessary regarding the synthetic nature of the benchmark. The supply chain graphs and stress signals are procedurally generated to ensure controlled experimentation, reproducibility, and structured sensitivity analysis. The synthetic construction preserves the tiered structure, cross-client fragmentation, and heterogeneous degree patterns, allowing isolation of decentralization and governance effects. However, the present study should be interpreted as a controlled methodological validation rather than a fully empirical industrial deployment. Real-world supply chains exhibit dynamic topology, incomplete information, contractual asymmetries, and regulatory heterogeneity that extend beyond the current benchmark. Empirical validation on operational multi-enterprise datasets remains a critical direction for future work.
Overall, the findings demonstrate that sustainability-aware federated graph learning is statistically robust, structurally interpretable, and computationally scalable under controlled conditions.

6. Conclusions

This paper introduced a sustainability-aware federated graph attention framework for decentralized supply chain process modeling. The proposed method integrates Graph Attention Networks with federated optimization and embeds transport-related environmental signals directly into the message-passing mechanism through an emission-weighted attention modulation governed by parameter λ.
Multi-seed evaluation confirms that federated learning incurs a predictable decentralization cost relative to centralized training, while sustainability-aware modulation preserves predictive performance without statistically significant degradation. Label-sensitivity analysis further eliminates concerns of circularity, confirming that environmental bias operates as an inductive structural constraint rather than as label-aligned leakage.
The λ-ablation study establishes a controllable accuracy–sustainability frontier, positioning λ as an explicit governance-relevant control parameter. Scalability experiments demonstrate predictable computational scaling and limited additional overhead due to sustainability modulation.
The study should be understood as a controlled methodological validation conducted on a structurally realistic but synthetic supply chain benchmark. Its primary contribution lies in demonstrating the feasibility, statistical robustness, interpretability, and scalability of sustainability-modulated federated graph learning under analytically transparent conditions. Future work should extend this framework to dynamic real-world datasets, incorporate secure aggregation mechanisms, and evaluate multi-objective environmental indicators in operational multi-enterprise settings.
By unifying decentralization, relational modeling, and sustainability-aware optimization within a single architecture, this work provides a principled foundation for privacy-preserving and environmentally aligned AI in supply chain networks.

Author Contributions

V.A.: conceptualization, data curation, formal analysis, investigation, methodology, resources, software, validation, visualization, writing—original draft, and writing—review and editing. M.D.: conceptualization, investigation, methodology, project administration, resources, supervision, validation, writing—original draft, and writing—review and editing. P.T.: supervision, validation, and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

During the preparation of this manuscript, the authors used Grammarly (Grammarly Inc., current online edition) for language editing and grammar refinement. The authors reviewed and approved all suggested modifications and take full responsibility for the final content of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CPN: Colored Petri Net(s)
CPU: Central Processing Unit
CO2: Carbon Dioxide
FedAvg: Federated Averaging
FedGNN: Federated Graph Neural Network
FL: Federated Learning
GAT: Graph Attention Network
GCN: Graph Convolutional Network
GDPR: General Data Protection Regulation
GNN: Graph Neural Network
GPU: Graphics Processing Unit
ICT: Information and Communication Technology
IID: Independent and Identically Distributed
IoT: Internet of Things
KPI: Key Performance Indicator
LR: Learning Rate
MAE: Mean Absolute Error
MSE: Mean Squared Error
Non-IID: Non-Independent and Identically Distributed
PyTorch: Python-based deep learning framework
ReLU: Rectified Linear Unit
RMSE: Root Mean Squared Error
VFGNN: Vertically Federated Graph Neural Network
WD: Weight Decay

References

  1. Drakaki, M.; Tzionas, P. Modelling and Performance Evaluation of an Agent-Based Warehouse Dynamic Resource Allocation Using Colored Petri Nets. Int. J. Comput. Integr. Manuf. 2016, 29, 736–753. [Google Scholar] [CrossRef]
  2. Drakaki, M.; Tzionas, P. A colored petri net-based modelling method for supply chain inventory management. Simulation 2021, 98, 257–271. [Google Scholar] [CrossRef]
  3. Drakaki, M.; Tzionas, P. Investigating the Impact of Inventory Inaccuracy on the Bullwhip Effect in RFID-Enabled Supply Chains Using Colored Petri Nets. J. Model. Manag. 2019, 14, 360–384. [Google Scholar] [CrossRef]
  4. Drakaki, M.; Tzionas, P. Manufacturing Scheduling Using Colored Petri Nets and Reinforcement Learning. Appl. Sci. 2017, 7, 136. [Google Scholar] [CrossRef]
  5. Drakaki, M.; Gören, H.; Tzionas, P. A Multi-Agent Based Decision Framework for Sustainable Supplier Selection, Order Allocation and Routing Problem. In Proceedings of the 5th International Conference on Vehicle Technology and Intelligent Transport Systems, Crete, Greece, 3–5 May 2019; SCITEPRESS—Science and Technology Publications: Crete, Greece, 2019; pp. 621–628. [Google Scholar] [CrossRef]
  6. Wasi, A.T.; Islam, M.D.; Akib, A.R.; Bappy, M.M. SCG Dataset from Graph Neural Networks in Supply Chain Analytics and Optimization: Concepts, Perspectives, Dataset and Benchmarks. Zenodo 2024. [Google Scholar] [CrossRef]
  7. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. arXiv 2018. [Google Scholar] [CrossRef]
  8. Kayalvizhi, S. Graph Neural Network-Based Predictive Modelling for Enhanced Supply Chain Resilience against Multi-Modal Disruptions. J. Inf. Syst. Eng. Manag. 2025, 10, 628–642. [Google Scholar] [CrossRef]
  9. Tang, X.; Hu, F. A Privacy-Enhancing Mechanism for Federated Graph Neural Networks. Symmetry 2025, 17, 565. [Google Scholar] [CrossRef]
  10. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-Efficient Learning of Deep Networks from Decentralized Data. arXiv 2023. [Google Scholar] [CrossRef]
  11. Islam, F.; Raihan, A.S.; Ahmed, I. Applications of Federated Learning in Manufacturing: Identifying the Challenges and Exploring the Future Directions with Industry 4.0 and 5.0 Visions. arXiv 2023. [Google Scholar] [CrossRef]
  12. Frikha, M.A.; Mrad, M. AI-Driven Supply Chain Decarbonization: Strategies for Sustainable Carbon Reduction. Sustainability 2025, 17, 9642. [Google Scholar] [CrossRef]
  13. Sievers, J.; Henrich, P.; Beichter, M.; Mikut, R.; Hagenmeyer, V.; Blank, T.; Simon, F. Federated Reinforcement Learning for Sustainable and Cost-Efficient Energy Management. Energy AI 2025, 21, 100521. [Google Scholar] [CrossRef]
  14. Zheng, G.; Brintrup, A. A Machine Learning Approach for Enhancing Supply Chain Visibility with Graph-Based Learning. Supply Chain Anal. 2025, 11, 100135. [Google Scholar] [CrossRef]
  15. Abushaega, M.M.; Moshebah, O.Y.; Hamzi, A.; Alghamdi, S.Y. Multi-Objective Sustainability Optimization in Modern Supply Chain Networks: A Hybrid Approach with Federated Learning and Graph Neural Networks. Alex. Eng. J. 2025, 115, 585–602. [Google Scholar] [CrossRef]
  16. Gupta, I.; Martinez, A.; Correa, S.; Wicaksono, H. A Comparative Assessment of Causal Machine Learning and Traditional Methods for Enhancing Supply Chain Resiliency and Efficiency in the Automotive Industry. Supply Chain Anal. 2025, 10, 100116. [Google Scholar] [CrossRef]
  17. Lee, D.; Go, J.; Noh, T.; Song, S. Multi-Feature Representation-Based Graph Attention Networks for Predicting Potential Supply Relationships in a Large-Scale Supply Chain Network. Expert Syst. Appl. 2025, 292, 128593. [Google Scholar] [CrossRef]
  18. Park, S.; Lee, H. Predictive Supply Chain Disruption Control Framework Using Casual Network-Based Multi-Stream Deep Learning. Comput. Ind. Eng. 2025, 207, 111312. [Google Scholar] [CrossRef]
  19. Yan Hong Lim, K.; Liu, Y.; Chen, C.-H.; Gu, X. Manufacturing Resilience through Disruption Mitigation Using Attention-Based Consistently-Attributed Graph Embedded Decision Support System. Comput. Ind. Eng. 2024, 197, 110494. [Google Scholar] [CrossRef]
  20. Kosasih, E.E.; Brintrup, A. A Machine Learning Approach for Predicting Hidden Links in Supply Chain with Graph Neural Networks. Int. J. Prod. Res. 2022, 60, 5380–5393. [Google Scholar] [CrossRef]
  21. Cai, C.; Pan, L.; Li, X.; Luo, S.; Wu, Z. A Risk Identification Model for ICT Supply Chain Based on Network Embedding and Text Encoding. Expert Syst. Appl. 2023, 228, 120459. [Google Scholar] [CrossRef]
  22. Ahmed, I.; Ahmad, M.; Jeon, G. Federated Learning in Convergence ICT: A Systematic Review on Recent Advancements, Challenges, and Future Directions. Comput. Mater. Contin. 2025, 85, 4237–4273. [Google Scholar] [CrossRef]
  23. Takele, A.K.; Villányi, B. Resource-Efficient Clustered Federated Learning Framework for Industry 4.0 Edge Devices. AI 2025, 6, 30. [Google Scholar] [CrossRef]
  24. Leng, J.; Li, R.; Xie, J.; Zhou, X.; Li, X.; Liu, Q.; Chen, X.; Shen, W.; Wang, L. Federated Learning-Empowered Smart Manufacturing and Product Lifecycle Management: A Review. Adv. Eng. Inform. 2025, 65, 103179. [Google Scholar] [CrossRef]
  25. Makanda, I.L.D.; Jiang, P.; Yang, M. Personalized Federated Unsupervised Learning for Nozzle Condition Monitoring Using Vibration Sensors in Additive Manufacturing. Robot. Comput.-Integr. Manuf. 2025, 93, 102940. [Google Scholar] [CrossRef]
  26. Li, D.; Liu, S.; Wang, B.; Yu, C.; Zheng, P.; Li, W. Trustworthy AI for Human-Centric Smart Manufacturing: A Survey. J. Manuf. Syst. 2025, 78, 308–327. [Google Scholar] [CrossRef]
  27. Nguyen, T.T.; Bekrar, A.; Le, T.M.; Artiba, A.; Chargui, T.; Trinh, T.T.H.; Snoun, A. Federated Learning-Based Framework: A New Paradigm Proposed for Supply Chain Risk Management. Eng. Proc. 2025, 97, 5. [Google Scholar] [CrossRef]
  28. Liang, X. Cross-Border Logistics Risk Warning System Based on Federated Learning. Sci. Rep. 2025, 15, 39131. [Google Scholar] [CrossRef]
  29. Gavai, A.K.; Bouzembrak, Y.; Xhani, D.; Sedrakyan, G.; Meuwissen, M.P.M.; Souza, R.G.S.; Marvin, H.J.P.; van Hillegersberg, J. Agricultural Data Privacy: Emerging Platforms & Strategies. Food Humanit. 2025, 4, 100542. [Google Scholar] [CrossRef]
  30. Fendor, Z.; van der Velden, B.H.M.; Wang, X.; Carnoli, A.J.; Mutlu, O.; Hürriyetoğlu, A. Federated Learning in Food Research. J. Agric. Food Res. 2025, 23, 102238. [Google Scholar] [CrossRef]
  31. Tang, X.; Wang, Y.; Liu, X.; Yuan, X.; Fan, C.; Hu, Y.; Miao, Q. Federated Graph Neural Network for Privacy-Preserved Supply Chain Data Sharing. Appl. Soft Comput. 2025, 168, 112475. [Google Scholar] [CrossRef]
  32. Benzidia, S.; Makaoui, N.; Bentahar, O. The Impact of Big Data Analytics and Artificial Intelligence on Green Supply Chain Process Integration and Hospital Environmental Performance. Technol. Forecast. Soc. Change 2021, 165, 120557. [Google Scholar] [CrossRef]
  33. Yadav, A.; Garg, R.K.; Sachdeva, A. Artificial Intelligence Applications for Information Management in Sustainable Supply Chain Management: A Systematic Review and Future Research Agenda. Int. J. Inf. Manag. Data Insights 2024, 4, 100292. [Google Scholar] [CrossRef]
  34. Alharithi, F.S.; Alzahrani, A.A. Enhancing Environmental Sustainability with Federated LSTM Models for AI-Driven Optimization. Alex. Eng. J. 2024, 108, 640–653. [Google Scholar] [CrossRef]
  35. Alexiadis, V.; Drakaki, M.; Tzionas, P. LSTM-Based Electricity Demand Forecasting in Smart and Sustainable Hospitality Buildings. Electronics 2025, 14, 4456. [Google Scholar] [CrossRef]
  36. Borraccia, S.; Masciari, E.; Napolitano, E.V. Green Metrics for AI: A Hybrid Strategy for Environmental Impact Assessment. Array 2025, 28, 100528. [Google Scholar] [CrossRef]
  37. Valencia-Arias, A.; Vásquez Coronado, M.H.; Clavo Medina, M.L.; Morales Garcia, M.J.; Requejo Arias, A.R.; Saavedra Tirado, J.I. Research Trends in the Application of Machine Learning in Sustainability Practices Based on a Bibliometric Analysis. Sustain. Futur. 2025, 10, 100987. [Google Scholar] [CrossRef]
  38. Nti, E.K.; Cobbina, S.J.; Attafuah, E.E.; Opoku, E.; Gyan, M.A. Environmental Sustainability Technologies in Biodiversity, Energy, Transportation and Water Management Using Artificial Intelligence: A Systematic Review. Sustain. Futures 2022, 4, 100068. [Google Scholar] [CrossRef]
  39. Zechiel, F.; Blaurock, M.; Weber, E.; Büttgen, M.; Coussement, K. How Tech Companies Advance Sustainability through Artificial Intelligence: Developing and Evaluating an AI x Sustainability Strategy Framework. Ind. Mark. Manag. 2024, 119, 75–89. [Google Scholar] [CrossRef]
  40. AI Cuts Supply Chain Emissions by over 1000 tons Annually. Available online: https://www.devdiscourse.com/article/technology/3678443-ai-cuts-supply-chain-emissions-by-over-1000-tons-annually (accessed on 17 January 2026).
  41. Slimani, S.; Omri, A.; Jabeur, S.B. When and How Does Artificial Intelligence Impact Environmental Performance? Energy Econ. 2025, 148, 108643. [Google Scholar] [CrossRef]
Figure 1. Multi-tier synthetic supply chain graph. Node size encodes capacity; node color represents carbon intensity; edge width corresponds to flow magnitude; edge transparency reflects normalized transport emissions.
Figure 2. Schematic of the federated partitioning of the supply chain graph with parameter exchange. For visual clarity, only two representative clients are explicitly shown (of the K = 4 total); the remaining clients are aggregated into a “hidden” sink. Solid edges denote intra-client dependencies, dashed gray edges denote cross-client dependencies (cut edges), and dashed black edges denote parameter exchange (broadcast/upload). Edges represent structural dependencies only; operational weights and emissions are not encoded in this schematic.
Figure 3. Validation RMSE trajectories for centralized and federated graph attention models. Solid lines denote the mean validation RMSE across multiple random seeds, while shaded regions indicate ±1 standard deviation around the mean.
Figure 4. Mean attention weight versus normalized transport emissions for standard and Sustainability-Aware Federated GAT models.
Figure 5. Effect of λ on sustainability alignment.
Figure 6. Accuracy–sustainability trade-off.
Table 1. Comparative positioning of federated and sustainability-aware graph learning approaches in supply chain contexts.
Study Category | Representative Works | Federated | Sustainability Integration | Attention-Level Sustainability Bias | Explicit Governance Parameter | Key Limitation
Centralized Graph Neural Networks for Supply Chains | [8,14,17,20,21] | ✗ | Implicit via features | ✗ | ✗ | No decentralization; no sustainability-aware aggregation
Federated Learning in Manufacturing/Supply Chains | [10,11,24,27,28] | ✓ | Typically performance-oriented | ✗ | ✗ | No graph-attention sustainability modulation
Federated Graph Neural Networks | [9,31] | ✓ | Privacy-focused; sustainability not primary | ✗ | ✗ | No explicit environmental alignment control
AI for Supply Chain Decarbonization | [12,32,33] | Mostly ✗ | Sustainability as optimization objective | ✗ | ✗ | No federated graph structure
Federated Sustainable Energy Optimization | [13,34] | ✓ | Sustainability at model objective level | ✗ | ✗ | No graph-based attention bias
Proposed Method | This work | ✓ | Sustainability embedded in message passing | ✓ | ✓ (λ) | Synthetic validation
Table 2. Federated partition statistics for the baseline graph (N = 28, K = 4).
Client | Number of Nodes | Intra-Client Edges | Inter-Client Edges (Cut)
1 | 8 | 3 | 19
2 | 8 | 2 | 16
3 | 6 | 2 | 16
4 | 6 | 0 | 17
Table 3. Predictive performance comparison across centralized and federated baselines.
Model | RMSE (Mean ± Std) | MAE (Mean ± Std)
Centralized GAT | 0.2133 ± 0.1141 | 0.1873 ± 0.1063
Federated GAT (FedAvg) | 0.2606 ± 0.1124 | 0.2235 ± 0.1059
Federated Sustainability-Aware GAT (λ = 3.0) | 0.2593 ± 0.1131 | 0.2195 ± 0.1075
Table 4. Scalability and training time across increasing graph sizes and client counts.
Graph | Nodes | Edges | K | Fed Time (s) | Sus Time (s)
N28 | 28 | 41 | 2 | 2.64 | 2.97
N100 | 100 | 172 | 2 | 10.19 | 11.84
N500 | 500 | 900 | 2 | 49.52 | 59.41
N500 | 500 | 900 | 4 | 30.25 | 40.81
N500 | 500 | 900 | 8 | 21.06 | 25.17
