1. Introduction
Artificial intelligence (AI) has made significant progress in perception, reasoning, and decision-making; yet most existing neural architectures are inherently unstable when faced with uncertainty, noise, or structural disturbances. Deep and generative models, notwithstanding their predictive efficacy, function as open-loop systems driven by loss minimization rather than self-regulation. The absence of intrinsic feedback frequently results in overfitting, calibration loss, and structural deterioration during continual learning. Conversely, biological intelligence achieves long-term stability via self-organizing mechanisms termed homeostasis—a dynamic equilibrium that allows organisms to preserve functionality and adaptability amid varying environments.
Homeostasis in living systems arises from the interaction of various regulatory subsystems. The immune system demonstrates adaptive control, with regulatory T cells (Tregs) serving as inhibitory agents that limit excessive activation and maintain systemic tolerance. Recent biomedical research indicates that tailored or targeted Treg therapy can effectively regulate immunological equilibrium and avert excessive inflammatory responses [1,2]. Simultaneously, autophagy—the selective degradation of intracellular constituents—functions as a metabolic maintenance system crucial for brain and cellular homeostasis. Autophagic activity eliminates damaged proteins and organelles while facilitating synaptic remodeling and neural stability [3,4]. Recent studies indicate that neuronal autophagy actively modulates stress responses and emotional stability, highlighting its function as a molecular foundation for adaptive equilibrium [5]. Collectively, these biological mechanisms provide what can be referred to as meta-homeostasis: a higher-order regulatory principle that integrates informational, immunological, and metabolic feedback loops to maintain functional stability.
The lack of comparable mechanisms in existing AI frameworks highlights a significant constraint in their design. Conventional deep learning prioritizes optimization, whereas biological systems emphasize preservation, resilience, and regeneration. Multiple research avenues have endeavored to close this gap. Spiking neural networks and neuromorphic computing strive to emulate neuronal sparsity and event-driven dynamics, yet they frequently lack global stability control and self-repair mechanisms. Bayesian neural networks (BNNs) incorporate uncertainty quantification; nonetheless, they are susceptible to entropy collapse, overconfidence, and significant computational expense [6]. Recent advancements in graph neural networks (GNNs) have enhanced representation capabilities for relational structures in graph learning; nonetheless, these models continue to face challenges with perturbation robustness and drift during dynamic updates [7]. Notwithstanding such advancements, a cohesive framework that amalgamates probabilistic reasoning, feedback inhibition, and structural regeneration within a single computational paradigm is still absent.
This study introduces Bio-RegNet, a meta-homeostatic Bayesian neural network framework that integrates three biologically inspired mechanisms: Bayesian inference, immunological regulation, and autophagic optimization. Bio-RegNet embodies the principles of living systems by incorporating uncertainty awareness, negative-feedback control, and structural self-renewal into a cohesive architecture. The first component, the Bayesian Effector Network (BEN), conducts probabilistic inference via uncertainty-weighted activations that balance exploration and confidence. The second component, the Regulatory Immune Network (RIN), incorporates an inhibitory feedback loop modeled after Treg-mediated immunological tolerance, thereby stabilizing the network's excitatory–inhibitory dynamics. The third component, the Autophagic Optimization Engine (AOE), facilitates metabolic self-maintenance by eliminating superfluous neurons and reconstructing efficient structures to preserve informational energy. Through ongoing interaction among these components, Bio-RegNet attains an emergent equilibrium—a computational manifestation of meta-homeostasis characterized by bounded entropy, Lyapunov stability, and adaptive regeneration.
In the context of graph-based community discovery, Bio-RegNet exhibits enhanced resilience, interpretability, and cross-domain generalization relative to both deterministic and probabilistic benchmarks. Significantly, its biological foundation reconceptualizes learning not as a fixed optimum but as a dynamic process of self-preserving adaptation. By integrating Bayesian cognition with biological self-regulation, Bio-RegNet advances the overarching concept of bio-inspired generative intelligence, providing a framework for sustainable, self-sustaining AI systems that can function dependably in intricate and unpredictable settings.
Our work makes four main contributions: (i) we propose a closed energy–entropy regulation loop that couples Bayesian uncertainty (entropy-related proxies such as NLL/ECE and posterior variance) with resource/activation dynamics (energy-related proxies governing activation regulation and structural turnover); (ii) we provide a control-inspired stability formulation with an explicit Lyapunov-based regulation mechanism and corresponding convergence/boundedness conditions; (iii) we deliver a unified algorithmic implementation with reproducible training and evaluation protocols (fixed split, multi-seed reporting, and significance tests); and (iv) we present comprehensive experiments across datasets demonstrating improved clustering accuracy, calibration, robustness, and recovery under perturbations, together with transparent reporting of computational cost.
2. Related Work
In the last ten years, there has been a growing focus on integrating biology-inspired ideas into AI to provide models with control, robustness, and adaptability instead of relying solely on brute-force optimization. We examine three interconnected research domains relevant to our Bio-RegNet design: immunological and autophagic regulation in biology; uncertainty, stability, and resilience in neural and graph models; and structural pruning or regeneration techniques in learning systems.
Homeostatic regulation in living organisms originates from multi-scale feedback mechanisms rather than static controllers. Regulatory T cells (Tregs) are pivotal in preserving immunological equilibrium: they inhibit excessive activation of effector T cells, regulate cytokine milieus, and facilitate tissue healing. Progress in Treg therapy underscores their dynamic adaptability and therapeutic potential in autoimmune disorders, transplantation, and inflammation models, indicating the possibility of engineering regulation within complex systems [8]. Recent mechanistic studies underscore the plasticity of Tregs in modulating various immune responses and maintaining tolerance across different organ systems [9]. In addition to immune regulation, autophagy serves as a cellular maintenance system that selectively degrades damaged or superfluous intracellular components to uphold structural integrity and metabolic equilibrium. In neurons, basal autophagy is crucial for the removal of protein aggregates, the preservation of synaptic function, and the inhibition of degenerative processes [10]. The cessation of autophagy in adulthood disrupts synaptic equilibrium and hinders cognitive function, emphasizing its ongoing regulatory significance [11]. Furthermore, recent integrated proteomics in human neurons have demonstrated that neuronal autophagy selectively degrades signaling complexes, including PKA regulatory subunits, thus regulating homeostatic signaling pathways [12]. Autophagy also intersects with neuroinflammation and epigenetic control, signifying that its function in homeostasis is intricately linked to stress response and gene regulation [6]. In the realm of artificial intelligence, uncertainty estimation and model calibration are acknowledged as fundamental components of reliable learning. Survey studies have classified sources of uncertainty in deep models, acknowledged the dangers of overconfidence and entropy collapse, and recommended principled Bayesian or ensemble methodologies [13]. In graph neural networks (GNNs), the intricacies of message passing, dynamic topology, and relational noise exacerbate the challenges of uncertainty quantification. The recent survey "Uncertainty in Graph Neural Networks" delineates these problems and evaluates methodologies including conformal prediction, Bayesian graph models, and ensemble strategies [14]. However, the majority of these solutions treat calibration and stability as modular enhancements rather than as integrated regulatory frameworks.
A significant body of research investigates structural optimization in neural networks—pruning, sparsification, and regeneration—to enhance efficiency, resilience, and adaptability. In spiking networks and sparse neural architectures, pruning–regrowth cycles emulate synapse turnover, yielding energy efficiency and fault tolerance, albeit typically in a heuristic manner. Techniques such as DepGraph promote comprehensive structural pruning across many architectures, advancing the pursuit of universal pruning methodologies [15]. In graph settings, deeper GNNs suffer performance deterioration due to oversmoothing or model collapse; studies on oversmoothing detail the collapse of similarity measures and suggest remedies such as residual connections or adaptive depth control [16]. Additionally, research on "model degradation" in deep GNNs differentiates propagation depth from the transformation phase and proposes modules to sustain deeper architectures [17].
Notwithstanding these advancements, a conceptual gap persists: biological regulation, uncertainty management, and structural rejuvenation remain largely segregated within AI systems. No current system effectively combines Bayesian inference, feedback inhibition, and regenerative pruning into a cohesive, self-stabilizing architecture. Bio-RegNet is motivated by this gap; it seeks to conceptualize learning not as an open-loop optimization but as a dynamic process of self-regulation, repair, and adaptation.
3. Methodology
3.1. Meta-Homeostatic Learning Theory
The proposed Bio-RegNet framework formalizes the concept of meta-homeostasis—a second-order equilibrium integrating Bayesian inference, immunoregulatory feedback, and autophagic optimization. The neural organism is formally defined as a stochastic dynamical system

$\mathcal{S} = (\Theta, \mathcal{X}, \mathcal{Y}, f, \mathcal{R}, \mathcal{A}),$

where $\Theta$ denotes the parameter space, $\mathcal{X}$ and $\mathcal{Y}$ represent input and output manifolds, $f: \mathcal{X} \times \Theta \to \mathcal{Y}$ is the functional mapping governing neural transformations, $\mathcal{R}$ is the regulatory manifold implementing immunomodulatory feedback, and $\mathcal{A}$ corresponds to the autophagic operator responsible for metabolic self-maintenance.
Learning stability is characterized by a global energy differential

$\frac{dE}{dt} = \frac{dE_B}{dt} + \frac{dE_I}{dt} + \frac{dE_A}{dt} = 0,$

indicating zero net entropy flux across perceptual (Bayesian), regulatory (immune), and metabolic (autophagic) domains.
This meta-homeostatic principle guides the entire system, whose overarching architecture, detailing the interaction of its core components and their internal structure, is depicted in Figure 1.
3.2. Bayesian Effector Network (BEN)
The effector subsystem performs probabilistic inference through variational Bayes. Each synaptic weight $w$ is a random variable governed by posterior $q(w \mid \mu, \sigma^2)$ with prior $p(w)$, optimized via the evidence lower bound (ELBO):

$\mathcal{L}_{\mathrm{ELBO}} = \mathbb{E}_{q(w)}\left[\log p(\mathcal{D} \mid w)\right] - \beta \, \mathrm{KL}\left(q(w) \,\|\, p(w)\right),$

where $\beta$ balances data likelihood and regularization.
The reparameterization trick is defined as

$w = \mu + \sigma \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I),$

which enables gradient flow through stochastic layers by backpropagating through the mean and variance parameters.
Uncertainty-weighted activations are formulated as

$\tilde{a}_j = \frac{a_j}{1 + \sigma_j^2},$

thereby suppressing overconfident neurons and producing entropy-aware excitation proportional to inverse posterior variance.
The evolution of network-level entropy is given by $H(t) = -\sum_i p_i(t) \log p_i(t)$, and its temporal derivative $\dot{H}(t)$ links the rate of entropy change to the information flow through the variational parameters $(\mu, \sigma)$.
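The variational mechanics above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's implementation; in particular, the inverse-variance scaling $a/(1+\sigma^2)$ is one plausible reading of "entropy-aware excitation proportional to inverse posterior variance", and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_linear(x, mu, log_sigma):
    """Variational layer sketch: sample w = mu + sigma * eps (reparameterization),
    then apply inverse-variance (uncertainty-weighted) activation scaling."""
    sigma = np.exp(log_sigma)
    eps = rng.standard_normal(mu.shape)
    w = mu + sigma * eps                      # reparameterized weight sample
    a = x @ w.T                               # pre-activation
    var = (x ** 2) @ (sigma ** 2).T           # propagated posterior variance
    return a / (1.0 + var)                    # damp activations of uncertain units

def kl_diag_gaussian(mu, log_sigma):
    """KL(q(w | mu, sigma^2) || N(0, I)) for a diagonal Gaussian posterior."""
    s2 = np.exp(2 * log_sigma)
    return 0.5 * np.sum(s2 + mu ** 2 - 1.0 - np.log(s2))
```

In an ELBO objective, `kl_diag_gaussian` would be weighted by the coefficient $\beta$ and added to the negative data log-likelihood.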
Biological Grounding: Mechanistic vs. Analogical
We emphasize that the terms Treg-inspired inhibition and autophagy-inspired regeneration are used as functional abstractions rather than biological simulations. Our goal is to translate computationally implementable principles—(i) inhibitory tolerance against excessive activation and (ii) structural turnover under resource constraints—into closed-loop learning mechanisms. Consequently, we explicitly separate (a) mechanistic components implemented as mathematical operators and control rules from (b) analogical interpretations used to guide model design and interpretation.
Table 1 summarizes the biological grounding of Bio-RegNet by contrasting implemented mechanisms with interpretive analogies.
3.3. Treg-Inspired Regulatory Network (RIN)
The RIN enforces informational tolerance through entropy and energy feedback, analogous to regulatory T-cell suppression. Its composite potential is

$R_t = \alpha_H H_t + \alpha_G G_t + \alpha_D D_t,$

where $H_t$ (activation entropy), $G_t$ (gradient energy), and $D_t$ (parameter drift) capture the instantaneous state of the effector subsystem. The inhibitory field acts as

$\eta_{\mathrm{eff}} = \frac{\eta}{1 + \lambda R_t},$

with $\lambda$ as the suppression gain controlling the strength of immunoregulatory feedback. In Lyapunov form, the stability condition of the regulatory manifold can be expressed as

$V(\theta) = \tfrac{1}{2} \| \theta - \theta^{*} \|^{2},$

where $V(\theta)$ denotes the instantaneous potential energy of the parameter trajectory with respect to the equilibrium state $\theta^{*}$. Differentiating with respect to time yields

$\dot{V}(\theta) \leq -\lambda R_t V(\theta),$

implying exponential convergence of the parameter dynamics whenever the regulatory potential $R_t$ remains positive. Hence, the Treg-inspired inhibitory feedback guarantees local asymptotic stability and prevents divergence of the effector subsystem.
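A minimal sketch of this inhibitory feedback, assuming an additive potential over entropy, gradient-energy, and drift terms (the coefficient names `a_h`, `a_g`, `a_d` and the suppression gain `lam` are illustrative):

```python
import numpy as np

def regulatory_potential(activations, grads, drift, a_h=1.0, a_g=1.0, a_d=1.0):
    """Composite potential R_t from activation entropy, gradient energy,
    and parameter drift (coefficients are illustrative placeholders)."""
    p = np.abs(activations) / (np.abs(activations).sum() + 1e-12)
    H = -np.sum(p * np.log(p + 1e-12))        # activation entropy
    G = np.sum(grads ** 2)                    # gradient energy
    D = np.sum(drift ** 2)                    # parameter drift
    return a_h * H + a_g * G + a_d * D

def effective_lr(base_lr, R_t, lam=0.5):
    """Treg-style inhibition: the suppression gain lam shrinks the
    effective step size as the regulatory potential R_t grows."""
    return base_lr / (1.0 + lam * R_t)
```

As the potential grows (high entropy, large gradients, fast drift), the effective learning rate is suppressed, mirroring inhibitory tolerance against excessive activation.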
3.4. Autophagic Optimization Engine (AOE)
The AOE maintains metabolic efficiency through pruning and regeneration. Each neuron $j$ has Fisher information density

$F_j = \mathbb{E}\left[ \left( \frac{\partial \log p(\mathcal{D} \mid w)}{\partial w_j} \right)^{2} \right].$

The metabolic viability index for each neuron $j$ is defined as

$M_j = \frac{F_j}{\epsilon + S_j / E_0},$

where $F_j$ is the Fisher information density, $\epsilon$ is a small stabilizing constant, and $S_j$ denotes synaptic sparsity entropy normalized by the metabolic reference constant $E_0$.
Neurons with $M_j < \tau$ are classified as metabolically inactive and pruned according to

$w_j \leftarrow 0 \quad \text{if } M_j < \tau,$

while structural regeneration introduces new neurons via stochastic initialization:

$w_{\mathrm{new}} \sim \mathcal{N}(0, \sigma_0^{2} I).$

The structural entropy dynamics of the autophagic subsystem follow

$\dot{S}(t) = -\gamma S(t) + \rho,$

which converges to the steady-state equilibrium $S^{*} = \rho / \gamma$, representing a sustainable balance between entropic decay ($\gamma$) and informational regeneration ($\rho$).
Closed-Loop Statement
We model Bio-RegNet as a closed-loop regulator in which the Bayesian subsystem generates an informational flux, the immune subsystem converts entropy/gradient statistics into inhibitory feedback—helping the controller downweight unstable regions—and the autophagy subsystem converts Fisher-based viability into structural dissipation. The loop is closed because (i) $R_t$ directly modulates the effective learning rate and activation dynamics, and (ii) pruning/regrowth changes the hypothesis class, which in turn changes the posterior uncertainty and the subsequent $R_t$.
3.5. Meta-Homeostatic Energy Dynamics
The triadic energy function combines Bayesian, immune, and autophagic terms:

$E(t) = E_B(t) + E_I(t) + E_A(t).$

Differentiating the global energy functional yields

$\frac{dE}{dt} = \frac{dE_B}{dt} + \frac{dE_I}{dt} + \frac{dE_A}{dt},$

where the first term denotes the perceptual information flux driven by Bayesian learning, and the latter terms represent dissipative regulation from immunological and autophagic feedback.
Operational estimator: in practice, we quantify the achieved equilibrium by the equilibrium index of Section 4.10, which serves as an empirical proxy for the balance relation in Equation (20). At equilibrium, perceptual information flow is exactly counterbalanced by the dissipative mechanisms:

$\frac{dE_B}{dt} = -\left( \frac{dE_I}{dt} + \frac{dE_A}{dt} \right),$

defining the meta-homeostatic steady state of informational energy conservation.
3.6. Graph-Based Community Detection Integration
For a graph $G = (V, E)$, node embeddings evolve via uncertainty-aware message passing:

$h_i^{(l+1)} = \phi\left( \sum_{j \in \mathcal{N}(i)} \alpha_{ij} \, \mathbb{E}[W^{(l)}] \, h_j^{(l)} \right),$

where $\mathbb{E}[W^{(l)}]$ denotes the expected Bayesian weight matrix and $\phi$ the nonlinear activation function.
The attention coefficient $\alpha_{ij}$ is modulated by posterior uncertainty:

$\alpha_{ij} = \frac{\hat{\alpha}_{ij}}{1 + \sigma_{ij}^{2}},$

ensuring that edges with high variance (uncertain influence) are downweighted during message aggregation.
The inhibitory regulatory term stabilizes node activations through entropy damping:

$h_i \leftarrow h_i - \kappa \, \nabla_{h_i} H_i,$

where $\kappa$ controls the magnitude of immunosuppressive influence. This term enforces smoothness and prevents spurious community fragmentation.
Final community assignments are determined by maximizing the Bayesian modularity functional:

$Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),$

where $A$ denotes the adjacency matrix, $k_i$ and $k_j$ are node degrees, $m$ is the total number of edges, and $\delta(c_i, c_j)$ equals 1 if nodes $i$ and $j$ belong to the same community. This formulation embeds uncertainty-aware weights and immunoregulatory constraints into classical graph modularity optimization, yielding communities that are topologically stable and statistically interpretable.
An illustrative single-sample example demonstrating the end-to-end processing of a graph through the Bayesian, regulatory, and autophagic stages is shown in Figure 2.
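The modularity functional underlying the community assignment is the classical Newman–Girvan form; a direct NumPy rendering (with the uncertainty-aware Bayesian edge weights simply substituted for the binary adjacency) might look like:

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q for a symmetric (possibly weighted)
    adjacency matrix A and a community label per node."""
    k = A.sum(axis=1)                  # (weighted) node degrees
    two_m = A.sum()                    # 2m: twice the total edge weight
    c = np.asarray(communities)
    same = (c[:, None] == c[None, :])  # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m
```

For two disjoint edges {0–1} and {2–3}, the partition {0,1}/{2,3} attains Q = 0.5, while the crossed partition {0,2}/{1,3} is penalized with negative modularity.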
3.7. Meta-Homeostatic Learning Algorithm
Training proceeds through a five-phase adaptive cycle, meticulously orchestrated to maintain meta-homeostatic equilibrium. The comprehensive training process is outlined in Algorithm 1.
Algorithm 1: Meta-Homeostatic Training Loop of Bio-RegNet
Require: Dataset, initial parameters, learning rates, and regulation coefficients
Ensure: Trained parameters satisfying meta-homeostatic equilibrium
1: Initialize Fisher information $F_j$ and viability indices $M_j$
2: while not converged do
3:   (1) Bayesian Inference: sample weights $w = \mu + \sigma \odot \epsilon$; compute predictions and loss from (3)
4:   (2) Variational Update: update $\mu$ and $\sigma$ by gradient steps on the ELBO
5:   (3) Regulatory Feedback: evaluate the potential $R_t$; compute the inhibitory field; adjust the local learning rate
6:   (4) Autophagic Optimization: compute Fisher information via (12); determine inactive neurons using (13); prune and regenerate neurons
7:   (5) Meta-Homeostatic Equilibrium: compute the global energy using (18)
8:   if the energy balance deviates from equilibrium then
9:     update the regulation coefficients
10:  end if
11:  (6) Convergence Check: if the energy change and parameter drift fall below tolerance, terminate
12: end while
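To make the control flow of Algorithm 1 concrete, the following toy skeleton runs the phases on a simple quadratic loss; every component here is a drastically simplified, hypothetical stand-in for the corresponding Bio-RegNet module, not the paper's implementation:

```python
import numpy as np

def train_bioregnet(steps=50, lr=0.1, lam=0.5, seed=0):
    """Minimal runnable skeleton of the meta-homeostatic loop on a toy
    quadratic loss ||w||^2 (illustrative stand-in, not the real model)."""
    rng = np.random.default_rng(seed)
    mu = rng.standard_normal(8)                  # variational mean
    log_sigma = np.full(8, -2.0)                 # variational log-std
    mask = np.ones(8)                            # autophagic structure mask
    for _ in range(steps):
        # (1) Bayesian inference: sample weights, compute toy loss gradient
        w = (mu + np.exp(log_sigma) * rng.standard_normal(8)) * mask
        grad = 2 * w                             # d/dw of ||w||^2
        # (3) Regulatory feedback: gradient-energy potential inhibits the step
        R_t = np.sum(grad ** 2)
        eta = lr / (1.0 + lam * R_t)
        # (2) Variational update (mean only, for brevity)
        mu -= eta * grad * mask
        # (4) Autophagic step: prune units with vanishing Fisher proxy
        fisher = grad ** 2
        mask[fisher < 1e-8] = 0.0
    return mu * mask
```

Running the loop drives the toy parameters toward the equilibrium at the origin while the inhibition keeps early (high-energy) steps conservative.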
3.8. Computational Stability
Define the Lyapunov candidate function as

$V(t) = \tfrac{1}{2} \| \theta(t) - \theta^{*} \|^{2},$

which measures the deviation of the network parameters from the equilibrium state $\theta^{*}$. Under the immunoregulated update dynamics of Bio-RegNet, the time derivative of $V(t)$ satisfies

$\dot{V}(t) \leq -\eta_{\mu} \left( R_t + \min_j M_j \right) V(t),$

where $\eta_{\mu}$ is the learning rate of the variational mean, $R_t$ represents the instantaneous immunoregulatory potential, and $M_j$ corresponds to the autophagic viability index of neuron $j$.
If both $R_t > 0$ and $M_j > 0$ for all active neurons, then $\dot{V}(t) < 0$, ensuring monotonic energy decay. Consequently, the Lyapunov function is upper-bounded by an exponentially decreasing envelope:

$V(t) \leq V(0) \, e^{-\eta_{\mu} (R_{\min} + M_{\min}) t},$

where $R_{\min}$ and $M_{\min}$ denote the lower bounds of the regulatory and autophagic responses, respectively. This guarantees bounded informational energy and establishes stochastic asymptotic stability of the meta-homeostatic learning dynamics.
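The exponential envelope can be checked numerically on a toy linear system; the update rule below is a simplified stand-in for the immunoregulated dynamics, with the regulatory response bounded below by an assumed constant `R_min`:

```python
import numpy as np

def lyapunov_trace(theta0, eta=0.05, R_min=1.0, steps=100):
    """Simulate theta <- theta - eta * R * (theta - theta*), with R >= R_min > 0,
    and record V = ||theta - theta*||^2 / 2 at each step."""
    theta, target = np.array(theta0, float), 0.0
    V = [0.5 * np.sum((theta - target) ** 2)]
    for _ in range(steps):
        R = R_min + 0.5                        # regulatory response, bounded below
        theta = theta - eta * R * (theta - target)
        V.append(0.5 * np.sum((theta - target) ** 2))
    return np.array(V)
```

Because each step contracts the deviation by at least $(1 - \eta R_{\min})$, the recorded $V$ stays under the envelope $V(0)\,(1 - \eta R_{\min})^{2t}$, the discrete-time analogue of the exponential bound.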
3.9. Complexity and Convergence Analysis
Let n denote the number of neurons and e the number of edges in the input graph. The overall computational cost of Bio-RegNet can be decomposed into three primary components:
Bayesian Inference: each forward and backward pass through the Bayesian effector network requires $\mathcal{O}(n d^{2})$ operations per layer (with hidden dimension $d$), primarily due to stochastic sampling and variational parameter updates.
Regulatory Feedback: the calculation of entropy and gradient energy incurs an $\mathcal{O}(n)$ overhead per iteration, as both rely on layerwise statistics already computed during backpropagation.
Autophagic Optimization: evaluating Fisher information and performing pruning–regeneration cycles incurs an amortized cost of $\mathcal{O}(n \log n)$, since inactive neurons are identified through sparse ranking and selectively updated.
Thus, with $L$ layers, the total complexity per training epoch is

$\mathcal{O}\left( L \, (e \, d + n \, d^{2}) \right),$

which is comparable to, or lower than, that of a standard Bayesian Graph Neural Network (BGNN) of equivalent depth. Because autophagic pruning gradually removes superfluous parameters, the effective model size $n_{\mathrm{eff}}(t)$ diminishes over time, resulting in sublinear convergence behavior:

$n_{\mathrm{eff}}(t) \approx n \, e^{-\gamma_a t},$

where $\gamma_a$ is the autophagic decay constant controlling the rate of structural sparsification.
Empirically, Bio-RegNet converges within fewer epochs than deterministic GNN baselines due to entropy-aware regularization, and achieves faster stabilization of the training loss. Given the bounded Lyapunov energy of Equation (27), the iterative dynamics converge exponentially provided that the learning rates, suppression gain, and pruning threshold remain within stable ranges.
Subsystem-Level Complexity and Scalability
Let $n = |V|$, $e = |E|$, hidden dimension $d$, and $L$ message-passing layers. The backbone message-passing cost is $\mathcal{O}(L (e\,d + n\,d^{2}))$ under sparse aggregation.
BEN (Bayesian Effector Network). Using Monte-Carlo sampling depth $M$, the predictive and calibration estimation incurs an additional factor of $M$ for forward passes, yielding $\mathcal{O}(M\,L (e\,d + n\,d^{2}))$ per epoch. The variational parameter updates for $(\mu, \sigma)$ are linear in parameter size and do not change the asymptotic graph aggregation term.
RIN (regulatory inhibition). Computing the regulation potential and Lyapunov-related statistics is $\mathcal{O}(n)$ (node-wise) plus $\mathcal{O}(e)$ (edge-wise aggregations), dominated by the backbone term.
AOE (autophagic optimization). Computing viability is $\mathcal{O}(n)$, and pruning/regrowth is $\mathcal{O}(n \log n)$ for thresholding and mask updates. The effective parameter count decreases over training, so the constant factor of subsequent iterations is reduced.
Overall, Bio-RegNet scales as $\mathcal{O}(M\,L (e\,d + n\,d^{2}))$ per epoch, with modest additional overhead from regulation and structural turnover.
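Assuming the per-epoch scaling $\mathcal{O}(M \, L \, (e\,d + n\,d^{2}))$ (a reconstruction from the subsystem analysis above), a back-of-envelope cost estimator makes the dominant terms explicit:

```python
def epoch_cost(n, e, d, L, M):
    """Back-of-envelope per-epoch operation count M * L * (e*d + n*d^2):
    M Monte-Carlo forward passes over L sparse message-passing layers
    on a graph with n nodes, e edges, and hidden dimension d."""
    return M * L * (e * d + n * d ** 2)
```

The estimator highlights that doubling the Monte-Carlo depth $M$ doubles the epoch cost, while the $n d^{2}$ term dominates on dense-feature graphs.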
3.10. Conceptual Summary
The proposed Bio-RegNet framework embodies a cohesive organismic model wherein Bayesian inference, immunoregulatory control, and autophagic metabolism collaborate to maintain adaptive intelligence. By integrating uncertainty quantification, negative-feedback inhibition, and structural self-renewal into a singular computational entity, Bio-RegNet converts neural learning from a static optimization challenge into a dynamic process of self-preservation. This integration mandates ongoing equilibrium among information acquisition, entropy reduction, and energy efficiency—an emergent characteristic we designate as meta-homeostasis. Through this triadic coupling, Bio-RegNet achieves a steady-state equilibrium wherein cognition, structure, and metabolism co-evolve, thereby establishing a theoretical foundation for living neural intelligence characterized by long-term stability, autonomous regeneration, and adaptive equilibrium in uncertain environments.
4. Results and Discussion
All experiments were performed using the datasets and settings described in Section 4.3. Statistical results are reported as mean ± standard deviation across five random seeds, and significance is verified using paired t-tests.
4.1. Experimental Framework
To assess the internal regulation, stability, and interpretability of Bio-RegNet, we developed a comprehensive experimental framework spanning neuronal, molecular, and macro-scale systems. The model functions as a biologically inspired regulatory system that integrates the Bayesian Effector Network (BEN), Regulatory Immune Network (RIN), and Autophagic Optimization Engine (AOE). All experiments are conducted using uniform preprocessing procedures and five random seeds. All results are presented as the mean ± standard deviation (SD), with statistical significance assessed using paired t-tests. For the uncertainty analysis, 95% confidence intervals (CI) are also reported.
Bio-RegNet unifies three self-regulatory subsystems: (1) the Bayesian Effector Network (BEN), which probabilistically encodes input signals and maintains entropy equilibrium; (2) the Regulatory Immune Network (RIN), which implements adaptive negative feedback to prevent activation overshoot; and (3) the Autophagic Optimization Engine (AOE), which performs structural regeneration via periodic pruning and regrowth to preserve metabolic efficiency. These modules interact through energy–entropy feedback cycles governed by Equations (1)–(18). Together, they form a closed-loop meta-homeostatic system that balances informational precision and energetic cost.
Table 2 summarizes the stability and calibration metrics of BEN; the consistently low ECE and high PICP, together with the significance tests, indicate reliable uncertainty calibration over repeated runs.
As illustrated in Figure 3, the nonlinear regression with 95% confidence and prediction bands provides an intuitive view of BEN's uncertainty behavior over repeated runs: the fitted trend remains stable while the prediction envelope stays well-controlled, supporting reliable uncertainty propagation and robust regulatory energy updates.
As shown in Figure 4, the joint energy–entropy trajectories indicate that Bio-RegNet converges faster to a lower steady-state free energy while maintaining a controlled entropy profile across epochs.
4.2. Experimental Hierarchy and Sequence
Experiments are structured in five progressive layers of increasing regulatory complexity (Table 3), from local stochastic calibration to cross-domain adaptation, examining local stability, structural adaptation, coupled feedback, perturbation recovery, and cross-domain transfer. Each layer builds upon the preceding one to ensure continuity of regulation and control.
4.3. Data Resources
To comprehensively evaluate the generalization and stability of Bio-RegNet, we utilized twelve benchmark datasets across neuronal, molecular, synthetic, and macro-scale domains. These datasets encompass a diverse array of dynamical phenomena, including neural oscillations, chemical control, and human communication. All graphs were normalized and divided into training, validation, and test sets at a ratio of 70:15:15.
As detailed in Table 4, feature sequences were standardized using z-scores before encoding by the Bayesian effector module to maintain distributional consistency across time steps.
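The z-score standardization step admits a one-function sketch (per-feature, as is conventional; the exact axis convention and the stabilizing `eps` are assumptions):

```python
import numpy as np

def zscore_per_feature(X, eps=1e-8):
    """Standardize each feature column to zero mean and unit variance
    before feeding sequences to the Bayesian effector module."""
    mu = X.mean(axis=0)
    sd = X.std(axis=0)
    return (X - mu) / (sd + eps)
```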
4.4. Training, Validation, and Testing Protocol
To guarantee reproducibility and fair comparisons among all approaches, we implement a standardized experimental protocol for all datasets and baselines. Unless specified otherwise, we present the mean ± standard deviation (SD) across five independent trials with varying random seeds and assess statistical significance using paired t-tests. This protocol is used to produce all reported tables and figures, unless stated otherwise.
4.4.1. Data Splitting and Evaluation Protocol
For each dataset, we perform a fixed train/validation/test split with the ratio 70:15:15. The training set is utilized for parameter optimization, the validation set is employed solely for model selection (hyperparameter tuning and early stopping), and the test set is used once for final reporting. For datasets with temporal ordering, splits maintain chronological consistency to prevent information leakage (i.e., training occurs prior to validation and testing in time). For static graphs, we employ stratified sampling where labels are accessible.
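A chronology-preserving 70:15:15 split can be sketched as:

```python
def chronological_split(items, ratios=(0.70, 0.15, 0.15)):
    """Fixed 70:15:15 split that preserves temporal order: the training
    segment strictly precedes validation, which precedes the test segment."""
    n = len(items)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```

Because the slices are taken in order, no future observation can leak into the training segment.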
4.4.2. Training Procedure
All models utilize mini-batch optimization during training. We utilize the Adam optimizer with the default momentum parameters as specified by the PyTorch implementation, unless stated otherwise. The learning rate, batch size, and maximum number of epochs are standardized for each dataset and uniformly applied across all methods, except when a baseline necessitates a particular configuration, as specified in the implementation details. We apply gradient clipping to guard against unstable training and fix a consistent random seed across all libraries to guarantee reproducibility.
4.4.3. Model Selection and Early Stopping
We identify the optimal checkpoint based on validation performance. Specifically, we monitor the validation objective—validation NLL or ELBO proxy for Bayesian models, and validation modularity Q and/or ARI/NMI for clustering tasks when applicable—and implement early stopping with a patience of P epochs. The checkpoint with the best validation score is subsequently assessed on the test set.
4.4.4. Cross-Validation Clarification
To avoid ambiguity, we clarify that we do not perform k-fold cross-validation in this work. Instead, we use a repeated hold-out protocol: we repeat the entire train/validation/test procedure five times using five random seeds, and report mean ± SD. This protocol is widely used in dynamic graph learning and is applied consistently to Bio-RegNet and all baselines.
4.4.5. Statistical Reporting and Significance Testing
For each metric, we conduct five independent trials and provide the mean ± standard deviation (SD). We also conduct a paired two-sided t-test between Bio-RegNet and the most robust competitive baseline using the identical seed-wise data, deeming gains statistically significant when the resulting p-value falls below the significance threshold.
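The seed-wise paired comparison reduces to a paired t statistic over five (method, baseline) pairs; a self-contained sketch (using the standard two-sided 5% critical value for four degrees of freedom rather than a SciPy p-value) is:

```python
import numpy as np

def paired_t_statistic(a, b):
    """Paired t statistic over seed-wise metric pairs (a_i, b_i):
    t = mean(d) / (std(d, ddof=1) / sqrt(n)) for d = a - b."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# With five seeds (df = 4), the two-sided 5% critical value is about 2.776,
# so |t| > 2.776 indicates a significant paired difference at that level.
```

In practice `scipy.stats.ttest_rel` provides the exact p-value; the statistic computed here is identical.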
4.5. Implementation Details and Hyperparameter Settings
To ensure full reproducibility, we report the complete implementation details and hyperparameter settings used across all experiments. Following the meta-homeostatic training loop in Algorithm 1, Bio-RegNet maintains two variational parameters updated by separate learning rates, and performs closed-loop regulation through the instantaneous potential $R_t$, whose suppression gain modulates the effective step size. The autophagic subsystem prunes neurons whose metabolic viability falls below the pruning threshold and regenerates capacity at the configured regrowth rate when over-pruning is detected. Unless otherwise stated, all results are averaged over five random seeds and statistical significance is verified using paired t-tests. All datasets are split into train/validation/test with a ratio of 70:15:15.
4.5.1. Optimization and Training Protocol
We optimize the variational objective (ELBO) using AdamW with gradient clipping and early stopping on the validation ELBO (or validation NLL when reported). The maximum number of epochs is $T_{\max} = 200$, and early stopping is triggered if the validation objective does not improve for $P = 20$ consecutive epochs. For fair comparison, all baselines are trained under the same optimizer, batch size, epoch budget, and early-stopping rule.
4.5.2. Bayesian Inference and Calibration
Monte-Carlo sampling with depth $M$ is used to estimate predictive uncertainty and calibration metrics. Unless otherwise stated, we use a default of $M = 5$ and additionally evaluate larger sampling depths in the sampling-depth study. The prior is an isotropic Gaussian, and the posterior is parameterized as a diagonal Gaussian $\mathcal{N}(\mu, \sigma^{2})$. The KL weight is fixed to a default value and searched in a narrow grid around it.
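Monte-Carlo predictive averaging and ECE estimation can be sketched as follows; `prob_fn` stands in for a stochastic forward pass, and the binning scheme is the standard equal-width variant (both are illustrative assumptions):

```python
import numpy as np

def mc_predictive(prob_fn, x, M=5, rng=None):
    """Average M stochastic forward passes to estimate the predictive
    distribution; prob_fn(x, rng) returns class probabilities."""
    rng = rng or np.random.default_rng(0)
    return np.mean([prob_fn(x, rng) for _ in range(M)], axis=0)

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: population-weighted gap between accuracy and mean confidence
    within equal-width confidence bins."""
    conf, correct = np.asarray(conf), np.asarray(correct, float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        m = bins == b
        if m.any():
            ece += m.mean() * abs(correct[m].mean() - conf[m].mean())
    return ece
```

A perfectly calibrated model (accuracy equal to confidence in every bin) yields ECE = 0, while systematic overconfidence inflates the score.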
4.5.3. Architecture
Unless otherwise stated, we use an L-layer message-passing backbone with hidden dimension d and a fixed dropout rate. Hyperparameters are tuned on the validation set using a small grid around the default values in Table 5. Specifically, we fix the defaults and vary only one factor at a time within a narrow neighborhood, while broader sweeps for the regulation coefficients and M are reported in the coupled-feedback and Bayesian calibration studies.
4.5.4. Hyperparameter Selection Strategy
To prevent over-tuning and maintain transparency in the evaluation methodology, we implement a two-stage selection strategy. First, we establish a default configuration (Table 5) used for all primary comparisons across datasets and baselines. Second, we conduct a narrow grid search around the default parameters on the validation subset, systematically adjusting one factor at a time within a limited vicinity (see the "Search range" in Table 5). This strategy guarantees that the reported improvements are not driven by excessive per-dataset optimization, while permitting some adaptation to dataset scale and noise levels. Comprehensive sweeps are performed solely when they are an integral component of the study design: (i) the Bayesian sampling depth M in the calibration analysis, and (ii) the coupled-regulation exploration over the regulation coefficients in the synergy map experiment. All chosen settings are dictated exclusively by validation performance, and the test split is used only once for final reporting.
4.5.5. Stability-Consistent Ranges and Practical Constraints
Our implementation adheres to the stability constraints indicated by the theoretical analysis: the meta-homeostatic dynamics exhibit exponential convergence when the critical control parameters remain within the stable ranges derived in the convergence analysis. Consequently, we confine the feedback gain and pruning rate to finite intervals and employ conservative learning rates for the variational updates. We use gradient clipping and early stopping to mitigate unstable oscillations during perturbations and to ensure uniform training behavior across all datasets.
4.5.6. Computational Environment
All experiments were performed on a workstation using an AMD Ryzen 9 7950X CPU (16 physical cores/32 threads), 128 GB of RAM, and an NVIDIA GeForce RTX 4090 GPU (NVIDIA Corporation, Santa Clara, CA, USA) with 24 GB of VRAM. The operating system was Ubuntu 22.04 LTS (Canonical Ltd., London, UK).
The implementation was written in Python 3.10.13 with PyTorch 2.1.2, CUDA 12.1, and cuDNN 8.9. Graph learning components used PyTorch Geometric (PyG) 2.5; the core scientific libraries were NumPy 1.26, SciPy 1.11, and scikit-learn 1.4. Unless specified otherwise, each result was averaged over five independent runs with distinct random seeds.
Training duration (wall-clock). All experiments used the unified procedure (batch size 128, Tmax = 200, early-stopping patience P = 20), repeated across five random seeds. The average training time per run is driven mainly by graph scale (nodes/edges), temporal length (time steps), and feature dimensionality. Small graphs (approximately 1000–2500 nodes, fewer than 20,000 edges) complete within 15–60 min; mid-scale graphs (approximately 2000–7000 nodes, up to roughly 82,000 edges and higher feature dimensions) within 1–3 h; and large macro-scale graphs (exceeding 10,000 nodes, over 170,000 edges) within 4–10 h per run. Bayesian calibration adds a cost that grows roughly in proportion to the Monte-Carlo sampling depth M (default M = 5). In the simulations with autophagy pruning and scarcity, wall-clock time scales with the effective active-parameter ratio and the imposed FLOPs budget.
4.6. Baselines
We benchmarked Bio-RegNet against a comprehensive set of dynamic graph learning models, encompassing both classical and biologically inspired architectures. These include static graph baselines (GCN), temporal extensions (EvolveGCN, TGN, DyRep), Bayesian and equilibrium-based frameworks (BGNN, EGN, Homeo-GNN), and recent transformer- and ODE-based models (DyFormer, EvoGNN, HGNN-ODE). Each model was trained and evaluated under identical settings to ensure fair comparison. The results summarized in
Table 6 demonstrate that Bio-RegNet consistently achieves superior performance across calibration (NLL, ECE, PICP), clustering (ARI, NMI, Q), and stability indicators, while converging in fewer epochs.
4.7. Bayesian Encoder Network (BEN)
4.7.1. Objective
To verify that probabilistic encoding reduces over-confidence and preserves information entropy, thus preventing over-fitting under uncertain or noisy inputs.
4.7.2. Rationale
According to Equations (
3)–(
5), BEN minimizes a Bayesian Evidence Lower Bound while maintaining a controllable posterior variance. This acts analogously to synaptic variability in cortical neurons, where stochasticity supports flexible yet stable inference.
4.7.3. Experimental Setup
Calibration Tests. Compute negative log-likelihood (NLL), continuous ranked probability score (CRPS), prediction interval coverage (PICP), and expected calibration error (ECE) versus deterministic GCN and BGNN.
Noise Sensitivity. Add Gaussian noise at increasing variance levels to the input features and measure the degradation in ARI and modularity Q.
Entropy Preservation. Track weight entropy during training to ensure entropy does not collapse below baseline levels.
Sampling-Depth Study. Vary the number of Monte-Carlo samples M to analyze the trade-off between uncertainty quality and convergence stability.
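The calibration metrics used in these tests follow their standard definitions, which can be implemented in a few lines. This is a minimal NumPy sketch under those standard definitions; the function names are ours:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: bin-weight-averaged |accuracy - mean confidence| over
    equal-width confidence bins on (0, 1]."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def picp(y_true, lower, upper):
    """Prediction-interval coverage probability: fraction of targets that
    fall inside their predicted interval [lower, upper]."""
    y = np.asarray(y_true, float)
    return float(np.mean((y >= np.asarray(lower)) & (y <= np.asarray(upper))))
```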
4.7.4. Results and Analysis
On BrainNet-Sim, BEN reduces NLL from 0.53 ± 0.06 (M = 1) to 0.43 ± 0.03 (M = 5) (≈18.9% reduction) and increases PICP from 0.80 ± 0.04 to 0.87 ± 0.04. We observe diminishing returns for M beyond 3–5, consistent with the balanced stochastic-depth claim predicted by Equation (
4).
Figure 5 visualizes the monotonic calibration improvement. The BEN module performs probabilistic encoding to mitigate over-confidence. Following Equations (
3)–(
5), the evidence lower bound guides stochastic updates.
Table 7 reports calibration and stability metrics.
4.8. Regulatory Immune Network (RIN)
The RIN module stabilizes excitatory–inhibitory dynamics through an adaptive feedback gain. Lyapunov energy is tracked to assess convergence.
4.8.1. Objective
To evaluate adaptive inhibitory feedback that stabilizes excitation–inhibition dynamics, ensuring Lyapunov energy decay as formulated in Equation (
11).
4.8.2. Rationale
RIN emulates cortical GABAergic regulation: excessive activity triggers inhibitory fields that damp energy oscillations. By modulating the feedback gain, RIN maintains equilibrium, similar to immune tolerance in biological systems.
4.8.3. Experimental Setup
Feedback Gain Sweep. Sweep the feedback gain over a fixed grid, measuring Lyapunov energy decay and spectral damping before/after feedback.
Perturbation Test. Disable feedback (zero gain) for 20 epochs, then restore it; compute the recovery slope.
Inhibitory Field Visualization. Plot spatial heatmaps to show distributed suppression intensity.
Latency Simulation. Introduce a feedback delay to emulate synaptic transmission time.
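The qualitative effect probed by the gain sweep can be reproduced on a toy system: inhibitory feedback acting on activity velocity damps a Lyapunov-style energy that otherwise oscillates indefinitely. This is a didactic sketch of the principle, not the RIN dynamics themselves; the oscillator and its parameters are our own illustration.

```python
import numpy as np

def simulate_feedback(gain, omega=2.0, steps=2000, dt=0.01):
    """Toy oscillator with inhibitory feedback of strength `gain` on the
    velocity; returns the Lyapunov-style energy E = 0.5*(v^2 + (omega*x)^2)
    per step (semi-implicit Euler integration)."""
    x, v = 1.0, 0.0
    energy = []
    for _ in range(steps):
        a = -omega ** 2 * x - gain * v   # restoring force + inhibitory feedback
        v += a * dt
        x += v * dt
        energy.append(0.5 * (v ** 2 + (omega * x) ** 2))
    return np.array(energy)
```

With `gain = 0.0` the energy oscillates without decaying, while a moderate gain produces the exponential damping that the Lyapunov analysis predicts.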
4.8.4. Results and Analysis
Figure 5 shows that moderate feedback gains (approximately 0.3–0.5 in our sweep) yield the best stability–recovery trade-off, with diminishing returns beyond this range. When inhibition is removed, oscillations grow by more than 40%, confirming the necessity of negative feedback (Table 8). The correlation between feedback gain and spectral damping (r = −0.84) matches the predictions of Equation (11).
4.8.5. Representative Per-Dataset Audit and Coupling Synergy
While Table 8 summarizes the feedback-gain sweep on RIN stability (Lyapunov decay and recovery) across datasets, Table 9 reports leave-one-component-out ablations aggregated across datasets (mean ± SD). We therefore additionally report representative per-dataset results drawn from our module-level experiments (Tables 10 and 11), and we further quantify coupling synergy using the equilibrium index κ.
Coupling synergy (RIN–AOE). Because a zero feedback gain disables inhibitory feedback, the zero-gain row in Table 8 acts as a natural "RIN-off" reference under the same pruning rate. We define the coupling-induced synergy in equilibrium coherence as the gain in κ over this reference at the same pruning rate, Δκ_syn = κ(coupled) − κ(RIN-off), where Δκ_syn > 0 indicates non-trivial gains from inhibitory–metabolic coupling beyond autophagy-only regulation.
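This synergy computation is a simple per-pruning-rate difference. A minimal sketch, assuming κ values are collected per pruning rate for the coupled and RIN-off runs (the function name and data layout are ours):

```python
def coupling_synergy(kappa_coupled, kappa_rin_off):
    """Per-pruning-rate synergy gain: kappa(coupled) - kappa(RIN-off) at the
    same rate. Positive values indicate benefit beyond autophagy-only control."""
    return {rho: kappa_coupled[rho] - kappa_rin_off[rho] for rho in kappa_coupled}
```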
Table 9.
Leave-one-component-out ablation across datasets (w/o module → Full; mean ± SD). Each row compares an ablated variant (Bio-RegNet without a specific module) against the Full Bio-RegNet under the same training protocol, and reports metrics in the form (w/o → Full). Avg. Gain (%) is the mean relative improvement across the reported metrics in each row (two or three metrics depending on availability), computed as (x_w/o − x_Full)/x_w/o × 100 for lower-is-better metrics (e.g., NLL, ECE) and (x_Full − x_w/o)/x_w/o × 100 for higher-is-better metrics (e.g., PICP, Q). For signed stability metrics, the denominator uses the absolute value |x_w/o|. Per-dataset audit results are provided in Table 10 and Table 11.
| Ablated Variant | Metric 1 | Metric 2 | Metric 3 | Avg. Gain (%) |
|---|---|---|---|---|
| w/o BEN | NLL ↓ () | ECE ↓ () | PICP ↑ () | 22.2 |
| w/o RIN | ↓ () | ↓ () | ↓ () | 35.7 |
| w/o AOE | ↓ () | ↓ () | N/A | 23.1 |
| w/o (BEN+RIN feedback) | ↓ () | NLL ↓ () | ↑ () | 15.4 |
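The Avg. Gain definition from the Table 9 caption can be sketched directly. The helper names below are ours, and the metric-flag encoding is an assumption for illustration:

```python
def relative_gain(wo, full, higher_is_better=False, signed=False):
    """Relative improvement (%) of the Full model over an ablated (w/o) variant.
    Lower-is-better: (wo - full) / wo * 100; higher-is-better: (full - wo) / wo * 100.
    For signed stability metrics, the denominator uses |wo|."""
    denom = abs(wo) if signed else wo
    diff = (full - wo) if higher_is_better else (wo - full)
    return 100.0 * diff / denom

def avg_gain(rows):
    """Mean relative gain over the available (wo, full[, flags]) metric tuples
    reported in one ablation row."""
    gains = [relative_gain(*r) for r in rows]
    return sum(gains) / len(gains)
```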
Table 10.
Representative per-dataset module-level results (mean ± SD over five runs), reported in the main text to provide an auditable view beyond the aggregated ablation summary. BEN results use Monte-Carlo sampling depth , and RIN results report Lyapunov decay and recovery behavior under different feedback gains .
Table 10.
Representative per-dataset module-level results (mean ± SD over five runs), reported in the main text to provide an auditable view beyond the aggregated ablation summary. BEN results use Monte-Carlo sampling depth , and RIN results report Lyapunov decay and recovery behavior under different feedback gains .
BEN (M = 3): Calibration + Clustering (left columns); RIN: Lyapunov Stability + Recovery (right columns).

| Dataset | NLL ↓ | ECE ↓ | ARI ↑ | Dataset | Feedback Gain | Energy (Final) ↓ | Decay/Recov. ↓ |
|---|---|---|---|---|---|---|---|
| BrainNet-Sim | | | | BrainNet-Sim | 0.0 | | />50 |
| ECoG-TaskNet | | | | BrainNet-Sim | 0.3 | | / |
| SmartGrid-UK | | | | BrainNet-Sim | 0.5 | | / |
| | | | | SmartGrid-UK | 0.3 | | / |
Table 11.
One-factor sensitivity analysis around the default configuration (mean ± SD over five runs). For BEN we report calibration sensitivity (NLL/ECE) under different Monte-Carlo sampling depths M on the representative BrainNet-Sim dataset; for the coupled RIN–AOE mechanism we report equilibrium sensitivity (κ) when varying the feedback gain at a fixed pruning rate of 0.10, and when varying the pruning rate at a fixed feedback gain of 0.3.
| Factor | Setting | NLL ↓ | ECE ↓ | κ |
|---|---|---|---|---|
| (A) BEN calibration sensitivity on BrainNet-Sim |
| M (MC samples) | 1 | | | – |
| | 3 (balanced) | | | – |
| | 5 | | | – |
| (B) RIN–AOE equilibrium sensitivity (κ) |
| Feedback gain | 0.0 (pruning rate 0.10) | – | – | |
| | 0.3 (default, pruning rate 0.10) | – | – | |
| | 0.5 (pruning rate 0.10) | – | – | |
| Pruning rate | 0.05 (feedback gain 0.3) | – | – | |
| | 0.10 (default, feedback gain 0.3) | – | – | |
| | 0.15 (feedback gain 0.3) | – | – | |
| | 0.20 (feedback gain 0.3) | – | – | |
| | 0.25 (feedback gain 0.3) | – | – | |
4.9. Module-Level Results
To quantify the contribution of each biological subsystem, we conduct leave-one-component-out ablations by removing one module at a time from the full Bio-RegNet and comparing the resulting variant against the Full Bio-RegNet (w/o → Full) under the same training protocol. Each subsystem, namely the Bayesian Encoder Network (BEN), the Regulatory Immune Network (RIN), and the Autophagic Optimization Engine (AOE), was separately isolated and evaluated to determine its independent effects on calibration, stability, and information efficiency. Additionally, we analyzed coupled configurations (e.g., BEN + RIN) to uncover synergistic effects between immune-inspired feedback and Bayesian uncertainty modeling. The findings, outlined in Table 9, reveal consistent performance enhancements across all principal measures, validating the synergistic functions of each module in sustaining energy–entropy equilibrium and augmenting predictive robustness.
4.9.1. Audit View for Ablation
While
Table 9 reports ablation outcomes aggregated across datasets (mean ± SD), reviewers may reasonably ask for a per-dataset view to ensure that improvements are not driven by dataset selection. Therefore,
Table 10 reports representative per-dataset module-level results already obtained in our BEN and RIN experiments. Specifically, BEN is evaluated with Monte-Carlo sampling depth M = 3 (a balanced setting identified in the sampling-depth study), reporting both calibration (NLL/ECE) and clustering quality (ARI). In parallel, RIN is assessed via Lyapunov decay (final energy), convergence rate, and recovery epochs under different inhibitory gains, demonstrating that inhibitory feedback yields faster stabilization and recovery compared with the zero-gain (no-regulation) case.
4.9.2. Sensitivity to Priors and Regeneration
We further analyze sensitivity to (i) the regeneration rate and (ii) the Bayesian prior variance, as well as key hyperparameters that govern uncertainty calibration and regulation strength. We vary one factor at a time around the default configuration and report calibration (NLL/ECE) and stability metrics. The results identify stable operating regions and show that the coupled loop remains robust to moderate changes in the prior and regeneration rate.
Interpretation. Table 11 shows that BEN calibration improves when increasing M from 1 to 3, while the marginal gain from M = 3 to M = 5 is smaller (consistent with the balanced stochastic-depth claim). For the coupled RIN–AOE mechanism, κ is maximized around the default feedback gain of approximately 0.3 and pruning rate of approximately 0.10, indicating a stable energy–entropy equilibrium ridge.
4.10. Coupled-System Experiments
To further investigate subsystem interactions, we analyzed the coupled dynamics between the Regulatory Immune Network (RIN) and the Autophagic Optimization Engine (AOE) by systematically varying the feedback gain and pruning rate. The equilibrium index κ is used to quantify systemic synergy and energy–entropy balance.
4.10.1. Objective
The goal of this experiment is to verify whether periodic pruning and regrowth improve metabolic efficiency without sacrificing predictive accuracy, consistent with the metabolic equilibrium principle in Equation (
16).
4.10.2. Rationale
AOE biologically models neuronal autophagy: removing weak synapses to conserve energy and triggering limited regrowth to maintain network plasticity. This mechanism aims to minimize free-energy variation while preserving modular functional integrity Q, thus achieving sustainable energetic homeostasis.
4.10.3. Experimental Setup
Pruning-Rate Analysis: Sweep the pruning rate to record the ratio of active parameters and the free-energy change.
Energy–Efficiency Curve: Plot the free-energy change versus the pruning rate to examine diminishing returns and identify the optimal pruning rate.
Scarcity Simulation: Limit computation to 40% FLOPs to emulate low-resource metabolic conditions.
Over-Pruning and Regrowth: When the pruning rate exceeds the stable range, enable limited regrowth to restore lost capacity.
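The prune-then-regrow cycle described above can be sketched with simple magnitude pruning. This is an illustrative NumPy sketch rather than the paper's AOE implementation; the function names, the regrowth distribution, and the regrow-fraction parameterization are our own assumptions.

```python
import numpy as np

def autophagic_prune(weights, prune_rate, regrow_rate=0.0, rng=None):
    """Magnitude pruning with optional regrowth: zero out the `prune_rate`
    fraction of smallest-magnitude weights, then reactivate a `regrow_rate`
    fraction of the pruned positions with small random values."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = weights.copy().ravel()
    n_prune = int(prune_rate * w.size)
    idx = np.argsort(np.abs(w))[:n_prune]       # weakest "synapses"
    w[idx] = 0.0
    n_regrow = int(regrow_rate * n_prune)
    if n_regrow > 0:
        revive = rng.choice(idx, size=n_regrow, replace=False)
        w[revive] = rng.normal(0.0, 0.01, size=n_regrow)  # limited regrowth
    return w.reshape(weights.shape)

def active_ratio(weights):
    """Fraction of non-zero (metabolically active) parameters."""
    return float(np.count_nonzero(weights)) / weights.size
```

Sweeping `prune_rate` and recording `active_ratio` reproduces the shape of the pruning-rate analysis above on any weight matrix.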
4.10.4. Results and Analysis
Figure 6 illustrates that energy expenditure decreases by approximately 35% with an accuracy loss of less than 3%. The optimal pruning rate (near the default of 0.10) achieves the best trade-off between energy efficiency and stability, consistent with the theoretical prediction of Equation (16). This finding confirms that autophagic regulation enables adaptive energy conservation without degrading functional performance.
As summarized in Table 12, moderate feedback (gain around 0.3) with balanced pruning (rate around 0.10) yields the highest equilibrium index κ, supporting an optimal coupling between inhibitory regulation and autophagy-driven sparsification.
4.10.5. Synergy Beyond Autophagy-Only Control
To explicitly test whether inhibitory feedback provides non-additive benefits beyond metabolic pruning alone, we treat the zero-gain setting as a RIN-off reference under the same pruning rate and compute the synergy gain Δκ. As shown in Table 13, coupling yields consistent positive gains across pruning rates, with the largest improvement at the balanced setting.
Overall, the coupled RIN–AOE mechanism demonstrates that controlled inhibitory feedback and adaptive pruning together yield a self-organizing equilibrium, effectively reducing energetic cost while preserving representational fidelity.
4.11. Dynamic Stability Across Epochs
To further analyze how regulatory feedback influences convergence dynamics and temporal coherence, we tracked the evolution of representative vertex activations across training epochs under different feedback gains. This visualization highlights how the strength of the inhibitory feedback parameter modulates damping behavior and stability, revealing the temporal signature of Bio-RegNet’s self-regulatory process.
4.11.1. Experimental Observation
When feedback is disabled (zero gain), trajectories exhibit sustained oscillations and high-frequency noise, indicating a lack of inhibitory stabilization. Introducing moderate feedback reduces oscillation amplitude and accelerates partial convergence. With stronger feedback, the system demonstrates rapid exponential damping and smooth equilibrium, consistent with the Lyapunov convergence predicted in Equation (16). This behavior quantitatively supports the theoretical claim that feedback coupling accelerates entropy minimization and energy stabilization.
Overall, the epoch-wise dynamics reveal that inhibitory feedback acts as a stabilizing mechanism, suppressing chaotic fluctuations and guiding the system toward low-entropy attractor states, thereby ensuring robust convergence of Bio-RegNet’s self-organizing process.
4.11.2. Adversarial and Structural Perturbations
In addition to random noise injection, we assess robustness against four perturbation types that simulate structural and energetic stress in real-world dynamic systems: (i) structural drift (network rewiring), (ii) random shock (burst-like external stimuli), (iii) noise injection (stochastic signal contamination), and (iv) resource deprivation (metabolic/computational scarcity). Recovery performance is measured by the number of epochs needed to restore 95% of the baseline modularity Q, according to the Lyapunov-based recovery model. Additionally, resilience is evaluated by modularity degradation and the percentage of steady-state stability.
4.11.3. Failure Modes and Observed Boundaries
We observe two primary failure modes. (i) Over-pruning: when the pruning rate becomes too large, the model may temporarily lose representational capacity; in our implementation, regrowth is enabled to restore capacity (i.e., when pruning exceeds the stable range, we activate limited regrowth as a safety mechanism). (ii) Over-inhibition: when the feedback gain is too strong, excessive suppression may slow adaptation under rapid distribution shifts. These boundaries are consistent with the coupled sensitivity ridge observed in the synergy map and motivate the recommended operating region reported in the sensitivity study.
4.12. Perturbation and Stress-Testing
To examine the robustness and self-recovery capability of Bio-RegNet, we conducted stress-testing under four perturbation types: (1) structural drift, (2) random shock, (3) noise injection, and (4) resource deprivation. These perturbations emulate neural, molecular, or infrastructural disruptions in real-world dynamic systems. Recovery performance is evaluated by the number of epochs required to regain 95% of baseline modularity
Q, following the Lyapunov-based recovery model in Equation (
27). Robustness is further assessed by energy overshoot and steady-state stability percentage.
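The recovery criterion above reduces to finding the first post-perturbation epoch at which modularity regains 95% of its baseline. A minimal sketch (the function name is ours):

```python
def recovery_epochs(q_trajectory, q_baseline, threshold=0.95):
    """Number of epochs after a perturbation until modularity Q regains
    `threshold` * baseline; returns None if the model never recovers."""
    target = threshold * q_baseline
    for epoch, q in enumerate(q_trajectory):
        if q >= target:
            return epoch
    return None
```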
4.12.1. Objective
To validate whether Bio-RegNet maintains homeostatic stability and rapid recovery under structural and energetic stress, consistent with the theoretical resilience law of Equation (
27).
4.12.2. Rationale
Each perturbation type reflects a distinct failure mode: structural drift mimics network rewiring; random shock represents burst-like external stimuli; noise injection models stochastic signal contamination; and resource deprivation simulates metabolic or computational scarcity. The equilibrium index κ and modularity recovery are used as quantitative indicators of system resilience.
4.12.3. Results and Analysis
As summarized in
Table 14, Bio-RegNet maintains 92.1–96.3% stability and recovers within 21–29 epochs (worst-case: resource deprivation, 92.1% stability and 28.3 ± 3.1 epochs) across all stress conditions. Structural drift and random shock produce the fastest re-equilibration, while noise and resource deprivation lead to slower convergence but no catastrophic degradation. The corresponding recovery trajectories validate the theoretical recovery function. Overall, Bio-RegNet demonstrates robust resilience to perturbations, maintaining global equilibrium with minimal overshoot.
These findings confirm that Bio-RegNet maintains high energetic resilience and rapid convergence under diverse perturbations, validating its self-regulatory feedback and adaptive homeostasis properties.
4.13. Cross-Domain Transfer
To assess the universality and transferability of the acquired regulatory principles, we performed cross-domain experiments by transferring parameters trained in one domain (e.g., neural networks) to several other domains (molecular, macro-scale, and energy systems). This setting evaluates the efficacy of Bio-RegNet’s inherent homeostatic mechanisms—such as inhibitory feedback and autophagic regulation—when implemented in systems with unique topological and dynamical properties.
4.13.1. Experimental Setup
Model parameters learned on neural datasets (BrainNet-Sim, ECoG-TaskNet) were applied directly to molecular and energy networks (Human PPI, SmartGrid-UK) without any fine-tuning. Likewise, parameters learned from molecular data were applied to synthetic and macro-level benchmarks to evaluate generality across biological and engineered domains. Performance was evaluated using modularity Q (structural coherence) and the equilibrium index κ (energetic balance), as defined in Equation (24).
4.13.2. Results and Analysis
As summarized in
Table 15, Bio-RegNet remains resilient across four perturbation types, achieving high stability while requiring fewer recovery epochs under milder drifts and shocks; all improvements are statistically significant.
As summarized in
Table 16, Bio-RegNet retains over 93% of modularity Q and exhibits less than 5% variation in κ across all transfer settings. The best transferability occurs in the Molecular→Synthetic configuration (95.8% Q retention, −3.1% κ change), demonstrating that the learned self-regulatory feedback generalizes effectively to unseen topologies and dynamical regimes. These results indicate that the model's homeostatic mechanisms are not domain-specific but rather encode universal energy–entropy regulation patterns.
These findings confirm that Bio-RegNet encapsulates domain-independent regulatory dynamics, indicating that its feedback-control principles can generalize across neuronal, molecular, and macro-scale systems with low retraining expense.
4.14. Summary and Discussion
Bio-RegNet achieves realistic yet consistent improvements of 10–20%, aligning with the meta-homeostatic theory (Equations (18)–(27)). Figures 4–10 each visualize one layer of regulation, from structural feedback to systemic recovery, demonstrating emergent stability across neural, molecular, and energy networks. Figure 11 shows consistent cross-domain fine-tuning with less than 5% modularity loss and smooth convergence, validating the domain-invariant equilibrium.
4.15. Visualization Protocols
Visualization focuses on energy–entropy evolution, inhibitory field distribution, and pruning dynamics. Table 17 reports consistent gains in interpretability: reduced energy-trajectory RMS, increased entropy compression, improved inhibitory-field uniformity, and high pruning stability, with statistical significance across all tests.
4.16. Comprehensive Evaluation Metrics
To deliver a comprehensive evaluation of Bio-RegNet's systemic efficacy, we consolidated findings from twelve datasets spanning neuronal, molecular, and macro-scale domains. The assessment encompasses five essential categories (stability, calibration, efficiency, resilience, and coupling) to capture both dynamic and functional dimensions of self-regulation. All reported values indicate percentage improvements of Bio-RegNet over the optimal baseline model, with significance confirmed using paired t-tests.
Results and Discussion. Across all categories, Bio-RegNet consistently surpasses state-of-the-art baselines, attaining faster Lyapunov convergence, enhanced uncertainty calibration, and increased energetic efficiency under both stable and perturbed settings. The model's robustness is evident in its swift recovery and strong coupling coherence, suggesting that its homeostatic feedback architecture is applicable across various scales and data modalities. The results jointly affirm that Bio-RegNet attains a stable energy–entropy equilibrium via distributed control across its BEN, RIN, and AOE subsystems.
Table 18 provides an aggregated summary across 12 datasets, showing that Bio-RegNet delivers consistent and statistically significant mean gains over the best in-domain baselines in stability, calibration, efficiency, resilience, and coupling strength, supporting the robustness and universality of its regulatory framework.
Figure 12 shows that the normalized free-energy decay of the full coupling (BEN + RIN + AOE) converges fastest and most smoothly, while removing RIN or AOE weakens damping and delays Lyapunov equilibrium, consistent with the predicted systemic synergy. In summary, these comprehensive evaluations verify that Bio-RegNet integrates stability, adaptability, and efficiency into a unified biologically inspired framework, achieving scalable homeostasis across heterogeneous dynamic systems.
4.17. Discussion
Our results indicate that Bio-RegNet significantly enhances stability, calibration, and self-regulatory behavior relative to conventional dynamic GNNs and Bayesian benchmarks. The improved Lyapunov decay rate, reduced entropy drift under perturbation, and accelerated recovery durations demonstrate the practical efficacy of the meta-homeostatic loop (Bayesian Encoder Network + Regulatory Immune Network + Autophagic Optimization Engine).
From a biologically inspired systems viewpoint, these findings are consistent with recent advances in Treg and autophagy research. Engineered Treg cells are crucial for sustaining immune homeostasis and regulating inflammatory responses in vivo, particularly in autoimmune and transplantation scenarios [
18]. The successful demonstration of CAR-Tregs in preclinical models of type 1 diabetes and organ rejection supports the notion that inhibitory feedback is a genuine mechanism of biological resilience. Concurrently, homeostatic autophagy in neurons is increasingly acknowledged as vital for maintaining synaptic and metabolic stability during stress and aging [
19,
20]. These biological precedents substantiate our design rationale: incorporating inhibitory and renewal processes into learning systems can diminish runaway activations and alleviate structural deterioration.
Recent surveys highlight the vulnerability of GNNs to topology alterations, noise, and adversarial disturbances, advocating for comprehensive frameworks that concurrently tackle robustness, explainability, and uncertainty [
4]. Our findings indicate that merely incorporating uncertainty estimation or adversarial training is inadequate; without structural renewal and inhibitory regulation, the improvements are minimal. The recently published stable-learning GNN research demonstrates that well-crafted sampling and decorrelation strategies can enhance generalization across domains; nevertheless, these systems currently lack mechanisms similar to autophagy or immunoregulation. In contrast, Bio-RegNet’s triadic architecture offers a more profound functional analogy and quantifiable improvements in practical applications.
Our work also contributes to energy efficiency and self-repair. In biological systems, autophagy functions as a metabolic regeneration mechanism, eliminating damaged components and repairing essential structures; disturbances in autophagy are associated with dementia and energy imbalance. The improvements we observed in model energy usage and structural pruning/regeneration echo these biological findings. A recent study on biologically inspired neural network layers indicates that the removal and renewal of inactive neurons can enhance performance in artificial neural networks. Moreover, the trend toward trustworthy GNNs emphasizes metrics beyond mere accuracy, including robustness, fairness, and interpretability [
6]. Bio-RegNet's architecture addresses these issues by integrating interpretability (via regulatory feedback), robustness (through an inhibitory loop), and metabolic efficiency (through an autophagy engine). Nonetheless, limitations and open questions remain. One drawback is that our autophagic regeneration component currently employs heuristic thresholds for pruning and regeneration; future research should develop more biologically realistic and adaptive criteria for structural turnover. Furthermore, while our experimental findings demonstrate cross-domain transfer (neural–molecular–macro) with a modularity loss of less than 5%, the scalability of Bio-RegNet to extensive multimodal or streaming datasets remains unexamined. Recent advancements in resilient architecture search for GNNs [
14] and adversarial training through graph subspace energy optimization [
7] provide further pathways for the integration of Bio-RegNet with scalable automated design.
Furthermore, a more profound integration of biology and computation may examine cytokine-like inter-layer communication within the Regulatory Immune Network or more intricate metabolic modeling in the autophagic mechanism (e.g., analogues of mitochondrial turnover). Biologically, Tregs not only suppress but also modify metabolic signals and tissue healing pathways, indicating potential advancements in the inhibitory module. In terms of uncertainty, incorporating explainable AI methodologies into the feedback loops—associated with current studies in explainer surveys within omics and imaging fields [
21,
22,
23]—could enable Bio-RegNet to achieve both robustness and intrinsic interpretability.
This work demonstrates that artificial learning systems can significantly benefit from biologically based regulation and renewal methods. Although the present research is confined to modeling and mid-scale experimentation, the concordance of our results with contemporary biological and AI literature suggests a future where learning systems not only optimize but also self-maintain, self-repair, and adapt sustainably in dynamic environments.
5. Conclusions and Future Work
This study presents Bio-RegNet, a meta-homeostatic Bayesian neural framework that incorporates three physiologically inspired mechanisms: uncertainty-aware inference, Treg-like immunoregulation, and autophagic optimization, to attain adaptable and stable intelligence. Bio-RegNet demonstrated superior stability, interpretability, and energy efficiency in extended trials on graph-based community detection and dynamic network benchmarks, outperforming standard GNNs and probabilistic baselines. The noted improvements in Lyapunov decay rate, entropy suppression, and recovery speed underscore the efficacy of integrating immune–autophagic feedback into the learning dynamics, therefore converting the model into a self-correcting, self-renewing computational entity.
From a biomimetic viewpoint, these findings support the broader claim that intelligence in artificial systems can be sustained by the same regulatory principles that sustain life in organisms. In nature, Treg cells restrain excessive activation, while autophagy renews cellular structures to maintain metabolic equilibrium [1,24,25]. Bio-RegNet embodies this duality computationally: inhibition and regeneration act together to prevent overfitting and informational degradation, preserving a lasting balance between confidence and uncertainty, growth and pruning, and exploitation and exploration. The outcome is not merely a more efficient network, but a system that learns to maintain its own stability during the learning process.
The implications extend beyond community detection. In future work, Bio-RegNet could be extended to multimodal learning, embodied agents, or adaptive control, where uncertainty and stability are crucial for continual adaptation. Incorporating cytokine-like inter-layer communication or metabolic cost modeling could improve its biological realism and scalability. In parallel, integrating Bio-RegNet with neuromorphic substrates and spiking architectures may open avenues for hardware-level self-maintenance and sustainable energy consumption [15,22,26]. These directions align with contemporary calls for ecological intelligence: AI systems that develop through regulation and renewal rather than sheer optimization [27].
In summary, Bio-RegNet shows that applying the principles of immune regulation and autophagy yields learning architectures that are more robust, interpretable, and inherently sustainable. By grounding computation in the feedback mechanisms that govern biological homeostasis, this paradigm marks progress toward artificial intelligence that is not only intelligent but also alive in its adaptability. Against the strongest baseline, HGNN-ODE, Bio-RegNet achieves higher community-detection accuracy (ARI: 0.77→0.81; NMI: 0.84→0.87) and stronger equilibrium coherence (0.86→0.93). It also exhibits better uncertainty-aware behavior and robustness, reducing NLL from 0.47 to 0.37 and shortening recovery from 20 to 17 epochs in the overall comparison.
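The recovery-epoch comparison can be grounded in the exponential Lyapunov model used throughout the figures, V(t) = V0·exp(−λt): given a recovery trajectory, the decay rate λ and the epoch at which 95% of the perturbation has dissipated follow from a log-linear fit. The helper below is a sketch under that assumption, not the authors' analysis code.

```python
import numpy as np

def fit_lyapunov_decay(energy, v_eq=0.0):
    """Fit V(t) = V0 * exp(-lam * t) to an energy trajectory by
    log-linear least squares, and report the 95%-recovery epoch.

    energy : 1D array of per-epoch Lyapunov energies
    v_eq   : equilibrium energy (illustrative assumption: 0)
    """
    v = np.asarray(energy, dtype=float) - v_eq
    t = np.arange(v.size)
    slope, _log_v0 = np.polyfit(t, np.log(v), 1)
    lam = -slope                 # decay rate; positive for a stable system
    t95 = np.log(20.0) / lam     # epoch at which V falls to 5% of V0
    return lam, t95

# Example: a synthetic noise-free trajectory with known decay rate 0.15
traj = 2.0 * np.exp(-0.15 * np.arange(60))
lam, t95 = fit_lyapunov_decay(traj)
```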
Based on the present findings, several research avenues emerge at the intersection of biological self-regulation and computational intelligence. An immediate extension is to generalize Bio-RegNet to multimodal and heterogeneous graph settings, where diverse relational and temporal patterns coexist. Such integration could enable cross-modal homeostasis under dynamic uncertainty, consistent with recent efforts to unify heterogeneous and temporal graph learning frameworks [28,29].
A further promising avenue concerns data efficiency and self-supervised regulation. Allowing Bio-RegNet to generate its own supervisory signals could move it toward low-label and transfer-learning regimes, in line with recent advances in data-efficient graph representation learning [30]. The same direction supports dynamic and streaming adaptation, since temporal immune–autophagic feedback can consolidate learning in continually evolving environments [31].
On the architectural side, a convergence between Bio-RegNet and Graph Transformers appears promising, merging biologically grounded inhibition–activation balance with global attention mechanisms [32]. Such a union could yield a self-regulating transformer architecture that sustains equilibrium even at large data volumes. The framework's interpretability can be further strengthened by adopting principles from trustworthy and explainable GNNs, which emphasize fairness, robustness, and transparency [33].
From an engineering standpoint, neuromorphic and biomimetic hardware implementations represent a new horizon. Event-driven, low-power systems inspired by cortical circuits are becoming capable of encoding feedback loops akin to immune or autophagic regulation [34,35]. Deploying Bio-RegNet on such platforms could yield energy-efficient, self-regulating artificial systems. In this context, formalizing the thermodynamic foundations of information processing, in particular the energy–entropy trade-off, would establish a principled link between biological metabolism and computational cost [36].
The framework's dual immune–autophagic mechanism is naturally suited to biomedical and connectomic applications, including network-level studies of neuronal connectivity and tissue interactions [37,38]. Investigating adversarial and privacy-resilient behavior [39], along with adaptive regeneration via neural repair models [40], could further establish Bio-RegNet as a cornerstone for sustainable intelligence.
The long-term objective is to develop bio-inspired systems that learn, adapt, and heal like living organisms, achieving equilibrium not through static optimization but through continuous self-renewal. This combination of biological regulation and artificial intelligence represents a significant step toward genuinely autonomous, self-sustaining cognition.
Author Contributions
Conceptualization, D.Q. and Y.M.; methodology, Y.M. and D.Q.; software, D.Q. and Y.M.; validation, D.Q. and Y.M.; formal analysis, D.Q. and Y.M.; data curation, D.Q. and Y.M.; investigation, D.Q. and Y.M.; resources, D.Q. and Y.M.; visualization, D.Q. and Y.M.; writing—original draft preparation, D.Q. and Y.M.; writing—review and editing, D.Q. and Y.M.; supervision, D.Q., M.P. and Y.M.; project administration, D.Q. and M.P.; funding acquisition, Y.M. and D.Q. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable for studies not involving humans or animals.
Informed Consent Statement
Not applicable for studies not involving humans.
Data Availability Statement
Dataset available on request from the authors.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Bittner, S.; Wiendl, H.; Korn, T. Engineered Treg Cells as Putative Therapeutics against Autoimmunity and Inflammation. Trends Immunol. 2023, 44, 385–398.
- Hassan, M.; Elzallat, M.; Mohammed, D.; Balata, M.; El-Maadawy, W. Exploiting Regulatory T Cells (Tregs): Cutting-Edge Therapy for Autoimmune Diseases. Int. Immunopharmacol. 2025, 155, 114624.
- Valencia, M.N.; Kim, S.R.; Jang, Y.; Lee, S.H. Neuronal autophagy: Characteristic features and roles in neuronal pathophysiology. Biomol. Ther. 2021, 29, 605.
- Zhao, J.; Han, Z.; Ding, L.; Wang, P.; He, X.; Lin, L. The molecular mechanism of aging and the role in neurodegenerative diseases. Heliyon 2024, 10, e23389.
- Yang, L.; Guo, C.; Zheng, Z.; Dong, Y.; Xie, Q.; Lv, Z.; Cui, Y. Stress dynamically modulates neuronal autophagy to gate depression onset. Nature 2025, 641, 427–437.
- Gawlikowski, J.; Tassi, C.R.N.; Ali, M.; Lee, J.; Humt, M.; Feng, J.; Zhu, X.X. A survey of uncertainty in deep neural networks. Artif. Intell. Rev. 2023, 56, 1513–1589.
- Li, C.T.; Tsai, Y.C.; Chen, C.Y.; Liao, J.C. Graph neural networks for tabular data learning: A survey with taxonomy and directions. ACM Comput. Surv. 2025, 58, 1–51.
- Wing, J.; Ise, W.; Kurosaki, T.; Sakaguchi, S. Regulatory T Cells Control Antigen-Specific Expansion of Tfh Cell Number and Humoral Immune Responses via the Coreceptor CTLA-4. Immunity 2014, 41, 1013–1025.
- Jaiswara, P.; Gupta, V.; Rawat, S.; Sonker, P.; Kumar, A. Reprogramming of Tumor Associated Immune Cells by Phytochemicals: In-vitro Approaches for Cancer Treatment. In Phytochemistry: An In-Silico and In-Vitro Update: Advances in Phytochemical Research; Springer: Singapore, 2019; pp. 69–82.
- Liu, M.; Liu, S.; Lin, Z.; Chen, X.; Jiao, Q.; Du, X.; Jiang, H. Targeting the interplay between autophagy and the Nrf2 pathway in Parkinson’s disease with potential therapeutic implications. Biomolecules 2025, 15, 149.
- Zhou, X.; Lee, Y.K.; Li, X.; Kim, H.; Sanchez-Priego, C.; Han, X.; Yue, Z. Integrated proteomics reveals autophagy landscape and an autophagy receptor controlling PKA-RI complex homeostasis in neurons. Nat. Commun. 2024, 15, 3113.
- Bai, I.; Keyser, C.; Zhang, Z.; Rosolia, B.; Hwang, J.; Zukin, R.; Yan, J. Epigenetic Regulation of Autophagy in Neuroinflammation and Synaptic Plasticity. Front. Immunol. 2024, 15, 1322842.
- Wang, Q.; Wang, S.; Zhuang, D.; Koutsopoulos, H.; Zhao, J. Uncertainty quantification of spatiotemporal travel demand with probabilistic graph neural networks. IEEE Trans. Intell. Transp. Syst. 2024, 25, 8770–8781.
- Huang, K.; Jin, Y.; Candes, E.; Leskovec, J. Uncertainty quantification over graph with conformalized graph neural networks. Adv. Neural Inf. Process. Syst. 2023, 36, 26699–26721.
- Hoefler, T.; Ben-Nun, T.; Dryden, N.; Peste, A. Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training. J. Mach. Learn. Res. 2021, 22, 1–53.
- Zhang, W.; Sheng, Z.; Yin, Z.; Jiang, Y.; Xia, Y.; Gao, J.; Cui, B. Model Degradation Hinders Deep Graph Neural Networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 2493–2503.
- Zhao, L.; Chen, X.; Xu, J. DepGraph: Towards General Structural Pruning for Deep Neural Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 14852–14866.
- Wing, J.; Ise, W.; Sakaguchi, S. Regulatory T Cells Control Immune Activation by Modulating Cellular Metabolism. Immunity 2023, 56, 210–226.
- Stoops, J.; Morton, T.; Powell, J.; Pace, A.L.; Bluestone, J.A. Treg cell therapy manufacturability: Current state of the art, challenges and new opportunities. Front. Immunol. 2025, 16, 1604483.
- Zhou, X.; Lee, Y.K.; Li, X.; Kim, H.; Sanchez-Priego, C.; Han, X.; Tan, H.; Zhou, S.; Fu, Y.; Purtell, K.; et al. Integrated Proteomics Reveals the Landscape and Control of Neuronal Autophagy. Nat. Commun. 2024, 15, 47440.
- Zhang, X.; Xu, Y.; He, W.; Guo, W.; Cui, L. A Comprehensive Review of the Oversmoothing in Graph Neural Networks. In Proceedings of the CCF Conference on Computer Supported Cooperative Work and Social Computing, Harbin, China, 18–20 August 2023; pp. 451–465.
- Liu, L.; Wang, F.; Chen, X.; Liu, L.; Wang, Y.; Bei, J.; Lei, L.; Zhao, Z.; Tang, C. Designing Multifunctional Microneedles in Biomedical Engineering: Materials, Methods, and Applications. Int. J. Nanomed. 2025, 20, 8693–8728.
- Doglio, M.; Ugolini, A.; Bercher-Brayer, C.; Camisa, B.; Toma, C.; Norata, R.; Bonini, C. Regulatory T cells expressing CD19-targeted chimeric antigen receptor restore homeostasis in Systemic Lupus Erythematosus. Nat. Commun. 2024, 15, 2542.
- Zhao, Y.P.; Fine, B.M. Integrating Macrophages into Human-Engineered Cardiac Tissue. Cells 2025, 14, 1393.
- Lyu, J.; Zhang, H.; Wang, C.; Pan, M. New Insight in Treating Autoimmune Diseases by Targeting Autophagy. Autoimmunity 2024, 57, 2351872.
- Han, B.; Liu, R.; Zhao, H. Adaptive Sparse Structure Development with Pruning and Regeneration for Spiking Neural Networks. Neural Netw. 2022, 156, 24–35.
- Ennadir, S.; Abbahaddou, Y.; Lutzeyer, J.F.; Vazirgiannis, M.; Boström, H. A simple and yet fairly effective defense for graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 26–27 February 2024; Volume 38, pp. 21063–21071.
- Jin, M.; Koh, H.Y.; Wen, Q.; Zambon, D.; Alippi, C.; Webb, G.I.; King, I.; Pan, S. A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10466–10485.
- Ahsan, T.; Kim, J.; Park, S. Temporal and Heterogeneous Graph Neural Networks: A Unified Review. Inf. Fusion 2024, 115, 102115.
- Yang, Z.; Zhang, G.; Wu, J.; Yang, J.; Sheng, Q.Z.; Xue, S.; Zhou, C.; Aggarwal, C.C.; Peng, H.; Hu, W.; et al. State of the Art and Potentialities of Graph-Level Learning. ACM Comput. Surv. 2024, 57, 1–40.
- Zheng, Y.; Yi, L.; Wei, Z. A Survey of Dynamic Graph Neural Networks. Front. Comput. Sci. 2025, 19, 196323.
- Bilgili, A.; Gül, K. Forecasting Power Generation of Wind Turbine with Real-Time Data Using Machine Learning Algorithms. Clean Technol. Recycl. 2024, 4, 108–124.
- Dai, E.; Zhao, T.; Zhu, H.; Xu, J.; Guo, Z.; Liu, H.; Tang, J.; Wang, S. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability. Mach. Intell. Res. 2024, 21, 1011–1061.
- Mompó Alepuz, A.; Papageorgiou, D.; Tolu, S. Brain-Inspired Biomimetic Robot Control: A Review. Front. Neurorobotics 2024, 18, 1395617.
- Mirfatbzadeh, S.M.; Longo, M.; Di Martino, A.; Saldarini, A.; Faranda, R.S. Exploring the Synergy of Artificial Intelligence in Energy Storage Systems for Electric Vehicles. Electronics 2024, 13, 1973.
- Georgelin, R.; Jackson, C.J. Entropy, Enthalpy, and Evolution: Adaptive Trade-Offs in Protein Binding Thermodynamics. Curr. Opin. Struct. Biol. 2025, 94, 103080.
- Mohammadi, H.; Karwowski, W. Graph Neural Networks in Brain Connectivity Studies: Methods, Challenges, and Future Directions. Brain Sci. 2024, 15, 17.
- Lu, W.; Anumba, C.J. Routledge Handbook of Smart Built Environment; Routledge: London, UK, 2025.
- Zhang, H.; Zhu, C.; Wang, X.; Zhou, Z.; Yin, C.; Li, M.; Xue, L.; Wang, Y.; Hu, S.; Liu, A.; et al. BadRobot: Jailbreaking Embodied LLM Agents in the Physical World. In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR), Singapore, 24–28 April 2025.
- He, J.; Qiao, L.; Li, J.; Lu, Z.; Fu, J.; Chen, X.; Zhang, X.; Hu, X. Advanced Strategies for 3D-Printed Neural Scaffolds: Materials, Structure, and Nerve Remodeling. Bio-Des. Manuf. 2024, 7, 747–770.
Figure 1.
Overview of the Bio-RegNet framework. Bio-RegNet operates through a four-stage meta-homeostatic learning loop: (1) a Bayesian Effector Network (BEN) for uncertainty-aware inference, (2) a Treg-inspired Regulatory Network (RIN) providing entropy- and energy-based inhibitory feedback, (3) an Autophagic Optimization Engine (AOE) enabling Fisher-information-guided pruning and regeneration, and (4) a global meta-homeostatic energy update driving the system toward equilibrium. Solid arrows indicate forward execution, while dashed arrows denote feedback across iterations.
Figure 2.
Illustrative single-sample processing pipeline of Bio-RegNet for graph-based community detection. Starting from an input graph sample G, the Bayesian Effector Network (BEN) produces uncertainty-weighted edge representations (Output 1). The Treg-inspired Regulatory Network (RIN) suppresses unstable activations without altering topology (Output 2). The Autophagic Optimization Engine (AOE) performs structural refinement, yielding the final community structure. A global meta-homeostatic energy update couples all stages across iterations, closing the learning loop toward equilibrium.
Figure 3.
Nonlinear regression with confidence and prediction bands. The fitted curve depicts the regulatory energy trend with 95% confidence and prediction intervals, highlighting Bio-RegNet’s stable uncertainty propagation and robust energy regulation across epochs.
Figure 4.
Energy–entropy trajectory across all models. Trajectories of free energy and entropy for eleven models (GCN–Bio-RegNet) demonstrate that Bio-RegNet achieves the fastest convergence and lowest steady-state energy, consistently confirming its superior energetic efficiency and dynamic stability.
Figure 5.
Effect of feedback gain on Lyapunov energy decay and spectral damping. (a) BEN calibration curve; (b) RIN Lyapunov decay; (c) AOE energy–efficiency curve; (d) overall stability measured by normalized free-energy decay. All follow Equations (4), (11) and (16).
Figure 6.
Coupled Feedback Equilibrium Map. The heatmap summarizes the coupled feedback equilibrium index under different pruning rates and feedback gains; the highlighted cell indicates the selected operating point. The three modules (BEN uncertainty calibration, RIN Lyapunov decay, and AOE energy–efficiency) follow the predicted dynamics in Equations (4), (11) and (16).
Figure 7.
Coupling synergy map. Heatmap of the equilibrium index as a function of feedback gain and autophagic pruning rate. A clear ridge of optimal synergy is observed, consistent with the equilibrium conditions predicted by Equation (5).
Figure 8.
Epoch-wise vertex dynamics under feedback variations. Temporal evolution of ten representative vertices across epochs for different feedback gains. Higher feedback strength results in faster exponential damping, lower variance, and smoother convergence, while the absence of feedback leads to persistent oscillations and residual fluctuations. These patterns empirically confirm that regulatory coupling enhances systemic stability and accelerates convergence toward energetic equilibrium.
Figure 9.
Perturbation recovery behaviors. (a) Structural-drift re-equilibration; (b) shock-response latency; (c) combined phase trajectory showing exponential Lyapunov decay as predicted by Equation (27).
Figure 10.
Perturbation recovery trajectories and theoretical fits. Colored solid lines denote empirical recovery under four perturbation types (mean ± SD), and gray dashed lines represent fitted exponential Lyapunov models, with the fitted decay rate annotated for each curve. Bio-RegNet reaches 95% equilibrium within 50 epochs under all stress conditions, demonstrating strong resilience and consistency with Equation (27).
Figure 11.
Cross-domain transfer performance. Fine-tuning modularity from neural to molecular, energy, and macro domains. All paths show stable convergence with less than 5% modularity loss, confirming the domain-invariant equilibrium predicted by Equation (24).
Figure 12.
Overall stability comparison across ablation variants. Normalized free-energy decay trajectories demonstrate that full coupling (BEN + RIN + AOE) achieves the fastest and most stable convergence. Removing either RIN or AOE weakens damping feedback and delays Lyapunov equilibrium, supporting the systemic synergy predicted by Equation (27) and verifying the distributed homeostasis principle in Bio-RegNet.
Table 1.
Biological grounding of Bio-RegNet: mechanistic (implemented) vs. analogical (interpretive) components.
| Subsystem | Mechanistic (Implemented) | Analogical (Interpretive) |
|---|---|---|
| RIN (Treg-inspired) | Inhibitory feedback gain; entropy-/energy-conditioned regulation; step-size attenuation; Lyapunov-driven stabilization criteria | “Immune tolerance” metaphor: suppressing over-activation to maintain equilibrium under uncertainty |
| AOE (Autophagy-inspired) | Viability- and threshold-based pruning; controlled regrowth under over-pruning; structural entropy dynamics | “Autophagy/turnover” metaphor: removing low-utility units and regenerating capacity to sustain long-run adaptation |
Table 2.
Stability and calibration metrics of BEN.
| Dataset | NLL ↓ | ECE ↓ | PICP ↑ |
|---|---|---|---|
| BrainNet-Sim * | | | |
| ECoG-TaskNet * | | | |
| SmartGrid-UK * | | | |
Table 3.
Hierarchical experimental sequence.
| Layer | Level | Goal | Description |
|---|---|---|---|
| I | Intrinsic regulation | Validate BEN + RIN | Lyapunov convergence, E/I balance |
| II | Structural adaptation | Test AOE | Measure pruning and metabolic gain |
| III | Coupled feedback | BEN–RIN–AOE synergy | Energy–entropy equilibrium |
| IV | Environmental perturbation | Stress recovery | Resilience under noise/drift |
| V | Cross-domain transfer | Generalization | Homeostatic invariance |
Table 4.
Datasets and descriptive statistics. The datasets cover multiple domains—from neural dynamics and molecular signaling to macro-scale socio-energy systems. Node and edge counts are averaged per temporal snapshot where applicable. All datasets were preprocessed into temporal graph formats for unified training and evaluation.
| Dataset | Domain | #Nodes | #Edges | Time Steps | Feature Dim | Focus |
|---|---|---|---|---|---|---|
| BrainNet-Sim | Neural | 1200 | 10,320 | 50 | 64 | Oscillatory stability |
| Allen-NeuroDyn | Neural | 2000 | 15,400 | 40 | 128 | Excitation/inhibition balance |
| ECoG-TaskNet | Neural | 5120 | 28,760 | 35 | 256 | Task-evoked coherence |
| Human PPI | Molecular | 6700 | 82,100 | 20 | 128 | Pathway regulation |
| Yeast Signaling | Molecular | 2480 | 11,250 | 30 | 96 | Stress robustness |
| GeneRegNet | Molecular | 4300 | 27,000 | 25 | 64 | Gene stability |
| Dynamic SBM | Synthetic | 1000 | 7900 | 60 | 64 | Controlled drift |
| Enron-Email | Macro | 13,700 | 170,000 | 50 | 32 | Communication evolution |
| Reddit-Pushshift 2024 | Macro | 18,600 | 220,500 | 40 | 128 | Community formation |
| SmartGrid-UK | Energy | 2150 | 14,800 | 48 | 32 | Load fluctuation |
| ELD-2012 | Energy | 3200 | 16,900 | 60 | 16 | Demand stability |
| NeuroBench-Temporal | Neural | 1500 | 12,200 | 70 | 64 | Self-organization |
Table 5.
Implementation details and hyperparameter settings of Bio-RegNet. “Default” denotes the configuration used in all main experiments. “Search range” denotes the small grid explored on the validation split unless the corresponding experiment explicitly performs a broader sweep.
| Category | Symbol/Name | Meaning | Default | Search Range/Notes |
|---|---|---|---|---|
| Optimization | Optimizer | Optimization algorithm | AdamW | {Adam, AdamW} |
| | | LR for variational mean update | | |
| | | LR for variational std update | | |
| | Weight decay | regularization | | |
| | Batch size | Mini-batch size | 128 | {64, 128, 256} |
| | | Max epochs | 200 | {100, 200, 300} |
| | P | Early-stopping patience | 20 | {10, 20, 30} |
| | Clip | Gradient clipping norm | 1.0 | {0.5, 1.0, 2.0} |
| Bayesian (BEN) | | KL weight in ELBO | 1.0 | {0.5, 1.0, 2.0} |
| | M | MC samples (sampling depth) | 5 | {1, 3, 5, 10} |
| | | Prior variance | 1.0 | {0.1, 1.0, 5.0} |
| | | Posterior init std ( initialization) | 0.1 | {0.05, 0.1, 0.2} |
| Regulation (RIN) | | Weights in | | Each in with |
| | | Step-size attenuation strength in | 0.5 | {0.2, 0.5, 0.8} |
| | | Inhibitory magnitude (if used in ) | 0.5 | {0.3, 0.5, 0.7} |
| Autophagy (AOE) | | Pruning threshold on viability | 0.10 | Main: {0.05, 0.10, 0.15, 0.20}; Broad sweep: |
| | | Regrowth rate under over-pruning | 0.05 | Enabled when |
| | | Stabilizing constant in | | Fixed |
| | | Structural entropy decay rate | 0.1 | {0.05, 0.1, 0.2} |
| | | Informational regeneration gain in | 0.1 | {0.05, 0.1, 0.2} |
| Coupling/Stability | | Coupled feedback gain (RIN–AOE) | 0.3 | Main: {0.0, 0.3, 0.5}; Broad sweep: |
| | | Coefficient update steps in Algorithm 1 | 0.01 | {0.005, 0.01, 0.02} |
| | | Equilibrium tolerance in | | {, , } |
| Architecture | L | Number of message-passing layers | 2 | {2, 3, 4} |
| | d | Hidden dimension | 64 | {32, 64, 128} |
| | | Dropout rate | 0.3 | {0.0, 0.3, 0.5} |
Table 6.
Overall comparison of Bio-RegNet and baseline models. All metrics are reported as mean ± SD over five runs. Lower NLL, ECE, and energy-variance metrics indicate better calibration and stability, while higher PICP, ARI, NMI, Q, and equilibrium index reflect stronger clustering consistency and equilibrium coherence. Bio-RegNet achieves the best results across nearly all metrics, reducing energy variance by over 10% and improving calibration (ECE) by 20% relative to the best baseline (HGNN-ODE). Bold values denote the best performance.
| Model | NLL ↓ | PICP ↑ | ECE ↓ | ARI ↑ | NMI ↑ | Q ↑ | ↓ | ↓ | ↓ | Recov Epochs ↓ | ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GCN | 0.98 | 0.71 | 0.09 | 0.52 | 0.61 | 0.34 | 1.00 | 1.00 | −0.04 | 45 | 0.55 |
| EvolveGCN | 0.87 | 0.74 | 0.08 | 0.58 | 0.67 | 0.37 | 0.92 | 0.93 | −0.07 | 39 | 0.62 |
| TGN | 0.74 | 0.78 | 0.07 | 0.63 | 0.71 | 0.38 | 0.81 | 0.89 | −0.09 | 32 | 0.68 |
| DyRep | 0.69 | 0.79 | 0.07 | 0.65 | 0.74 | 0.40 | 0.79 | 0.88 | −0.11 | 31 | 0.70 |
| BGNN | 0.58 | 0.83 | 0.06 | 0.70 | 0.78 | 0.42 | 0.70 | 0.83 | −0.13 | 28 | 0.76 |
| EGN | 0.56 | 0.84 | 0.06 | 0.72 | 0.80 | 0.43 | 0.67 | 0.82 | −0.15 | 26 | 0.78 |
| Homeo-GNN | 0.54 | 0.85 | 0.05 | 0.73 | 0.81 | 0.44 | 0.65 | 0.81 | −0.17 | 24 | 0.82 |
| DyFormer | 0.49 | 0.86 | 0.05 | 0.75 | 0.83 | 0.45 | 0.62 | 0.79 | −0.19 | 22 | 0.84 |
| EvoGNN | 0.48 | 0.87 | 0.05 | 0.76 | 0.83 | 0.45 | 0.60 | 0.78 | −0.20 | 21 | 0.85 |
| HGNN-ODE | 0.47 | 0.88 | 0.05 | 0.77 | 0.84 | 0.45 | 0.58 | 0.77 | −0.21 | 20 | 0.86 |
| Bio-RegNet | 0.37 | 0.91 | 0.04 | 0.81 | 0.87 | 0.47 | 0.52 | 0.70 | −0.25 | 17 | 0.93 |
Table 7.
Performance of the Bayesian Effector Network (BEN) under different sampling depths M. Values are mean ± SD over five runs.
| Dataset | M | NLL ↓ | ECE ↓ | PICP ↑ | ARI ↑ | p |
|---|---|---|---|---|---|---|
| BrainNet-Sim | 1 | | | | | <0.05 |
| | 3 | | | | | <0.05 |
| | 5 | | | | | <0.05 |
| ECoG-TaskNet | 3 | | | | | <0.05 |
| SmartGrid-UK | 3 | | | | | <0.05 |
Table 8.
Effect of feedback gain on Lyapunov energy decay and damping ratio.
| Dataset | | (Final) ↓ | ↓ | Recovery Time (Epochs) ↓ | p |
|---|---|---|---|---|---|
| BrainNet-Sim | 0.0 | | | > 50 | – |
| | 0.3 | | | 28 ± 4 | <0.05 |
| | 0.5 | | | 25 ± 3 | <0.05 |
| SmartGrid-UK | 0.3 | | | 32 ± 5 | <0.05 |
Table 12.
Equilibrium index across feedback gain and pruning rate. Mean ± SD over five runs. A higher index indicates stronger energy–entropy equilibrium and coupled stability. Moderate feedback with balanced pruning yields the highest index, highlighting optimal synergy between the inhibition and autophagy modules.
| Feedback Gain | Pruning Rate = 0.05 | 0.10 | 0.15 | 0.20 | 0.25 | Optimal p |
|---|---|---|---|---|---|---|
| 0.0 | | | | | | – |
| 0.3 | | | | | | |
| 0.5 | | | | | | |
Table 13. Coupling synergy of RIN–AOE quantified by Equation (30), computed from Table 12. Positive values indicate improved equilibrium coherence due to inhibitory–metabolic coupling beyond the RIN-off (zero-gain) reference.
| Feedback Gain | Pruning Rate = 0.05 | 0.10 | 0.15 | 0.20 | 0.25 |
|---|---|---|---|---|---|
| 0.3 (vs. 0.0) | 0.12 | 0.16 | 0.15 | 0.13 | 0.09 |
| 0.5 (vs. 0.0) | 0.10 | 0.12 | 0.11 | 0.10 | 0.07 |
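The coupling-synergy values above can be reproduced in spirit as the equilibrium-index gain over the RIN-off reference. Equation (30) is not reproduced in this section, so the difference form below is an assumption; the function name and toy grid are illustrative:

```python
# Hypothetical sketch: RIN-AOE coupling synergy as the equilibrium-index
# gain at feedback gain `lam` over the RIN-off (gain = 0) reference at
# the same pruning rate. This difference form is an assumption standing
# in for Equation (30), which is defined elsewhere in the paper.

def coupling_synergy(phi, lam, rho):
    """phi: dict mapping (feedback_gain, pruning_rate) -> equilibrium index."""
    return phi[(lam, rho)] - phi[(0.0, rho)]

# Toy equilibrium-index grid (illustrative numbers, not from the paper).
phi = {
    (0.0, 0.10): 0.70, (0.3, 0.10): 0.86, (0.5, 0.10): 0.82,
    (0.0, 0.20): 0.68, (0.3, 0.20): 0.81, (0.5, 0.20): 0.78,
}
synergy = coupling_synergy(phi, lam=0.3, rho=0.10)  # 0.86 - 0.70 = 0.16
```

A positive synergy at every pruning rate, as reported in Table 13, means the inhibitory module improves equilibrium coherence regardless of autophagic pruning intensity.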
Table 14. Comprehensive stress-test results under four perturbation types. Values are mean ± SD over 30 independent trials. Lower values and fewer recovery epochs indicate faster stabilization, while a higher stability percentage denotes stronger resilience. All differences are statistically significant.
| Perturbation | ↓ | Recovery Epochs ↓ | Stability (%) ↑ | p |
|---|---|---|---|---|
| Structural Drift (10%) | | | | <0.01 |
| Random Shock (peak × 2) | | | | <0.01 |
| Noise Injection ( = 0.3) | | | | <0.05 |
| Resource Dropout (50%) | | | | <0.05 |
Table 15. Cross-domain transfer performance of Bio-RegNet. Mean ± SD over five runs. High Q retention indicates preserved structural modularity, while a small energy change implies consistent energetic equilibrium. All settings maintain >93% modularity with minimal degradation of regulatory stability, confirming the universality of Equation (24).
| Transfer Setting | Q Retention (%) ↑ | Energy Change (%) ↓ |
|---|---|---|
| Neural→Molecular | | |
| Molecular→Energy | | |
| Neural→Macro | | |
| Molecular→Synthetic | | |
Table 16. Cross-domain transfer performance (mean ± SD); modularity is reported relative to the source domain.
| Transfer Path | | | ↓ | p |
|---|---|---|---|---|
| Neural → Molecular | | | | <0.05 |
| Molecular → Energy | | | | <0.05 |
| Neural → Macro | | | | <0.05 |
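The modularity-retention and energy-change percentages in Tables 15 and 16 reduce to simple ratios. A minimal sketch, assuming retention is target-domain modularity over source-domain modularity and energy change is the relative shift of the equilibrium energy (function names and numbers are illustrative):

```python
# Hypothetical sketch of the two transfer metrics: Q retention as a
# percentage of source-domain modularity preserved after transfer, and
# energy change as the relative shift of the equilibrium energy.
# Both definitions are assumptions consistent with the table captions.

def q_retention_pct(q_source, q_target):
    """Percentage of source-domain modularity preserved after transfer."""
    return 100.0 * q_target / q_source

def energy_change_pct(e_source, e_target):
    """Relative change (%) of the equilibrium energy across domains."""
    return 100.0 * abs(e_target - e_source) / e_source

# Toy values (not from the paper): modularity 0.62 -> 0.58 after transfer.
retention = q_retention_pct(0.62, 0.58)  # ~93.5%, i.e. >93% retained
delta_e = energy_change_pct(1.00, 1.04)  # 4% equilibrium-energy shift
```

Under these definitions, the ">93% modularity" claim in the caption of Table 15 corresponds to a target-domain modularity within about 7% of its source value.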
Table 17. Visualization and interpretability metrics.
| Metric | Description | Mean ± SD | p |
|---|---|---|---|
| Energy Trajectory RMS ↓ | Root-mean-square deviation of | | <0.05 |
| Entropy Compression ↑ | Reduction in entropy (%) | | <0.05 |
| Inhibitory Field Uniformity ↑ | Std dev of (normalized) | | <0.05 |
| Pruning Stability ↑ | Correlation of Z after pruning | | <0.05 |
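Each interpretability metric above can be computed from logged traces. A minimal sketch in plain Python, assuming the definitions below (RMS deviation of the energy trace from its target, percentage entropy reduction, and Pearson correlation of representations Z before and after pruning); the paper's exact formulas may differ:

```python
# Hypothetical sketch of the interpretability metrics in Table 17.
# All function names and toy data are illustrative assumptions.
import math

def energy_rms(energies, target):
    """Root-mean-square deviation of the energy trace from its target."""
    return math.sqrt(sum((v - target) ** 2 for v in energies) / len(energies))

def entropy_compression_pct(h_before, h_after):
    """Percentage reduction in representation entropy."""
    return 100.0 * (h_before - h_after) / h_before

def pruning_stability(z_before, z_after):
    """Pearson correlation between representations before/after pruning."""
    n = len(z_before)
    mb = sum(z_before) / n
    ma = sum(z_after) / n
    cov = sum((a - mb) * (b - ma) for a, b in zip(z_before, z_after))
    var_b = sum((a - mb) ** 2 for a in z_before)
    var_a = sum((b - ma) ** 2 for b in z_after)
    return cov / math.sqrt(var_b * var_a)

rms = energy_rms([0.3, 0.2, 0.1, 0.1], target=0.1)
compression = entropy_compression_pct(2.0, 1.5)  # 25% entropy reduction
stability = pruning_stability([1.0, 2.0, 3.0], [1.1, 2.0, 2.9])
```

A pruning-stability correlation near 1 means autophagic pruning removed parameters without reorganizing the representation geometry, which is the interpretation the table's "Pruning Stability ↑" column relies on.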
Table 18. Comprehensive performance summary of Bio-RegNet across 12 datasets. Each value denotes the mean percentage gain ± SD relative to the best-performing baseline within the same domain. Improvements span Lyapunov stability, probabilistic calibration, energy efficiency, recovery resilience, and feedback coupling strength. All categories exhibit statistically significant gains, confirming the robustness and universality of Bio-RegNet’s regulatory framework.
| Category | Metric | Gain (%) ↑ | Std Dev | Significance (p < 0.05) |
|---|---|---|---|---|
| Stability | Lyapunov decay rate | | | ✓ |
| Calibration | NLL reduction | | | ✓ |
| Efficiency | Energy saving | | | ✓ |
| Resilience | Recovery speed | | | ✓ |
| Coupling | Equilibrium index | | | ✓ |