Article

LazyNet: Interpretable ODE Modeling of Sparse CRISPR Single-Cell Screens Reveals New Biological Insights

School of Life Sciences and Technology, Southeast University, Nanjing 210096, China
*
Author to whom correspondence should be addressed.
Biology 2026, 15(1), 62; https://doi.org/10.3390/biology15010062
Submission received: 27 November 2025 / Revised: 17 December 2025 / Accepted: 28 December 2025 / Published: 29 December 2025
(This article belongs to the Special Issue Artificial Intelligence Research for Complex Biological Systems)

Simple Summary

Most labs study how genes change when a single gene is switched on or off, but they rarely have the budget to collect long, repeated measurements or to standardize pipelines across groups. We built a tool that works directly on a lab’s own “before and after” gene-editing experiments and turns them into clear, mechanism-level readouts of cause and effect. In tests, our method produced accurate predictions under tight time and hardware limits and revealed when genes act together rather than one at a time. The networks it learned agreed with outside evidence from large public gene resources and independent protein measurements, and they recovered many known regulators in ferroptosis. Because the same workflow can be rerun on other datasets and on standard CPU-only hardware (no GPU needed), small teams can analyze their own data, compare to public studies, and plan sharper follow-up experiments. Our results are therefore most relevant for labs that must train models from scratch on their own data; when large pretrained models can be fine-tuned on massive public datasets, they remain powerful complementary options.

Abstract

We present LazyNet, a compact one-step neural-ODE model for single-cell CRISPR activation/interference (A/I) that operates directly on two-snapshot (“pre → post”) measurements and yields parameters with clear mechanistic meaning. The core log–linear–exp residual block exactly represents multiplicative effects, so synergistic multi-locus responses appear as explicit components rather than opaque composites. On a 53k-cell × 18k-gene neuronal Perturb-seq matrix, a three-replica LazyNet ensemble trained under a matched 1 h budget achieved strong threshold-free ranking and competitive error (genome-wide r ≈ 0.67) while running on CPUs. For comparison, we instantiated transformer (scGPT-style) and state-space (RetNet/CellFM-style) architectures from random initialization and trained them from scratch on the same dataset and within the same 1 h cap on a GPU platform, without any large-scale pretraining or external data. Under these strictly controlled, low-data conditions, LazyNet matched or exceeded their predictive performance while using far fewer parameters and resources. A T-cell screen included only for generalization showed the same ranking advantage under the identical evaluation pipeline. Beyond prediction, LazyNet exposes directed, local elasticities; averaging Jacobians across replicas produces a consensus interaction matrix from which compact subgraphs are extracted and evaluated at the module level. The resulting networks show coherent enrichment against authoritative resources (large-scale co-expression and curated functional associations) and concordance with orthogonal GPX4-knockout proteomes, recovering known ferroptosis regulators and nominating testable links in a lysosomal–mitochondrial–immune module. These results position LazyNet as a practical option for from-scratch, low-data CRISPR A/I studies where large-scale pretraining of foundation models is not feasible.

1. Introduction

Single-cell RNA sequencing (scRNA-seq) has transformed transcriptomics by revealing the rich cell-to-cell heterogeneity masked in bulk averages, uncovering rare subpopulations, transitional states, and lineage hierarchies across tissues and species [1,2]. Yet each cell is still frozen in time—captured under one environmental condition and at a single instant—so most temporal trajectories and stimulus-response behaviors remain invisible. Systematically probing these dimensions demands direct perturbations: transcriptional readouts following targeted genetic edits illuminate how regulatory circuits maintain identity, pinpoint druggable control nodes, and expose synergistic gene pairs that can amplify combination therapies or guide precise cell-engineering strategies [3,4,5,6,7,8,9,10,11,12,13,14,15,16]. CRISPR activation or inhibition screens joined with scRNA-seq (“Perturb-seq”) meet this need by bar-coding guide RNAs so that every sequenced cell records its own perturbation history, enabling thousands of knockouts, knockdowns, or activations to be multiplexed in one experiment [17,18,19]. The resulting high-dimensional maps of genotype-to-phenotype relationships have already sharpened target validation pipelines, suggested rescue strategies for disease mutations, and provided training data for machine-learning models that aspire to predict unseen edits. These advances have converged in ambitious blueprints for an in silico “AI virtual cell” (AIVC) capable of simulating cellular behavior across modalities, contexts, and perturbations [20].
Several recent frameworks aim to predict perturbation responses from single-cell data, including transformer-based models (e.g., scGPT), large foundation models (e.g., CellFM), and graph-based approaches (e.g., GEARS) [21,22,23]. While powerful, these pipelines are typically designed to be pretrained or adapted on very large single-cell corpora and often require multi-GPU hardware, substantial storage, and/or curated pathway priors. Their latent representations can also be difficult to map onto concrete biochemical rate laws, complicating mechanistic interpretation in routine lab settings. Moreover, recent analyses indicate that deep models have not yet been shown to systematically outperform simpler baselines for gene-perturbation prediction in every setting, underscoring the need for methods that are both accurate and practical [24]. Neural-ODE approaches offer improved mechanistic fidelity but often assume fixed functional forms or entail numerically intensive integration over high-dimensional states, which limits tractability at the transcriptome scale [25,26,27,28,29,30,31,32,33,34].
LazyNet addresses this gap with a neural ODE that embeds a log–linear–exp residual block inside a single explicit Euler step. Working in log space compresses multiplicative gene–gene effects into a sparse, directly interpretable rate matrix, avoiding attention-style quadratic costs and heavy ODE solvers. Crucially, the model is designed for CPU-level efficiency (no GPU required), operates directly on count matrices without external pathway priors, and matches the common two-snapshot CRISPR A/I design by treating the pre- and postperturbation profiles as a finite-difference sample of the underlying dynamics.
Many CRISPR A/I studies, however, are small, self-contained projects: a single screen with tens of thousands of cells at most, limited local compute, and sometimes privacy, licensing, or logistical barriers that preclude aggregating data for large-scale pretraining or hosting heavy foundation models. In this regime, practitioners must typically train models from scratch on their own dataset under strict time and hardware budgets, often on CPU-only nodes. Our focus in this work is precisely this from-scratch, low-data setting. In the comparisons that follow, transformer and state-space baselines such as scGPT-style and CellFM/RetNet-style architectures are therefore used as non-pretrained architectural controls trained from random initialization on the same dataset, rather than in their usual large-scale pretraining–fine-tuning mode.
In this study we show that, when all models are trained from scratch on the same neuronal Perturb-seq matrix under a fixed 1 h budget, LazyNet attains competitive or superior predictive accuracy to transformer- and state-space-style baselines while training in hours on a single 24-core CPU node and that its elasticity-based analysis yields compact, biologically coherent subnetworks. These experiments are intended to reflect the practical setting of a stand-alone CRISPR A/I project in which large-scale pretraining of foundation models is not feasible, rather than to evaluate fully pretrained scGPT or CellFM in their natural high-data regime. Applied to ferroptosis, the framework recovers established regulators and nominates a lysosomal–mitochondrial–immune module for experimental follow-up, illustrating how CPU-tractable, interpretable modeling can turn routine perturbation screens into causal hypotheses at the transcriptome scale.

2. Materials and Methods

2.1. LazyNet Architecture

As shown in Figure 1, the full gene-expression vector (INPUT) is propagated through a two-stage transformation composed of a logarithmic layer (LOG) followed by an exponential layer (EXP). These twin layers realize multiplicative, biology-inspired interactions in log space while retaining computational simplicity. The transformed signal is added back to the original input via a residual skip connection (black bar and “+”), producing an updated vector of identical dimensionality (OUTPUT). Stacking such blocks corresponds to explicit Euler integration of an underlying ordinary differential equation, allowing LazyNet to model gene-regulatory dynamics within a ResNet framework while continuously adjusting weights during training [35].
LazyNet has already been applied successfully to both synthetic ODE benchmarks and real-world dynamical systems. Its core idea is to embed a log → linear → exp residual block inside an explicit Euler update, so that every weight acts as a direct ODE rate constant and the network learns interpretable dynamics with minimal parameters.
An ODE of the form $\frac{df}{dt} = F(f)$ can be discretized with forward Euler:
$$f_{n+1} = f_n + F(f_n)\,\Delta t$$
LazyNet represents the rate law F with a single log–linear–exp (LLE) block:
$$F_{\mathrm{LazyNet}}(x) = \exp\!\left(W_2 \cdot \log(W_1 x + b_1) + b_2\right)$$
$$f_{n+1} = f_n + F_{\mathrm{LazyNet}}(f_n)\,\Delta t$$
Here $x \in \mathbb{R}_+^{d}$; $W_1$ and $W_2$ are learnable weight matrices and $b_1$, $b_2$ are biases. Taking the logarithm converts multiplicative gene–gene interactions into additions, allowing a single neuron to encode an entire monomial exactly, while the outer exponential restores the original scale. More details and examples can be seen in Supplementary S4.
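For concreteness, a minimal sketch of this block in PyTorch is shown below; the module name, hidden width, clamping constant, and fixed step size are illustrative assumptions rather than the released implementation:

    import torch
    import torch.nn as nn

    class LLEBlock(nn.Module):
        """One log-linear-exp residual step: f_{n+1} = f_n + exp(W2 log(W1 f_n + b1) + b2) * dt."""
        def __init__(self, n_genes: int, hidden: int = 256, dt: float = 1.0, eps: float = 1e-6):
            super().__init__()
            self.lin1 = nn.Linear(n_genes, hidden)    # W1, b1
            self.lin2 = nn.Linear(hidden, n_genes)    # W2, b2
            self.dt = dt                              # latent step size (not identifiable from two snapshots)
            self.eps = eps                            # keeps the log argument strictly positive

        def forward(self, f_n: torch.Tensor) -> torch.Tensor:
            z = torch.log(torch.clamp(self.lin1(f_n), min=self.eps))   # LOG layer
            rate = torch.exp(self.lin2(z))                             # EXP layer -> F_LazyNet(f_n)
            return f_n + rate * self.dt                                # explicit Euler residual update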
During training, each observed state $f_n$ is provided as input, the network predicts $f_{n+1}$, and the prediction is compared with ground truth. With sufficiently varied samples $V$ and adequate temporal coverage $T$, the learned mapping approaches the true governing equations:
$$\lim_{V,\,T \to \infty} F_{\mathrm{LazyNet}} = F_{\mathrm{actual}}$$
Because the ODE is Markovian, every update depends only on the current state, not on the full history; LazyNet therefore learns the system dynamics efficiently and with minimal parameter count.
In a CRISPR-A/I screen, the before-and-after expression profiles can be treated as a single forward-Euler step of the underlying ODE—thereby sampling the true regulatory function $F_{\mathrm{actual}}$ directly.

2.2. Theoretical Rationale for Efficiency

Most biochemical rate laws can be written as a finite sum of monomials of the form $c \cdot x_1^{\alpha_1} \cdots x_d^{\alpha_d}$. A representation that reproduces each monomial exactly captures the full reaction space.
LazyNet achieves this with a single log–linear–exp (LLE) block (2), where $x \in \mathbb{R}_+^{d}$, $W \in \mathbb{R}^{K \times d}$, and $b \in \mathbb{R}^{K}$. Choosing one row of $W$ equal to the integer exponent vector $\alpha$ and the corresponding bias $b = \log c$ reproduces the monomial exactly. Hence an entire $K$-term rate law needs only $K(d+1)$ trainable scalars.
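As a hypothetical numeric check (values chosen for illustration, with the inner linear layer acting as the identity), a single LLE row with exponent vector α = (2, 1) and bias log c reproduces c·x1²·x2 exactly:

    import numpy as np

    c = 0.5
    alpha = np.array([2.0, 1.0])                     # one row of W: the integer exponent vector
    x = np.array([3.0, 4.0])                         # positive inputs

    lle_out = np.exp(alpha @ np.log(x) + np.log(c))  # exp(W log x + b) with b = log c
    direct = c * x[0] ** 2 * x[1]                    # the monomial c * x1^2 * x2
    print(lle_out, direct)                           # both equal 18.0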
By contrast, a degree-$N$ Taylor polynomial on the same positive domain requires $\sim\!\left(\log\tfrac{1}{\varepsilon}\right)^{d}$ parameters to reach uniform error $\varepsilon$. Thus, the parameter ratio grows super-polynomially with both dimension $d$ and desired accuracy. Fewer parameters translate to fewer multiply–add operations per mini-batch and a smaller memory footprint, directly accelerating training.
Statistically, concentrating capacity onto a small set of mechanistically meaningful weights improves the effective signal-to-noise ratio: each observation updates compact exponent and coefficient parameters rather than being diffused across a large dictionary. Empirically, this manifests as faster convergence at larger learning rates, better small-sample generalization, and more stable optimization under residual connections.
Importantly, the multiplicative form encodes synergistic (higher-order, multi-locus) interactions as single monomials rather than as emergent products of many additive units. This is not only an efficiency gain; it directly addresses a substantive modeling need by making synergistic effects first-class, precisely parameterized objects that can be learned robustly from limited data.
Transformer-based baselines such as scGPT and state-space models such as RetNet avoid the explicit coefficient blow-up of polynomial bases but incur other costs [21,22]. Attention layers scale as $O(L^2)$ in sequence length $L$, while state-space kernels move toward linear time yet introduce millions of additional parameters. Neither family encodes a monomial in closed form; multiplicative gene–gene terms are learned implicitly via gradient descent, typically demanding more data and iterations. In contrast, the LLE map captures multiplicative structure exactly with a forward pass that is linear in $d$ (and in the retained term count $K$), yielding a favorable accuracy–efficiency–interpretability tradeoff.
For the kind of perturbation-effect prediction considered here, recent benchmarks suggest that deep models, including those inspired by large language models, do not yet consistently outperform simple baselines [24]. This motivates exploring architectures that encode multiplicative biology more explicitly, such as the log–linear–exp block in LazyNet. Neural-ODE approaches can be accurate but usually require a prespecified differential form—e.g., fixed symbolic templates or universal Hill-type non-linearities [28,29,30,31,32,33,34]—which constrains adaptability and can produce models that are numerically correct yet mechanistically misspecified. LazyNet retains an ODE viewpoint without fixing the functional form: placing the LLE block inside an explicit Euler residual update allows approximation of a broad class of interaction terms, so synergy is captured through learned exponents and weights rather than hand-engineered equations, improving both flexibility and mechanistic interpretability.

2.3. scRNA Data Preparation and Gene Selection

  • Neuronal Perturb-seq (GSE152988).
We analyze GSE152988, a CRISPR activation/interference screen in 53,495 human iPSC-derived neurons (67,077 genes including non-coding RNAs) with two expression snapshots per cell (a preperturbation “time-0” state and a postperturbation state) [36]. With no intermediate measurements, we treat the poststate as a single explicit update from the baseline, matching the minimal ODE setting used throughout this work. No external dynamics or pathway priors were introduced.
To construct “time-0” without leakage, we do not edit individual cells. Instead, we first compute an averaged baseline cell from library-size-normalized non-targeting controls (per batch/stratum as applicable). For each perturbation targeting gene g, the time-0 input is this averaged baseline vector with a single edit at g to encode the intervention: for CRISPRi/KO we set g to a small floor; for CRISPRa/OE we set g to the observed postperturbation level of g (estimated across the perturbed cells), while all other genes remain at control baselines. Because only the scalar for the targeted gene is updated—and never any multi-gene pattern or any individual test cell—this captures the intended one-gene intervention while preventing multi-gene peeking into the poststate.
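A schematic of this edit rule is sketched below; the function name and the numerical floor are illustrative assumptions, not the exact values used in the pipeline:

    import numpy as np

    def make_time0(baseline, gene_idx, mode, observed_post_level=None, floor=1e-3):
        """Averaged control baseline with a single edit at the targeted gene g.

        baseline            : library-size-normalized mean of non-targeting control cells
        gene_idx            : index of the perturbed gene g
        mode                : "CRISPRi/KO" (set g to a small floor) or "CRISPRa/OE"
        observed_post_level : post-perturbation level of g estimated across perturbed cells
        """
        t0 = np.asarray(baseline, dtype=float).copy()
        if mode == "CRISPRi/KO":
            t0[gene_idx] = floor
        else:  # "CRISPRa/OE"
            t0[gene_idx] = observed_post_level
        return t0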
After time-0 construction, we restrict to 18,152 Ensembl GRCh38 protein-coding genes for modeling and evaluation. Cells are grouped by (target gene × gRNA) (and by batch when available), and whole groups are assigned to train/validation/test at 80%/10%/10% to avoid guide- or batch-specific leakage across splits. On the working expression scale used for modeling, the held-out neuronal iPSC test targets have a mean of 0.6082 and a standard deviation of 3.212 (min 0, max 1203), which characterizes the data landscape.
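One way to realize this group-wise 80%/10%/10% assignment, sketched with scikit-learn’s GroupShuffleSplit (the replication package may implement it differently):

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    def group_split(n_cells, groups, seed=0):
        """Assign whole (target gene x gRNA x batch) groups to train/val/test (80/10/10)."""
        cells = np.arange(n_cells)
        gss = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=seed)
        train_idx, rest_idx = next(gss.split(cells, groups=groups))
        gss2 = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=seed)
        val_rel, test_rel = next(gss2.split(rest_idx, groups=groups[rest_idx]))
        return train_idx, rest_idx[val_rel], rest_idx[test_rel]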
Because only two snapshots are available, absolute rates are not identifiable; the product of step size and elasticity is what can be learned. Accordingly, downstream evaluation emphasizes one-step prediction errors and relative signals (e.g., rankings) that are invariant to monotone rescaling of the latent time step. When dense time series are available, a multi-step fit resolves Δt and improves identifiability (see Section 4 and [30,31,32]).
  • T-cell guide-capture screen (GSE190604).
We further use a large human T-cell CRISPR guide-capture dataset (10× Genomics RNA with per-cell guide calls) comprising 36,601 genes × 103,805 cells [37]. Expression matrices come from the “Gene Expression” count matrix and guide calls from the corresponding CellRanger export. Minimal QC is applied (genes detected in ≥3 cells; cells with ≥200 detected genes) to preserve a blind and reproducible comparison.
Each barcode is joined to its guide-call summary. Baseline cells (0/unknown guides) are used only to estimate time-0. Singlets (exactly one guide) form the training set; multiplets (≥2 guides) form the test set. After matching and filters, the split yields ~61k training cells and 28,707 test cells.
Counts are library-size-normalized with a small pseudocount and log-transformed. We select the top 25% highly variable genes (HVGs) across cells (~one quarter of 36,601 genes) and carry this fixed HVG panel into all downstream modeling and evaluation. On this normalized log scale, the held-out test targets have mean ≈ 0.19 and SD ≈ 2.30 (min 0, max 2620), which we report to orient error magnitudes in later sections.
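A compact sketch of this preprocessing using Scanpy is shown below; the file name and the use of log1p in place of an explicit small pseudocount are assumptions for illustration:

    import scanpy as sc

    adata = sc.read_10x_h5("GSE190604_gene_expression.h5")   # hypothetical file name
    sc.pp.filter_genes(adata, min_cells=3)                    # genes detected in >=3 cells
    sc.pp.filter_cells(adata, min_genes=200)                  # cells with >=200 detected genes
    sc.pp.normalize_total(adata)                               # library-size normalization
    sc.pp.log1p(adata)                                         # log transform (pseudocount of 1)
    n_hvg = adata.n_vars // 4                                  # top ~25% highly variable genes
    sc.pp.highly_variable_genes(adata, n_top_genes=n_hvg, subset=True)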

2.4. Baseline Models

We compared LazyNet to two public baselines chosen to span the principal inductive biases for single-cell perturbation prediction without curated priors: a transformer (scGPT) and a state-space model (RetNet, representing the CellFM-style state-space family) [21,22,23]. These baselines were selected because they (i) have maintained, open implementations with documented end-to-end pipelines; (ii) do not require curated cell-type labels or pathway priors (in contrast to, e.g., GEARS); (iii) are compatible with two-snapshot Perturb-seq inputs; and (iv) have prior use on closely related single-cell prediction tasks.
Together, scGPT and RetNet provide architectural diversity—attention versus state-space dynamics—yielding an appropriate, apples-to-apples contrast with LazyNet under a common data regime. In our experiments, both scGPT and RetNet are instantiated as randomly initialized architectures trained from scratch on the same dataset as LazyNet, without loading any large-scale pretrained checkpoints or external data. They should therefore be interpreted as non-pretrained architectural controls in a strictly from-scratch, low-data setting, rather than as evaluations of fully pretrained foundation models in their intended pretraining–fine-tuning regime.

2.5. Training Protocol

LazyNet is trained on our Intel Xeon CPU server (Intel Xeon Gold 6126, 24 cores, 768 GB RAM) without accelerators (accessed via the Big Data Computing Center of Southeast University, Nanjing, China).
The main runs use a Huber loss (δ = 0.1) for robustness to outliers and a three-replica LazyNet ensemble (independent random seeds; arithmetic averaging at inference). This ensemble smooths idiosyncratic local minima while preserving the interpretability of the learned elasticities. Complete training hyperparameters, exact command lines, and seeds are provided in the replication package.
In the ablations, the model depth is the most sensitive knob: increasing the number of log–exp pairs beyond one frequently led to numerical instabilities (NaNs) or higher error, consistent with overfitting in a setting where a one-step ODE is already a low-complexity mechanism and LazyNet is highly expressive. Width is less sensitive, but a moderate range (several hundred units) performs best; widths that are too small underfit, while very wide models tend to overfit or fail to converge usefully within the time budget. The additive residual path is essential for ODE fitting—removing it substantially degrades accuracy or prevents stable training. We also varied the loss family (Huber/L1/L2 and δ), mini-batch size, and integration step length. Within the ~1 h per-replica budget (3 replicas), batch size and step length had negligible effect on test metrics, whereas expanding LazyNet beyond moderate capacity often undertrained within the time cap, yielding lower accuracy despite more parameters.
scGPT and RetNet are trained on a single NVIDIA V100 GPU with three CPU cores allocated (accessed via the Big Data Computing Center of Southeast University, Nanjing, China). To match LazyNet’s elapsed-time constraint, each baseline is capped at 1 h of wall-clock time per dataset, inclusive of data loading and evaluation, and is trained from random initialization without any external pretraining or pretrained checkpoints. We rely on the authors’ documented defaults where applicable, enable mixed precision and gradient accumulation as needed to respect memory limits, and do not introduce curated priors or external labels. The last checkpoint produced within the time cap is evaluated on the common test split. Regression fidelity is assessed with RMSE, MAE, and Pearson’s r on the modeling gene panel defined in Section 2.3. Threshold-free ranking quality is summarized by ROC-AUC and PR-AUC computed directly from continuous predictions. When reported, F1 (micro and macro) uses a single per-gene operating point learned once on the training data via a fixed quantile rule and held fixed for evaluation. All models use identical train/validation/test partitions, deterministic data loaders, fixed random seeds, and the same evaluation scripts.
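The per-gene operating point used for F1 can be expressed as a fixed quantile rule of the following form; the quantile value is an illustrative assumption, and the exact rule is given in the replication package:

    import numpy as np
    from sklearn.metrics import f1_score

    def learn_thresholds(train_true, q=0.90):
        """One operating point per gene: a fixed quantile of the training targets."""
        return np.quantile(train_true, q, axis=0)               # shape (n_genes,)

    def binarized_f1(test_pred, test_true, thresholds, average="micro"):
        """Freeze the per-gene thresholds, binarize predictions and targets, then score F1."""
        y_pred = (test_pred >= thresholds).astype(int)
        y_true = (test_true >= thresholds).astype(int)
        return f1_score(y_true, y_pred, average=average)         # "micro" or "macro"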

2.6. Inference and Visualization

To produce a directed, dynamical snapshot for network analysis, we averaged ten independently trained LazyNet checkpoints into a single ensemble and evaluated it on the full single-cell matrix (train + validation + test). From this ensemble we computed a Jacobian of absolute elasticities at a common baseline expression vector, restricted to the original 18,152 Ensembl protein-coding genes [38]. For each literature-anchored seed gene (seven total; Supplementary S1A), we ranked both downstream (row) and upstream (column) elasticities, retained neighbors with |∂xᵢ/∂xⱼ| ≥ 0.001 that also exceeded either the 95th or 99th percentile of that seed’s ranked list, and then kept the top-N survivors. Because a single-baseline Jacobian yields few candidates per seed, we set the Benjamini–Hochberg threshold to q = 1 (effectively no FDR filtering) and report the filtered ranked sets. Evaluation proceeds in two parts: first, whether each seed’s neighborhood recovers interactions reported in the literature; second, cross-seed recall, i.e., how often other seed genes appear among a given seed’s ranked neighbors.
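A sketch of the ensemble-Jacobian query and neighbor filtering is given below, assuming PyTorch autograd and the illustrative LLEBlock above; the Jacobian here is taken of the one-step model output with respect to its input (whether the rate term alone is differentiated follows the released pipeline), and at 18k genes the full matrix is computed once at a single baseline:

    import numpy as np
    import torch

    def elasticity_matrix(models, baseline):
        """Average |d f_out / d f_in| over the ensemble, evaluated at one baseline vector."""
        jacs = [torch.autograd.functional.jacobian(m, baseline) for m in models]
        return torch.stack(jacs).mean(dim=0).abs().numpy()

    def seed_neighbors(J_abs, seed_idx, min_elast=1e-3, pct=95.0, top_n=32):
        """Rank downstream (row) and upstream (column) elasticities of one seed; keep survivors."""
        scores = np.maximum(J_abs[seed_idx, :], J_abs[:, seed_idx])
        cutoff = max(min_elast, np.percentile(scores, pct))
        keep = np.where(scores >= cutoff)[0]
        return keep[np.argsort(scores[keep])[::-1]][:top_n]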
The goal is to recover directed, dynamical relationships consistent with the ODE-based formulation—edges that reflect functional influence under perturbation, not merely correlation. For this reason, GENIE3/GRNBoost/SCENIC-style pipelines are not used as head-to-head comparators here. They operate as subgraph generators that prioritize local correlation edges and typically require a manual regulator prior (e.g., TF lists) to seed the search [39,40,41]. Their outputs are commonly undirected and tuned to rank local associations, rather than to assemble a globally coherent dynamic network. They also lack the external authority of curated/experimental resources; when a comparator is needed, databases such as STRING and ARCHS4—grounded in empirical evidence—are more appropriate reference points than GENIE3-family predictions.
Even with authoritative databases, edge-level comparisons remain problematic: curated graphs are incomplete, context-dependent, and unevenly sampled, and the true functional wiring can vary substantially even within the same cell type and omic snapshot. We therefore evaluate at the module level, where the signal is more robust: given a set of curated modules (pathways, TF regulons), we quantify module activity/enrichment induced by the inferred network (or by model-predicted responses) and test whether the recovered activity patterns align with known biology. This enrichment-based evaluation emphasizes coherent functional programs over individual edges, respects the dynamical and directed nature of our model, and provides a fairer, more stable basis for comparison across methods and references.

2.7. ARCHS4 Co-Expression Validation

We annotate each directed edge (i → j) with the ARCHS4 Pearson correlation r(i,j). To control for confounding, enrichment is computed against a matched null: for each edge we sample matched pairs (i′ → j′) from strata that jointly bin genes by (i) control-cell detection/mean expression and (ii) out-/in-degree in the LazyNet elasticity graph (≈400 matched pairs per edge; 9860 edges; 3,943,792 null pairs). We report fractions exceeding |r| ≥ τ and the corresponding fold enrichment (Observed/Expected). For sign-aware checks, we multiply r by the LazyNet edge sign; because ARCHS4 is undirected/heterogeneous, we emphasize |r| results and treat sign panels as exploratory.
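The reported fold enrichment reduces to an observed/expected ratio of tail fractions; a minimal sketch (array names are illustrative):

    import numpy as np

    def fold_enrichment(edge_r, null_r, tau=0.20):
        """Fraction of annotated edges vs. matched null pairs with |ARCHS4 r| >= tau, and their ratio."""
        observed = np.mean(np.abs(edge_r) >= tau)   # e.g., over the 9,860 annotated edges
        expected = np.mean(np.abs(null_r) >= tau)   # e.g., over the ~3.9M matched null pairs
        return observed, expected, observed / expected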

2.8. STRING Interaction Concordance

STRING functional associations (human, version 12.0) were converted from protein identifiers to HGNC symbols with the official mapping file [42]. Isoform-specific and unmapped entries were removed. The 32 × 4 LazyNet subgraph edges were queried for exact symbol matches, and the statistical significance of the overlap was assessed with a one-tailed hypergeometric test that treats the total number of possible unordered human gene pairs as the population size. Elasticity scores and STRING combined scores were each dichotomized at the 75th percentile to classify edges as “both-high”, “STRING-only”, or “LazyNet-only”. Degree-preserving Maslov–Sneppen rewiring (1000 iterations) provided a null distribution for network-density comparisons.
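The overlap test can be sketched with SciPy’s hypergeometric survival function; the commented call mirrors the counts reported in the Results and is illustrative only:

    from math import comb
    from scipy.stats import hypergeom

    def string_overlap_pvalue(n_genes, n_string_edges, n_lazynet_edges, n_overlap):
        """One-tailed hypergeometric test with all unordered gene pairs as the population."""
        population = comb(n_genes, 2)                # total possible unordered pairs
        return hypergeom.sf(n_overlap - 1, population, n_string_edges, n_lazynet_edges)

    # e.g., string_overlap_pvalue(18152, 411761, 11662, 523)  # counts as reported in the Results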

2.9. Proteomics Validation

Pulsed-SILAC proteomes were re-analyzed to evaluate the 32 × 4 LazyNet subgraph at the protein layer. For the tamoxifen-inducible GPX4-knockout time course in mouse fibroblasts (PRIDE PXD050979), proteinGroups.txt (PXD050979.txt) was taken as input [43]. Reverse sequences and common contaminants were discarded; gene symbols were extracted, converted to upper-case HGNC format, and duplicate entries were averaged, yielding 1026 proteins. Intensities for the light, medium, and heavy SILAC channels (nine runs) were treated as independent samples; zeros were set to missing and imputed with the MinProb left-shift approach (μ − 1.8 σ; width 0.3 σ) as implemented in Perseus [44]. Values were log2-transformed and median-centered per run, and proteins quantified in at least three channels with non-zero variance were retained.
For the larger constitutive GPX4-knockout study in Pfa1 mouse embryonic fibroblasts (PRIDE PXD040094), we used the combined_protein.tsv file [45]. Gene symbols were cleaned as above and duplicate symbols collapsed. Columns labeled “MaxLFQ Intensity” defined fourteen quantitative channels covering wild-type and knockout states. Missing values were replaced with MinProb, and the matrix was log2-transformed and median-centered. Proteins present in at least three channels and showing non-zero variance were kept.
For each dataset, the cleaned protein matrix was intersected with the RNA-based LazyNet edge list. Gene pairs with at least three paired, finite observations were assigned a Spearman rank correlation coefficient. To control for hub bias, a degree-matched null of 10,000 random edges was generated by stratified sampling across twenty-one degree bins (0 to 20+). Absolute correlation values of LazyNet edges were compared with the null using a two-sided Wilcoxon rank-sum test.
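A sketch of the edge-level protein correlation and the null comparison is shown below, using SciPy (mannwhitneyu implements the two-sided Wilcoxon rank-sum test; array and variable names are illustrative):

    import numpy as np
    from scipy.stats import spearmanr, mannwhitneyu

    def edge_spearman(protein_mat, gene_index, edges, min_obs=3):
        """|Spearman rho| for each RNA-derived edge with enough paired, finite protein values."""
        rhos = []
        for g1, g2 in edges:
            if g1 not in gene_index or g2 not in gene_index:
                continue
            x, y = protein_mat[gene_index[g1]], protein_mat[gene_index[g2]]
            mask = np.isfinite(x) & np.isfinite(y)
            if mask.sum() >= min_obs:
                rhos.append(abs(spearmanr(x[mask], y[mask]).correlation))
        return np.array(rhos)

    def compare_to_null(edge_abs_rho, null_abs_rho):
        """Two-sided rank-sum test of LazyNet-edge |rho| against the degree-matched null."""
        return mannwhitneyu(edge_abs_rho, null_abs_rho, alternative="two-sided").pvalue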

2.10. Experiment Layout

As shown in Figure 2, we start from a single preprocessed neuronal iPSC Perturb-seq matrix (library-size normalization followed by a small-pseudocount transform, z = log(x + ε)) and run two parallel analyses.
The benchmark arm performs a one-time stratified split (8:1:1 train:validation:test) and trains scGPT, RetNet, and three LazyNet replicas on the training fold, tunes on validation, and reports performance on the locked test fold (RMSE, MAE, Pearson r; threshold-free AUCs where noted) [21,22,23]. A T-cell screen is included only in this benchmark arm to test cross-dataset generalization under the identical pipeline; our central goal remains the neuronal dataset. All models are trained from scratch on the study dataset without any external pretraining or pretrained checkpoints.
The discovery arm fits ten independently seeded LazyNet models on the full neuronal matrix (train+validation+test), averages their Jacobians (restricted to 18,152 Ensembl protein-coding genes), and extracts a 32 × 4 breadth-first subgraph as a consensus network for interpretation. This subgraph is evaluated against authoritative resources—STRING v12 and ARCHS4 [42,46]—and cross-omic coherence is assessed in two GPX4-knockout SILAC proteomes from mouse embryonic fibroblasts (PRIDE PXD040094, PXD050979) [43,45]. The validated interactions are then mined for mechanistic hypotheses and used to nominate follow-up perturbations.

3. Results

3.1. Model Benchmark

Table 1 summarizes head-to-head results for the neuronal iPSC Perturb-seq slice used for downstream inference and a T-cell guide-capture slice (top-quartile HVGs) used solely for the generalization test.
On the neuronal iPSC slice, LazyNet attains the strongest threshold-free ranking quality with the highest ROC-AUC and PR-AUC among the compared methods, while training faster on CPU and using a mid-range parameter count. Its absolute regression errors (RMSE, MAE) and correlation r are slightly worse than the best baseline, and its F1 (micro/macro) is lower at the fixed per-gene operating point—an expected AUC–F1 tradeoff when a single threshold is predeclared per gene. Together, the iPSC results indicate that LazyNet delivers the best ordering of perturbation effects under CPU constraints, with modestly higher regression error than the top baseline.
On the T-cell HVG slice, LazyNet again leads on ranking metrics with the highest ROC- and PR-AUC, and it achieves the best MAE, while its RMSE and Pearson r are lower than those of the strongest baseline and its F1 sits mid-pack at the fixed threshold (Table 1). This pattern—excellent ranking with conservative amplitudes—matches the expected behavior when per-gene thresholds are learned once on singlets and then applied to a broader test distribution of multiplets.
Interpreting AUC and F1 together clarifies these results. ROC-/PR-AUC capture threshold-free ranking and are insensitive to the exact operating point; F1 fixes a single per-gene threshold learned on training singlets, so it can move counter to AUC when calibration shifts or variance broadens between train and test. In our setting, this explains why LazyNet can lead decisively in AUCs yet not maximize F1 on every slice. All definitions and thresholding rules are held constant across models to keep comparisons fair.
Overall, across both slices and under CPU-only execution with a three-run LazyNet ensemble, the model offers the strongest ranking of perturbation effects and a favorable MAE profile on T cells, while ceding some ground on RMSE and r to the best baseline on certain metrics. Because LazyNet instantiates a one-step ODE with log–exp residual dynamics, it exposes per-gene elasticities whose ensemble Jacobian yields directed, mechanism-level edges; we leverage this structure immediately in network inference, where the Jacobian is converted into a consensus subgraph and evaluated via module-level enrichment against curated resources.
It is important to emphasize that these comparisons are performed under a constrained, from-scratch regime: all models are trained only on the study dataset, with a 1 h wall-time cap and no access to any external pretraining or pretrained checkpoints. In contrast, scGPT and CellFM are designed primarily as large-scale foundation models to be pretrained on millions of cells and then adapted. Our results therefore show that, under strict from-scratch, low-data constraints, LazyNet can be competitive with transformer- and state-space-style architectures; they should not be interpreted as a statement that LazyNet outperforms fully pretrained scGPT or CellFM in general.

3.2. Network Inference

Ten independently trained LazyNet replicas were averaged into a single ensemble, and Jacobian elasticities were queried around seven ferroptosis seeds (GPX4, PSAP, ACSL4, WIPI2, MTOR, NFU1, SOD2); selection details appear in Supplementary S1A. A broader parameter sweep, summarized in Supplementary S1B, identified five expansion regimes that were carried forward (Table 2).
In the broadest setting (512 × 1) each seed ballooned into a neighborhood of ~3000 genes—GPX4 alone recruited 1811—and yielded a mean per-seed recall of ~0.08 (≈2–3/27). Constraining the search to 64 × 3 reduced each subgraph to ~1000 genes and lifted the mean per-seed recall to ~0.40 (≈11/27). The most selective regime, 32 × 4, fused all seven seeds into a single graph of 4641 genes and 11,676 edges while preserving a benchmark recall of 15/27 and nudging the hit-rate upward; its data are in Supplementary S1C. Because only 27 of 18,152 genes are treated as positives (base rate ≈0.15%), these fractions are numerically small; viewed as enrichment, the 32 × 4 graph recovers 15 regulators versus ~6.9 expected by chance (~2.17×).
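The chance expectation quoted above follows from the subgraph’s coverage of the background; a quick check:

    n_positives = 27          # benchmark ferroptosis regulators
    n_background = 18152      # protein-coding genes considered
    n_subgraph = 4641         # genes in the 32 x 4 subgraph
    expected = n_positives * n_subgraph / n_background   # ~6.9 regulators expected by chance
    fold = 15 / expected                                 # ~2.17x enrichment for the 15 recovered
    print(round(expected, 1), round(fold, 2))            # 6.9 2.17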
Evaluated across all seed genes, the per-seed recalls (Supplementary S1D) highlight a practical strength: starting from just one ferroptosis gene, LazyNet typically retrieves ~10 additional benchmark regulators when run at top = 32, 64, or 128. In other words, minimal prior knowledge suffices to reconstruct much of the ferroptosis module—evidence of the method’s consistency and day-to-day utility.
LazyNet recovers 56% (15/27) of benchmark ferroptosis interactions in <6 h on a single 24-core CPU node (768 GB RAM, no GPUs). By comparison, GENIE3 recovers ~33% of gold-standard E. coli edges after analyzing > 100 microarrays and ~48 h on 32 cores [39,47]; GRNBoost recalls ~22% in a 68 k-cell PBMC atlas [40,48]; and DeepFGRN reaches ~40% but requires dense time-series data and multi-GPU training (4 × A100, several days) [41]. Thus, LazyNet achieves state-of-the-art recall with orders-of-magnitude fewer time points and far lighter compute while still yielding biologically interpretable subnetworks from a single seed gene.
The 32 × 4 expansion is chosen for all subsequent analyses because it strikes the right balance between biological depth and practical scope. Limiting each hop to 32 of the strongest elasticity neighbors keeps the growth of the graph contained, while four propagation steps are ample to capture multi-step regulatory cascades. The resulting subgraph (4641 genes and 11,676 edges) remains small enough for reliable enrichment statistics, manual curation, and clear visualization, yet large enough to preserve the key ferroptosis signals that LazyNet uncovers. This setting therefore provides a tractable, biologically informative backbone for downstream validation and interpretation.

3.3. ARCHS4 Co-Expression

Of the 11,676 edges in the 32 × 4 subgraph, 9860 (84.4%) mapped to the ARCHS4 pan-human matrix and received a co-expression value (Pearson r) [46]. The r distribution is broadly centered near zero (mean ≈ 0.053; median ≈ 0.044) and closely matches degree-matched random pairs (Supplementary S2A).
In an unsigned view, LazyNet edges are far more likely than random to show moderate co-expression: 26.6% of edges meet |r| ≥ 0.20 versus ~5% of random pairs—an ≈5 × enrichment—indicating that the model links genes that frequently co-vary across thousands of public RNA-seq datasets (Supplementary S2B).
Sign-aware analysis refines this picture in Figure 3: the positive, sign-matched tail is enriched (~1.36 fold), the opposite-sign tail is depleted (~0.67 fold), and the overall |r| ≥ 0.20 mass is near expectation (~1.02 fold) once signs are considered (Supplementary S2C). This is consistent with the biology: LazyNet elasticities encode local, directional responses in a one-step ODE, whereas ARCHS4 aggregates steady-state, undirected correlations across heterogeneous tissues; inhibitory or context-specific links can therefore appear anti-correlated in the aggregate.
Taken together, the strong unsigned enrichment and the direction-consistent positive-tail enrichment indicate that a substantial subset of LazyNet edges is corroborated by large-scale expression evidence while respecting expected sign- and context-dependent effects.

3.4. STRING Database and Literature Validation

We evaluated the 32 × 4 LazyNet subgraph against a filtered STRING network containing the 18,152 genes detected in our single-cell atlas [42]. Of the 4630 genes retained by LazyNet, STRING supplies 411,761 undirected links, giving an observed density of 0.0384. Size-matched random gene sets drawn 1000 times from the same background average 0.0373 ± 0.0008, placing the subgraph 1.5 s.d. above expectation (empirical p = 0.072)—evidence for modest, if not statistically significant, over-connectivity (Supplementary S3A).
Edgewise, 11,662 LazyNet elasticity links intersect STRING in 523 cases (Jaccard ≈ 1 × 10−4). A one-tailed hypergeometric test confirms this overlap is highly non-random (p = 1.2 × 10−5). Although elasticity magnitudes and STRING scores show no global correlation (Pearson r = –0.071, p = 0.10; Spearman ρ = –0.036), a top-quartile split in each metric separates three informative classes: 27 “both-high” edges, 104 STRING-high only (canonical interactions), and 104 LazyNet-high only—highlighting under-explored relationships for experimental follow-up. Representative literature support for ten both-high edges is provided in Table 3.
Among the findings, the moderate enrichments are entirely acceptable—and even expected—because STRING confidence and LazyNet elasticity quantify different biological facets. STRING aggregates heterogeneous evidence (co-expression, physical binding, literature co-mentions), whereas elasticity captures the transcriptional impact inferred by our ODE model. Perfect concordance would therefore be unrealistic; instead, the modest density excess and significant, yet limited, edge-level overlap demonstrate that LazyNet recovers a meaningful core of known biology while offering a large complement of novel, model-specific interactions.

3.5. Proteomics Corroborates Transcript-Derived Network Structure

Cross-omics robustness was assessed by projecting RNA-derived edges onto two independent mouse fibroblast SILAC proteomes of GPX4 loss-of-function ferroptosis. The inducible GPX4-knockout time course (PRIDE PXD050979) yielded 1026 quantifiable proteins after consolidation; 47 network edges had sufficient protein coverage (≥3 finite values per partner). Their absolute protein co-variation was higher than degree-matched random pairs (median |ρ| = 0.536 vs. 0.400; Wilcoxon p = 4.9 × 10−3). The larger pulsed-SILAC study of constitutive GPX4 knockout (PRIDE PXD040094) provided 5516 proteins across 14 channels; of 563 mapped edges, 507 met the coverage criterion and likewise exceeded the null (median |ρ| = 0.367 vs. 0.336; Wilcoxon p = 6.2 × 10−4). Representative protein–protein scatters are shown in Figure 4, including XDH–ANXA8 (ρ = 0.92), RPLP2–BAIAP2 (ρ = 0.89), NRDC–ENO3 (ρ = 0.88), and SFPQ–CKAP5 (ρ = 0.98), illustrating coherent proteomic co-variation across channels. Together with the enrichment tests, these results indicate that a substantial subset of LazyNet edges remain concordant at the protein layer, supporting the biological fidelity of the inferred network, despite differences in species and experimental design.

3.6. Biological Insights

The 32 × 4 subgraph seeded on the CRISPR A/I screen recapitulates the ferroptosis core: among the top 1000 LazyNet edges (Supplementary S1C), 15 of the 27 benchmark regulators reported by Tian et al. cluster near the network center, consistent with the canonical lipid–peroxidation axis [36].
Several connections align with established mechanisms, supporting the face validity of the wiring. A mitochondrial module links MFN2-mediated mitophagy, which constrains ROS-driven death, with SIRT3 signaling that modulates ferroptosis through GPX4 and AMPK–mTOR pathways [49,50]. Immune–redox cross-talk involving TLR4–STAT3 that amplifies mitochondrial ROS is also recovered [51]. In parallel, ADCY10-driven mitochondrial cAMP has been shown to protect neurons under oxidative stress, matching the directionality of edges observed here [52]. These concordances indicate that LazyNet recovers known biology across mitochondrial, immune, and metabolic axes.
Beyond literature-supported modules, the subgraph nominates cross-axis links that, to our knowledge, are not reported as direct connections and are presented as testable hypotheses. First, a lysosomal–autophagy route from PSAP to mTORC1 (MTOR/RPTOR/RHEB), potentially via ULK1-associated scaffolding (with PPP4R1 as a putative bridge), is consistent with PSAP-dependent lipofuscin accumulation and neuron-specific ferroptosis. Second, high-elasticity edges suggest immune inputs from TLR6 and CCR10 into autophagy/mTOR nodes (ATG13, MTOR). Third, a metabolic coupling is predicted in which TRPM6 (Mg2+ channel) and ADCY10 interface with the SIRT3–mTORC1 axis.
Taken together, the results point to an integrated immune–mitochondrial–lysosomal circuit in which mitochondrial quality control (MFN2/SIRT3), innate immune signaling (TLRs), and metabolic/cAMP cues (ADCY10/TRPM6) converge on mTOR-linked autophagy to tune the ferroptotic threshold. Edge-level attributes are provided in Supplementary S1C to facilitate replication and follow-up.

4. Discussion

This study set out to answer a practical question for small labs and individual projects: given a single CRISPR A/I screen with modest sample size, limited compute, and no realistic path to large-scale pretraining, what modeling strategy is worth the effort? We designed LazyNet as a one-step, log–exp neural ODE that directly encodes multiplicative perturbation effects and exposes gene–gene elasticities as interpretable parameters. Under a strictly from-scratch, low-data regime—where all models, including scGPT-style and CellFM/RetNet-style baselines, are trained only on the study dataset under a 1 h time cap—LazyNet achieves competitive or better predictive performance for single-snapshot perturbation prediction while remaining numerically stable and CPU-tractable. These experiments do not evaluate fully pretrained foundation models in their intended high-data regime. When large-scale pretraining and fine-tuning are feasible, we expect scGPT and CellFM-style models to remain strong options, and we view LazyNet as a complementary choice tailored specifically to from-scratch, low-data settings.
A central feature of LazyNet is that its one-step, log–exp residual update exposes elasticities—local, directed sensitivities of genes to one another around a baseline. Averaging Jacobians across replicas produces a consensus, directed interaction matrix that we expand into a compact subgraph for interpretation. This graph recovers a substantial fraction of known ferroptosis regulators and shows coherent support across independent resources: enrichment in large-scale co-expression (ARCHS4), non-random overlap with curated functional associations (STRING), and concordant protein-level co-variation in GPX4 loss-of-function proteomes. The agreement is moderate rather than exhaustive—as expected given that elasticity (local, directional quantitative response) and database edges (heterogeneous, often undirected evidence) measure different facets of biology—but it is sufficient to anchor new, testable hypotheses, including a lysosomal–mitochondrial–immune module that nominates specific links into mTOR-connected autophagy.
Equally important is practicality. LazyNet’s capacity is concentrated into a small set of mechanistically meaningful weights, which helps it converge quickly on CPUs without external priors, while still producing parameters that carry direct interpretive value (rate-like elasticities). The ensemble Jacobian pipeline is deterministic and light enough to run end-to-end on a single node, and the entire workflow—preprocessing, splits, commands, and seeds—is packaged for exact replication. In contrast, graph-inference tools optimized for edge ranking from co-expression can generate high-accuracy local subgraphs but are not designed to provide a globally consistent, directed dynamic map aligned with potential causality [53]; for comparative purposes we therefore emphasize module-level enrichment against authoritative, curated resources and orthogonal experiments rather than raw edge-overlap counts.
Several limitations follow from design choices. Working in log space requires positive inputs and introduces mild dependence on normalization and pseudocounts. A single explicit Euler step matches two-snapshot designs but leaves the absolute Δt unidentified and cannot represent delays or oscillations; elasticities are local to the chosen baseline and may vary across states. Our enrichment tests rely on incomplete references whose coverage and evidence channels differ from the signal inferred by LazyNet; consequently, we do not expect high global concordance, and we treat overlaps as supportive rather than definitive. Finally, within a fixed wall-time budget, excessively large models can under-train and reduce accuracy despite higher nominal capacity.
Looking ahead, three directions appear most impactful. First, time-resolved perturbation series would allow multi-step fitting that estimates Δt, tests stability, and distinguishes fast from slow modes. Second, structured priors (e.g., motif-based regulator sets or pathway constraints) could be added in a transparent way—regularizing the elasticity matrix without sacrificing interpretability. Third, prospective validation—targeted CRISPR edits chosen from LazyNet’s high-elasticity, database-unsupported edges—will be essential to calibrate precision and to refine the expansion rules that balance depth with tractability. Within these bounds, our results suggest that a one-step neural ODE view can make routine Perturb-seq datasets actionable on commodity hardware, linking accurate prediction with mechanism-level hypotheses that are ready for bench testing.

5. Conclusions

Perturbation datasets remain costly to generate at scale and hard to harmonize across studies, so many groups benchmark on public resources while analyzing their own data with bespoke code. Our goal was a method that operates directly on local measurements, runs under modest compute, and yields parameters with clear mechanistic meaning. LazyNet meets this brief: a one-step neural ODE that trains on CPUs, reports reproducible predictions, and exposes directed elasticities suitable for network interpretation.
Empirically, under a strictly from-scratch, low-data setting with a 1 h training cap per model, LazyNet attains competitive or superior predictive accuracy with substantially fewer parameters and lower wall-time than strong baselines, while producing elasticities that show coherent external concordance at the module level. A central advantage is the log–linear–exp map, which exactly represents multiplicative terms. Synergistic, multi-locus effects therefore appear as explicit components rather than opaque composites, enabling practical design and prioritization of combinatorial perturbations from sparsely sampled CRISPR A/I data.
Limits follow from design choices: two-snapshot learning cannot identify absolute Δt, delays, or oscillations; log-domain preprocessing (pseudocount, normalization) introduces mild dependencies; the update is Markovian without latent memory; and full transcriptome-scale expansion can be memory-intensive (partially mitigated with block-sparse top-K storage and streaming). Nonetheless, within a fixed 1 h budget, the model’s capacity-to-convergence tradeoff is favorable on CPUs, and its parameters remain directly interpretable.
Future work will extend validation to time-resolved perturbation series and additional cell contexts, incorporate lightweight structured priors (motifs, pathways) and uncertainty quantification, and perform prospective tests of synergy-driven predictions. Within these bounds, LazyNet offers a reproducible, resource-efficient route to synergy-aware mechanistic inference from perturbation data—bridging accurate prediction with directed, testable hypotheses.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology15010062/s1. Supplementary S1A—Ferroptosis seed selection. List of seed genes with selection criteria, source references/databases, and brief inclusion rationale (gene symbol, alias, source, evidence note). Supplementary S1B—Subgraph metrics. Global metrics for each expanded subgraph (e.g., BFS 32 × 4): nodes, edges, density, average degree, clustering coefficient, diameter/90th-percentile path length, hub scores, and community counts. Supplementary S1C—Selected subgraph details. Node- and edge-level attributes for the chosen subgraph: gene symbol, aliases, degree, community label; edge elasticity/sign, rank/percentile, and any external matches (ARCHS4 r, STRING score). Supplementary S1D—Per-seed recall. Recall of benchmark ferroptosis regulators per seed (and aggregated), with optional precision@K and top-K per hop settings. Supplementary S2A—ARCHS4 comparison histogram. Histogram of Pearson’s r for model-inferred edges vs. degree-/expression-matched null pairs; bin width and sample sizes indicated. Supplementary S2B—ARCHS4 comparison statistics. Summary statistics (mean, median, IQR, upper-tail mass ≥ r0), test results (e.g., Mann–Whitney U/KS), effect sizes (e.g., Cliff’s δ), and adjusted p-values. Supplementary S2C—Pair enrichment. Enrichment of high co-expression pairs at thresholds (e.g., r ≥ 0.1/0.2/0.3), reported as observed/expected, fold enrichment, 95% CIs, and matched-null definition. Supplementary S3A—STRING comparison overlap. Overlap between inferred edges and STRING v12 interactions across combined-score cutoffs, with counts, hypergeometric p-values, FDR, and evidence-channel breakdown (experimental, database, co-expression, text mining). Supplementary S4—Mathematical Details of LazyNet. Formal derivation of LazyNet’s one-step log–exp residual update; exact representation of multiplicative synergies.

Author Contributions

Majority of work, Z.Y.; data curation, N.M. and Y.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Jiangsu Funding Program for Excellent Postdoctoral Talent under grant No. 2024ZB546 and by the Scientific and Technological Innovation 2030 Major Projects (STI2030-Major Projects 2021ZD0204001; Wei Xie, wei.xie@seu.edu.cn, ORCID 0000-0002-9179-4787). This research was also supported by the Big Data Computing Center of Southeast University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A complete, versioned reproducibility package—preprocessing scripts, group-aware splits and seeds, training/evaluation code, environment files, run configurations, and notebooks to regenerate all tables/figures—is archived at Figshare (https://figshare.com/s/0908014677113d5c83d4, accessed on 26 November 2025). To avoid redundancy, operational details are documented in the repository rather than repeated in the main text.

Acknowledgments

We gratefully acknowledge Southeast University and Wei Xie for their generous support in funding and computational resources.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A/I: CRISPR activation/interference
AIVC: AI virtual cell
ARCHS4: Large-scale RNA-seq co-expression resource
BFS: Breadth-first search
CRISPR: Clustered regularly interspaced short palindromic repeats
GPX4: Glutathione peroxidase 4
KO: Knockout
LLE: Log–linear–exp map (LazyNet block)
MAE: Mean absolute error
ODE: Ordinary differential equation
r: Pearson correlation coefficient
ResNet: Residual network
RMSE: Root mean square error
scRNA-seq: Single-cell RNA sequencing
SILAC: Stable isotope labeling by amino acids in cell culture
STRING: Search Tool for the Retrieval of Interacting Genes/Proteins
HVG: Highly variable gene

References

1. Klein, A.M.; Mazutis, L.; Akartuna, I.; Tallapragada, N.; Veres, A.; Li, V.; Peshkin, L.; Weitz, D.A.; Kirschner, M.W. Droplet Barcoding for Single-Cell Transcriptomics Applied to Embryonic Stem Cells. Cell 2015, 161, 1187–1201.
2. Macosko, E.Z.; Basu, A.; Satija, R.; Nemesh, J.; Shekhar, K.; Goldman, M.; Tirosh, I.; Bialas, A.R.; Kamitaki, N.; Martersteck, E.M.; et al. Highly Parallel Genome-Wide Expression Profiling of Individual Cells Using Nanoliter Droplets. Cell 2015, 161, 1202–1214.
3. Kitano, H. Systems Biology: A Brief Overview. Science 2002, 295, 1662–1664.
4. Sachs, K.; Perez, O.; Pe’er, D.; Lauffenburger, D.A.; Nolan, G.P. Causal Protein-Signaling Networks Derived from Multiparameter Single-Cell Data. Science 2005, 308, 523–529.
5. Jaitin, D.A.; Weiner, A.; Yofe, I.; Lara-Astiaso, D.; Keren-Shaul, H.; David, E.; Salame, T.M.; Tanay, A.; van Oudenaarden, A.; Amit, I. Dissecting Immune Circuits by Linking CRISPR-Pooled Screens with Single-Cell RNA-Seq. Cell 2016, 167, 1883–1896.
6. Nelson, M.R.; Tipney, H.; Painter, J.L.; Shen, J.; Nicoletti, P.; Shen, Y.; Floratos, A.; Sham, P.C.; Li, M.J.; Wang, J.; et al. The Support of Human Genetic Evidence for Approved Drug Indications. Nat. Genet. 2015, 47, 856–860.
7. Lee, J.S.; Nair, N.U.; Dinstag, G.; Chapman, L.; Chung, Y.; Wang, K.; Sinha, S.; Cha, H.; Kim, D.; Schperberg, A.V.; et al. Synthetic Lethality-Mediated Precision Oncology via the Tumor Transcriptome. Cell 2021, 184, 2487–2502.
8. Katti, A.; Diaz, B.J.; Caragine, C.M.; Sanjana, N.E.; Dow, L.E. CRISPR in Cancer Biology and Therapy. Nat. Rev. Cancer 2022, 22, 259–279.
9. O’Neil, N.J.; Bailey, M.L.; Hieter, P. Synthetic Lethality and Cancer. Nat. Rev. Genet. 2017, 18, 613–623.
10. Haley, B.; Roudnicky, F. Functional Genomics for Cancer Drug Target Discovery. Cancer Cell 2020, 38, 31–43.
11. Norman, T.M.; Horlbeck, M.A.; Replogle, J.M.; Ge, A.Y.; Xu, A.; Jost, M.; Gilbert, L.A.; Weissman, J.S. Exploring Genetic Interaction Manifolds Constructed from Rich Single-Cell Phenotypes. Science 2019, 365, 786–793.
12. Low, L.A.; Mummery, C.; Berridge, B.R.; Austin, C.P.; Tagle, D.A. Organs-on-Chips: Into the Next Decade. Nat. Rev. Drug Discov. 2021, 20, 345–361.
13. Wang, H.; Yang, Y.; Liu, J.; Qian, L. Direct Cell Reprogramming: Approaches, Mechanisms and Progress. Nat. Rev. Mol. Cell Biol. 2021, 22, 410–424.
14. Maude, S.L.; Laetsch, T.W.; Buechner, J.; Rives, S.; Boyer, M.; Bittencourt, H.; Bader, P.; Verneris, M.R.; Stefanski, H.E.; Myers, G.D.; et al. Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. N. Engl. J. Med. 2018, 378, 439–448.
15. Gillmore, J.D.; Gane, E.; Taubel, J.; Kao, J.; Fontana, M.; Maitland, M.L.; Seitzer, J.; O’connell, D.; Walsh, K.R.; Wood, K.; et al. CRISPR–Cas9 In Vivo Gene Editing for Transthyretin Amyloidosis. N. Engl. J. Med. 2021, 385, 493–502.
16. Lim, W.A. The Emerging Era of Cell Engineering: Harnessing the Modularity of Cells to Program Complex Biological Function. Science 2022, 378, 848–852.
17. Dixit, A.; Parnas, O.; Li, B.; Chen, J.; Fulco, C.P.; Jerby-Arnon, L.; Marjanovic, N.D.; Dionne, D.; Burks, T.; Raychowdhury, R.; et al. Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens. Cell 2016, 167, 1853–1866.e17.
18. Adamson, B.; Norman, T.M.; Jost, M.; Cho, M.Y.; Nuñez, J.K.; Chen, Y.; Villalta, J.E.; Gilbert, L.A.; Horlbeck, M.A.; Hein, M.Y.; et al. A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response. Cell 2016, 167, 1867–1882.e21.
19. Przybyla, L.; Gilbert, L.A. A New Era in Functional Genomics Screens. Nat. Rev. Genet. 2022, 23, 89–103.
20. Bunne, C.; Roohani, Y.; Rosen, Y.; Gupta, A.; Zhang, X.; Roed, M.; Alexandrov, T.; AlQuraishi, M.; Brennan, P.; Burkhardt, D.B.; et al. How to Build the Virtual Cell with Artificial Intelligence: Priorities and Opportunities. Cell 2024, 187, 7045–7063.
21. Cui, H.; Wang, C.; Maan, H.; Pang, K.; Luo, F.; Duan, N.; Wang, B. scGPT: Toward Building a Foundation Model for Single-Cell Multi-Omics Using Generative AI. Nat. Methods 2024, 21, 1470–1480.
22. Zeng, Y.; Xie, J.; Shangguan, N.; Wei, Z.; Li, W.; Su, Y.; Yang, S.; Zhang, C.; Zhang, J.; Fang, N.; et al. CellFM: A Large-Scale Foundation Model Pre-Trained on Transcriptomics of 100 Million Human Cells. Nat. Commun. 2025, 16, 4679.
23. Roohani, Y.; Huang, K.; Leskovec, J. Predicting Transcriptional Outcomes of Novel Multigene Perturbations with GEARS. Nat. Biotechnol. 2024, 42, 927–935.
24. Ahlmann-Eltze, C.; Huber, W.; Anders, S. Deep-learning-based gene perturbation effect prediction does not yet outperform simple linear baselines. Nat. Methods 2025, 22, 1657–1661.
25. Murray, J.D. Mathematical Biology I: An Introduction, 3rd ed.; Springer: New York, NY, USA, 2002.
26. Klipp, E.; Liebermeister, W.; Wierling, C.; Kowald, A.; Schuster, S. Systems Biology: A Textbook, 2nd ed.; Wiley-Blackwell: Chichester, UK, 2016.
27. Tyson, J.J.; Chen, K.C.; Novak, B. Network Dynamics and Cell Physiology. Nat. Rev. Mol. Cell Biol. 2001, 2, 908–916.
28. Heinrich, R.; Schuster, S. The Regulation of Cellular Systems; Chapman & Hall: New York, NY, USA, 1996.
29. Voit, E.O. Computational Analysis of Biochemical Systems; Cambridge University Press: Cambridge, UK, 2000.
30. Chen, R.T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural Ordinary Differential Equations. Adv. Neural Inf. Process. Syst. 2018, 31, 6571–6583.
31. Grathwohl, W.; Chen, R.T.Q.; Bettencourt, J.; Sutskever, I.; Duvenaud, D. FFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models. In Proceedings of the 36th International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 3642–3651.
32. Dupont, E.; Doucet, A.; Teh, Y.W. Augmented Neural Ordinary Differential Equations. Adv. Neural Inf. Process. Syst. 2019, 32, 313–325.
33. Hossain, I.; Fanfani, V.; Quackenbush, J.; Burkholz, R. Biologically informed NeuralODEs for genome-wide regulatory dynamics. Genome Biol. 2024, 25, 127.
  34. Lin, Z.; Chang, S.; Zweig, A.; Azizi, E.; Knowles, D. Interpretable Neural ODEs for Gene Regulatory Network Discovery under Perturbations. arXiv 2025, arXiv:2501.02409. [Google Scholar] [CrossRef]
  35. Yi, Z. Integrating Equation Coding with Residual Networks for Efficient ODE Approximation in Biological Research. Math. Comput. Appl. 2025, 30, 47. [Google Scholar] [CrossRef]
  36. Tian, R.; Abarientos, A.; Hong, J.; Hashemi, S.H.; Yan, R.; Dräger, N.; Leng, K.; Nalls, M.A.; Singleton, A.B.; Xu, K.; et al. Genome-Wide CRISPRi/a Screens in Human Neurons Link Lysosomal Failure to Ferroptosis. Nat. Neurosci. 2021, 24, 1020–1034. [Google Scholar] [CrossRef]
  37. Schmidt, R.; Steinhart, Z.; Layeghi, M.; Freimer, J.W.; Bueno, R.; Nguyen, V.Q.; Blaeschke, F.; Ye, C.J.; Marson, A. CRISPR activation and interference screens decode stimulation responses in primary human T cells. Science 2022, 375, eabj4008. [Google Scholar] [CrossRef]
  38. Dyer, S.C.; Austine-Orimoloye, O.; Azov, A.G.; Barba, M.; Barnes, I.; Barrera-Enriquez, V.P.; Becker, A.; Bennett, R.; Beracochea, M.; Berry, A.; et al. Ensembl 2025. Nucleic Acids Res. 2025, 53, D948–D957. [Google Scholar] [CrossRef]
  39. Huynh-Thu, V.A.; Irrthum, A.; Wehenkel, L.; Geurts, P. Inferring Regulatory Networks from Expression Data Using Tree-Based Methods. BMC Bioinform. 2010, 11, 359. [Google Scholar] [CrossRef]
  40. Aibar, S.; González-Blas, C.B.; Moerman, T.; Huynh-Thu, V.A.; Imrichova, H.; Hulselmans, G.; Rambow, F.; Marine, J.-C.; Geurts, P.; Aerts, J.; et al. SCENIC: Single-Cell Regulatory Network Inference and Clustering. Nat. Methods 2017, 14, 1083–1086. [Google Scholar] [CrossRef]
  41. Gao, Z.; Su, Y.; Xia, J.; Cao, R.F.; Ding, Y.; Zheng, C.H.; Wei, P.J. DeepFGRN: Inference of gene regulatory network with regulation type based on directed graph embedding. Brief. Bioinform. 2024, 25, bbae143. [Google Scholar] [CrossRef] [PubMed]
  42. Szklarczyk, D.; Kirsch, R.; Koutrouli, M.; Nastou, K.; Mehryary, F.; Hachilif, R.; Gable, A.L.; Fang, T.; Doncheva, N.T.; Pyysalo, S.; et al. The STRING Database in 2023: Protein-Protein Association Networks and Functional Enrichment Analyses for Any Sequenced Genome of Interest. Nucleic Acids Res. 2023, 51, D638–D646. [Google Scholar] [CrossRef] [PubMed]
  43. Wagle, P.; von Karstedt, S. Secretome Analysis of Ferroptotic Pfa1 Cells. PRIDE Archive, Dataset PXD050979 2023. Available online: https://proteomecentral.proteomexchange.org/cgi/GetDataset?ID=PXD050979 (accessed on 1 June 2025).
  44. Tyanova, S.; Temu, T.; Sinitcyn, P.; Carlson, A.; Hein, M.Y.; Geiger, T.; Mann, M.; Cox, J. The Perseus computational platform for comprehensive analysis of (prote)omics data. Nat. Methods 2016, 13, 731–740. [Google Scholar] [CrossRef] [PubMed]
  45. Nesterenko, A.M.; Korzhenevskii, D.A.; Tereshchuk, V.M.; Kudryashova, O.M.; Belousov, V.V.; Shokhina, A.G. Dataset on the Proteomic Response during Ferroptosis Induction via Tamoxifen-Induced GPX4 Knockout in Mouse Embryonic Fibroblasts. Data Brief. 2023, 48, 109170. [Google Scholar] [CrossRef]
  46. Lachmann, A.; Torre, D.; Keenan, A.B.; Jagodnik, K.M.; Lee, H.J.; Wang, L.; Silverstein, M.C.; Ma’ayan, A. Massive Mining of Publicly Available RNA-Seq Data from Human and Mouse. Nat. Commun. 2018, 9, 1366. [Google Scholar] [CrossRef]
  47. Marbach, D.; Costello, J.C.; Küffner, R.; Vega, N.M.; Prill, R.J.; Camacho, D.M.; Allison, K.R.; Kellis, M.; Collins, J.J.; Stolovitzky, G. Wisdom of Crowds for Robust Gene-Network Inference. Nat. Methods 2012, 9, 796–804. [Google Scholar] [CrossRef] [PubMed]
  48. Bravo González-Blas, C.; De Winter, S.; Hulselmans, G.; Hecker, N.; Matetovici, I.; Christiaens, V.; Poovathingal, S.; Wouters, J.; Aibar, S.; Aerts, S. SCENIC+: Single-cell multiomic inference of enhancers and gene regulatory networks. Nat. Methods 2023, 20, 1355–1367. [Google Scholar] [CrossRef]
  49. Ma, L.; Meng, X.; Abudurexiti, T.; Liu, Y.; Gao, J.; Sheng, W. MANF Overexpression Ameliorates Oxidative Stress-Induced Apoptosis via MFN2-Dependent Mitophagy. Sci. Rep. 2024, 14, 11234. [Google Scholar] [CrossRef]
  50. Han, D.; Jiang, L.; Gu, X.; Huang, S.; Pang, J.; Wu, Y.; Yin, J.; Wang, J. SIRT3 Deficiency Is Resistant to Autophagy-Dependent Ferroptosis by Inhibiting the AMPK/mTOR Pathway and Promoting GPX4. J. Cell Physiol. 2020, 235, 8839–8851. [Google Scholar] [CrossRef]
  51. Zhang, J.; Liu, H.; Shen, Y.; Cheng, D.; Tang, H.; Zhang, Q.; Li, C.; Liu, M.; Yao, W.; Ran, R.; et al. Macrophage AHR–TLR4 Crosstalk Drives p-STAT3-Mediated Mitochondrial Oxidative Stress. Sci. Total Environ. 2024, 923, 171377. [Google Scholar] [CrossRef] [PubMed]
  52. Bastola, T.; Perkins, G.A.; Huu, V.A.N.; Ju, S.; Kim, K.-Y.; Shen, Z.; Skowronska-Krawczyk, D.; Weinreb, R.N.; Ju, W.-K. Administration of Bicarbonate Protects Mitochondria, Rescues Retinal Ganglion Cells, and Ameliorates Visual Dysfunction Caused by Oxidative Stress. Antioxidants 2024, 13, 743. [Google Scholar] [CrossRef] [PubMed]
  53. Oates, C.J.; Dondelinger, F.; Bayani, N.; Korkola, J.; Gray, J.W.; Mukherjee, S. Causal network inference using biochemical kinetics. Bioinformatics 2014, 30, i468–i474. [Google Scholar] [CrossRef]
Figure 1. The architecture of LazyNet. All log and exp operations are elementwise; inputs are library-size-normalized and shifted by a small pseudocount ε to ensure positivity. The color and intensity of the edges indicate the direction and strength of the influences. The “+” indicates that LazyNet adds the calculated residual to the original input to form the final output.
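For readers who want the caption’s description in executable form, the following is a minimal sketch of one log–linear–exp residual update. It is written with NumPy; the weight matrix W, bias b, pseudocount eps, and step size dt are illustrative placeholders rather than trained LazyNet parameters, and the exact scaling of the residual is an assumption.

```python
import numpy as np

def lazynet_step(x, W, b, eps=1e-4, dt=1.0):
    """One log-linear-exp residual update, as sketched in Figure 1.

    x : (n_genes,) library-size-normalized expression (non-negative)
    W : (n_genes, n_genes) interaction weights (illustrative placeholder)
    b : (n_genes,) bias term (illustrative placeholder)

    Because the linear map acts in log space, its effect is multiplicative
    in expression space, so multi-gene (synergistic) effects appear as
    products of fold-changes rather than opaque sums.
    """
    z = np.log(x + eps)            # elementwise log with pseudocount
    residual = np.exp(W @ z + b)   # linear map in log space, then elementwise exp
    return x + dt * residual       # the "+" in Figure 1: add residual to the input

# Toy usage with random placeholder parameters.
rng = np.random.default_rng(0)
x0 = rng.random(5)
W = 0.01 * rng.standard_normal((5, 5))
b = np.zeros(5)
x1 = lazynet_step(x0, W, b)
```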
Figure 2. Benchmark-and-discovery workflow.
Figure 3. ARCHS4 co-expression enrichment for LazyNet edges. Observed (orange) versus expected (blue) fractions for edges from the 32 × 4 LazyNet subgraph annotated in ARCHS4 (n = 9860). To avoid inflation from expression abundance and hubness, expected fractions come from an expression/detection- and degree-matched null (≈3.94 M matched pairs). Bars show tail rates at |r| ≥ 0.40; numbers above bars are fold enrichment (Observed/Expected). LazyNet edges are enriched in the overall |r| tail (0.073 vs. 0.058; 1.26×) and especially the positive tail (0.064 vs. 0.045; 1.44×) with depletion in the negative tail (0.009 vs. 0.014; 0.66×). “Sign-matched” multiplies ARCHS4 r by the LazyNet edge sign; “Opposite-sign” reverses it. As expected for an undirected, context-mixed resource, sign-aware fractions are similar (1.27× vs. 1.25×).
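The fold-enrichment values reported in Figure 3 are ratios of tail rates between observed edges and the matched null. A minimal sketch of that computation is shown below; edge_r and null_r are hypothetical arrays holding ARCHS4 correlations for annotated LazyNet edges and for the expression/degree-matched null pairs, respectively.

```python
import numpy as np

def tail_enrichment(edge_r, null_r, thr=0.40):
    """Fold enrichment of edge correlations in ARCHS4 tails.

    edge_r : correlations for annotated LazyNet edges (hypothetical input)
    null_r : correlations for expression/detection- and degree-matched
             null pairs (hypothetical input)

    Returns, for the |r|, positive, and negative tails at the threshold,
    the observed fraction, the expected fraction, and their ratio.
    """
    edge_r, null_r = np.asarray(edge_r, float), np.asarray(null_r, float)
    tails = {
        "abs": lambda r: np.abs(r) >= thr,
        "pos": lambda r: r >= thr,
        "neg": lambda r: r <= -thr,
    }
    out = {}
    for name, keep in tails.items():
        obs = keep(edge_r).mean()
        exp = keep(null_r).mean()
        out[name] = (obs, exp, obs / exp if exp > 0 else np.nan)
    return out

# e.g. the 'abs' entry corresponds to 0.073 vs. 0.058 -> 1.26x in Figure 3.
```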
Figure 4. Proteomics support for LazyNet edges in GPX4 loss-of-function fibroblasts. Protein–protein abundance scatterplots for representative LazyNet edges across SILAC channels. Axes show z-scored log2 protein levels; each point is one channel/time point. Panels report Spearman ρ and sample count (n). Three pairs come from the pulsed-SILAC GPX4-KO dataset PXD040094—XDH–ANXA8 (ρ = 0.92, n = 14), RPLP2–BAIAP2 (ρ = 0.89, n = 14), NRDC–ENO3 (ρ = 0.88, n = 14)—and one from the tamoxifen-inducible GPX4-KO time course PXD050979—SFPQ–CKAP5 (ρ = 0.98, n = 9). Edges were selected for adequate coverage (≥8 quantified channels when available) and clear co-variation.
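The per-edge statistics in Figure 4 are ordinary Spearman correlations computed over the channels in which both proteins were quantified. A minimal sketch follows (SciPy); log2_a and log2_b are hypothetical per-channel log2 abundances for one candidate edge, and the coverage filter mirrors the ≥8-channel criterion stated in the caption.

```python
import numpy as np
from scipy.stats import spearmanr

def edge_support(log2_a, log2_b, min_channels=8):
    """Spearman correlation between two proteins across SILAC channels.

    log2_a, log2_b : per-channel log2 protein abundances (may contain NaN)

    Only channels quantified for both proteins are used; pairs with fewer
    than `min_channels` shared channels are skipped.
    """
    a, b = np.asarray(log2_a, float), np.asarray(log2_b, float)
    mask = ~np.isnan(a) & ~np.isnan(b)
    if mask.sum() < min_channels:
        return None
    # z-scoring (as plotted in Figure 4) does not change a rank-based correlation
    za = (a[mask] - a[mask].mean()) / a[mask].std()
    zb = (b[mask] - b[mask].mean()) / b[mask].std()
    rho, p = spearmanr(za, zb)
    return rho, p, int(mask.sum())
```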
Table 1. Comparison of training cost and predictive accuracy.

Category | Metric | RetNet | scGPT | LazyNet
Neural iPSC | Total time (s) | 3603 | 3643 | 2286
Neural iPSC | Device | GPU | GPU | CPU
Neural iPSC | Peak VRAM/RAM | 17/18 G | 19/17 G | 40 G
Neural iPSC | Parameters (millions) | 10.07 | 5.08 | 9.58
Neural iPSC | Ensemble size | 1 | 1 | 3
Neural iPSC | RMSE | 2.140 | 2.333 | 2.376
Neural iPSC | MAE | 0.515 | 0.637 | 0.562
Neural iPSC | Pearson r | 0.746 | 0.696 | 0.680
Neural iPSC | ROC/PR AUC | 0.784/0.806 | 0.591/0.587 | 0.920/0.956
Neural iPSC | F1 micro/macro | 0.834/0.437 | 0.888/0.514 | 0.683/0.320
T-cell top quantile | Total time (s) | 3601 | 3668 | 4405
T-cell top quantile | Device | GPU | GPU | CPU
T-cell top quantile | Peak VRAM/RAM | 8/31 G | 5/30 G | 25 G
T-cell top quantile | Parameters (millions) | 5.42 | 4.1 | 6.92
T-cell top quantile | Ensemble size | 1 | 1 | 3
T-cell top quantile | RMSE | 0.3253 | 0.323 | 0.356
T-cell top quantile | MAE | 0.1572 | 0.154 | 0.130
T-cell top quantile | Pearson r | 0.7181 | 0.722 | 0.666
T-cell top quantile | ROC/PR AUC | 0.809/0.866 | 0.876/0.940 | 0.970/0.988
T-cell top quantile | F1 micro/macro | 0.754/0.444 | 0.538/0.231 | 0.607/0.345
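The accuracy rows of Table 1 use standard regression and ranking metrics. The sketch below shows how such values can be computed from predicted and observed expression changes with SciPy and scikit-learn; the inputs y_true, y_pred, labels, scores, and hard_calls are illustrative placeholders, and the exact binarization used in the benchmarking pipeline is not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score, average_precision_score, f1_score

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and Pearson r over flattened expression values."""
    y_true, y_pred = np.ravel(y_true), np.ravel(y_pred)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mae = float(np.mean(np.abs(y_true - y_pred)))
    r, _ = pearsonr(y_true, y_pred)
    return rmse, mae, r

def ranking_metrics(labels, scores, hard_calls):
    """Threshold-free (ROC/PR AUC) and thresholded (F1) metrics.

    labels     : binary indicators of, e.g., truly changed genes
    scores     : continuous model scores used for ranking
    hard_calls : binarized model predictions used for the F1 scores
    """
    roc = roc_auc_score(labels, scores)
    pr = average_precision_score(labels, scores)
    f1_micro = f1_score(labels, hard_calls, average="micro")
    f1_macro = f1_score(labels, hard_calls, average="macro")
    return roc, pr, f1_micro, f1_macro
```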
Table 2. Gene inference stats.

Settings | Genes | Edges | Bench-Hits | Recall † | Enrichment ‡ | Mean Recall (Per-Seed) §
--top 2048 --depth 1 --pct 95 | 5227 | 8690 | 15 | 56% | 1.93 | 0.11
--top 512 --depth 1 --pct 95 | 3344 | 4946 | 13 | 48% | 2.61 | 0.08
--top 128 --depth 2 --pct 99 | 6761 | 21,241 | 17 | 63% | 1.69 | 0.29
--top 64 --depth 3 --pct 99 | 6843 | 19,118 | 18 | 67% | 1.77 | 0.4
--top 32 --depth 4 --pct 99 | 4641 | 11,676 | 15 | 56% | 2.17 | 0.26
† Recall (10-gene seed input): X/27 reference genes recalled. ‡ Enrichment: fold-over-random recovery of the 27 ferroptosis benchmark genes, i.e., enrichment = observed hits/expected hits by chance. § Mean per-seed recall (single-seed start): the average fraction of the 27 benchmark genes recovered when each benchmark is used as the sole seed. For example, in the top = 64 setting a single seed recalls ~40% (≈11/27) on average. Full per-seed values are provided in Supplementary S1D.
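The recall and enrichment columns of Table 2 reduce to simple counting against the 27-gene ferroptosis benchmark. A minimal sketch is given below; the function name, its arguments, and the random-expectation formula are illustrative assumptions rather than the exact implementation.

```python
def benchmark_recovery(subgraph_genes, benchmark_genes, universe_size):
    """Recall and fold-over-random enrichment for an extracted subgraph.

    subgraph_genes  : genes in the extracted LazyNet subgraph
    benchmark_genes : the 27 ferroptosis reference genes
    universe_size   : number of genes the subgraph could have drawn from
    """
    subgraph = set(subgraph_genes)
    bench = set(benchmark_genes)
    hits = len(subgraph & bench)
    recall = hits / len(bench)
    # expected hits if the same number of genes were drawn uniformly at random
    expected = len(subgraph) * len(bench) / universe_size
    enrichment = hits / expected if expected > 0 else float("nan")
    return hits, recall, enrichment

# e.g. the "--top 64 --depth 3" row: 18 hits out of 27 -> recall ≈ 0.67.
```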
Table 3. STRING database literature comparison.

Edge | Source/Reference | Brief Description of Reported Linkage
AATF—PRPF3 | Psychiatry Investigation, Suppl. 2 PDF (2022) | Coupling across RNA posttranscriptional modification, cell-death/survival pathways, and embryonic development
ABHD15—TRPM6 | Decreased GLUT2 and glucose uptake contribute to insulin-secretion defects in MODY3/HNF1A hiPSC-derived mutant β-cells | Shared involvement in insulin signaling/insulin resistance
ADH4—CYP27C1 | The effect of rs2910686 on ERAP2 expression in IBD and epithelial inflammatory response | Retinoid and isoprenoid binding/processing connection
ANO2—XKR9 | TMEM16 and OSCA/TMEM63 proteins share a conserved potential to permeate ions and phospholipids (ANO ⇔ XKR link) | Functional link between ANO (TMEM16) and XKR families in ion and phospholipid permeation
CNIH3—NAPB | HAGRID MPM interaction page (pair 3804/342900) | Documented protein–protein interaction
DNAJA2—SNRPD3 | Report on DNAJ–HSP40–HSP70–SNRPD3 complex | Chaperone–spliceosome functional connection
FAM177A1—MED29 | Cleavage and polyadenylation specific factor 1 promote tumor progression via alternative polyadenylation and splicing in hepatocellular carcinoma | Linkage through mRNA 3′-end processing and transcription regulation
FAM241B—WLS | PICKLE interaction record (ID 3601507, human) | Reported physical association
GADD45GIP1—IER2 | Transcription and chromatin organization of a housekeeping gene cluster containing an integrated β-globin locus-control region | Co-regulation within housekeeping gene chromatin domain
HLF—UQCRQ | Mitochondrial complex III regulates hypoxic activation of HIF | HLF–HIF axis functionally linked to complex III subunit UQCRQ in hypoxic signaling
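As a rough guide to how edges such as those in Table 3 can be screened against STRING in bulk, the sketch below looks up gene pairs in a locally downloaded STRING links table. The file layout (protein1/protein2/combined_score columns) and the symbol-to-protein-identifier mapping are assumptions about the user’s local STRING release, not part of our pipeline.

```python
import pandas as pd

def string_support(edges, links_path, id_map):
    """Return the STRING combined score (if any) for each gene pair.

    edges      : list of (geneA, geneB) symbols, e.g. [("AATF", "PRPF3")]
    links_path : path to a STRING protein-links table with columns
                 protein1, protein2, combined_score (space-separated)
    id_map     : dict mapping gene symbols to STRING protein identifiers
    """
    links = pd.read_csv(links_path, sep=" ")
    scores = {}
    for p1, p2, s in links[["protein1", "protein2", "combined_score"]].itertuples(index=False):
        scores[(p1, p2)] = s
    out = {}
    for a, b in edges:
        pa, pb = id_map.get(a), id_map.get(b)
        if pa is None or pb is None:
            out[(a, b)] = None  # symbol not mapped to a STRING identifier
            continue
        out[(a, b)] = scores.get((pa, pb)) or scores.get((pb, pa))
    return out
```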
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
