Review

Integrating AI with Cellular and Mechanobiology: Trends and Perspectives

by Sakib Mohammad 1,*,†, Md Sakhawat Hossain 2,*,† and Sydney L. Sarver 3

1 Department of Engineering Technology, Fairmont State University, Fairmont, WV 26554, USA
2 Department of Mechanical Engineering, Auburn University, Auburn, AL 36849, USA
3 School of Biological Sciences, Southern Illinois University Carbondale, Carbondale, IL 62901, USA
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Biophysica 2025, 5(4), 62; https://doi.org/10.3390/biophysica5040062
Submission received: 24 October 2025 / Revised: 5 December 2025 / Accepted: 11 December 2025 / Published: 14 December 2025
(This article belongs to the Special Issue Advances in Computational Biophysics)

Abstract

Mechanobiology explores how physical forces and cellular mechanics influence biological processes. This field has experienced rapid growth, driven by advances in high-resolution imaging, micromechanical testing, and computational modeling. At the same time, the increasing complexity and volume of mechanobiological imaging and measurement data have made traditional analysis methods difficult to scale. Artificial intelligence (AI) has emerged as a practical tool to address these challenges by providing new methods for interpreting and predicting biological behavior. Recent studies have demonstrated potential in several areas, including image-based analysis of cell and nuclear morphology, traction force microscopy (TFM), cell segmentation, motility analysis, and the detection of cancer biomarkers. Within this context, we review AI applications that either incorporate mechanical inputs/outputs directly or infer mechanobiologically relevant information from cellular and nuclear structure. This review summarizes progress in four key domains: AI/ML-based cell morphology studies, cancer biomarker identification, cell segmentation, and prediction of traction forces and motility. We also discuss the advantages and limitations of integrating AI/ML into mechanobiological research. Finally, we highlight future directions, including physics-informed and hybrid AI approaches, multimodal data integration, generative strategies, and opportunities for computational biophysics-aligned applications.

1. Introduction

Artificial intelligence (AI) has been reshaping the study of cellular biology in recent years by enabling the analysis of diverse, complex, and high-dimensional data, uncovering patterns in these datasets that are difficult to detect with traditional methods. Beyond pattern recognition, AI is a powerful tool for modeling intricate biological systems and inferring their behavior. Bioinformatics has long used AI to extract information from diverse experimental modalities, including genomics, transcriptomics, proteomics, and epigenomics [1,2]. While bioinformatics paved the way for AI tools in the biological domain, researchers in other sub-domains are now adopting AI-driven approaches to investigate cellular processes and mechanisms.
Mechanobiology is a promising area in cell biology that examines how physical forces affect cell behavior and fate. The field lies at the intersection of biology, engineering, and the physical sciences, with a focus on correlating mechanical cues with functional outcomes in biological cells [3]. Mechanotransduction, the molecular process by which cells sense and respond to mechanical stimuli, plays a crucial role in regulating cell behavior [4]. The microenvironment, which encompasses interactions with extracellular matrices (ECMs) [5], together with the physical properties of cells and forces such as loading and traction, plays a pivotal role in determining cell function, lineage commitment, and disease progression [6]. To understand the complex interactions between these parameters, researchers need tools that can integrate large, diverse datasets and yield interpretable outcomes, challenges that align well with the strengths of AI models.
In this review, we highlight recent advances in the application of AI to mechanobiology and related areas of cell biology. We first present an overview of primary AI methods to establish a conceptual grounding. We then examine representative research efforts that utilize AI to address key challenges in cellular morphology analysis, cancer biomarker identification, cell tracking and segmentation, and force/motility prediction. Finally, we discuss current limitations in data, methods, and explainability, and conclude with perspectives on future directions for integrating AI methods into mechanobiological research.
To clarify the scope of this review, we define mechanobiological AI as computational approaches that (i) use mechanical inputs or outputs, namely forces, stiffness, substrate deformation, or the mechanical properties of ECMs; or (ii) infer mechanical or mechanotransduction-related states from cellular or nuclear morphology, cell motility, or image-derived structural cues. While many AI applications in cellular imaging are broadly biological, we focus here on those that provide mechanobiologically relevant information, either by directly predicting mechanical properties or by capturing morphological correlates of mechanical states. This definition guides the organization of the four topical domains reviewed here.

2. AI Along the Mechanobiological Pipeline

Cellular mechanobiology involves a cascade of processes linking mechanical stimuli to biochemical and morphological outcomes. For clarity, we situate the AI applications presented here within a simple pipeline consisting of: (i) mechanical inputs such as ECM stiffness, substrate deformation, or cell–cell forces; (ii) sensing via focal adhesions, cytoskeletal prestress, and mechanotransduction pathways; (iii) downstream morphological and molecular responses; and (iv) outputs such as migration, differentiation, or changes in phenotype. AI methods contribute by predicting mechanical properties directly, inferring mechanical states from morphological cues, or automating image-based measurements that support mechanical modeling.
This framing provides a unifying view of the four domains discussed in this review: morphology, biomarkers, segmentation, and finally force and motility prediction, highlighting how AI aids traditional mechanobiological approaches without replacing underlying biophysical theory.
The organization of this work is presented in Figure 1.

3. Background on AI

AI has evolved significantly from its beginnings in the 1950s, when symbolic approaches were developed, to the robust data-driven methods of today. Initial enthusiasm was thwarted by inadequate computing power and a scarcity of high-quality datasets. The resurgence of the 1980s introduced algorithms that could learn from examples rather than being explicitly programmed [7]. By the 1990s, the field had matured into what is now known as Machine Learning (ML), marked by landmark models such as LeNet, which was used for recognizing handwritten digits [8]. The 2010s brought fast GPUs, large-scale, high-quality datasets, and robust programming abstractions (PyTorch [9] and TensorFlow [10]), which collectively fueled the Deep Learning (DL) revolution. In particular, DL models built on neural networks (NNs) have become central to modern AI, offering state-of-the-art performance in computer vision, natural language processing, and biomedical imaging.
ML, at its core, can be viewed as a function approximator: given input-output pairs $(x_i, y_i)$, the algorithm seeks a mapping $f(x) \approx y$ that minimizes a loss function $L$. For supervised regression, a common goal is:

$$\min_f \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) \right)^2$$

This corresponds to the mean squared error (MSE). In the case of classification, the loss function is the cross-entropy loss:

$$L = -\sum_{i=1}^{n} \sum_{c=1}^{C} y_{i,c} \log \hat{y}_{i,c}$$

where $y_{i,c}$ is the true class label and $\hat{y}_{i,c}$ is the predicted probability.
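As a concrete illustration of these two objectives, the sketch below computes both losses and takes one optimization step in PyTorch; the toy architecture, tensor sizes, and learning rate are placeholders rather than settings from any study reviewed here.

```python
import torch
import torch.nn as nn

# Illustrative only: a small MLP mapping 16-dimensional features to 3 classes.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

x = torch.randn(8, 16)                 # batch of 8 feature vectors
y_class = torch.randint(0, 3, (8,))    # integer class labels

# Classification: cross-entropy loss (combines log-softmax and NLL internally).
loss_cls = nn.CrossEntropyLoss()(model(x), y_class)

# Regression: mean squared error against continuous targets.
reg_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
y_cont = torch.randn(8, 1)
loss_reg = nn.MSELoss()(reg_model(x), y_cont)

# One gradient step minimizing the classification loss with SGD.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
opt.zero_grad()
loss_cls.backward()
opt.step()
```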
NNs, specifically computer vision models, operate by learning hierarchical representations from data through layered transformations. Early layers extract low-level patterns such as edges or textures, whereas deeper layers capture complex structures relevant to cell morphology or mechanical phenotype. These models are trained by iteratively adjusting their parameters (or weights) to minimize the error between predictions and ground-truth labels.
In recent years, large foundation models [11], diffusion architectures [12], and large language models (LLMs) [13,14], among many others, have transformed representation learning and multimodal data integration. Although their adoption in cellular and mechanobiology is still emerging, these models demonstrate potential in protein structure prediction [15,16,17,18], automated experiment annotation [19], and cross-modal biological reasoning [20,21]. Their ability to encode broad biological cues suggests future utility in unifying mechanical, morphological, and molecular features.

Learning Paradigms

Supervised learning (SL): This paradigm uses labeled datasets where the target output is known a priori. Typical tasks include regression and classification, evaluated with metrics such as accuracy, precision, recall, F1 score, and area under the receiver operating characteristic curve (ROC-AUC, or simply AUC) [22]. Classical ML models include the Support Vector Machine (SVM) [23], logistic regression [24], Naïve Bayes [25], k–Nearest Neighbor (k-NN) [26], and decision trees [27]. Beyond these, ensemble models such as Random Forests (RF) [28] and XGBoost [29], together with NNs including the Multilayer Perceptron (MLP) [30], Convolutional Neural Network (CNN) [31], Recurrent Neural Network (RNN) [32], Transformer [33], and Vision Transformer (ViT) [34], have expanded the scope of SL across multiple domains.
Unsupervised learning (UL): This learning paradigm works without labeled data; instead, it finds hidden structure in the data. Key tasks include clustering (k-Means [35], Density-based Spatial Clustering of Applications with Noise (DBSCAN) [36], Gaussian Mixture Model (GMM) [37]) and dimensionality reduction (Principal Component Analysis (PCA) [38] and t-Distributed Stochastic Neighbor Embedding (t-SNE) [39]). NN approaches include the autoencoder (AE) [40] and the variational autoencoder (VAE) [41]. Unlike SL, evaluation is less straightforward, relying on metrics such as the silhouette score or reconstruction error [42].
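As a brief illustration of this workflow, the sketch below clusters synthetic two-group data with k-Means and scores the result with the silhouette metric; the data, cluster count, and random seed are placeholders, assuming scikit-learn is available.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in for, e.g., morphological feature vectors from two cell states.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(5.0, 1.0, size=(50, 2))])

# Cluster without labels, then assess quality with the silhouette score
# (ranges from -1 to 1; higher means tighter, better-separated clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(silhouette_score(X, km.labels_))
```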
Reinforcement learning (RL): RL models how intelligent agents act in an unknown environment to maximize the notion of cumulative reward. Unlike SL and UL, RL emphasizes sequential decision-making. Algorithms include Q-learning [43], state action reward state action (SARSA) [44], and deep Q-network (DQN) [45]. A milestone in this field was DeepMind’s AlphaGo Zero [46], which surpassed human experts in the game of Go.
Most of the AI models presented in this review are implemented using open-source Python frameworks mentioned earlier in this section. However, some works make use of several legacy or specialized tools such as MATLAB or various ImageJ [47] plugins.

4. Cell Morphology Analysis

Cell morphology provides a window into the underlying state of a biological system, with cytoplasmic and nuclear features often reflecting genotypic or biochemical changes. Long-standing toolkits such as FluoCell [48] have allowed the extraction of meaningful features from raw fluorescence images. Many of these morphological characteristics, such as nuclear shape, cytoskeletal organization, and protrusive structures, are established indicators of cellular mechanical state; morphology therefore serves as a practical proxy for mechanotransduction and cell–ECM interaction. AI models, particularly NNs such as CNNs and ViTs, have shown strong capability in extracting complex morphological features from microscopy datasets. These methods surpass traditional analysis pipelines that relied heavily on feature engineering, such as Zernike moments [49] or Haralick textures [50].

4.1. Traditional ML and Feature-Based Approaches

Earlier studies leveraged hand-crafted features in combination with classic ML models. For example, RF and logistic regression models trained on fluorescence-derived descriptors achieved accuracies above 90% for classifying macrophage phenotypes [51,52]. Dimensionality-reduction methods such as PCA, t-SNE, and Uniform Manifold Approximation and Projection (UMAP) [38,53] were instrumental in visualizing morphological clusters and reducing redundancy in high-dimensional datasets. A mathematical formulation for this reduction is:

$$Z = XW, \quad \text{where } X \in \mathbb{R}^{n \times d},\; W \in \mathbb{R}^{d \times k},\; k \ll d$$

where $X$ is the feature matrix and $Z$ is the reduced representation. Although effective, these pipelines were limited by their dependence on predefined features and struggled to generalize to new imaging conditions.
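The projection above can be sketched in a few lines with scikit-learn; the feature matrix here is synthetic, standing in for n cells described by d handcrafted descriptors.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical feature matrix: n = 500 cells x d = 40 morphological descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))

# Project onto k << d principal components. Note that scikit-learn centers X
# before projecting, so effectively Z = (X - mean(X)) W.
pca = PCA(n_components=2)
Z = pca.fit_transform(X)            # reduced representation, shape (500, 2)
W = pca.components_.T               # loading matrix W, shape (40, 2)
print(Z.shape, pca.explained_variance_ratio_)
```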

4.2. DL for End-to-End Morphology Recognition

With the advent of DL vision models such as CNNs and ViTs, researchers shifted towards end-to-end learning directly from raw pixels. Dürr et al. [54] trained CNNs on over 40,000 multi-channel images from the BBBC022v1 cell painting assay dataset [55], achieving 93.4% accuracy while outperforming SVM and Linear Discriminant Analysis (LDA) [56] baselines. Similarly, DenseNet121-based [57] classifiers achieved 86% accuracy in distinguishing stem cell multipotency [58]. Architectures such as squeeze and excite RNN (SE-RNN) [59] and multi-scale CNNs [60] further improved generalization across diverse cell lines and treatment conditions. The predictive task in these settings is often framed as classification, where the model learns:

$$\hat{y} = \arg\max_{c \in C} P(y = c \mid x; \theta)$$

with $P(y = c \mid x; \theta)$ estimated by the neural network for the input image $x$, parameterized by weights $\theta$.

4.3. Weakly Supervised and Transfer-Learning Approaches

Another development in AI methods, known as weakly supervised models [61], bypasses the need for pixel-level annotations. Kraus et al. [62] introduced a multiple-instance learning (MIL) operator [63], enabling classification of microscopy images using only image-level labels. Combined with saliency mapping techniques [64], this approach improved interpretability, a critical requirement in a biological context.
Transfer learning has also proven effective in morphology analysis. Xu et al.'s CellVisioner tool [65] combined U-Net [66] and conditional generative adversarial networks (cGANs) [67] to extract mechanobiological parameters from label-free images, achieving R2 = 0.93 for yes-associated protein (YAP) ratios. Similarly, transfer learning [68] with InceptionV3 [69] yielded near-perfect accuracy in classifying T-cell activation states from autofluorescence images [70]. These works highlight the adaptability of pretrained networks to biological imaging tasks with limited annotated data.
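A minimal transfer-learning sketch of this general kind is shown below; it adapts an ImageNet-pretrained ResNet18 from torchvision (rather than the InceptionV3 or U-Net/cGAN pipelines of the cited works), and the class count, images, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its feature extractor
# (weights API as in torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False

n_classes = 2                                                 # e.g., activated vs. resting cells
backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)   # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy images and labels.
imgs = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, n_classes, (4,))
loss = criterion(backbone(imgs), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# Prediction follows the argmax rule from Section 4.2.
preds = backbone(imgs).argmax(dim=1)
```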

4.4. Generative and Hybrid Models

Recent research has explored generative AI and hybrid temporal-spatial models. Palma et al. [71] applied a style-transfer [72] AE to investigate morphological shifts induced by genetic or chemical perturbations, while Aida et al. [73] employed a cGAN to segment cancer stem cells from phase-contrast and nuclear images. Sullivan et al. [74] demonstrated the efficacy of a "human-in-the-loop" [75] training approach, integrating annotations from more than 300,000 citizen scientists to classify protein localization with expert-level accuracy. Wu et al. [76] extended DL into mechanobiology, predicting single-cell stiffness from brightfield images and linking these predictions to Young's modulus values validated by Atomic Force Microscopy (AFM) and cytometry.
Hybrid models go beyond raw classification to reveal latent dynamics within cellular systems. Buggenthin et al. [77] combined CNNs with RNNs to predict hematopoietic stem and progenitor cell (HSPC) lineage commitment generations before surface markers become apparent, while Zhu et al. [78] used an Xception-based CNN [79] to predict neural stem cell fates from brightfield images. AE-based novelty detection [80] and tracking networks, such as time-delay neural networks (TDNNs) [81], further illustrate how morphology and dynamics can be coupled within a single framework.

4.5. Synthesis

Cell morphology analysis using AI methods has progressed from handcrafted features paired with traditional ML models to end-to-end DL, weak supervision, and generative modeling. CNNs remain strong baselines for many morphological tasks, whereas ViTs improve generalization on large datasets but require more data and computation. Weakly supervised and generative models reduce annotation burdens at the cost of interpretability and consistency. These approaches enable high-throughput, label-free, and explainable predictions that can generalize across imaging modes. More importantly, recent work highlights the integration of cellular morphology with biophysical properties, as seen in AFM-guided AI systems [76,82]. However, challenges remain in data curation, in ensuring robustness across biological experiments and imaging conditions, and in integrating morphology with multi-omic and mechanical readouts. Several of these approaches, although not directly modeling mechanical parameters, contribute to mechanobiology by using morphology as a proxy for the mechanical state of the cell. Table 1 shows works related to this section.

5. Cancer Biomarker Detection

Identification of reliable cancer biomarkers is crucial for early detection, diagnosis, and treatment. AI methods have been increasingly applied to diverse data modalities, ranging from Raman spectroscopy to high-resolution imaging, to extract subtle features from molecular and morphological signatures of cancer progression. Since cancer progression involves defined mechanical alterations, including changes in stiffness, contractility, and ECM remodeling, AI-driven biomarker detection often captures phenotypes that directly reflect the underlying mechanobiological states.

5.1. Spectroscopy-Based Biomarker Detection

Several studies have demonstrated the efficacy of AI in classifying spectroscopic signatures of tumor repopulating cells (TRCs). Tipatet et al. [90] employed Raman spectroscopy to detect acquired radio-resistance in breast cancer cell lines (Michigan Cancer Foundation-7 (MCF-7), ZR-75-1, MDA-MB-231), allowing them to distinguish between wild-type and resistant phenotypes. Here, data processing was performed using PCA followed by classical ML classifiers. In PCA, the spectral data matrix $X \in \mathbb{R}^{n \times d}$ is decomposed into orthogonal components as follows:

$$X \approx Z W^{T}, \quad W^{T} W = I$$

where $Z$ contains reduced-dimensional features.
Similarly, Wu et al. [91] applied probabilistic PCA (PPCA) [92] and SVMs to classify ovarian cancer patients from healthy controls using surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) [93] proteomic data. Mandrell et al. [94] extended this approach by combining PCA, k-NN, and SVM to classify TRCs in pancreatic cancer cell lines. In hepatocellular studies, Shen et al. [95] utilized single-cell Raman spectra with an SVM classifier to identify the proliferation stages of human hepatocytes, achieving a specificity of 0.98 and demonstrating that spectroscopic biomarkers can capture cell-cycle progression.
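The PCA-plus-classifier pattern shared by these spectroscopy studies can be sketched as a short scikit-learn pipeline; the spectra and labels below are synthetic placeholders, not data from the cited works.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic spectra: n = 120 samples x d = 600 wavenumber bins, binary labels
# (standing in for, e.g., wild-type vs. treatment-resistant phenotypes).
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 600))
y = rng.integers(0, 2, size=120)

# Standardize, compress with PCA, then classify the reduced features with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```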

5.2. Image-Based Biomarker Analysis

Beyond spectroscopy, fluorescence and label-free microscopy have been utilized for the detection of cancer biomarkers. Rozova et al. [96] cultured breast cancer cells on ECMs of varying stiffness, extracted antibody-stained fluorescence features with CellProfiler [97], and classified macrophages using an RF model. Kandaswamy et al. [98] expanded this pipeline by pairing CellProfiler features of the MCF-7 cell line with stacked auto-associators (SAAs) [99] and deep transfer learning, obtaining 88% classification accuracy in predicting compound mechanisms of action.

5.3. DL Approaches

DL enables end-to-end biomarker extraction directly from raw images, minimizing the need for hand-crafted feature engineering. Forslid et al. [100] applied ResNet [101] to cervical cell microscopy images, achieving robust detection of cancerous vs. normal cells. Mohammad et al. [102] utilized CNNs as feature extractors for several pancreatic cancer cell lines, combining the extracted features with linear and tree-based classifiers to identify three TRC subtypes within the cell population.
Histology-based approaches have also benefited from custom CNN architectures. Sirinukunwattana et al. [103] developed a spatially constrained CNN (SC-CNN) for nucleus detection and a SoftMax [104] CNN with neighboring ensemble predictors (NEP) [103] for classification, with an F1 score of 0.802 for detection and 0.784 for classification. Similarly, Berryman et al. [105] proposed a custom CNN for phenotyping disaggregated cancer cells from low-resolution microscopy images with a strong F1 score of 95.3% across several cell lines.

5.4. Generative and Mechano-Biology-Informed Models

Recent works have explored the integration of generative modeling with mechanobiology. Foo et al. [106] employed a cGAN to estimate tumor spheroid elasticity from mechano-microscopic data. The authors trained their model on synthetic finite element simulations, resulting in a 29% reduction in error compared to algebraic methods. In addition, they validated their approach using spheroids developed from the MCF-7 cell line, successfully identifying stiff nuclei and enabling high-resolution 3D biomarker analysis.

5.5. Synthesis

In general, traditional ML models perform strongly for spectroscopy-based features, while CNNs and transformers outperform them on imaging modalities. Additionally, generative models offer unique advantages for data augmentation and mechanical phenotype prediction, complementing feature-based pipelines.
Together, these studies illustrate a clear progression in cancer biomarker identification: from spectroscopic analysis with traditional ML models, towards end-to-end CNNs, transfer learning, and generative models that integrate mechanical as well as morphological information. The integration of multimodal data (spectroscopy, fluorescence imaging, and mechanobiology) remains an exciting frontier, offering scope for robust, label-free, and high-throughput biomarker discovery. Table 2 is a compilation of research papers in relation to cancer biomarker detection.

6. Cell Segmentation

Accurate segmentation of cells and subcellular structures is a foundational task in cell biology. Traditional annotation is laborious, subjective, and infeasible for large datasets. AI-powered methods, particularly those based on DL, provide scalable solutions for delineating cell boundaries, identifying intracellular organelles, and tracking cell populations over time. Segmentation also serves as a mechanobiology-related step because accurate delineation of cell and nuclear boundaries enables many downstream analyses that infer shape-related mechanical cues, tractions, and cytoskeletal organization.

6.1. General-Purpose DL Frameworks

Several landmark frameworks have advanced end-to-end segmentation directly from microscopy images. Van Valen et al. [107] demonstrated that supervised CNNs can segment both bacterial and mammalian cells in a live, co-culture context. At the same time, Sadanandan et al. [108] developed a workflow using fluorescent endpoint masks to train CNNs for brightfield segmentation, achieving accuracy comparable to manual annotation. Building upon these efforts, Stringer et al.’s Cellpose models [109,110] have established themselves as community standards for generalized 2D and 3D image segmentation. These frameworks reduce manual curation and accelerate analysis; however, their performance still declines on unusual morphologies, highlighting the need for domain-specific fine-tuning.
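For orientation, a minimal Cellpose call looks roughly like the sketch below (API as in cellpose v2.x; argument names and return values may differ across versions, so the package documentation should be consulted).

```python
import numpy as np
from cellpose import models

img = np.random.rand(256, 256)             # placeholder grayscale image

# "cyto" is the generalist cytoplasm model; gpu=False runs on CPU.
model = models.Cellpose(gpu=False, model_type="cyto")

# channels=[0, 0] treats the image as single-channel grayscale;
# diameter=None lets Cellpose estimate the typical cell size automatically.
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

# masks is a labeled integer array: 0 = background, 1..N = individual cells.
print(masks.max(), "cells segmented")
```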

6.2. Accessible and Lightweight Tools

A second wave of work focuses on usability and integration with general workflows. Griebel and colleagues [111] developed deepflash2, a tool that leverages ConvNeXt [112] to handle ambiguous cellular images. Wiggins et al. [113] introduced CellPhe, which combines classical models such as LDA, RF, and SVM with clustering to phenotype cells. Arzt et al. [114] designed LABKIT, a lightweight Fiji/ImageJ plugin [47] for rapid and high-throughput segmentation that operates locally without requiring heavy computation. Similarly, tools such as DeepCell 1.0/2.0 [115] and Usiigaci [116] push forward the effort to make AI-powered, lightweight, automated cell segmentation platforms. These tools improve accessibility but often sacrifice generalizability or precision.

6.3. Data-Efficient Learning

Annotation scarcity remains a central bottleneck in AI-assisted cell segmentation. To achieve good generalization, DL models typically require large volumes of annotated data. Robitaille et al. [117] addressed this deficit by introducing a self-supervised learning (SSL) [118] model for live-cell segmentation that achieves competitive performance with few labeled examples. While highly efficient in data-starved regimes, such methods still lag behind supervised models in boundary precision, highlighting the trade-off between efficiency and accuracy.

6.4. Advanced Architectures

Architectural innovation has driven both accuracy and adaptability in the domain. Ghaznavi and colleagues [119] applied U-Net with VGG19 [120], Inception [121], and ResNet34 [101] encoders to segment HeLa cell images, achieving a mean intersection over union score (mIoU) of 0.81. Hollandi et al. [122] combined Mask R-CNN [123] with U-Net refinement and image-style transfer, enabling robust cross-modality nucleus segmentation and earning top scores in the 2018 Data Science Bowl. Jian et al. [124] enhanced U-Net++ [125] with an attention module called self-supervised equivariant attention mechanism (SEAM) [33], which improved boundary detection in fluorescence in situ hybridization (FISH) images [126]. Pelt et al. [127] developed the mixed-scale dense convolutional network (MS-D), which matched or outperformed U-Net and SegNet [128] using 40× fewer parameters. Their work highlights the importance of efficiency-oriented designs. These advances showcase architectural creativity; however, they remain mostly validated on narrow benchmarks (limited datasets), raising questions about generalizability across a wide range of cellular and mechanobiological datasets.

6.5. Three-Dimensional and Four-Dimensional Segmentation

Beyond 2D segmentation, new methods target volumetric and temporal dimensions. Chen et al. [129] introduced the Allen Cell Segmenter, a tool that leverages both classical ML models as well as DL architectures for 3D intracellular segmentation with over 98% accuracy on human induced pluripotent stem cell (hiPSC) datasets. Amat and colleagues [130] expanded into 4D segmentation and tracking, combining supervoxel oversegmentation [131], GMM-based contour evolution [132], and spatiotemporal association [133]. They applied the approach to Drosophila embryos with 97% lineage accuracy, processing up to 26,000 cells per minute and generating lineage fate maps. These studies demonstrate the feasibility of high-throughput spatiotemporal analysis, although computational cost and storage remain a challenge for many labs.

6.6. Synthesis

Segmentation research has matured from task-specific CNNs to general-purpose frameworks, lightweight platforms, SSL paradigms, and advanced architectures. Yet tools such as Cellpose, which work well for standard morphologies, fall short on rare or noisy datasets. SSL, on the other hand, reduces the need for annotated data but at the cost of boundary precision. Finally, most of the methods described here are computationally demanding. Future progress will likely rely on hybrid strategies that integrate the robustness of general-purpose models with domain adaptation and generative augmentation to reduce annotation costs, while also striking a balance between efficiency and accuracy. Table 3 shows studies related to the discussion in this section.

7. Traction Forces and Motility Prediction

Traction forces exerted by adherent cells on their microenvironments provide critical information about their physiology and differentiation state, in addition to pathological progression. Quantifying these forces enables the study of mechanotransduction and motility. However, traditional methods, such as Fourier Transform Traction Cytometry (FTTC) [134], suffer from limitations, including sensitivity to noise and the inherent ill-posed nature of inverse problems [135]. AI-based methods aim to alleviate these issues by learning force-displacement relationships directly from imaging and cell morphology. This section presents the most direct link between AI and mechanobiology, as the models here explicitly predict mechanical variables such as traction, stiffness, and stress distribution.

7.1. Wrinkle-Based and Direct Learning Approaches

Wrinkle-based traction force microscopy (TFM) approaches utilize substrate deformation as a proxy for traction. Li and colleagues [136] proposed the small-world U-Net (SW U-Net), a variant of the U-Net, to segment wrinkles on an elastic substrate. This segmented pattern then translates into force estimates that outperform classic 2D-fast Fourier transform (2D-FFT) [137] methods. The method is expressed as:
$$F(x, y) = T(W(x, y); \theta)$$

where $W(x, y)$ represents the wrinkle field and $T(\cdot)$ is the CNN-based mapping parameterized by $\theta$.
In follow-up work [138], the authors employed a generative adversarial network (GAN) [139] to generate traction maps directly from wrinkle images, bypassing the need for explicit feature extraction. Similarly, Pielawski et al. [140] combined a Tiramisu segmentation network [141] with a Bayesian Neural Network (BNN) [142] to predict traction forces, together with uncertainties, from cell geometry alone, avoiding the need for bead displacement imaging. The NN provides not only a mean prediction $\hat{F}$ but also a variance $\sigma_F^2$:

$$p(F \mid X) \approx \mathcal{N}(\hat{F}, \sigma_F^2).$$
This uncertainty-aware modeling is critical in noisy experimental conditions.
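One common way to obtain such predictive uncertainty is Monte Carlo dropout, sketched below as a stand-in for a full BNN; the network, features, and sample count are illustrative assumptions, not the cited Tiramisu/BNN architecture.

```python
import torch
import torch.nn as nn

# A small regression head with dropout; keeping dropout active at inference
# time and averaging repeated forward passes approximates Bayesian prediction.
class TractionHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

model = TractionHead()
model.train()                        # keep dropout stochastic during inference
x = torch.randn(1, 64)               # placeholder cell-geometry features

# 100 stochastic forward passes give a predictive mean and variance,
# i.e., estimates of F_hat and sigma_F^2 from the equation above.
samples = torch.stack([model(x) for _ in range(100)])
f_mean, f_var = samples.mean(dim=0), samples.var(dim=0)
```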

7.2. Morphology and Physics-Informed Models

Fujiwara et al. [143] developed a U-Net-based framework that integrates morphology, stress fiber organization, and focal adhesions into force predictions. Their model was trained on synthetic shape-augmented datasets with a combined magnitude–direction loss function,
$$L = \alpha \left\| F_{\mathrm{true}} - F_{\mathrm{pred}} \right\|_2^2 + \beta \left( 1 - \cos(\theta_{\mathrm{true}}, \theta_{\mathrm{pred}}) \right)$$
This model improves robustness at low bead densities and minimizes false predictions in non-adherent regions, solving a crucial problem in the field.
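A loss of this form can be written compactly in PyTorch, as in the sketch below; the tensor shapes and the weights α and β are assumptions for illustration rather than the published implementation.

```python
import torch
import torch.nn.functional as F

def traction_loss(f_pred, f_true, alpha=1.0, beta=1.0):
    """Combined magnitude-direction loss of the form given above.

    f_pred, f_true: (batch, 2, H, W) planar traction vector fields.
    The cosine term penalizes angular error between predicted and true
    vectors; alpha and beta are assumed weighting hyperparameters.
    """
    mag_term = F.mse_loss(f_pred, f_true)                  # ||F_true - F_pred||^2
    cos_sim = F.cosine_similarity(f_pred, f_true, dim=1)   # per-pixel cos(theta)
    dir_term = (1.0 - cos_sim).mean()
    return alpha * mag_term + beta * dir_term

# Example: random fields standing in for network output and ground truth.
f_pred = torch.randn(2, 2, 32, 32)
f_true = torch.randn(2, 2, 32, 32)
print(traction_loss(f_pred, f_true))
```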
Wang et al. [136] extended this direction by training a 3D U-Net on synthetic cellular traction data to solve the ill-posed inverse problem in FTTC, producing more accurate traction vectors than traditional implementations. In a similar study, Kratz and colleagues [144] utilized a custom CNN with synthetic data to simplify computations associated with Bayesian FTTC (BFTTC) [134], thereby contributing to reducing error propagation in inverse modeling.
SubramanianBalachandar et al. [145] combined several classical ML models (Stepwise Linear Regression (SLR) [146] and Quadratic SVM [147]) to predict cellular tractions and intercellular stresses in human umbilical vein endothelial cells (HUVECs) treated with connexin-43-disrupting drugs. They demonstrated that morphology and drug dosage can serve as predictors of mechanical stress, achieving R2 values of 0.85 for traction and 0.93 for intercellular stresses. The authors additionally validated their approach across several drug concentrations.

7.3. Generative and 3D Force Prediction

In addition to conventional 2D traction estimation, several research groups have explored generative and 3D force prediction. Duan et al. [148] proposed a specialized CNN architecture that predicts full 3D traction maps from only two fluorescent bead images per cell, omitting the need for further analysis with complex mathematical models. Li et al. [149] applied a GAN to generate traction maps from phase-contrast images combined with phase-field model (PFM) simulations, thereby eliminating the need for fluorescent displacement fields and greatly streamlining traction-force measurement. Schmitt et al. [150] integrated zyxin signals with three distinct NNs (U-Net-based, physics-constrained, and physics-agnostic) to predict traction forces across three adherent cell lines, demonstrating how physics-informed models improve generalization.

7.4. Cell Motility Prediction with Reinforcement Learning

AI methods have also been extended to model cell motility and migration. Wang et al. [151] applied RL with a residual CNN to model cell tracking as a linear assignment problem. Formally, given a cost matrix $C$ representing distances between detections across frames, the objective is:

$$\min_{\pi \in P} \sum_i C_{i, \pi(i)},$$

where $\pi$ is the assignment permutation. RL improves assignment robustness under noisy trajectories.
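The assignment step itself can be solved exactly with the Hungarian algorithm, as in the short SciPy sketch below; the centroid coordinates are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Linking detections between two frames as a linear assignment problem.
# Cost: pairwise Euclidean distances between cell centroids (hypothetical).
frame_a = np.array([[10.0, 12.0], [40.0, 41.0], [70.0, 15.0]])
frame_b = np.array([[11.5, 13.0], [69.0, 16.0], [41.0, 40.0]])

cost = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=2)

# Hungarian algorithm: finds the permutation pi minimizing sum_i C[i, pi(i)].
row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(row_ind, col_ind)), cost[row_ind, col_ind].sum())
```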
In later work, Wang et al. [152] introduced a hierarchical deep RL model trained on phase-contrast and nuclear images to predict cell migration patterns, capturing both short- and long-term motility dynamics.

7.5. Synthesis

AI-based approaches to traction force and motility prediction have evolved from wrinkle-based CNNs and GANs to physics-informed and morphology-aware models that balance accuracy and biological plausibility. Generative and 3D NN frameworks extend these predictions into volumetric space, while RL introduces decision-making strategies for motility modeling. Compared with purely image-driven CNN models, approaches that encode mechanical structure or spatiotemporal dependencies, such as physics-guided networks or RL-based trajectory models, show improved robustness, especially in tasks involving deformation or dynamic force propagation. Remaining challenges include limited datasets, the interpretability of force–displacement relationships, and the high computational cost of training 3D models. Advances in this field must focus on capturing the interactions of physical forces and biochemical markers to quantify cell behavior in a way that generalizes across diverse cell lines of various shapes and sizes. Table 4 presents works related to traction forces and motility prediction with AI methods.
Our discussion up to this point of this review is summarized in Figure 2.

8. Additional Applications Relevant to Mechanobiology

In addition to mammalian cell analysis, AI-assisted methods have been applied to a diverse range of biological and biomechanical problems relevant to cellular and mechanobiology. One critical example is the work of Wen et al. [153], who developed a 3D U-Net pipeline for segmenting and tracking neurons in C. elegans. The authors utilized whole-brain imaging and achieved an accuracy of over 98%, demonstrating robustness even in the presence of noise and missing data. Similarly, Häring et al. [154] utilized a cycle-consistent GAN (CycleGAN) [155] to automate the segmentation of epithelial tissues in Drosophila embryos using unpaired image-mask datasets, outperforming traditional tools on low-quality or mutant tissue images. The significance of their work translates directly to mammalian cell imaging, aiding applications where high-quality images are scarce.
In pathology, Mahmood et al. [156] combined cGANs with synthetic augmentation techniques to enhance nuclei segmentation in Hematoxylin and Eosin (H&E)-stained slides, resulting in a 29% improvement over prior tools. Coudray et al. [157] applied InceptionV3 to the Cancer Genome Atlas Program (TCGA) slides, distinguishing lung cancer subtypes with an AUC score of 0.97 and predicting gene mutations (epidermal growth factor receptor (EGFR), Kirsten rat sarcoma virus (KRAS), tumor protein p53 (TP53)) directly from images, highlighting the potential of DL to aid molecular testing. Oh et al. [158] introduced CNN-Peaks, a DL model that enhances peak detection in chromatin immunoprecipitation sequencing (ChIP-seq) and other sequencing assays, outperforming traditional algorithms such as model-based analysis of ChIP-Seq (MACS2) [159] and hypergeometric optimization of motif enrichment (HOMER) [160].
Several studies highlight the role of AI in biomechanics. Giolando et al. [161] developed AI-dente, which utilizes forward and inverse NNs to extract mechanical parameters from nanoindentation data with an error of less than 1%, thereby reducing computation time from months to minutes. Stashko et al. [162] developed the spatially transformed inferential force map (STIFMap), a CNN-based method for mapping stromal stiffness in breast cancer tissue, which links mechanical heterogeneity to epithelial–mesenchymal transition (EMT) markers and poor outcomes. Hassanlou et al. [163] introduced a CNN for label-free lipid droplet counting during mesenchymal stem cell (MSC) adipogenesis, achieving 94% accuracy while eliminating the need for staining. Additionally, Haider et al. [163] integrated a poroelastic model with artificial neural networks (ANNs) and SVMs to predict Young's modulus and viscosity in cancer and non-cancer cells. The authors achieved a near-perfect R2 score of 0.999 and very high classification AUC scores, ranging from 0.90 to 0.95. Related advances in active tissue mechanics include differentiable traction inversion [164], RL-based morphogenetic control [165], and AI-assisted AFM mapping [166,167]. These works further demonstrate how learning-based methods can integrate biophysical cues into mechanical modeling.

Synthesis

The studies presented in this section demonstrate how AI has expanded into areas beyond predicting cellular behaviors, encompassing whole-organism neuroimaging, developmental biology, pathology, epigenomics, and tissue mechanics. In addition, they highlight how AI can complement classical modeling by predicting effective mechanical parameters (Young’s modulus, viscosity, or stromal stiffness) from imaging or indentation-derived data. Generative models enable learning from synthetic data, while CNNs and ANNs provide robust prediction across histological and biomechanical contexts. However, challenges persist, as models are often trained on narrow datasets (for example, specific tissues or imaging conditions), raising concerns about generalizability. Additionally, many models remain computationally intensive, hindering their large-scale adoption. Table 5 shows a summary of research projects pertinent to this section.

9. Limitations and Prospects

One of the primary strengths of AI models is their flexibility; architectures designed for one domain can often be adapted for use in another. For example, image classification networks developed for natural images can be retrained to identify biological cells, and the transformer architecture, initially designed for natural language processing, now achieves state-of-the-art performance in vision tasks (ViT, shifted-window transformer (Swin-T) [169]). Yet this generalizability is constrained in practice by data availability. Training modern deep networks, particularly transformers, requires vast, diverse datasets. In cell biology, generating such datasets is expensive due to the experimental burden of imaging and assays. While transfer learning provides a partial solution, its success depends on the availability of large, domain-relevant pretrained models, which are not readily available in biology.
From a computational biophysics perspective, AI provides complementary tools that aid experimental measurements rather than replacing mechanical modeling. In cellular mechano-biology, tasks such as traction estimation, stiffness inference, or cytoskeletal structural analysis traditionally rely on computational methods stemming from biophysical principles. AI models accelerate these workflows by extracting mechanically relevant information directly from data such as images, thereby connecting experimental data with downstream physical interpretation without requiring explicit modeling.
Returning to the topic of data scarcity, public datasets such as LiveCell [170], Cellpose, and TissueNet [171] have helped immensely; however, they are relatively small and limited in cell-type coverage. Heterogeneity in cellular and nuclear morphology across biological systems makes it challenging to build universally generalizable models. As a result, models pretrained on natural images often require extensive fine-tuning, and their performance may degrade when applied to images of a different cell line. Compounding this issue, the field faces risks associated with data quality, including mislabeled or fraudulent data [172], which can undermine the trustworthiness of models.
Another major challenge is overfitting. This problem applies to AI in general rather than to cell biology specifically. Many models perform exceptionally well on the dataset around which they were built but fail to reproduce results in the real world. A related issue is interpretability. While tools such as gradient-weighted class activation mapping (Grad-CAM) [173] and local interpretable model-agnostic explanations (LIME) [174] offer some insight into what deep models learn, detailed explanations of NN training and inference processes remain elusive. Improving interpretability is essential, not only for scientific acceptance but also for uncovering biologically meaningful features.
Reproducibility remains a concern. Biological systems are inherently stochastic, and AI models themselves involve stochastic optimization. Together, these two layers of variability make results difficult to replicate exactly across laboratories. Addressing this will require standardized benchmarks, reproducible pipelines, and possibly physics-informed or hybrid modeling that incorporates mechanistic processes into model training.
In addition, we note the growing interest in physics-informed modeling approaches, including physics-informed neural network (PINN)-style [175] formulations, differentiable mechanical solvers [176], and hybrid finite element/deep learning schemes [177]. These methods incorporate physical priors into model training, helping to enforce consistency with known biophysical principles. Although important, such approaches are outside the primary scope of this review; they represent a promising complement to the predominantly data-driven studies discussed here.
Finally, several of the models discussed here implicitly approximate classical mechanical relations (stress–strain mappings, viscoelastic behavior). For example, traction inference can be interpreted as a surrogate inverse elasticity problem, while stiffness or modulus prediction corresponds to estimating mechanical quantities from imaging cues. A full treatment of constitutive theory is beyond the scope of this review, but acknowledging these connections helps to establish the link between AI predictions and mechanobiological principles.
Figure 3 summarizes the current obstacles researchers are facing integrating AI into their work and what the future may hold for these methods in the ever-evolving field of biology.

10. Conclusions

AI has transformed cellular biology and mechanobiology by enabling high-throughput, automated analysis of complex datasets and revealing patterns that are inaccessible to manual inspection. Applications now span morphological analysis, biomarker detection, segmentation, traction force estimation, and motility prediction, with growing use of generative and physics-informed modeling.
Nevertheless, the field faces persistent challenges: data scarcity, overfitting, limited generalization, and interpretability remain barriers to widespread deployment. Progress is likely to come from hybrid approaches that integrate data-driven learning with biomechanical and biophysical estimations, as well as from the development of large, diverse, and well-curated biological datasets.
Looking forward, AI is projected to act as a complementary tool in mechanobiology by accelerating traction inference, stiffness estimation, and other relevant mechanical measurements from mono- or multimodal datasets. Further progress will depend on bridging experimental observations with physics-informed learning frameworks enabling models that are biology-relevant and physically grounded.

Author Contributions

Conceptualization, S.M.; Literature review, S.M. and M.S.H.; writing—original draft preparation, S.M., M.S.H. and S.L.S.; writing—review and editing, S.M. and M.S.H.; visualization, S.M. and M.S.H.; supervision, S.M. and M.S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jamialahmadi, H.; Khalili-Tanha, G.; Nazari, E.; Rezaei-Tavirani, M. Artificial intelligence and bioinformatics: A journey from traditional techniques to smart approaches. Gastroenterol. Hepatol. Bed Bench 2024, 17, 241. [Google Scholar] [CrossRef] [PubMed]
  2. Ballard, J.L.; Wang, Z.; Li, W.; Shen, L.; Long, Q. Deep learning-based approaches for multi-omics data integration and analysis. BioData Min. 2024, 17, 38. [Google Scholar] [CrossRef] [PubMed]
  3. Qiu, J.; Li, L.; Sun, J.; Peng, J.; Shi, P.; Zhang, R.; Dong, Y.; Lam, K.; Lo, F.P.W.; Xiao, B.; et al. Large AI Models in Health Informatics: Applications, Challenges, and the Future. IEEE J. Biomed. Health Inform. 2023, 27, 6074–6087. [Google Scholar] [CrossRef] [PubMed]
  4. Chowdhury, F.; Huang, B.; Wang, N. Cytoskeletal prestress: The cellular hallmark in mechanobiology and mechanomedicine. Cytoskeleton 2021, 78, 249–276. [Google Scholar] [CrossRef]
  5. Ouyang, M.; Hu, Y.; Chen, W.; Li, H.; Ji, Y.; Qiu, L.; Zhu, L.; Ji, B.; Bu, B.; Deng, L. Cell mechanics regulates the dynamic anisotropic remodeling of fibril matrix at large scale. Research 2023, 6, 0270. [Google Scholar] [CrossRef]
  6. Chowdhury, F.; Huang, B.; Wang, N. Forces in stem cells and cancer stem cells. Cells Dev. 2022, 170, 203776. [Google Scholar] [CrossRef]
  7. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  8. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  9. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32. [Google Scholar] [CrossRef]
  10. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  11. Schneider, J.; Meske, C.; Kuss, P. Foundation models: A new paradigm for artificial intelligence. Bus. Inf. Syst. Eng. 2024, 66, 221–231. [Google Scholar] [CrossRef]
  12. Yang, L.; Zhang, Z.; Song, Y.; Hong, S.; Xu, R.; Zhao, Y.; Zhang, W.; Cui, B.; Yang, M.-H. Diffusion models: A comprehensive survey of methods and applications. ACM Comput. Surv. 2023, 56, 1–39. [Google Scholar] [CrossRef]
  13. Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S. Language models are few-shot learners. arXiv 2020. [Google Scholar] [CrossRef]
  14. Manning, C.D. Human language understanding & reasoning. Daedalus 2022, 151, 127–138. [Google Scholar] [CrossRef]
  15. Senior, A.W.; Evans, R.; Jumper, J.; Kirkpatrick, J.; Sifre, L.; Green, T.; Qin, C.; Žídek, A.; Nelson, A.W.; Bridgland, A. Improved protein structure prediction using potentials from deep learning. Nature 2020, 577, 706–710. [Google Scholar] [CrossRef]
  16. Jumper, J.; Evans, R.; Pritzel, A.; Green, T.; Figurnov, M.; Ronneberger, O.; Tunyasuvunakool, K.; Bates, R.; Žídek, A.; Potapenko, A. Highly accurate protein structure prediction with AlphaFold. Nature 2021, 596, 583–589. [Google Scholar] [CrossRef] [PubMed]
  17. Abramson, J.; Adler, J.; Dunger, J.; Evans, R.; Green, T.; Pritzel, A.; Ronneberger, O.; Willmore, L.; Ballard, A.J.; Bambrick, J. Accurate structure prediction of biomolecular interactions with AlphaFold 3. Nature 2024, 630, 493–500. [Google Scholar] [CrossRef] [PubMed]
  18. Meng, Y.; Zhang, Z.; Zhou, C.; Tang, X.; Hu, X.; Tian, G.; Yang, J.; Yao, Y. Protein structure prediction via deep learning: An in-depth review. Front. Pharmacol. 2025, 16, 1498662. [Google Scholar] [CrossRef] [PubMed]
  19. Jiang, Q.; Sun, G.; Li, T.; Tang, J.; Xia, W.; Wang, Y.; Jiang, L.; Liang, R. AutoMA: Automated Generation of Multi-level Annotations for Time Series Visualization. In Proceedings of the 2025 IEEE 18th Pacific Visualization Conference (PacificVis), Taipei, Taiwan, 22–25 April 2025; pp. 80–90. [Google Scholar]
  20. Fallahpour, A.; Magnuson, A.; Gupta, P.; Ma, S.; Naimer, J.; Shah, A.; Duan, H.; Ibrahim, O.; Goodarzi, H.; Maddison, C.J. BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model. arXiv 2025. [Google Scholar] [CrossRef]
  21. Tsou, C.-H.; Ozery-Flato, M.; Barkan, E.; Mahajan, D.; Shapira, B. BioVERSE: Representation Alignment of Biomedical Modalities to LLMs for Multi-Modal Reasoning. arXiv 2025. [Google Scholar] [CrossRef]
  22. Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159. [Google Scholar] [CrossRef]
  23. Cristianini, N.; Ricci, E. Support Vector Machines. In Encyclopedia of Algorithms; Kao, M.-Y., Ed.; Springer: Boston, MA, USA, 2008; pp. 928–932. [Google Scholar]
  24. Castro, H.M.; Ferreira, J.C. Linear and logistic regression models: When to use and how to interpret them? J. Bras. Pneumol. 2023, 48, e20220439. [Google Scholar] [CrossRef]
  25. Lowd, D.; Domingos, P. Naive Bayes models for probability estimation. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 7–11 August 2005; pp. 529–536. [Google Scholar]
  26. Mucherino, A.; Papajorgji, P.J.; Pardalos, P.M. k-Nearest Neighbor Classification. In Data Mining in Agriculture; Springer: New York, NY, USA, 2009; pp. 83–106. [Google Scholar]
  27. Fürnkranz, J. Decision Tree. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2010; pp. 263–267. [Google Scholar]
  28. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  29. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
  30. Popescu, M.-C.; Balas, V.E.; Perescu-Popescu, L.; Mastorakis, N. Multilayer perceptron and neural networks. WSEAS Trans. Cir. Syst. 2009, 8, 579–588. Available online: https://dl.acm.org/doi/abs/10.5555/1639537.1639542 (accessed on 10 December 2025).
  31. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar] [CrossRef]
  32. Schmidt, R.M. Recurrent Neural Networks (RNNs): A gentle Introduction and Overview. arXiv 2019. [Google Scholar] [CrossRef]
  33. Vaswani, A.; Shazeer, N.M.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  34. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2020. [Google Scholar] [CrossRef]
  35. Jin, X.; Han, J. K-Means Clustering. In Encyclopedia of Machine Learning; Sammut, C., Webb, G.I., Eds.; Springer: Boston, MA, USA, 2010; pp. 563–564. [Google Scholar]
  36. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, Oregon, 2–4 August 1996; pp. 226–231. [Google Scholar]
  37. Reynolds, D. Gaussian Mixture Models. In Encyclopedia of Biometrics; Li, S.Z., Jain, A., Eds.; Springer: Boston, MA, USA, 2009; pp. 659–663. [Google Scholar]
  38. Maćkiewicz, A.; Ratajczak, W. Principal components analysis (PCA). Comput. Geosci. 1993, 19, 303–342. [Google Scholar] [CrossRef]
  39. Maaten, L.V.D.; Hinton, G.E. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  40. Li, P.; Pei, Y.; Li, J. A comprehensive survey on design and application of autoencoder in deep learning. Appl. Soft Comput. 2023, 138, 110176. [Google Scholar] [CrossRef]
  41. Kingma, D.P.; Welling, M. An Introduction to Variational Autoencoders. Found. Trends Mach. Learn. 2019, 12, 307–392. [Google Scholar] [CrossRef]
  42. Shahapure, K.R.; Nicholas, C. Cluster quality analysis using silhouette score. In Proceedings of the 2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA), Sydney, Australia, 6–9 October 2020; pp. 747–748. [Google Scholar]
  43. Watkins, C.J.C.H.; Dayan, P. Q-learning. Mach. Learn. 1992, 8, 279–292. [Google Scholar] [CrossRef]
  44. Yang, L.; Jiang, D.; Guo, F.; Fu, M. The State-Action-Reward-State-Action Algorithm in Spatial Prisoner’s Dilemma Game. arXiv 2024. [Google Scholar] [CrossRef]
  45. Hafiz, A.M.; Bhat, G.M. Deep Q-Network Based Multi-agent Reinforcement Learning with Binary Action Agents. arXiv 2020. [Google Scholar] [CrossRef]
  46. Silver, D.; Schrittwieser, J.; Simonyan, K.; Antonoglou, I.; Huang, A.; Guez, A.; Hubert, T.; Baker, L.; Lai, M.; Bolton, A.; et al. Mastering the game of Go without human knowledge. Nature 2017, 550, 354–359. [Google Scholar] [CrossRef]
  47. Schindelin, J.; Arganda-Carreras, I.; Frise, E.; Kaynig, V.; Longair, M.; Pietzsch, T.; Preibisch, S.; Rueden, C.; Saalfeld, S.; Schmid, B. Fiji: An open-source platform for biological-image analysis. Nat. Methods 2012, 9, 676–682. [Google Scholar] [CrossRef]
  48. Qin, Q.; Laub, S.; Shi, Y.; Ouyang, M.; Peng, Q.; Zhang, J.; Wang, Y.; Lu, S. Fluocell for ratiometric and high-throughput live-cell image visualization and quantitation. Front. Phys. 2019, 7, 154. [Google Scholar] [CrossRef]
  49. Khotanzad, A.; Hong, Y.H. Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 489–497. [Google Scholar] [CrossRef]
  50. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  51. Neto, N.G.B.; O’Rourke, S.A.; Zhang, M.; Fitzgerald, H.K.; Dunne, A.; Monaghan, M.G. Non-invasive classification of macrophage polarisation by 2P-FLIM and machine learning. eLife 2022, 11, e77373. [Google Scholar] [CrossRef]
  52. Rostam, H.M.; Reynolds, P.M.; Alexander, M.R.; Gadegaard, N.; Ghaemmaghami, A.M. Image based Machine Learning for identification of macrophage subsets. Sci. Rep. 2017, 7, 3521. [Google Scholar] [CrossRef]
  53. McInnes, L.; Healy, J.; Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv 2018. [Google Scholar] [CrossRef]
  54. Dürr, O.; Sick, B. Single-cell phenotype classification using deep convolutional neural networks. J. Biomol. Screen. 2016, 21, 998–1003. [Google Scholar] [CrossRef]
  55. Bray, M.-A.; Gustafsdottir, S.M.; Rohban, M.H.; Singh, S.; Ljosa, V.; Sokolnicki, K.L.; Bittker, J.A.; Bodycombe, N.E.; Dančík, V.; Hasaka, T.P. A dataset of images and morphological profiles of 30 000 small-molecule treatments using the Cell Painting assay. Gigascience 2017, 6, giw014. [Google Scholar] [CrossRef]
  56. Tharwat, A.; Gaber, T.; Ibrahim, A.; Hassanien, A.E. Linear discriminant analysis: A detailed tutorial. AI Commun. 2017, 30, 169–190. [Google Scholar] [CrossRef]
  57. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  58. Kim, H.; Park, K.; Yon, J.-M.; Kim, S.W.; Lee, S.Y.; Jeong, I.; Jang, J.; Lee, S.; Cho, D.-W. Predicting multipotency of human adult stem cells derived from various donors through deep learning. Sci. Rep. 2022, 12, 21614. [Google Scholar] [CrossRef]
  59. Wong, K.S.; Zhong, X.; Low, C.S.L.; Kanchanawong, P. Self-supervised classification of subcellular morphometric phenotypes reveals extracellular matrix-specific morphological responses. Sci. Rep. 2022, 12, 15329. [Google Scholar] [CrossRef]
  60. Godinez, W.J.; Hossain, I.; Lazic, S.E.; Davies, J.W.; Zhang, X. A multi-scale convolutional neural network for phenotyping high-content cellular images. Bioinformatics 2017, 33, 2010–2019. [Google Scholar] [CrossRef]
  61. Zhu, D.; Shen, X.; Mosbach, M.; Stephan, A.; Klakow, D. Weaker than you think: A critical look at weakly supervised learning. arXiv 2023. [Google Scholar] [CrossRef]
  62. Kraus, O.Z.; Ba, J.L.; Frey, B.J. Classifying and segmenting microscopy images with deep multiple instance learning. Bioinformatics 2016, 32, i52–i59. [Google Scholar] [CrossRef]
  63. Ilse, M.; Tomczak, J.; Welling, M. Attention-based deep multiple instance learning. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 2127–2136. [Google Scholar]
  64. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv 2013. [Google Scholar] [CrossRef]
  65. Xu, X.; Xiao, Z.; Zhang, F.; Wang, C.; Wei, B.; Wang, Y.; Cheng, B.; Jia, Y.; Li, Y.; Li, B. CellVisioner: A Generalizable Cell Virtual Staining Toolbox based on Few-Shot Transfer Learning for Mechanobiological Analysis. Research 2023, 6, 0285. [Google Scholar] [CrossRef]
  66. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  67. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014. [Google Scholar] [CrossRef]
  68. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2020, 109, 43–76. [Google Scholar] [CrossRef]
  69. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  70. Wang, Z.J.; Walsh, A.J.; Skala, M.C.; Gitter, A. Classifying T cell activity in autofluorescence intensity images with convolutional neural networks. J. Biophotonics 2020, 13, e201960050. [Google Scholar] [CrossRef]
  71. Palma, A.; Theis, F.J.; Lotfollahi, M. Predicting cell morphological responses to perturbations using generative modeling. Nat. Commun. 2025, 16, 505. [Google Scholar] [CrossRef]
  72. Jing, Y.; Yang, Y.; Feng, Z.; Ye, J.; Yu, Y.; Song, M. Neural style transfer: A review. IEEE Trans. Vis. Comput. Graph. 2019, 26, 3365–3385. [Google Scholar] [CrossRef]
  73. Aida, S.; Okugawa, J.; Fujisaka, S.; Kasai, T.; Kameda, H.; Sugiyama, T. Deep learning of cancer stem cell morphology using conditional generative adversarial networks. Biomolecules 2020, 10, 931. [Google Scholar] [CrossRef]
  74. Sullivan, D.P.; Winsnes, C.F.; Åkesson, L.; Hjelmare, M.; Wiking, M.; Schutten, R.; Campbell, L.; Leifsson, H.; Rhodes, S.; Nordgren, A. Deep learning is combined with massive-scale citizen science to improve large-scale image classification. Nat. Biotechnol. 2018, 36, 820–828. [Google Scholar] [CrossRef]
  75. Natarajan, S.; Mathur, S.; Sidheekh, S.; Stammer, W.; Kersting, K. Human-in-the-loop or AI-in-the-loop? Automate or Collaborate? In Proceedings of the AAAI Conference on Artificial Intelligence, Philadelphia, PA, USA, 25 February–4 March 2025; pp. 28594–28600. [Google Scholar]
  76. Wu, Z.; Feng, Y.; Bi, R.; Liu, Z.; Niu, Y.; Jin, Y.; Li, W.; Chen, H.; Shi, Y.; Du, Y. Image-based evaluation of single-cell mechanics using deep learning. Cell Regen. 2025, 14, 21. [Google Scholar] [CrossRef]
  77. Buggenthin, F.; Buettner, F.; Hoppe, P.S.; Endele, M.; Kroiss, M.; Strasser, M.; Schwarzfischer, M.; Loeffler, D.; Kokkaliaris, K.D.; Hilsenbeck, O. Prospective identification of hematopoietic lineage choice by deep learning. Nat. Methods 2017, 14, 403–406. [Google Scholar] [CrossRef]
  78. Zhu, Y.; Huang, R.; Wu, Z.; Song, S.; Cheng, L.; Zhu, R. Deep learning-based predictive identification of neural stem cell differentiation. Nat. Commun. 2021, 12, 2614. [Google Scholar] [CrossRef]
  79. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  80. Sommer, C.; Hoefler, R.; Samwer, M.; Gerlich, D.W. A deep learning and novelty detection framework for rapid phenotyping in high-content screening. Mol. Biol. Cell 2017, 28, 3428–3436. [Google Scholar] [CrossRef]
  81. Wang, Y.; Mao, H.; Yi, Z. Stem cell motion-tracking by using deep neural networks with multi-output. Neural Comput. Appl. 2019, 31, 3455–3467. [Google Scholar] [CrossRef]
  82. Yang, X.; Yang, Y.; Zhang, Z.; Li, M. Deep learning image recognition-assisted atomic force microscopy for single-cell efficient mechanics in co-culture environments. Langmuir 2023, 40, 837–852. [Google Scholar] [CrossRef]
  83. Bonnevie, E.D.; Ashinsky, B.G.; Dekky, B.; Volk, S.W.; Smith, H.E.; Mauck, R.L. Cell morphology and mechanosensing can be decoupled in fibrous microenvironments and identified using artificial neural networks. Sci. Rep. 2021, 11, 5950. [Google Scholar] [CrossRef]
  84. Piccinini, F.; Balassa, T.; Szkalisity, A.; Molnar, C.; Paavolainen, L.; Kujala, K.; Buzas, K.; Sarazova, M.; Pietiainen, V.; Kutay, U.; et al. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data. Cell Syst. 2017, 4, 651–655.e5. [Google Scholar] [CrossRef]
  85. Jin, J.; Schorpp, K.; Samaga, D.; Unger, K.; Hadian, K.; Stockwell, B.R. Machine Learning Classifies Ferroptosis and Apoptosis Cell Death Modalities with TfR1 Immunostaining. ACS Chem. Biol. 2022, 17, 654–660. [Google Scholar] [CrossRef]
  86. Boland, M.V.; Murphy, R.F. A neural network classifier capable of recognizing the patterns of all major subcellular structures in fluorescence microscope images of HeLa cells. Bioinformatics 2001, 17, 1213–1223. [Google Scholar] [CrossRef]
  87. Mohammad, S.; Roy, A.; Karatzas, A.; Sarver, S.L.; Anagnostopoulos, I.; Chowdhury, F. Deep Learning Powered Identification of Differentiated Early Mesoderm Cells from Pluripotent Stem Cells. Cells 2024, 13, 534. [Google Scholar] [CrossRef]
  88. He, L.; Li, M.; Wang, X.; Wu, X.; Yue, G.; Wang, T.; Zhou, Y.; Lei, B.; Zhou, G. Morphology-based deep learning enables accurate detection of senescence in mesenchymal stem cell cultures. BMC Biol. 2024, 22, 1. [Google Scholar] [CrossRef]
  89. Wang, S.; Han, J.; Huang, J.; Islam, K.; Shi, Y.; Zhou, Y.; Kim, D.; Zhou, J.; Lian, Z.; Liu, Y.; et al. Deep learning-based predictive classification of functional subpopulations of hematopoietic stem cells and multipotent progenitors. Stem Cell Res. Ther. 2024, 15, 74. [Google Scholar] [CrossRef]
  90. Tipatet, K.S.; Davison-Gates, L.; Tewes, T.J.; Fiagbedzi, E.K.; Elfick, A.; Neu, B.; Downes, A. Detection of acquired radioresistance in breast cancer cell lines using Raman spectroscopy and machine learning. Analyst 2021, 146, 3709–3716. [Google Scholar] [CrossRef]
  91. Wu, J.; Ji, Y.; Zhao, L.; Ji, M.; Ye, Z.; Li, S. A mass spectrometric analysis method based on PPCA and SVM for early detection of ovarian cancer. Comput. Math. Methods Med. 2016, 2016, 6169249. [Google Scholar] [CrossRef]
  92. Tipping, M.E.; Bishop, C.M. Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B Stat. Methodol. 1999, 61, 611–622. [Google Scholar] [CrossRef]
  93. Al-Tarawneh, S.K.; Bencharit, S. Applications of surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) mass spectrometry in defining salivary proteomic profiles. Open Dent. J. 2009, 3, 74. [Google Scholar] [CrossRef]
  94. Mandrell, C.T.; Holland, T.E.; Wheeler, J.F.; Esmaeili, S.M.A.; Amar, K.; Chowdhury, F.; Sivakumar, P. Machine Learning Approach to Raman Spectrum Analysis of MIA PaCa-2 Pancreatic Cancer Tumor Repopulating Cells for Classification and Feature Analysis. Life 2020, 10, 181. [Google Scholar] [CrossRef]
  95. Shen, B.; Ma, C.; Tang, L.; Wu, Z.; Peng, Z.; Pan, G.; Li, H. Applying machine learning for multi-individual Raman spectroscopic data to identify different stages of proliferating human hepatocytes. iScience 2024, 27, 109500. [Google Scholar] [CrossRef]
  96. Rozova, V.S.; Anwer, A.G.; Guller, A.E.; Es, H.A.; Khabir, Z.; Sokolova, A.I.; Gavrilov, M.U.; Goldys, E.M.; Warkiani, M.E.; Thiery, J.P.; et al. Machine learning reveals mesenchymal breast carcinoma cell adaptation in response to matrix stiffness. PLoS Comput. Biol. 2021, 17, e1009193. [Google Scholar] [CrossRef]
  97. Carpenter, A.E.; Jones, T.R.; Lamprecht, M.R.; Clarke, C.; Kang, I.H.; Friman, O.; Guertin, D.A.; Chang, J.H.; Lindquist, R.A.; Moffat, J. CellProfiler: Image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 2006, 7, R100. [Google Scholar] [CrossRef]
  98. Kandaswamy, C.; Silva, L.M.; Alexandre, L.A.; Santos, J.M. High-content analysis of breast cancer using single-cell deep transfer learning. J. Biomol. Screen. 2016, 21, 252–259. [Google Scholar] [CrossRef]
  99. Chappell, M.; Humphreys, M.S. An auto-associative neural network for sparse representations: Analysis and application to models of recognition and cued recall. Psychol. Rev. 1994, 101, 103. [Google Scholar] [CrossRef]
  100. Forslid, G.; Wieslander, H.; Bengtsson, E.; Wählby, C.; Hirsch, J.M.; Stark, C.R.; Sadanandan, S.K. Deep Convolutional Neural Networks for Detecting Cellular Changes Due to Malignancy. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 82–89. [Google Scholar]
  101. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  102. Mohammad, S.; Amar, K.; Chowdhury, F. Hybrid AI models allow label-free identification and classification of pancreatic tumor repopulating cell population. Biochem. Biophys. Res. Commun. 2023, 677, 126–131. [Google Scholar] [CrossRef]
  103. Sirinukunwattana, K.; Raza, S.E.A.; Tsang, Y.-W.; Snead, D.R.; Cree, I.A.; Rajpoot, N.M. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1196–1206. [Google Scholar] [CrossRef]
  104. Banerjee, K.; Gupta, R.R.; Vyas, K.; Mishra, B. Exploring alternatives to softmax function. In Proceedings of the 2nd International Conference on Deep Learning Theory and Applications—DeLTA, Virtual, 7–9 July 2021. [Google Scholar] [CrossRef]
  105. Berryman, S.; Matthews, K.; Lee, J.H.; Duffy, S.P.; Ma, H. Image-based phenotyping of disaggregated cells using deep learning. Commun. Biol. 2020, 3, 674. [Google Scholar] [CrossRef]
  106. Foo, K.Y.; Shaddy, B.; Murgoitio-Esandi, J.; Hepburn, M.S.; Li, J.; Mowla, A.; Sanderson, R.W.; Vahala, D.; Amos, S.E.; Choi, Y.S. Tumor spheroid elasticity estimation using mechano-microscopy combined with a conditional generative adversarial network. Comput. Methods Programs Biomed. 2024, 255, 108362. [Google Scholar] [CrossRef]
  107. Van Valen, D.A.; Kudo, T.; Lane, K.M.; Macklin, D.N.; Quach, N.T.; DeFelice, M.M.; Maayan, I.; Tanouchi, Y.; Ashley, E.A.; Covert, M.W. Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS Comput. Biol. 2016, 12, e1005177. [Google Scholar] [CrossRef]
  108. Sadanandan, S.K.; Ranefall, P.; Le Guyader, S.; Wählby, C. Automated training of deep convolutional neural networks for cell segmentation. Sci. Rep. 2017, 7, 7860. [Google Scholar] [CrossRef]
  109. Stringer, C.; Wang, T.; Michaelos, M.; Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 2021, 18, 100–106. [Google Scholar] [CrossRef]
  110. Pachitariu, M.; Stringer, C. Cellpose 2.0: How to train your own model. Nat. Methods 2022, 19, 1634–1641. [Google Scholar] [CrossRef]
  111. Griebel, M.; Segebarth, D.; Stein, N.; Schukraft, N.; Tovote, P.; Blum, R.; Flath, C.M. Deep learning-enabled segmentation of ambiguous bioimages with deepflash2. Nat. Commun. 2023, 14, 1679. [Google Scholar] [CrossRef]
  112. Liu, Z.; Mao, H.; Wu, C.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11966–11976. [Google Scholar] [CrossRef]
  113. Wiggins, L.; Lord, A.; Murphy, K.L.; Lacy, S.E.; O’Toole, P.J.; Brackenbury, W.J.; Wilson, J. The CellPhe toolkit for cell phenotyping using time-lapse imaging and pattern recognition. Nat. Commun. 2023, 14, 1854. [Google Scholar] [CrossRef]
  114. Arzt, M.; Deschamps, J.; Schmied, C.; Pietzsch, T.; Schmidt, D.; Tomancak, P.; Haase, R.; Jug, F. LABKIT: Labeling and segmentation toolkit for big image data. Front. Comput. Sci. 2022, 4, 777728. [Google Scholar] [CrossRef]
  115. Bannon, D.; Moen, E.; Schwartz, M.; Borba, E.; Kudo, T.; Greenwald, N.; Vijayakumar, V.; Chang, B.; Pao, E.; Osterman, E. DeepCell Kiosk: Scaling deep learning–enabled cellular image analysis with Kubernetes. Nat. Methods 2021, 18, 43–45. [Google Scholar] [CrossRef]
  116. Tsai, H.-F.; Gajda, J.; Sloan, T.F.; Rares, A.; Shen, A.Q. Usiigaci: Instance-aware cell tracking in stain-free phase contrast microscopy enabled by machine learning. SoftwareX 2019, 9, 230–237. [Google Scholar] [CrossRef]
  117. Robitaille, M.C.; Byers, J.M.; Christodoulides, J.A.; Raphael, M.P. Self-supervised machine learning for live cell imagery segmentation. Commun. Biol. 2022, 5, 1162. [Google Scholar] [CrossRef]
  118. Liu, X.; Zhang, F.; Hou, Z.; Mian, L.; Wang, Z.; Zhang, J.; Tang, J. Self-supervised learning: Generative or contrastive. IEEE Trans. Knowl. Data Eng. 2021, 35, 857–876. [Google Scholar] [CrossRef]
  119. Ghaznavi, A.; Rychtáriková, R.; Císař, P.; Ziaei, M.M.; Štys, D. Symmetry Breaking in the U-Net: Hybrid Deep-Learning Multi-Class Segmentation of HeLa Cells in Reflected Light Microscopy Images. Symmetry 2024, 16, 227. [Google Scholar] [CrossRef]
  120. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014. [Google Scholar] [CrossRef]
  121. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  122. Hollandi, R.; Szkalisity, A.; Toth, T.; Tasnadi, E.; Molnar, C.; Mathe, B.; Grexa, I.; Molnar, J.; Balind, A.; Gorbe, M. nucleAIzer: A parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst. 2020, 10, 453–458.e6. [Google Scholar] [CrossRef]
  123. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  124. Jian, Z.; Song, T.; Zhang, Z.; Ai, Z.; Zhao, H.; Tang, M.; Liu, K. An improved nested U-net network for fluorescence in situ hybridization cell image segmentation. Sensors 2024, 24, 928. [Google Scholar] [CrossRef]
  125. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2018; Volume 11045, pp. 3–11. [Google Scholar]
  126. Shakoori, A.R. Fluorescence in situ hybridization (FISH) and its applications. In Chromosome Structure and Aberrations; Springer: New Delhi, India, 2017; pp. 343–367. [Google Scholar]
  127. Pelt, D.M.; Sethian, J.A. A mixed-scale dense convolutional neural network for image analysis. Proc. Natl. Acad. Sci. USA 2018, 115, 254–259. [Google Scholar] [CrossRef]
  128. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  129. Chen, J.; Ding, L.; Viana, M.P.; Lee, H.; Sluezwski, M.F.; Morris, B.; Hendershott, M.C.; Yang, R.; Mueller, I.A.; Rafelski, S.M. The Allen Cell and Structure Segmenter: A new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images. bioRxiv 2018. [Google Scholar] [CrossRef]
  130. Amat, F.; Lemon, W.; Mossing, D.P.; McDole, K.; Wan, Y.; Branson, K.; Myers, E.W.; Keller, P.J. Fast, accurate reconstruction of cell lineages from large-scale fluorescence microscopy data. Nat. Methods 2014, 11, 951–958. [Google Scholar] [CrossRef]
  131. Tamajka, M.; Benešová, W. Supervoxel algorithm for medical image processing. In Proceedings of the 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering (ICPCSI), Chennai, India, 21–22 September 2017; pp. 3121–3127. [Google Scholar]
  132. Zhao, W.; Xu, X.; Zhu, Y.; Xu, F. Active contour model based on local and global Gaussian fitting energy for medical image segmentation. Optik 2018, 158, 1160–1169. [Google Scholar] [CrossRef]
  133. Li, L. A spatiotemporal association method for multi-source targets based on joint-similarity constrained clustering. In Proceedings of the 2022 IEEE International Conference on Unmanned Systems (ICUS), Guangzhou, China, 28–30 October 2022; pp. 992–997. [Google Scholar]
  134. Zancla, A.; Mozetic, P.; Orsini, M.; Forte, G.; Rainer, A. A primer to traction force microscopy. J. Biol. Chem. 2022, 298, 101867. [Google Scholar] [CrossRef]
  135. Wang, Y.-l.; Lin, Y.-C. Traction force microscopy by deep learning. Biophys. J. 2021, 120, 3079–3090. [Google Scholar] [CrossRef]
  136. Li, H.; Matsunaga, D.; Matsui, T.S.; Aosaki, H.; Deguchi, S. Image based cellular contractile force evaluation with small-world network inspired CNN: SW-UNet. Biochem. Biophys. Res. Commun. 2020, 530, 527–532. [Google Scholar] [CrossRef]
  137. Pupeikis, R. Revised 2D fast fourier transform. In Proceedings of the 2015 Open Conference of Electrical, Electronic and Information Sciences (eStream), Vilnius, Lithuania, 21 April 2015; pp. 1–4. [Google Scholar]
  138. Li, H.; Matsunaga, D.; Matsui, T.S.; Aosaki, H.; Kinoshita, G.; Inoue, K.; Doostmohammadi, A.; Deguchi, S. Wrinkle force microscopy: A machine learning based approach to predict cell mechanics from images. Commun. Biol. 2022, 5, 361. [Google Scholar] [CrossRef] [PubMed]
  139. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  140. Pielawski, N.; Hu, J.; Strömblad, S.; Wählby, C. In silico prediction of cell traction forces. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 877–881. [Google Scholar]
141. Jégou, S.; Drozdzal, M.; Vázquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar]
  142. Goan, E.; Fookes, C. Bayesian Neural Networks: An Introduction and Survey. In Case Studies in Applied Bayesian Data Science: CIRM Jean-Morlet Chair, Fall 2018; Mengersen, K.L., Pudlo, P., Robert, C.P., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 45–87. [Google Scholar]
  143. Fujiwara, K.; Fujikawa, R.; Suzuki, Y.; Suzuki, K.T.; Sakumura, Y. Cell Morphology and Biophysical Mechanisms-Informed Traction Force Microscopy Using Machine Learning. bioRxiv 2025. [Google Scholar] [CrossRef]
  144. Kratz, F.S.; Möllerherm, L.; Kierfeld, J. Enhancing robustness, precision, and speed of traction force microscopy with machine learning. Biophys. J. 2023, 122, 3489–3505. [Google Scholar] [CrossRef]
  145. SubramanianBalachandar, V.; Islam, M.M.; Steward, R. A machine learning approach to predict cellular mechanical stresses in response to chemical perturbation. Biophys. J. 2023, 122, 3413–3424. [Google Scholar] [CrossRef]
  146. Gómez, R.S.; García, C.G. Stepwise regression revisited. arXiv 2025. [Google Scholar] [CrossRef]
  147. Gu, Y.; Song, Z.; Zhang, L. Faster algorithms for structured linear and kernel support vector machines. arXiv 2023. [Google Scholar] [CrossRef]
  148. Duan, X.; Huang, J. Deep-learning-based 3D cellular force reconstruction directly from volumetric images. Biophys. J. 2022, 121, 2180–2192. [Google Scholar] [CrossRef]
  149. Li, C.; Feng, L.; Park, Y.J.; Yang, J.; Li, J.; Zhang, S. Machine learning traction force maps for contractile cell monolayers. Extrem. Mech. Lett. 2024, 68, 102150. [Google Scholar] [CrossRef]
  150. Schmitt, M.S.; Colen, J.; Sala, S.; Devany, J.; Seetharaman, S.; Caillier, A.; Gardel, M.L.; Oakes, P.W.; Vitelli, V. Machine learning interpretable models of cell mechanics from protein images. Cell 2024, 187, 481–494.e24. [Google Scholar] [CrossRef]
  151. Wang, J.; Su, X.; Zhao, L.; Zhang, J. Deep reinforcement learning for data association in cell tracking. Front. Bioeng. Biotechnol. 2020, 8, 298. [Google Scholar] [CrossRef]
  152. Wang, Z.; Xu, Y.; Wang, D.; Yang, J.; Bao, Z. Hierarchical deep reinforcement learning reveals novel mechanism of cell movement. Nat. Mach. Intell. 2022, 4, 73–83. [Google Scholar] [CrossRef]
  153. Wen, C.; Miura, T.; Voleti, V.; Yamaguchi, K.; Tsutsumi, M.; Yamamoto, K.; Otomo, K.; Fujie, Y.; Teramoto, T.; Ishihara, T. 3DeeCellTracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images. Elife 2021, 10, e59187. [Google Scholar] [CrossRef]
  154. Häring, M.; Großhans, J.; Wolf, F.; Eule, S. Automated segmentation of epithelial tissue using cycle-consistent generative adversarial networks. bioRxiv 2018. [Google Scholar] [CrossRef]
  155. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  156. Mahmood, F.; Borders, D.; Chen, R.J.; McKay, G.N.; Salimian, K.J.; Baras, A.; Durr, N.J. Deep adversarial training for multi-organ nuclei segmentation in histopathology images. IEEE Trans. Med. Imaging 2019, 39, 3257–3267. [Google Scholar] [CrossRef]
  157. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
  158. Oh, D.; Strattan, J.S.; Hur, J.K.; Bento, J.; Urban, A.E.; Song, G.; Cherry, J.M. CNN-Peaks: ChIP-Seq peak detection pipeline using convolutional neural networks that imitate human visual inspection. Sci. Rep. 2020, 10, 7933. [Google Scholar] [CrossRef]
  159. Zhang, Y.; Liu, T.; Meyer, C.A.; Eeckhoute, J.; Johnson, D.S.; Bernstein, B.E.; Nusbaum, C.; Myers, R.M.; Brown, M.; Li, W. Model-based analysis of ChIP-Seq (MACS). Genome Biol. 2008, 9, R137. [Google Scholar] [CrossRef]
  160. Misra, D.; Henaff, M.; Krishnamurthy, A.; Langford, J. Kinematic state abstraction and provably efficient rich-observation reinforcement learning. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 6961–6971. [Google Scholar]
  161. Giolando, P.; Kakaletsis, S.; Zhang, X.; Weickenmeier, J.; Castillo, E.; Dortdivanlioglu, B.; Rausch, M.K. AI-dente: An open machine learning based tool to interpret nano-indentation data of soft tissues and materials. Soft Matter 2023, 19, 6710–6720. [Google Scholar] [CrossRef]
  162. Stashko, C.; Hayward, M.-K.; Northey, J.J.; Pearson, N.; Ironside, A.J.; Lakins, J.N.; Oria, R.; Goyette, M.-A.; Mayo, L.; Russnes, H.G. A convolutional neural network STIFMap reveals associations between stromal stiffness and EMT in breast cancer. Nat. Commun. 2023, 14, 3561. [Google Scholar] [CrossRef]
  163. Hassanlou, L.; Meshgini, S.; Alizadeh, E. Evaluating adipocyte differentiation of bone marrow-derived mesenchymal stem cells by a deep learning method for automatic lipid droplet counting. Comput. Biol. Med. 2019, 112, 103365. [Google Scholar] [CrossRef] [PubMed]
  164. Caforio, F.; Regazzoni, F.; Pagani, S.; Karabelas, E.; Augustin, C.; Haase, G.; Plank, G.; Quarteroni, A. Physics-informed neural network estimation of material properties in soft tissue nonlinear biomechanical models. Comput. Mech. 2025, 75, 487–513. [Google Scholar] [CrossRef]
  165. Pezzotta, A.; Briscoe, J. Optimal control of gene regulatory networks for morphogen-driven tissue patterning. Cell Syst. 2023, 14, 940–952.e11. [Google Scholar] [CrossRef] [PubMed]
  166. Oria, R.; Jain, K.; Weaver, V.M. Exploring the intersection of mechanobiology and artificial intelligence. Npj Biol. Phys. Mech. 2025, 2, 9. [Google Scholar] [CrossRef]
  167. O’Dowling, A.T.; Rodriguez, B.J.; Gallagher, T.K.; Thorpe, S.D. Machine learning and artificial intelligence: Enabling the clinical translation of atomic force microscopy-based biomarkers for cancer diagnosis. Comput. Struct. Biotechnol. J. 2024, 24, 661–671. [Google Scholar] [CrossRef]
  168. Haider, S.; Kumar, G.; Goyal, T.; Raj, A. Stiffness estimation and classification of biological cells using constriction microchannel: Poroelastic model and machine learning. Microfluid. Nanofluidics 2024, 28, 14. [Google Scholar] [CrossRef]
  169. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002. [Google Scholar] [CrossRef]
  170. Edlund, C.; Jackson, T.R.; Khalid, N.; Bevan, N.; Dale, T.; Dengel, A.; Ahmed, S.; Trygg, J.; Sjögren, R. LIVECell-A large-scale dataset for label-free live cell segmentation. Nat. Methods 2021, 18, 1038–1045. [Google Scholar] [CrossRef]
  171. Greenwald, N.F.; Miller, G.; Moen, E.; Kong, A.; Kagel, A.; Dougherty, T.; Fullaway, C.C.; McIntosh, B.J.; Leow, K.X.; Schwartz, M.S.; et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nat. Biotechnol. 2022, 40, 555–565. [Google Scholar] [CrossRef]
  172. Májovský, M.; Černý, M.; Kasal, M.; Komarc, M.; Netuka, D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s box has been opened. J. Med. Internet Res. 2023, 25, e46924. [Google Scholar] [CrossRef]
  173. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  174. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar] [CrossRef]
  175. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  176. Du, H.; Guo, B.; He, Q. Differentiable neural-integrated meshfree method for forward and inverse modeling of finite strain hyperelasticity. Eng. Comput. 2025, 41, 1597–1617. [Google Scholar] [CrossRef]
  177. Xiong, W.; Long, X.; Bordas, S.P.; Jiang, C. The deep finite element method: A deep learning framework integrating the physics-informed neural networks with the finite element method. Comput. Methods Appl. Mech. Eng. 2025, 436, 117681. [Google Scholar] [CrossRef]
Figure 1. An overview of the structure of this review. The figure illustrates how different AI paradigms are applied across cellular and mechanobiology-relevant domains, including morphology-based readouts, biomarker analysis, segmentation, and force/motility prediction, using both mono- and multimodal datasets.
Figure 2. Summary of applications of AI-driven methods for cellular morphology analysis, cancer biomarker detection, cell segmentation, and traction-force and motility prediction.
Figure 3. Roadmap depicting current limitations in integrating AI into biology and outlining future directions.
Table 1. Representative applications of cell morphology analysis. An illustrative code sketch follows the table.
References | ML Algorithms Used | Key Contribution
Neto et al. [51] | RF, UMAP | Classified macrophage subtypes and revealed clustering patterns in two-photon fluorescence lifetime imaging microscopy (2P-FLIM) data.
Bonnevie et al. [83] | Self-organizing map (SOM) + artificial neural network (ANN) | Predicted YAP/transcriptional coactivator with PDZ-binding motif (TAZ) localization from morphological features.
Dürr et al. [54] | CNN | Phenotype classification in the Cell Painting assay, achieving 93.4% accuracy.
Kim et al. [58] | DenseNet121 CNN | Predicted stem cell multipotency rate.
Wong et al. [59] | SE-RNN + PCA/t-SNE | Classified fibroblast and epithelial morphologies on ECM substrates.
Godinez et al. [60] | Multi-scale CNN | Classified microscopy images directly from raw pixels.
Kraus et al. [62] | CNN + MIL | Weakly supervised microscopy classification with interpretable saliency maps.
Piccinini et al. [84] | MLP, SVM, RF | Advanced Cell Classifier for phenotype discovery.
Xu et al. [65] | U-Net + cGAN | CellVisioner toolbox for extracting morphological and mechanobiological parameters.
Jin et al. [85] | Logistic Lasso regression | Classified apoptosis and ferroptosis in fibrosarcoma cells.
Sommer et al. [80] | AE | Novelty detection in mitotic and nuclear morphologies.
Wang et al. [81] | TDNN (CNN + particle filter) | Real-time tracking and mitosis detection in stem cells.
Boland et al. [86] | Neural network | Classified subcellular localization patterns with up to 99% accuracy in populations.
Rostam et al. [52] | RF, logistic regression | Label-free macrophage phenotype classification with over 90% accuracy.
Wang et al. [70] | InceptionV3 CNN | Classified T-cell activation with 98.8% accuracy.
Yang et al. [82] | YOLOX-MobileNet | Guided AFM-based stiffness and adhesion measurements.
Mohammad et al. [87] | U-Net + InceptionV3 | Segmented and classified early mesoderm cells from pluripotent stem cells.
He et al. [88] | Cascade region-based CNN (R-CNN) | Identified senescent vs. non-senescent mesenchymal stem cells (MSCs) with an F1 score of 0.9.
Wang et al. [89] | CNN | Distinguished murine hematopoietic stem cell (HSC) subtypes and age groups.
Buggenthin et al. [77] | CNN-RNN hybrid | Predicted lineage commitment of HSPCs before the appearance of biomarkers.
Zhu et al. [78] | Xception CNN | Predicted neural stem cell fate with 82.7% accuracy.
Palma et al. [71] | Style-transfer AE | Detected genetic and chemical-perturbation-induced morphological shifts.
Aida et al. [73] | cGAN | Segmented cancer stem cells from phase-contrast and nucleus images.
Sullivan et al. [74] | Human-in-the-loop DL | Citizen-science annotations improved protein localization in Human Protein Atlas images.
Wu et al. [76] | CNN | Predicted single-cell stiffness from brightfield microscopy images.
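To make the transfer-learning pattern that recurs in Table 1 concrete (e.g., the DenseNet121 and InceptionV3 entries [58,70]), the following minimal Python sketch fine-tunes a pretrained DenseNet121 backbone for a hypothetical morphology classification task. The folder layout (cell_images/<class>/*.png), the three-class setup, and all hyperparameters are illustrative assumptions, not settings taken from any cited study.

```python
# Minimal sketch: transfer learning for cell morphology classification.
# Paths, class count, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # assumed: e.g., three morphological phenotypes

# Standard ImageNet preprocessing for a pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: cell_images/<class_name>/*.png
dataset = datasets.ImageFolder("cell_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Pretrained DenseNet121 with a re-initialized classification head
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short fine-tuning loop for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, such backbones are fine-tuned on annotated cell images with augmentation and a held-out validation split before any accuracy figure like those in Table 1 is reported.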
Table 2. Representative applications of AI for cancer biomarker detection. An illustrative code sketch follows the table.
References | ML Algorithms Used | Key Contribution
Tipatet et al. [90] | PCA + ML classifiers | Classified wild-type and radio-resistant breast cancer cells using Raman spectra.
Wu et al. [91] | PPCA + SVM | Distinguished ovarian cancer patients from healthy controls using SELDI-TOF-MS data.
Mandrell et al. [94] | PCA + k-NN/SVM | Identified TRCs in pancreatic cancer lines using Raman spectroscopy.
Shen et al. [95] | t-SNE + SVM | Classified hepatocyte proliferation stages from Raman spectra with a specificity of 0.98.
Rozova et al. [96] | CellProfiler + RF | Classified breast cancer morphologies from antibody-stained fluorescence features.
Kandaswamy et al. [98] | SAA + deep transfer learning | Predicted compound mechanisms of action from single-cell imaging with 87.9% accuracy.
Forslid et al. [100] | ResNet CNN | Differentiated normal vs. cancerous cervical cells from microscopy images.
Mohammad et al. [102] | CNN + linear/tree-based ML models | Extracted features from pancreatic cancer cells and classified three subtypes within the TRCs.
Sirinukunwattana et al. [103] | SC-CNN + softmax CNN + NEP | Detected (F1 score of 0.802) and classified (F1 score of 0.784) nuclei in colorectal tissues.
Berryman et al. [105] | Custom CNN | Classified disaggregated cancer cells across eight cell lines, achieving an F1 score of 95.3%.
Foo et al. [106] | cGAN | Estimated tumor spheroid elasticity, reducing error by 29% versus algebraic methods.
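Several Table 2 entries share a dimensionality-reduction-plus-classifier pattern, most often PCA followed by an SVM on Raman or mass spectra [90,91,94]. The sketch below reproduces that pattern on synthetic spectra; the array shapes, component count, and kernel choice are assumptions for illustration only.

```python
# Minimal sketch of the PCA + SVM spectral classification workflow.
# Synthetic data stand in for measured spectra; all settings are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))   # 200 cells x 1024 wavenumber bins (synthetic)
y = rng.integers(0, 2, size=200)   # binary label: e.g., wild-type vs. resistant

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize, compress to a few principal components, then classify
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The same pipeline structure accommodates the PPCA and t-SNE variants in Table 2 by swapping the reduction step.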
Table 3. Representative applications of AI for cell segmentation. An illustrative code sketch follows the table.
References | ML Algorithms Used | Key Contribution
Van Valen et al. [107] | Custom CNN | Performed automated segmentation of bacterial and mammalian cells in live-cell imaging.
Sadanandan et al. [108] | Custom CNN | Segmented label-free brightfield microscopy images using fluorescent endpoint masks.
Stringer et al. [109,110] | U-Net-like CNN | Developed Cellpose, a general-purpose segmentation model for 2D/3D cell images.
Griebel et al. [111] | ConvNeXt encoder | Introduced deepflash2 for segmentation of ambiguous bioimages.
Wiggins et al. [113] | LDA, RF, SVM, clustering | Developed CellPhe for long-term phenotyping and segmentation.
Arzt et al. [114] | Classical ML models | Introduced LABKIT, a lightweight ImageJ/Fiji segmentation plugin.
Bannon et al. [115] | CNN + cloud deployment | Developed DeepCell Kiosk, a scalable segmentation platform with a web interface.
Tsai et al. [116] | Semi-automated ML | Developed Usiigaci, a tool for fibroblast segmentation and tracking.
Robitaille et al. [117] | SSL model | Segmented live-cell images with minimal labeled data.
Ghaznavi et al. [119] | U-Net variants | Segmented HeLa cells in reflected light microscopy images with a mIoU of 0.81.
Hollandi et al. [122] | Mask R-CNN + U-Net refinement + style transfer | Performed cross-modality nucleus segmentation.
Jian et al. [124] | SEAM U-Net++ | Segmented FISH images with an IoU of 0.91.
Pelt and Sethian [127] | MS-D CNN | Multi-scale segmentation with 40× fewer parameters.
Chen et al. [129] | Allen Cell and Structure Segmenter | Performed 3D intracellular segmentation with over 98% accuracy.
Amat et al. [130] | Supervoxels + GMM + spatiotemporal association | Four-dimensional segmentation and tracking with 97% linkage accuracy.
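Among the Table 3 tools, Cellpose [109,110] is often the quickest to apply off the shelf. The sketch below shows a minimal segmentation call, assuming the Cellpose 2.x Python API and a hypothetical single-channel image file; the model choice and channel settings are illustrative.

```python
# Minimal sketch of off-the-shelf segmentation with Cellpose [109,110],
# assuming the Cellpose 2.x Python API; the path and settings are illustrative.
from cellpose import models
from skimage import io

img = io.imread("example_cells.tif")  # hypothetical single-channel image

# Pretrained generalist 'cyto' model; diameter=None lets Cellpose estimate it
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

print(f"Detected {int(masks.max())} cells; estimated diameter ~{diams:.1f} px")
```

The returned label mask can then feed downstream morphology or traction pipelines such as those in Tables 1 and 4.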
Table 4. Representative applications of AI for traction force and motility prediction. An illustrative code sketch follows the table.
References | ML Algorithms Used | Key Contribution
Li et al. [136] | SW-UNet | Segmented substrate wrinkles to measure cell traction forces.
Li et al. [138] | GAN | Generated traction force maps from previously segmented wrinkles.
Pielawski et al. [140] | Tiramisu segmentation network, BNN | Predicted traction forces and their uncertainty from cell geometry.
Fujiwara et al. [143] | U-Net | Incorporated cell morphology and biophysical mechanism information into traction estimation.
Wang and Lin [135] | Three-dimensional U-Net | Predicted traction maps from synthetic data.
Kratz et al. [144] | Custom CNN | Improved the robustness, precision, and speed of TFM using networks trained on synthetic data.
SubramanianBalachandar et al. [145] | SLR, quadratic SVM | Predicted traction and intercellular stresses from morphology and drug dosage.
Duan et al. [148] | Custom CNN | Reconstructed 3D traction force maps from bead displacement images alone.
Li et al. [149] | GAN | Predicted traction maps from phase-contrast images alone.
Schmitt et al. [150] | Neural networks (U-Net; physics-constrained and physics-agnostic) | Predicted mechanical forces from zyxin signals.
Wang et al. [151] | RL, residual CNN | Solved cell tracking data association as a linear assignment problem.
Wang et al. [152] | Hierarchical deep RL | Modeled cell migration behavior using cell and nuclear morphologies.
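Most deep-learning TFM studies in Table 4 cast traction recovery as image-to-image regression, mapping substrate displacement fields to traction maps. The toy encoder-decoder below illustrates that formulation on random tensors; the architecture, data, and training loop are deliberately simplified stand-ins, not any published network.

```python
# Minimal sketch of CNN-based traction reconstruction in the spirit of the
# Table 4 deep-learning TFM studies: a small encoder-decoder maps substrate
# displacement fields to traction maps. Everything here is illustrative.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """2-level encoder-decoder: 2-channel displacement -> 2-channel traction."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))
    def forward(self, u):
        return self.dec(self.enc(u))

# Synthetic supervision: random displacement fields and "ground-truth" tractions
u = torch.randn(8, 2, 64, 64)   # (batch, ux/uy, H, W)
t = torch.randn(8, 2, 64, 64)   # (batch, tx/ty, H, W)

net = TinyUNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):  # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(u), t)
    loss.backward()
    opt.step()
```

In the cited works, the synthetic supervision comes from forward elasticity solutions (e.g., Boussinesq-type solvers) rather than random tensors, which is what allows the trained network to generalize to measured bead displacements.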
Table 5. Miscellaneous works and their contributions. An illustrative code sketch follows the table.
References | ML Algorithms Used | Key Contribution
Wen et al. [153] | Three-dimensional U-Net | Segmented and tracked neurons in whole-brain 3D time-lapse images of C. elegans.
Häring et al. [154] | CycleGAN | Performed automated segmentation of epithelial tissue in Drosophila embryos.
Mahmood et al. [156] | cGAN with CycleGAN synthetic augmentation | Segmented nuclei in H&E-stained histopathology images.
Coudray et al. [157] | InceptionV3 CNN | Distinguished normal and mutated tissues from TCGA slides with an AUC of 0.97.
Oh et al. [158] | Custom CNN | Detected enriched regions in ChIP-seq and other sequencing data.
Giolando et al. [161] | Forward/inverse neural networks (AI-dente) | Extracted mechanical parameters from nano-indentation of mouse brain tissues.
Stashko et al. [162] | Custom CNN | Predicted stromal stiffness from collagen and nuclear features in breast cancer tissues.
Hassanlou et al. [163] | Custom CNN | Introduced a label-free lipid droplet counting method for MSC adipogenesis with 94.45% accuracy.
Haider et al. [168] | ANN, SVM | Predicted Young's modulus and viscosity in multiple cell lines.
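Several Table 5 entries (e.g., Haider et al. [168]) follow a simple feature-to-property regression pattern, mapping per-cell descriptors to mechanical parameters such as Young's modulus. The sketch below illustrates this with a small MLP on synthetic features; the feature set, target values, and network size are fabricated for demonstration and do not reproduce any cited model.

```python
# Minimal sketch of feature-to-property regression (e.g., ANN prediction of
# Young's modulus from per-cell features). All data here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                  # assumed per-cell features
E = 2.0 + 0.1 * (X @ rng.normal(size=6))       # synthetic "Young's modulus" (kPa)

X_tr, X_te, E_tr, E_te = train_test_split(X, E, random_state=1)
reg = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
reg.fit(X_tr, E_tr)
print(f"R^2 on held-out cells: {reg.score(X_te, E_te):.2f}")
```

In the cited constriction-microchannel study, the input features were transit and deformation measurements rather than random descriptors, but the regression structure is the same.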
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
