Article

Neuronal Mesh Reconstruction from Image Stacks Using Implicit Neural Representations

1 School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
2 National Center for Computer Animation, Bournemouth University, Poole BH12 5BB, UK
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1276; https://doi.org/10.3390/math13081276
Submission received: 20 March 2025 / Revised: 4 April 2025 / Accepted: 10 April 2025 / Published: 12 April 2025
(This article belongs to the Special Issue Mathematical Applications in Computer Graphics)

Abstract
Reconstructing neuronal morphology from microscopy image stacks is essential for understanding brain function and behavior. While existing methods are capable of tracking neuronal tree structures and creating membrane surface meshes, they often lack seamless processing pipelines and suffer from stitching artifacts and reconstruction inconsistencies. In this study, we propose a new approach utilizing implicit neural representation to directly extract neuronal isosurfaces from raw image stacks by modeling signed distance functions (SDFs) with multi-layer perceptrons (MLPs). Our method accurately reconstructs the tubular, tree-like topology of neurons in complex spatial configurations, yielding highly precise neuronal membrane surface meshes. Extensive quantitative and qualitative evaluations across multiple datasets demonstrate the superior reliability of our approach compared to existing methods. The proposed method achieves a volumetric reconstruction accuracy of up to 98.2% and a volumetric IoU of 0.90.

1. Introduction

Neurons have long been recognized as fundamental components of the brain. The various complex abilities of the brain and its unique physiological qualities emerge from the orderly arrangement of countless neurons [1]. Signal transmission essential for biological functions and brain activity occurs within networks formed by the intricate connections and intersections of diverse neurons [2]. Neurons are essential to the nervous system, and their morphology and connectivity significantly influence its overall function [3]. Therefore, investigating the morphology and connectivity of neurons is vital for a comprehensive understanding of the structure and function of the nervous system. Neuronal morphology skeletons, obtained through various methods, can generate a range of intriguing hypotheses and insights into neural circuits at the level of individual neurons and synapses [4,5].
Neuronal morphology is a critical research field in neuroscience with significant applications across various fields, particularly in brain research. The reconstruction of neuronal morphology involves extracting quantitative data that characterize nerve fibers from image stacks. As research advances, there is a growing demand in academia for the simulation of cellular behaviors using three-dimensional, high-fidelity models [6]. Through various 3D reconstruction methods, detailed morphological information of neurons can be obtained and subsequently converted into digital models. This morphological reconstruction process provides essential data and technical support for subsequent biological simulations of neurons. For instance, numerical simulations of reaction–diffusion problems [7] require precisely defined neuronal geometries. Consequently, one of the key challenges in neuroscience is the robust conversion of tracked results into high-quality neuronal membrane surfaces, which can then be used to generate tetrahedral meshes supporting reaction–diffusion simulations across individual neurons or entire neural networks. Researchers have compiled extensive databases of 3D models derived from realistic reconstructions of neuronal morphology. These efforts offer valuable insights into the spatial structure and morphology of neurons, thereby providing essential data and technical support for advancing research in neuroscience and biology.
This study aims to integrate three-dimensional neuronal reconstruction with implicit neural representation techniques, enabling the direct end-to-end extraction of neuronal membrane surface meshes from volumetric image stacks. By leveraging a deep learning-based framework that incorporates Graph Convolutional Networks (GCNs) and attention mechanisms, our approach accurately captures the detailed reconstruction of local dendritic branches and the global continuity of entire neuronal structures. The use of implicit representations allows for smoother and more coherent modeling of fine neuronal morphology. Figure 1 compares the traditional neuronal reconstruction framework with our approach. In traditional workflows, the process typically begins with the extraction of neuronal skeletons from volumetric image stacks, usually achieved through skeletonization algorithms that trace the central paths of neurites. The resulting skeleton data are subsequently transformed into a three-dimensional membrane surface mesh using explicit or implicit modeling algorithms from the field of computer graphics. This two-stage pipeline relies heavily on the accuracy of intermediate representations and often suffers from cumulative errors, leading to potential loss of morphological detail and structural discontinuities. Here, we employ an MLP as a continuous function approximator that maps spatial coordinates to corresponding SDF values. The resulting continuous representation allows for high-fidelity mesh reconstruction through isosurface extraction techniques such as Marching Cubes. Our approach not only enables the rapid and efficient direct reconstruction of neuronal morphology from original image stacks but also accommodates more complex neural network architectures. Through the precise reconstruction of neuronal morphological structures, we can gain a deeper understanding of the spatial characteristics and interconnections of neurons. These improvements provide a robust technical foundation for high-resolution neuronal modeling and neural topology analysis, contributing valuable insights into brain function and supporting a wide range of neuroscience research and applied neurotechnological development.
The main contributions of our work are summarized as follows:
  • We present a new deep learning framework capable of extracting neuronal membrane surfaces from image stacks in an end-to-end manner.
  • We propose a framework for neuronal reconstruction that integrates a GCN with attention mechanisms to accurately extract neural structural features.
  • The proposed method demonstrates superior performance in both the detailed reconstruction of local dendritic regions and the global reconstruction of complete neuronal dendritic structures.
The rest of this manuscript is organized as follows: We first review related work in Section 2. The details of the network architecture are described in Section 3, encompassing the specific method for fitting SDF using implicit neural representations. The test results and reconstruction analysis are discussed in Section 4, followed by the conclusion and discussion in Section 5.

2. Related Work

2.1. Neuron Tracing from Light Microscopy Images

Over the years, various methods have been developed for single-neuron reconstruction. Skeletonization methods [8,9], while applicable to diverse image data, exhibit poor noise resistance and often require manual refinement. Region-growing methods offer higher reconstruction efficiency but struggle with reconstructing fragmented neurons [10,11,12]. To address these challenges, variational methods [13,14] and graph-based approaches [15,16] have been proposed. However, variational methods tend to be slow when processing large-scale data and are prone to over-segmentation in noisy images, while graph-based approaches are susceptible to misclassifying noise as valid tracking results [17]. The Rivulet2 method [18] utilizes local neuronal features to trace neural signals within the surrounding context at each processing stage. Similarly, the FMST method [19] employs a two-stage approach: first, it over-reconstructs the neuron, then systematically refines the reconstruction through targeted pruning to improve accuracy. Another two-stage tracking approach, Automation-Following-Manual (AFM), integrates manual modifications and annotations following the application of an automated tracking algorithm [20]. Despite numerous optimization attempts from diverse research perspectives [21], automatic tracing methods continue to face significant challenges. Specifically, they remain highly dependent on precise foreground/background segmentation and struggle to capture the intricate morphological details inherent in neural imaging.
Beyond these traditional methods, machine learning-based recognition models [22,23] have been applied to neuronal morphology tracing. Deep learning-based image segmentation models, particularly those built upon 3D U-Net architectures [24,25], have been widely adopted for neuronal fluorescence imaging. However, the inherent complexity of these images presents significant challenges. The intricate structures, coupled with the persistent difficulty of achieving precise foreground/background segmentation, substantially constrain the generalizability of these approaches across diverse datasets and biological species [26]. Techniques such as sparse dictionary learning [27] and supervised learning [28] improve the detection of weak signals during reconstruction. However, these approaches require substantial annotated data and computational resources to achieve high recognition accuracy. Currently, the field of neuronal image analysis lacks a comprehensive tracing algorithm capable of delivering consistently robust and accurate neuronal reconstructions across diverse imaging modalities. Existing automated methods remain fundamentally constrained, exhibiting significant vulnerabilities to noise interference and signal discontinuities. Consequently, these approaches frequently necessitate extensive manual intervention and post-processing to rectify reconstruction artifacts and validate morphological accuracy [29,30].

2.2. Neuron Reconstruction from Morphology Skeletons

Significant efforts have been made in the three-dimensional reconstruction of neuronal morphology from tracked images, yielding numerous scientific achievements. Several software packages, such as Neurolucida [31], NeuroConstruct [32], NeuGen [33], and Genesis [34], provide tools for constructing three-dimensional neuronal surfaces using mesh-based methods. However, models generated by these methods often suffer from poor mesh quality, with branching structures frequently prone to self-intersection during modeling. Other approaches, including Neuronize [35], NeuroTessMesh [36], AnaMorph [6], and Neuromorphovis [37], have introduced various optimizations to improve the morphology and mesh quality of the soma region. In addition, several studies have leveraged traditional computer graphics techniques to develop implicit surface reconstructions. Notably, Abdellah et al. [38,39] introduced an innovative point-skeleton-based metaball approach, achieving remarkable precision in constructing detailed vascular mesh models. In parallel, Zhu et al. [40] proposed a sophisticated methodology that integrates line skeletons with a progressive convolution approximation strategy, effectively generating high-fidelity surface meshes that capture the intricate morphological features of neuronal structures. Nevertheless, the conventional two-stage approach, first tracking skeleton information from images and then generating membrane surfaces, remains cumbersome and indirect. Additionally, the resulting membrane surfaces may not always accurately reflect the original pixel data. As a result, the direct extraction of neuronal membrane surfaces from image stacks remains an important research direction for further exploration.

2.3. Implicit Neural Representation

In recent years, implicit neural representations (INRs), exemplified by neural rendering technologies such as Neural Radiance Fields (NeRFs) [41], have gained significant attention. The core principle of this AI technology is to represent an object’s state using a continuous function and approximate it through neural networks. MLPs play a crucial role in INR by enabling neural networks to model the relationship between spatial coordinates and geometric properties, thereby facilitating the representation of high-precision, continuous 3D shapes. In occupancy networks [42], MLPs are trained to map the relationship between input spatial point coordinates and object occupancy, outputting a binary probability to efficiently and continuously represent 3D surfaces. In NeRFs, MLPs map a 3D spatial coordinate and a viewing direction to the corresponding RGB color and volume density. This approach is used to reconstruct 3D scenes from multi-view images, emphasizing how MLPs capture complex textures and reflective effects. For 3D object reconstruction, methods such as DeepSDF [43] are widely used. DeepSDF employs MLPs to fit a signed distance function, where 3D input coordinates yield the distance from a point to the object’s surface, enabling the flexible representation of complex 3D shapes.
In the field of biosciences, INRs have gradually been applied to various tasks, including undersampled image reconstruction [44,45,46,47,48], registration [49,50], compression [51,52], and segmentation [53]. For instance, Wiesner et al. [54] improved the DeepSDF approach to model live cell morphological changes using INR, accurately capturing surface details and topological variations under different cell growth modes. Similarly, Alblas et al. [55] applied INR to model the development of abdominal aortic aneurysms, highlighting its potential clinical value in simulating pathological changes. However, applying INR to fit 3D objects with tree-like structures often yields suboptimal results. These structures are highly fractal and hierarchical, with intricate geometric details such as small branches and irregular curves that pose challenges for INR representations, particularly when network capacity is limited. Additionally, branches within each hierarchical layer are closely interrelated with both parent and child nodes. Since INR typically represents an object’s geometry using a single implicit function, capturing the complex local details of hierarchical tree structures within a global model remains a significant challenge.

3. Methods

We propose an implicit shape representation framework for neuron reconstruction, leveraging a hybrid neural architecture. As shown in Figure 2, our approach consists of two main components. First, we employ a 3D ResNet-based image encoder [56] to extract hierarchical visual features from image stacks. The encoder, composed of multiple 3D convolutional layers with batch normalization and ReLU activations, encodes spatial patterns into a latent representation, which is then passed to an MLP. Second, the network integrates a Graph Attention Network (GAT) [57] and a GCN to encode the structural properties of neurons, ensuring that the implicit neural representation captures both the geometric and topological characteristics of neuron structures. These two feature representations—graph-based structural encoding and image-based feature extraction—are fused and processed through a sequence of layer normalization and fully connected layers to predict the signed distance function of the neuronal membrane surface. By leveraging the interplay between graph-based and image-based feature representations, our model effectively reconstructs detailed neuronal morphology. The resulting implicit shape representation can then be used to reconstruct a mesh using techniques such as Marching Cubes [58].

3.1. Using SDF to Represent Neurons

The implicit surface model naturally generates smooth surfaces with seamless transitions while effectively handling complex topological changes. This approach enables neuron morphology reconstruction through isosurface extraction. In this model, the SDF represents the distance between any point in space and the neuron membrane surface. Specifically, when the SDF value at a given point is zero, the point lies precisely on the neuron membrane surface. By applying isosurface extraction techniques such as Marching Cubes, the geometric structure of the neuron membrane can be reconstructed from the SDF field. To further illustrate this concept, we define a three-dimensional spatial domain $P = [-1, 1]^3$, where each point $p = (x, y, z) \in P$ is represented by the SDF. The signed distance function is then defined as Equation (1):
$$SDF(p) = \begin{cases} d(p), & \text{if } p \text{ is outside the surface;} \\ 0, & \text{if } p \text{ is on the surface;} \\ -d(p), & \text{if } p \text{ is inside the surface,} \end{cases}$$
where $d(p)$ denotes the Euclidean distance from $p$ to the surface.
The SDF value not only represents the Euclidean distance from a point to the neuron membrane surface but also encodes directional information through its sign, indicating whether the point lies inside or outside the membrane. This approach facilitates high-precision neuron morphology reconstruction while preserving topological integrity and geometric continuity.
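As a minimal illustration of this sign convention, the following sketch evaluates the analytic SDF of a sphere, used here as a hypothetical stand-in for a neuron membrane surface; the radius and probe points are purely illustrative.

```python
import numpy as np

def sphere_sdf(p: np.ndarray, radius: float = 0.5) -> np.ndarray:
    """Analytic SDF of a sphere centered at the origin: Euclidean distance
    to the surface, negative inside and positive outside, matching the
    convention of Equation (1)."""
    return np.linalg.norm(p, axis=-1) - radius

# Probe points in the normalized domain P = [-1, 1]^3.
points = np.array([
    [0.0, 0.0, 0.0],   # center of the sphere -> inside,  SDF = -0.5
    [0.5, 0.0, 0.0],   # on the surface       -> SDF = 0
    [1.0, 0.0, 0.0],   # outside the surface  -> SDF = +0.5
])
print(sphere_sdf(points))  # [-0.5  0.   0.5]
```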

3.2. Implicit Representation Modeling

To formalize our neuron reconstruction framework, we adopt an INR approach, where the neuronal structure is modeled as a continuous function in 3D space. Specifically, we learn a function $f_\theta : \mathbb{R}^d \times \mathbb{R}^3 \rightarrow \mathbb{R}$, parameterized by an MLP, that maps a latent shape code $z \in \mathbb{R}^d$ and a spatial coordinate $x \in \mathbb{R}^3$ to a signed distance value $SDF(x) \in \mathbb{R}$, as in Equation (2):
$$f_\theta(z, x) \approx SDF(x),$$
where $z$ is jointly derived from the image stacks and the corresponding neuronal skeleton graph structure. The input image stack $I \in \mathbb{R}^{H \times W \times D}$ is processed through an image feature encoder $\Phi_{\mathrm{img}}$ to obtain an embedding, as in Equation (3):
$$z_i = \Phi_{\mathrm{img}}(I).$$
In parallel, to encode the graph structure, a GCN with three convolutional layers further enhances feature learning by aggregating information from neighboring nodes. Owing to the distinctive structure of neuron skeleton files, the input can be naturally represented as a graph $G = (x, E)$. Given input node features $x$ and the graph’s edge structure $E$ (represented as an adjacency matrix or edge list), each GCN layer updates node features through convolution operations as described in Equation (4):
$$h^{(1)} = \mathrm{GCNConv}(x, E),$$
which can be further transformed into Equation (5):
$$h^{(1)} = \sigma(\hat{A} x W_1).$$
The same can be inferred for the other convolutional layers, as in Equations (6) and (7):
$$h^{(2)} = \sigma(\hat{A} h^{(1)} W_2);$$
$$h^{(3)} = \sigma(\hat{A} h^{(2)} W_3),$$
where $\hat{A}$ is the adjacency matrix, $W_l$ is the weight matrix of layer $l$, and $\sigma$ is the activation function. On top of the GCN, we introduce an attention mechanism. In GAT, node features are updated through a weighted aggregation over neighboring nodes. Given a node $v$ and its set of neighbors $N(v)$, the feature of each node is updated by a weighted sum over the features of its neighbors, where the weights are computed dynamically by an attention mechanism. Hence, in each GAT layer, the node feature update can be written as Equation (8):
$$h_v^{(l+1)} = \sigma\left( \sum_{u \in N(v)} \alpha_{uv}^{(l)} W^{(l)} h_u^{(l)} \right),$$
where $\alpha_{uv}^{(l)}$ is the attention coefficient indicating the relevance between node $v$ and its neighbor $u$. It is worth noting that we introduce a Top-K mechanism here: we first compute the attention weights for all neighbors and then retain only the K largest. The Top-K operation can be expressed as Equation (9):
$$\alpha_{uv}^{(l)} = \begin{cases} \dfrac{\exp\left( \sigma\left( a^{T} \left[ W^{(l)} h_u^{(l)} \,\|\, W^{(l)} h_v^{(l)} \right] \right) \right)}{Z_v}, & \text{if } u \in T_v; \\ 0, & \text{otherwise}, \end{cases}$$
where $T_v$ is the Top-K neighbor set of node $v$, containing the neighbors with the K largest attention weights, and $Z_v$ is the normalization factor, given by Equation (10):
$$Z_v = \sum_{u \in T_v} \exp\left( \sigma\left( a^{T} \left[ W^{(l)} h_u^{(l)} \,\|\, W^{(l)} h_v^{(l)} \right] \right) \right).$$
In this case, the attention-weighted aggregation can now be written as Equation (11):
$$h_v^{(l+1)} = \sigma\left( \sum_{u \in T_v} \alpha_{uv}^{(l)} W^{(l)} h_u^{(l)} \right).$$
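For concreteness, the following is a minimal PyTorch sketch of the Top-K attention aggregation in Equations (8)–(11). It assumes a dense adjacency matrix with self-loops, a single attention head, and illustrative dimensions, so it is a simplified stand-in rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKGraphAttention(nn.Module):
    """Single-head graph attention where each node aggregates only its
    top-K highest-weighted neighbors (Equations (8)-(11)). A dense
    adjacency with self-loops is assumed for clarity."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # W^(l)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention vector a
        self.k = k

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: [N, in_dim] node features; adj: [N, N] binary adjacency.
        Wh = self.W(h)                                    # [N, out_dim]
        N = Wh.size(0)
        # Attention logits e[v, u] = a^T [W h_u || W h_v].
        e = self.a(torch.cat([
            Wh.unsqueeze(0).expand(N, N, -1),             # W h_u (column index)
            Wh.unsqueeze(1).expand(N, N, -1),             # W h_v (row index)
        ], dim=-1)).squeeze(-1)                           # [N, N]
        e = F.leaky_relu(e)
        e = e.masked_fill(adj == 0, float("-inf"))        # real neighbors only
        # Keep the top-K logits per node, zeroing the rest (Equation (9)).
        k = min(self.k, N)
        kth = torch.topk(e, k, dim=-1).values[:, -1:]     # K-th largest per row
        e = e.masked_fill(e < kth, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                  # normalization Z_v
        return torch.relu(alpha @ Wh)                     # Equation (11)
```

In practice, a sparse edge-list formulation (as in standard GAT implementations) would be preferable for large skeleton graphs; the dense form is used here only to keep the correspondence with the equations explicit.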
The input skeleton morphology can be processed through a stack of GCN or GAT layers to extract node embeddings, followed by global pooling, as in Equation (12):
$$z_g = \mathrm{GlobalPool}\left( \{ h_v^{(L)} \}_{v \in V} \right).$$
The final latent code is formed by concatenating the two modalities, as in Equation (13):
$$z = [\, z_g \,;\, z_i \,] \in \mathbb{R}^d.$$
The model is trained by minimizing a reconstruction loss over sampled 3D points and the corresponding ground-truth SDF values, combined with a regularization term on the latent code, as in Equation (14):
$$\mathcal{L}(\theta, z) = \sum_{x \in X} \left\| f_\theta(z, x) - SDF(x) \right\|^2 + \lambda \, \| z \|^2,$$
where $X$ is the set of sampled 3D locations and $\lambda$ controls the strength of the latent code regularization. This formulation not only ensures accurate and efficient neuron reconstruction but also establishes a solid mathematical basis for the subsequent network architecture, particularly in integrating graph- and image-based feature encoding.
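A minimal sketch of the objective in Equation (14), assuming generic handles `f_theta` (the decoder) and `z` (a precomputed latent code), both hypothetical names:

```python
import torch

def sdf_objective(f_theta, z, points, sdf_gt, lam=1e-4):
    """Equation (14): squared SDF regression error over the sampled points X
    plus L2 regularization of the latent code z. `f_theta`, `lam`, and the
    batch layout are illustrative assumptions."""
    pred = f_theta(z, points)                    # predicted SDF values
    recon = ((pred - sdf_gt) ** 2).sum()         # sum over x in X
    return recon + lam * z.pow(2).sum()          # + lambda * ||z||^2
```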

3.3. Neural Network Architecture

During training, image stacks and the corresponding tracked skeletons are fed into the network as paired data. The image component first passes through a lightweight deep residual network designed to extract essential three-dimensional features. This network consists of 3D convolutional layers, batch normalization, ReLU activation, max pooling, and basic residual blocks. By incorporating skip connections within a shallow architecture, it enhances gradient flow, stabilizing feature learning. The compact structure, with a reduced number of layers, preserves strong feature extraction capabilities while mitigating the vanishing gradient problem, ultimately improving convergence speed and generalization performance.
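The following PyTorch sketch illustrates the kind of shallow 3D residual encoder described above; the channel widths, depth, and embedding size are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """One 3D residual block: Conv3d -> BN -> ReLU -> Conv3d -> BN,
    with an identity skip connection (channel counts kept equal)."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)               # skip connection

class ImageEncoder3D(nn.Module):
    """Shallow 3D ResNet-style encoder mapping an image stack
    [B, 1, D, H, W] to a latent embedding z_i. Widths and the
    embedding size (64) are assumptions."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 16, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2))
        self.blocks = nn.Sequential(BasicBlock3D(16), BasicBlock3D(16))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(16, embed_dim))

    def forward(self, volume):
        return self.head(self.blocks(self.stem(volume)))
```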
Next, the network interprets the corresponding tracked skeletons as a graph. A three-layer GAT module is first applied, consisting of three linear transformation layers for feature conversion, followed by a graph attention layer that processes the dependencies between nodes in the neuron tree structure. This attention mechanism enables weighted information transfer, effectively capturing local features.
The feature vectors extracted by the GAT and GCN modules, along with the neuronal coordinates and initialized high-dimensional feature vectors, are fed into a fully connected MLP network. The network architecture consists of 12 layers, each containing 128 units. The model incorporates latent vector injections at layers 4, 7, and 10, and normalization is applied to all layers. The latent vector dimensionality is set to 64, initialized randomly from a normal distribution $\mathcal{N}(0, 0.01^2)$. This deep feedforward network encodes the latent vectors and spatial coordinate information to generate SDF values at the corresponding points. During training, multiple loss functions, including Binary Cross-Entropy (BCE) loss, focal loss, and Dice loss, are combined to enhance model performance. Additionally, L2 regularization and gradient clipping are applied to improve training stability. Specifically, BCE loss [60] is used to evaluate the discrepancy between the predicted and ground truth SDF values; it is primarily employed when the label is negative, meaning the point lies outside the object’s surface. The details are given in Equation (15):
$$\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right],$$
where $p_i$ denotes the MLP’s predicted probability for the $i$-th sample and $y_i$ denotes its corresponding label.
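Before turning to the remaining loss terms, the decoder described above can be sketched as follows. The concatenation-style latent injection and the ReLU/LayerNorm choices are assumptions for illustration; the trained model uses periodic activation functions, as noted in Section 4.

```python
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    """Sketch of the 12-layer, 128-unit MLP: the 64-dimensional latent
    vector is re-injected (concatenated) at layers 4, 7, and 10, and
    LayerNorm follows every hidden layer. Injection style is an
    assumption, not the exact implementation."""

    def __init__(self, latent_dim=64, hidden=128, depth=12,
                 inject_at=(4, 7, 10)):
        super().__init__()
        self.inject_at = set(inject_at)
        self.layers = nn.ModuleList()
        in_dim = latent_dim + 3                   # latent code + (x, y, z)
        for i in range(depth):
            if i in self.inject_at:
                in_dim += latent_dim              # re-inject latent vector
            self.layers.append(nn.Sequential(
                nn.Linear(in_dim, hidden), nn.LayerNorm(hidden), nn.ReLU()))
            in_dim = hidden
        self.out = nn.Linear(hidden, 1)           # scalar SDF value

    def forward(self, z, xyz):
        h = torch.cat([z, xyz], dim=-1)
        for i, layer in enumerate(self.layers):
            if i in self.inject_at:
                h = torch.cat([h, z], dim=-1)
            h = layer(h)
        return self.out(h).squeeze(-1)
```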
On this basis, focal loss is introduced to further improve on BCE loss by addressing category imbalance [61], especially when dealing with sparse labels in data preprocessing. This loss function adjusts the contribution of each sample based on the confidence of the predicted probability, emphasizing misclassified samples. Additionally, Dice loss is employed to measure the overlap between predictions and ground truth labels, making it effective for mitigating data imbalance [62]. The combination of focal loss and Dice loss improves the model’s performance on sparse data and misclassified samples; the two losses are expressed as Equations (16) and (17):
$$\mathcal{L}_{\mathrm{focal}} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t);$$
$$\mathcal{L}_{\mathrm{dice}} = 1 - \frac{2\,|A \cap B|}{|A| + |B|},$$
where $p_t$ represents the predicted probability for each coordinate, $|A \cap B|$ denotes the intersection between the predicted region and the ground truth region, and $|A|$ and $|B|$ correspond to the areas of the predicted and ground truth regions, respectively.
Therefore, the overall loss can be expressed as Equation (18).
$$\mathcal{L}_{\mathrm{total}} = \lambda \mathcal{L}_{\mathrm{focal}} + (1 - \lambda) \mathcal{L}_{\mathrm{dice}}.$$
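A sketch of the combined objective in Equations (16)–(18), assuming per-point inside/outside labels in {0, 1}; all hyperparameter values are illustrative assumptions.

```python
import torch

def total_loss(pred_logits, target, lam=0.5, alpha=0.25, gamma=2.0, eps=1e-6):
    """Equations (16)-(18): lambda * focal + (1 - lambda) * Dice, computed
    on per-point inside/outside probabilities. `target` is a float tensor
    of 0/1 labels."""
    p = torch.sigmoid(pred_logits)                 # predicted probability
    # Focal loss (Equation (16)); p_t is the probability of the true class.
    p_t = p * target + (1 - p) * (1 - target)
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    focal = (-alpha_t * (1 - p_t) ** gamma * torch.log(p_t + eps)).mean()
    # Soft Dice loss (Equation (17)): overlap between prediction and label.
    intersection = (p * target).sum()
    dice = 1 - 2 * intersection / (p.sum() + target.sum() + eps)
    return lam * focal + (1 - lam) * dice
```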

4. Experiment Results

4.1. Data

We validated our approach using multiple datasets, including the dendritic morphology provided by Peng [63]. This dataset contains 1741 single-neuron morphological samples from a variety of molecularly defined cell types, primarily sourced from the mouse brain. For our experiments, we specifically extracted the dendritic regions. The second dataset, gold166 [64], is one of the largest neuron morphology datasets in neuroscience, comprising brain and nervous system samples from multiple species, including mice, Drosophila, zebrafish, and humans. The combination of these datasets demonstrates that our proposed method not only achieves high-quality reconstruction in local dendritic regions but also effectively reconstructs complete neuronal tree structures.

4.2. Reconstruction Analysis

4.2.1. Quantitative Analysis

To assess the reconstruction efficiency of our INR-based approach compared to traditional methods, we separately measured both training and inference times. Our implicit shape representation models were trained using a single RTX 4090 GPU. Specifically, the training was conducted for 1500 epochs, with each epoch taking approximately 100 s, resulting in a total training time of approximately 40 h. The network was implemented in Python 3.9.16 using PyTorch 2.5.0. We trained the model with periodic activation functions [65,66], which demonstrated superior reconstruction of complex signals compared to non-periodic alternatives [67]. The model weights were optimized using the Adam optimizer [68] with a base learning rate of $1 \times 10^{-4}$, decayed by a factor of 0.5 every 500 epochs in accordance with the defined step-based learning rate schedule.
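The optimization schedule described above corresponds to the following PyTorch setup; a dummy module stands in for the full encoder–decoder network.

```python
import torch
import torch.nn as nn

# Dummy stand-in for the full network; the optimizer and step schedule
# match the training configuration described in the text.
model = nn.Linear(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate every 500 epochs (step-based schedule).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)

for epoch in range(1500):
    # ... one epoch of SDF regression over sampled points (omitted) ...
    scheduler.step()

# After 1500 epochs the rate has been halved three times: 1e-4 -> 1.25e-5.
print(optimizer.param_groups[0]["lr"])
```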
During testing, we evaluated the model’s ability to reconstruct neurons from voxel-based representations across samples with varying structural complexity. As shown in Table 1 and Table 2, we quantitatively compared the original and reconstructed neuron models by measuring key morphological properties, including volume and surface area. The evaluation encompassed neurons from multiple datasets, ensuring a comprehensive assessment across diverse structural configurations.
Table 3 and Table 4 compare the accuracy rates of various methods in reconstructing neuronal volumes and surfaces. By analyzing these metrics, we aimed to validate the fidelity of voxel-based neuron reconstructions and their ability to preserve fine morphological details while maintaining topological consistency. Additionally, Table 5 and Table 6 present three commonly used metrics for evaluating 3D reconstruction quality, providing a comprehensive assessment of neuron reconstruction performance. Compared to other methods, our approach achieves a voxel-wise distance closer to the ground truth and demonstrates superior performance in both global volumetric overlap and local consistency.
Reconstruction was performed at a resolution of [256, 256, 256], using 10,000 sampled points per subset and one frame per sequence. To ensure a fair comparison with traditional methods that do not require training, we only report inference-time reconstruction speed for all approaches. Training time is considered an offline cost and excluded from the runtime comparison. We integrated AFM [20] and the quasi-uniform surface meshing method [40] to enable both neuronal skeleton tracing from image stacks and membrane surface reconstruction. Using the former method, the average tracing time per neuron is approximately 20 min, while the latter implicit surface method generates membrane surfaces in an average of 34.3 min. Thus, the total estimated time required by the traditional pipeline is approximately 54.3 min per neuron, as shown in Figure 3. In contrast, our method achieves an average inference time of approximately 258.3 s per shape, representing a more than 12-fold speed-up. This significant improvement highlights the practicality of our approach for applications requiring rapid and repeated reconstructions, such as large-scale neuron morphology databases. Although our method’s inference time is comparable to other implicit reconstruction approaches, it significantly outperforms these methods in reconstruction quality.
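For reference, inference-time mesh extraction can be sketched as below: the learned SDF is sampled on a 256³ grid and the zero level set is extracted with Marching Cubes via scikit-image. `sdf_fn` is an assumed callable wrapping the trained network, and the memory-hungry dense grid is kept only for clarity.

```python
import numpy as np
from skimage import measure

def extract_mesh(sdf_fn, resolution=256):
    """Sample the learned SDF on a regular grid over [-1, 1]^3 and extract
    the zero level set with Marching Cubes. `sdf_fn` is an assumed callable
    mapping an [N, 3] array of points to N signed distance values."""
    axis = np.linspace(-1.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    sdf = sdf_fn(grid.reshape(-1, 3)).reshape((resolution,) * 3)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
    # Rescale vertices from voxel indices back to the [-1, 1]^3 domain.
    verts = verts / (resolution - 1) * 2.0 - 1.0
    return verts, faces, normals
```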

4.2.2. Visualization Analysis

Our experimental results demonstrate significant improvements over existing implicit neural representation methods. Figure 4 compares our results with those produced by DeepSDF and Wiesner’s method on Peng’s dataset, while Figure 5 presents test results on the gold166 dataset. The ground truth is derived by voxelizing the traced output from the original dataset, serving as a high-fidelity representation of neuronal morphology. In neuron tracing, this is widely considered a gold standard due to its precision.
Figure 4a depicts a case where branch details and background noise appear similar. Despite this challenge, our method (Figure 4g), leveraging GAT and GCN, achieves excellent reconstruction results, whereas Figure 4j,m fail to effectively identify the regions requiring reconstruction. Figure 4b presents a scenario with coarser neuronal branching and greater contrast between noise brightness and the neuron. Our method (Figure 4h) successfully reconstructs the neuronal structure, while Figure 4k,n are only able to reconstruct the soma region. Figure 4c illustrates a more challenging case involving finer branches and complex topology. Our method (Figure 4i) preserves the topological structure more effectively than the compared methods.
Further comparisons are detailed in Figure 5: Figure 5a shows a case with smaller branches, where our method preserves connectivity well. Figure 5b represents a complex scenario with dense and elongated branches. Our method (Figure 5j) effectively retains both fine branch details and long branch connections. Figure 5c highlights a case where the branches undergo significant angular twists, demonstrating our method’s ability to maintain structural integrity. Figure 5d showcases the reconstruction capability in cases of dense branching, further illustrating the robustness of our approach. These comparisons underscore the advantages of our method in preserving fine details, connectivity, and topological relationships across various challenging scenarios.
In conclusion, our approach more effectively captures intricate branching details and preserves richer topological information. In contrast, DeepSDF tends to produce overly smooth reconstructions that lack fine details, while Wiesner’s method often results in scattered and fragmented structures. By maintaining fine, tree-like features, particularly in regions with complex branching patterns, our approach enables a more accurate representation of neuronal geometry and topology. This enhanced capability is especially critical in cases where detailed connectivity and structural variations are essential. The superior performance in both branch detail and topological fidelity underscores the effectiveness of our method in capturing the nuanced complexities of neuronal structures.
Moreover, Figure 6 illustrates the robustness of the proposed network under challenging imaging conditions, where input images contain substantial artifacts and noise, including intensity fluctuations, background blurring, and scanning inconsistencies commonly found in microscopy data. Despite significant image degradation, the model accurately reconstructs the overall neuronal structure while preserving essential morphological features and spatial connectivity. Specifically, it successfully identifies the primary trajectories of dendrites and axons, maintains precise diameter estimations along neuronal processes, and ensures high structural fidelity at branch junctions, effectively mitigating common issues such as discontinuities, misconnections, and volume inconsistencies. These results highlight the model’s strong robustness and generalization ability, allowing reliable reconstruction even in low-signal-to-noise-ratio (SNR) scenarios where conventional thresholding-based methods typically fail. Quantitative analysis reveals that our method retains over 85% structural completeness compared to the ground truth, even under significantly increased noise levels. Notably, its implicit regularization properties help maintain structural integrity and biological plausibility, underscoring its applicability in complex real-world imaging environments such as in vivo imaging or thick-tissue sections where optical clarity is compromised.
Notably, the proposed model shows strong reconstruction performance in the soma–dendrite regions of neuronal voxel stacks. It can also complete neuron structures with relatively high imaging quality. This strong performance is primarily due to the structural constraints imposed by the Graph Feature Encoder. However, the SDF fitting process remains sensitive to variations in raw image intensity and signal quality. As illustrated in Figure 7a,b, two challenging scenarios highlight the limitations of the model. Figure 7c shows the reconstruction of the dendritic segment cropped in Figure 7a, where extraneous branches originating from neighboring neurons are present. These interfering structures are difficult to exclude accurately during SDF fitting, leading to the reconstruction of redundant or false branches, which compromises structural precision. In Figure 7b, the effective imaging region (highlighted by the red frame) occupies only a small portion of the entire volume, increasing the difficulty of accurately fitting fine dendritic branches. As a result, as shown in Figure 7d, the reconstruction suffers from discrete artifacts, such as scattered voxels and fragmented structures.
To address these issues, we incorporate a post-processing step based on the largest connected component extraction. This step effectively removes spurious, unconnected fragments from the reconstruction results without affecting the integrity of the main structure. By applying this strategy, the overall structural completeness and continuity of the reconstructed neurons are significantly improved in Figure 7e,f, yielding more reliable and biologically interpretable outcomes.
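This post-processing step can be sketched with SciPy’s connected-component labeling; a binary voxel input and SciPy’s default face connectivity are assumptions.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(voxels: np.ndarray) -> np.ndarray:
    """Label the binary reconstruction and keep only its largest connected
    component, discarding scattered spurious fragments."""
    labels, num = ndimage.label(voxels)
    if num == 0:
        return voxels                      # nothing reconstructed
    sizes = ndimage.sum(voxels, labels, index=np.arange(1, num + 1))
    largest = 1 + int(np.argmax(sizes))    # component IDs start at 1
    return (labels == largest).astype(voxels.dtype)
```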

4.3. Space of INR

In this work, we represent neurons extracted from image stacks using INRs. To analyze the space of these INRs, we apply t-SNE to visualize how structurally similar neurons cluster in the 2D t-SNE space. As shown in Figure 8, neurons with similar morphologies tend to group together. To further quantify these relationships, we compute the matrix of $L_2$ distances between the INRs of neurons, where each neuron corresponds to a specific structure in the image stack. Our observations reveal that neurons with similar structures exhibit smaller $L_2$ distances (closer to zero), indicating higher similarity in their INRs. Conversely, as the structural differences between neurons increase, the $L_2$ distance grows accordingly, a pattern visually represented by the color gradient in the heatmap.
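This analysis can be reproduced schematically as follows; random vectors stand in for the learned per-neuron INR codes, whose dimensionality (64) matches the latent size used in Section 3.3.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import TSNE

# Random vectors stand in for the learned per-neuron INR latent codes.
codes = np.random.default_rng(0).normal(size=(50, 64))

# Pairwise L2 distances between INRs (the heatmap shown in Figure 8).
dist_matrix = cdist(codes, codes, metric="euclidean")

# 2D embedding of the INR space (the t-SNE scatter shown in Figure 8).
embedding = TSNE(n_components=2, random_state=0).fit_transform(codes)
print(dist_matrix.shape, embedding.shape)  # (50, 50) (50, 2)
```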

5. Discussion

The proposed approach, which utilizes INR for neuronal morphology reconstruction, offers several key advantages over traditional mesh generation methods. By modeling the SDF with an MLP, our method eliminates the need for explicit segmentation and interpolation, which are often prone to errors and computationally intensive. The SDF-based modeling ensures the generation of smooth, continuous surfaces, maintaining high structural fidelity in the reconstructed neuronal membranes and avoiding common issues such as over-segmentation or loss of fine details. Furthermore, implicit neural representations provide a flexible framework capable of adapting to neurons with highly complex, tree-like morphologies, ensuring that even intricate dendritic branches are accurately preserved. Our evaluations demonstrate that this approach consistently outperforms traditional methods in terms of surface continuity, topological accuracy, and robustness across diverse datasets.
In addition, conventional approaches for neuronal reconstruction often involve multiple intermediate processing steps, such as manual tracing, automated segmentation, and meshing via neuronal skeleton representations. While these methods have been widely used, they are prone to artifacts such as discontinuities at segment junctions and inaccuracies due to the discrete nature of voxel-based processing. Furthermore, methods relying on explicit mesh generation from skeletons can lead to loss of detail, particularly in highly branched structures or regions with complex curvatures. In contrast, our proposed end-to-end approach allows direct reconstruction of neuronal surfaces from raw image stacks, eliminating the need for intermediate data conversion. The use of SDF provides an inherent robustness against noise and inconsistencies in microscopy images, leading to a more stable and precise reconstruction process. However, one potential limitation of our method is its dependency on neural network training, which may introduce computational overhead. While this approach significantly improves reconstruction accuracy, further optimization of inference speed and memory efficiency will be necessary for large-scale applications in neuroscience research.
It has been noted that different microscopy techniques may influence our method’s performance. While confocal microscopy shares similar imaging properties with the data used in this work, its increased precision may come at the cost of higher resource consumption. This may necessitate adjustments in parameter configurations to accommodate larger data sizes and computational demands. Regarding light-sheet microscopy, although it offers a broader dynamic range and is commonly used for time-lapse imaging, our focus on static reconstructions minimizes the impact of its dynamic capabilities on our method. For electron microscopy (EM), its extremely high resolution introduces unique challenges such as noise and data sparsity. Adapting our approach to handle EM data would likely require specialized modifications, which we consider an important avenue for future research aimed at extending our method to more detailed reconstructions.
The ability to generate high-resolution, continuous neuronal surface meshes directly from image stacks opens new possibilities for neuroanatomical studies, functional simulations, and large-scale connectomics research. The improved accuracy in neuronal morphology reconstruction can benefit downstream applications such as synapse modeling, electrophysiological simulations, and brain connectivity analysis. Additionally, the adaptability of implicit neural representations suggests potential extensions to other biological structures beyond neurons, including glial cells and vascular networks. Future work may focus on integrating our approach with multimodal imaging techniques to further improve reconstruction accuracy, as well as extending the method to handle large-scale neuronal datasets with higher computational efficiency. Moreover, incorporating domain-specific constraints and neuroscientific priors into the neural network architecture could further improve the biological plausibility of the reconstructed models, ultimately leading to more accurate representations of neuronal structures in computational neuroscience studies.

6. Conclusions

In summary, our method provides a new and effective solution for morphology reconstruction using implicit neural representations. It addresses several limitations of traditional approaches, including segmentation errors, discontinuities at segment junctions, and loss of detail in complex regions. By directly reconstructing neuronal surfaces from raw image stacks through an SDF-based neural network combined with image and graph feature encoding, our approach offers a robust, accurate, and adaptable framework for capturing intricate neuronal structures.

Author Contributions

Conceptualization, X.Z.; methodology, X.Z. and Y.Z.; algorithm, X.Z. and Y.Z.; validation, X.Z. and Y.Z.; formal analysis, X.Z. and Y.Z.; investigation, Y.Z.; resources, X.Z.; writing—original draft preparation, X.Z. and Y.Z.; writing—review and editing, X.Z., Y.Z. and L.Y.; visualization, Y.Z.; supervision, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are openly available in BigNeuron at https://doi.org/10.1016/j.neuron.2015.06.036, reference number [64].

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MLP    Multi-layer perceptron
INR    Implicit neural representation
SDF    Signed distance function
GCN    Graph convolutional network
GAT    Graph attention network
BCE    Binary Cross-Entropy
3D     Three-dimensional
2D     Two-dimensional
AFM    Automation-Following-Manual

References

  1. Senft, S.L. A Brief History of Neuronal Reconstruction. Neuroinformatics 2011, 9, 119–128. [Google Scholar] [CrossRef]
  2. Markram, H.; Muller, E.; Ramaswamy, S.; Reimann, M.; Abdellah, M.; Sanchez, C.; Ailamaki, A.; Alonso-Nanclares, L.; Antille, N.; Arsever, S.; et al. Reconstruction and Simulation of Neocortical Microcircuitry. Cell 2015, 163, 456–492. [Google Scholar] [CrossRef]
  3. Kim, J.; Zhao, T.; Petralia, R.S.; Yu, Y.; Peng, H.; Myers, E.; Magee, J.C. mGRASP Enables Mapping Mammalian Synaptic Connectivity with Light Microscopy. Nat. Methods 2012, 9, 96–102. [Google Scholar] [CrossRef]
  4. Parekh, R.; Ascoli, G. Neuronal Morphology Goes Digital: A Research Hub for Cellular and System Neuroscience. Neuron 2013, 77, 1017–1038. [Google Scholar] [CrossRef]
  5. Iascone, D.M.; Li, Y.; Sümbül, U.; Doron, M.; Chen, H.; Andreu, V.; Goudy, F.; Blockus, H.; Abbott, L.F.; Segev, I.; et al. Whole-Neuron Synaptic Mapping Reveals Spatially Precise Excitatory/Inhibitory Balance Limiting Dendritic and Somatic Spiking. Neuron 2020, 106, 566–578.e8. [Google Scholar] [CrossRef]
  6. Mörschel, K.; Breit, M.; Queisser, G. Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph. Neuroinformatics 2017, 15, 247–269. [Google Scholar] [CrossRef]
  7. Hepburn, I.; Chen, W.; Wils, S.; De Schutter, E. STEPS: Efficient Simulation of Stochastic Reaction–diffusion Models in Realistic Morphologies. BMC Syst. Biol. 2012, 6, 36. [Google Scholar] [CrossRef]
  8. Yuan, X.; Trachtenberg, J.T.; Potter, S.M.; Roysam, B. MDL Constrained 3-D Grayscale Skeletonization Algorithm for Automated Extraction of Dendrites and Spines from Fluorescence Confocal Images. Neuroinformatics 2009, 7, 213–232. [Google Scholar] [CrossRef]
  9. Al-Kofahi, K.; Lasek, S.; Szarowski, D.; Pace, C.; Nagy, G.; Turner, J.; Roysam, B. Rapid Automated Three-dimensional Tracing of Neurons from Confocal Image Stacks. IEEE Trans. Inf. Technol. Biomed. 2002, 6, 171–187. [Google Scholar] [CrossRef]
  10. Rodriguez, A.; Ehlenberger, D.B.; Dickstein, D.L.; Hof, P.R.; Wearne, S.L. Automated Three-Dimensional Detection and Shape Classification of Dendritic Spines from Fluorescence Microscopy Images. PLoS ONE 2008, 3, e1997. [Google Scholar] [CrossRef]
  11. Rodriguez, A.; Ehlenberger, D.B.; Hof, P.R.; Wearne, S.L. Three-dimensional Neuron Tracing by Voxel Scooping. J. Neurosci. Methods 2009, 184, 169–175. [Google Scholar] [CrossRef]
  12. Choromanska, A.; Chang, S.F.; Yuste, R. Automatic Reconstruction of Neural Morphologies with Multi-Scale Tracking. Front. Neural Circuits 2012, 6, 25. [Google Scholar] [CrossRef]
  13. Wang, Y.; Narayanaswamy, A.; Tsai, C.L.; Roysam, B. A Broadly Applicable 3-D Neuron Tracing Method Based on Open-Curve Snake. Neuroinformatics 2011, 9, 193–217. [Google Scholar] [CrossRef]
  14. Cohen, L.D.; Kimmel, R. Global Minimum for Active Contour Models: A Minimal Path Approach. Int. J. Comput. Vis. 1997, 24, 57–78. [Google Scholar] [CrossRef]
  15. Peng, H.; Long, F.; Myers, G. Automatic 3D Neuron Tracing Using All-path Pruning. Bioinformatics 2011, 27, i239–i247. [Google Scholar] [CrossRef]
  16. Xiao, H.; Peng, H. APP2: Automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree. Bioinformatics 2013, 29, 1448–1454. [Google Scholar] [CrossRef]
  17. Basu, S.; Condron, B.; Aksel, A.; Acton, S. Segmentation and Tracing of Single Neurons from 3D Confocal Microscope Images. IEEE Trans. Inf. Technol. Biomed. Publ. IEEE Eng. Med. Biol. Soc. 2012, 17, 319–335. [Google Scholar] [CrossRef]
  18. Liu, S.; Zhang, D.; Song, Y.; Peng, H.; Cai, W. Automated 3-D Neuron Tracing With Precise Branch Erasing and Confidence Controlled Back Tracking. IEEE Trans. Med. Imaging 2018, 37, 2441–2452. [Google Scholar] [CrossRef]
  19. Yang, J.; Hao, M.; Liu, X.; Wan, Z.; Zhong, N.; Peng, H. FMST: An Automatic Neuron Tracing Method Based on Fast Marching and Minimum Spanning Tree. Neuroinformatics 2019, 17, 185–196. [Google Scholar] [CrossRef] [PubMed]
  20. Dorkenwald, S.; Schneider-Mizell, C.M.; Brittain, D.; Halageri, A.; Jordan, C.; Kemnitz, N.; Castro, M.A.; Silversmith, W.; Maitin-Shephard, J.; Troidl, J.; et al. CAVE: Connectome Annotation Versioning Engine. bioRxiv 2023. [Google Scholar] [CrossRef]
  21. Wang, S.; Li, X.; Mitra, P.; Wang, Y. Topological Skeletonization and Tree-Summarization of Neurons Using Discrete Morse Theory. bioRxiv 2018. [Google Scholar] [CrossRef]
  22. Gala, R.; Chapeton, J.; Jitesh, J.; Bhavsar, C.; Stepanyants, A. Active Learning of Neuron Morphology for Accurate Automated Tracing of Neurites. Front. Neuroanat. 2014, 8, 37. [Google Scholar] [CrossRef]
  23. Wang, C.W.; Lee, Y.C.; Pradana, H.; Zhou, Z.; Peng, H. Ensemble Neuron Tracer for 3D Neuron Reconstruction. Neuroinformatics 2017, 15, 185–198. [Google Scholar] [CrossRef]
  24. Li, Q.; Shen, L. Neuron Segmentation Using 3D Wavelet Integrated Encoder–decoder Network. Bioinformatics 2022, 38, 809–817. [Google Scholar] [CrossRef]
  25. Wang, H.; Zhang, D.; Song, Y.; Liu, S.; Huang, H.; Chen, M.; Peng, H.; Cai, W. Multiscale Kernels for Enhanced U-Shaped Network to Improve 3D Neuron Tracing. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 1105–1113. [Google Scholar]
  26. Yang, B.; Liu, M.; Wang, Y.; Zhang, K.; Meijering, E. Structure-Guided Segmentation for 3D Neuron Reconstruction. IEEE Trans. Med. Imaging 2022, 41, 903–914. [Google Scholar] [CrossRef]
  27. Megjhani, M.; Rey-Villamizar, N.; Merouane, A.; Lu, Y.; Mukherjee, A.; Trett, K.; Chong, P.; Harris, C.; Shain, W.; Roysam, B. Population-scale three-dimensional reconstruction and quantitative profiling of microglia arbors. Bioinformatics 2015, 31, 2190–2198. [Google Scholar] [CrossRef]
  28. Becker, C.; Rigamonti, R.; Lepetit, V.; Fua, P. Supervised Feature Learning for Curvilinear Structure Segmentation. In Advanced Information Systems Engineering; Hutchison, D., Kanade, T., Kittler, J., Kleinberg, J.M., Mattern, F., Mitchell, J.C., Naor, M., Nierstrasz, O., Pandu Rangan, C., Steffen, B., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 526–533. [Google Scholar]
  29. Liu, Y.; Wang, G.; Ascoli, G.A.; Zhou, J.; Liu, L. Neuron Tracing from Light Microscopy Images: Automation, Deep Learning and Bench Testing. Bioinformatics 2022, 38, 5329–5339. [Google Scholar] [CrossRef]
  30. Manubens-Gil, L.; Zhou, Z.; Chen, H.; Ramanathan, A.; Liu, X.; Liu, Y.; Bria, A.; Gillette, T.; Ruan, Z.; Yang, J.; et al. BigNeuron: A Resource to Benchmark and Predict Performance of Algorithms for Automated Tracing of Neurons in Light Microscopy Datasets. Nat. Methods 2023, 20, 824–835. [Google Scholar] [CrossRef]
  31. Glaser, J.R.; Glaser, E.M. Neuron Imaging with Neurolucida—A PC-based System for Image Combining Microscopy. Comput. Med. Imaging Graph. 1990, 14, 307–317. [Google Scholar] [CrossRef]
  32. Gleeson, P.; Steuber, V.; Silver, R.A. neuroConstruct: A Tool for Modeling Networks of Neurons in 3D Space. Neuron 2007, 54, 219–235. [Google Scholar] [CrossRef] [PubMed]
  33. Eberhard, J.; Wanner, A.; Wittum, G. NeuGen: A Tool for The Generation of Realistic Morphology of Cortical Neurons and Neural Networks in 3D. Neurocomputing 2006, 70, 327–342. [Google Scholar] [CrossRef]
  34. Wilson, M.A.; Bhalla, U.S.; Uhley, J.D.; Bower, J.M. GENESIS: A System for Simulating Neural Networks. In Proceedings of the 2nd International Conference on Neural Information Processing Systems, Cambridge, MA, USA, 1 January 1988; NIPS’88; pp. 485–492. [Google Scholar]
  35. Brito, J.P.; Mata, S.; Bayona, S.; Pastor, L.; DeFelipe, J.; Benavides-Piccione, R. Neuronize: A Tool for Building Realistic Neuronal Cell Morphologies. Front. Neuroanat. 2013, 7, 15. [Google Scholar] [CrossRef]
  36. Garcia-Cantero, J.J.; Brito, J.P.; Mata, S.; Bayona, S.; Pastor, L. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement. Front. Neuroinform. 2017, 11, 38. [Google Scholar] [CrossRef]
  37. Abdellah, M.; Hernando, J.; Eilemann, S.; Lapere, S.; Antille, N.; Markram, H.; Schürmann, F. NeuroMorphoVis: A Collaborative Framework for Analysis and Visualization of Neuronal Morphology Skeletons Reconstructed from Microscopy Stacks. Bioinformatics 2018, 34, i574–i582. [Google Scholar] [CrossRef]
  38. Abdellah, M.; Guerrero, N.R.; Lapere, S.; Coggan, J.S.; Keller, D.; Coste, B.; Dagar, S.; Courcol, J.D.; Markram, H.; Schürmann, F. Interactive Visualization and Analysis of Morphological Skeletons of Brain Vasculature Networks with VessMorphoVis. Bioinformatics 2020, 36, i534–i541. [Google Scholar] [CrossRef]
  39. Abdellah, M.; Foni, A.; Zisis, E.; Guerrero, N.R.; Lapere, S.; Coggan, J.S.; Keller, D.; Markram, H.; Schürmann, F. Metaball Skinning of Synthetic Astroglial Morphologies into Realistic Mesh Models for in silico Simulations and Visual Analytics. Bioinformatics 2021, 37, i426–i433. [Google Scholar] [CrossRef]
  40. Zhu, X.; Liu, X.; Liu, S.; Shen, Y.; You, L.; Wang, Y. Robust Quasi-uniform Surface Meshing of Neuronal Morphology Using Line Skeleton-based Progressive Convolution Approximation. Front. Neuroinform. 2022, 16, 953930. [Google Scholar] [CrossRef]
  41. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. Commun. ACM 2022, 65, 99–106. [Google Scholar] [CrossRef]
  42. Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy Networks: Learning 3D Reconstruction in Function Space. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4455–4465. [Google Scholar]
  43. Park, J.J.; Florence, P.; Straub, J.; Newcombe, R.; Lovegrove, S. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 165–174. [Google Scholar]
  44. Zang, G.; Idoughi, R.; Li, R.; Wonka, P.; Heidrich, W. IntraTomo: Self-supervised Learning-based Tomography via Sinogram Synthesis and Prediction. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 1940–1950. [Google Scholar]
  45. Wu, Q.; Li, Y.; Sun, Y.; Zhou, Y.; Wei, H.; Yu, J.; Zhang, Y. An Arbitrary Scale Super-Resolution Approach for 3D MR Images via Implicit Neural Representation. IEEE J. Biomed. Health Inform. 2023, 27, 1004–1015. [Google Scholar] [CrossRef] [PubMed]
  46. Shen, L.; Pauly, J.; Xing, L. NeRP: Implicit Neural Representation Learning With Prior Embedding for Sparsely Sampled Image Reconstruction. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 770–782. [Google Scholar] [CrossRef] [PubMed]
  47. Wu, Q.; Li, Y.; Xu, L.; Feng, R.; Wei, H.; Yang, Q.; Yu, B.; Liu, X.; Yu, J.; Zhang, Y. IREM: High-Resolution Magnetic Resonance Image Reconstruction via Implicit Neural Representation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2021; De Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 12906, pp. 65–74.
  48. Sun, Y.; Liu, J.; Xie, M.; Wohlberg, B.; Kamilov, U. CoIL: Coordinate-Based Internal Learning for Tomographic Imaging. IEEE Trans. Comput. Imaging 2021, 7, 1400–1412.
  49. Wolterink, J.M.; Zwienenberg, J.C.; Brune, C. Implicit Neural Representations for Deformable Image Registration. In Proceedings of the 5th International Conference on Medical Imaging with Deep Learning (MIDL), Zurich, Switzerland, 6–8 July 2022; Konukoglu, E., Menze, B., Venkataraman, A., Baumgartner, C., Dou, Q., Albarqouni, S., Eds.; Proceedings of Machine Learning Research; PMLR, 2022; Volume 172, pp. 1349–1359.
  50. Sun, S.; Han, K.; You, C.; Tang, H.; Kong, D.; Naushad, J.; Yan, X.; Ma, H.; Khosravi, P.; Duncan, J.S.; et al. Medical Image Registration via Neural Fields. Med. Image Anal. 2024, 97, 103249.
  51. Yang, R.; Xiao, T.; Cheng, Y.; Cao, Q.; Qu, J.; Suo, J.; Dai, Q. SCI: A Spectrum Concentrated Implicit Neural Compression for Biomedical Data. Proc. AAAI Conf. Artif. Intell. 2023, 37, 4774–4782.
  52. Yang, R. TINC: Tree-Structured Implicit Neural Compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 18517–18526.
  53. Zhang, H.; Wang, R.; Zhang, J.; Li, C.; Yang, G.; Spincemaille, P.; Nguyen, T.B.; Wang, Y. NeRD: Neural Representation of Distribution for Medical Image Segmentation. arXiv 2021, arXiv:2103.04020.
  54. Wiesner, D.; Suk, J.; Dummer, S.; Svoboda, D.; Wolterink, J.M. Implicit Neural Representations for Generative Modeling of Living Cell Shapes. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2022; Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S., Eds.; Springer Nature: Cham, Switzerland, 2022; Volume 13434, pp. 58–67.
  55. Alblas, D.; Hofman, M.; Brune, C.; Yeung, K.K.; Wolterink, J.M. Implicit Neural Representations for Modeling of Abdominal Aortic Aneurysm Progression. In Functional Imaging and Modeling of the Heart; Bernard, O., Clarysse, P., Duchateau, N., Ohayon, J., Viallon, M., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 356–365.
  56. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  57. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph Attention Networks. In Proceedings of the International Conference on Learning Representations (ICLR), Vancouver, BC, Canada, 30 April–3 May 2018.
  58. Lorensen, W.E.; Cline, H.E. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. ACM SIGGRAPH Comput. Graph. 1987, 21, 163–169.
  59. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4490–4499.
  60. Li, Q.; Jia, X.; Zhou, J.; Shen, L.; Duan, J. Rediscovering BCE Loss for Uniform Classification. arXiv 2024, arXiv:2403.07289.
  61. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007.
  62. Li, X.; Sun, X.; Meng, Y.; Liang, J.; Wu, F.; Li, J. Dice Loss for Data-Imbalanced NLP Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 465–476.
  63. Peng, H.; Xie, P.; Liu, L.; Kuang, X.; Wang, Y.; Qu, L.; Gong, H.; Jiang, S.; Li, A.; Ruan, Z.; et al. Morphological Diversity of Single Neurons in Molecularly Defined Cell Types. Nature 2021, 598, 174–181.
  64. Peng, H.; Hawrylycz, M.; Roskams, J.; Hill, S.; Spruston, N.; Meijering, E.; Ascoli, G. BigNeuron: Large-Scale 3D Neuron Reconstruction from Optical Microscopy Images. Neuron 2015, 87, 252–256.
  65. Mancini, M.; Jones, D.K.; Palombo, M. Lossy Compression of Multidimensional Medical Images Using Sinusoidal Activation Networks: An Evaluation Study. In Computational Diffusion MRI; Cetin-Karayumak, S., Christiaens, D., Figini, M., Guevara, P., Pieciak, T., Powell, E., Rheault, F., Eds.; Springer Nature: Cham, Switzerland, 2022; Volume 13722, pp. 26–37.
  66. Sitzmann, V.; Martel, J.N.P.; Bergman, A.W.; Lindell, D.B.; Wetzstein, G. Implicit Neural Representations with Periodic Activation Functions. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, BC, Canada, 6–12 December 2020; Curran Associates Inc.: Red Hook, NY, USA, 2020.
  67. Gao, T.; Sun, S.; Liu, H.; Gao, H. Exploring the Impact of Activation Functions in Training Neural ODEs. In Proceedings of the Thirteenth International Conference on Learning Representations (ICLR), Singapore, 24–28 April 2025.
  68. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
Figure 1. Comparison of traditional methods and ours. (a) Framework of conventional neuronal reconstruction. (b) Our proposed reconstruction framework. (c) Examples of tracking results, showing projections of the skeleton files onto the whole-brain atlas.
Figure 2. During network training, the input consists of image stacks and their corresponding skeletons, which are processed by an image feature encoder and a graph feature encoder, respectively. The extracted features are then jointly fed into an MLP with 12 hidden layers, whose predictions are compared against the ground truth obtained by voxelizing [59] the skeletons. The resulting loss is backpropagated through the network to optimize the model.
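To make the pipeline in Figure 2 concrete, here is a minimal PyTorch-style sketch of a coordinate MLP with 12 hidden layers fed by fused image and graph features. The feature dimensions, layer width, activation, and BCE-style training target are illustrative assumptions; the paper's actual image and graph feature encoders and its full loss combination are not reproduced here.

```python
import torch
import torch.nn as nn

class NeuronINR(nn.Module):
    """Sketch of the implicit network in Figure 2: each query coordinate is
    concatenated with its image and skeleton-graph features and passed through
    an MLP with 12 hidden layers. All feature sizes are illustrative guesses."""

    def __init__(self, img_feat_dim=128, graph_feat_dim=64, hidden=256, n_hidden=12):
        super().__init__()
        in_dim = 3 + img_feat_dim + graph_feat_dim  # (x, y, z) + fused features
        layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
        for _ in range(n_hidden - 1):
            layers += [nn.Linear(hidden, hidden), nn.ReLU()]
        layers += [nn.Linear(hidden, 1)]  # scalar SDF / occupancy prediction
        self.mlp = nn.Sequential(*layers)

    def forward(self, xyz, img_feat, graph_feat):
        # xyz: (N, 3) query points; per-point features are concatenated
        return self.mlp(torch.cat([xyz, img_feat, graph_feat], dim=-1))

# One training step against a voxelized-skeleton ground truth (placeholder
# data; a BCE-style objective is assumed here for illustration, cf. [60]):
model = NeuronINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam optimizer, as in [68]
xyz = torch.rand(4096, 3)
img_feat = torch.rand(4096, 128)
graph_feat = torch.rand(4096, 64)
target = (torch.rand(4096, 1) > 0.5).float()  # placeholder voxel labels
loss = nn.BCEWithLogitsLoss()(model(xyz, img_feat, graph_feat), target)
opt.zero_grad(); loss.backward(); opt.step()
```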
Figure 3. A comparison of time consumption between different methods.
Figure 4. (a–c) Raw image stacks cropped from the dendritic region of Peng's dataset. (d–f) The skeletonized file of the tracked region, serving as the ground truth after voxelization. (g–i) The 3D neuron model reconstructed using our implicit representation method. (j–l) The reconstruction result using the DeepSDF method. (m–o) The reconstruction result using the Wiesner method.
Figure 5. (a–d) Raw image stacks from the gold166 dataset. (e–h) The skeletonized file of the tracked region, serving as the ground truth after voxelization. (i–l) The 3D neuron model reconstructed using our implicit representation method. (m–p) The reconstruction result using the DeepSDF method. (q–t) The reconstruction result using the Wiesner method.
Figure 6. Different morphological dendrites of neurons and their corresponding reconstructions. (a–c,g–i) Raw image stacks. (d–f,j–l) The 3D neuron models reconstructed using our implicit representation method.
Figure 7. (a,b) The original image stacks; the red frames mark cases where the effective (signal-containing) area occupies only a small proportion of the image. (c,d) The reconstruction results. (e,f) The optimized results after post-processing.
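The specific post-processing behind Figure 7(e,f) is not spelled out in this excerpt. A common cleanup when the effective area is small relative to the field of view is to retain only the largest connected component of the predicted occupancy; the snippet below sketches that step under this assumption.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(occ):
    """Retain only the largest connected component of a boolean occupancy
    grid. Whether this matches the paper's post-processing in Figure 7 is
    an assumption; it is shown here as a typical cleanup step."""
    labels, n = ndimage.label(occ)
    if n == 0:
        return occ  # nothing to keep
    sizes = ndimage.sum(occ, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```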
Figure 8. Neuron statistics. (a) Two-dimensional t-SNE plot of the skeletons. (b) L2 distance between INRs of neurons with similar structures.
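One plausible reading of the L2 distance in Figure 8b is the Euclidean distance between the flattened weight vectors of two trained INRs sharing the same architecture. Since the computation is not defined in this excerpt, the snippet below is an assumed interpretation (comparing the sampled SDF fields of the two networks would be an equally valid alternative).

```python
import torch

def inr_l2_distance(model_a, model_b):
    """Assumed reading of Figure 8b: Euclidean (L2) distance between the
    flattened parameter vectors of two trained INRs with identical layouts."""
    wa = torch.cat([p.detach().flatten() for p in model_a.parameters()])
    wb = torch.cat([p.detach().flatten() for p in model_b.parameters()])
    return torch.norm(wa - wb, p=2).item()
```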
Table 1. Quantitative analysis of the volumes reconstructed from the two datasets using different methods (mean ± SD).

Dataset | Original | Ours | DeepSDF | Wiesner
Peng | 25,247.3 ± 16,013.2 | 27,081.7 ± 10,657.1 | 1504.0 ± 1151.8 | 2928.3 ± 1280.1
gold166 | 4,869,980.7 ± 3,471,001.5 | 4,957,674.0 ± 99,378.0 | 523,663.0 ± 5,430,585.7 | 165,344.0 ± 17,911.9
Table 2. Quantitative analysis of the surfaces reconstructed from the two datasets using different methods (mean ± SD).

Dataset | Original | Ours | DeepSDF | Wiesner
Peng | 25,122.3 ± 8887.2 | 28,860.0 ± 5462.3 | 1483.6 ± 824.2 | 3942.0 ± 943.1
gold166 | 887,116.5 ± 591,552.0 | 761,935.0 ± 46,552.1 | 326,129.7 ± 246,538.4 | 25,914.5 ± 26,041.1
Table 3. The accuracy of volume reconstruction for the two datasets using different methods.

Dataset | Ours | DeepSDF | Wiesner
Peng | 92.7% | 5.9% | 11.6%
gold166 | 98.2% | 10.8% | 3.0%
Table 4. The accuracy of surface reconstruction for the two datasets using different methods.

Dataset | Ours | DeepSDF | Wiesner
Peng | 85.1% | 5.9% | 15.7%
gold166 | 85.9% | 36.8% | 2.9%
Table 5. Quantitative evaluation of 3D reconstruction performance on Peng's dataset using three complementary metrics: Chamfer-L1 distance (point-wise surface accuracy; lower is better), volumetric IoU (overall shape similarity; higher is better), and localized volumetric IoU (accuracy in critical regions; higher is better). Together, these metrics capture both global shape accuracy and fine-grained geometric detail.

Method | CD ↓ | IoU ↑ | Localized IoU ↑
Ours | 14.9 | 0.85 | 0.77
DeepSDF | 94.1 | 0.05 | 0.045
Wiesner | 84.3 | 0.10 | 0.09
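As a reference for how the metrics in Tables 5 and 6 can be computed, the sketch below evaluates the symmetric Chamfer-L1 distance on sampled surface points and the volumetric IoU on boolean occupancy grids. This is a minimal illustration rather than the paper's evaluation code; in particular, the mask defining the "critical regions" for the localized IoU is not specified here and is left as an assumed input.

```python
import numpy as np

def chamfer_l1(pts_a, pts_b):
    """Symmetric Chamfer-L1 distance between point sets of shape (N, 3) and
    (M, 3). Brute-force O(N*M) pairwise distances; fine for small samples."""
    d = np.abs(pts_a[:, None, :] - pts_b[None, :, :]).sum(-1)  # (N, M) L1 distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def volumetric_iou(occ_a, occ_b):
    """IoU between two boolean occupancy grids of identical shape."""
    inter = np.logical_and(occ_a, occ_b).sum()
    union = np.logical_or(occ_a, occ_b).sum()
    return inter / union if union > 0 else 1.0

def localized_iou(occ_a, occ_b, mask):
    """Localized IoU restricted to an assumed region mask, e.g., around
    thin branches; the actual mask choice is not defined in this excerpt."""
    return volumetric_iou(occ_a & mask, occ_b & mask)
```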
Table 6. Quantitative evaluation of 3D reconstruction performance on the gold166 dataset using the same three complementary metrics.

Method | CD ↓ | IoU ↑ | Localized IoU ↑
Ours | 14.1 | 0.90 | 0.81
DeepSDF | 63.2 | 0.08 | 0.07
Wiesner | 97.1 | 0.02 | 0.018