Article

A Lightweight Neural Network for Cell Segmentation Based on Attention Enhancement

1 School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 211116, China
2 Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
3 University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Information 2025, 16(4), 295; https://doi.org/10.3390/info16040295
Submission received: 11 December 2024 / Revised: 2 April 2025 / Accepted: 3 April 2025 / Published: 8 April 2025
(This article belongs to the Special Issue Disease Diagnosis Based on Medical Images and Signals)

Abstract

Deep neural networks have made significant strides in medical image segmentation tasks, but their large-scale parameters and high computational complexity limit their applicability on resource-constrained edge devices. To address this challenge, this paper introduces a lightweight cell nucleus segmentation network called Attention-Enhanced U-Net (AttE-Unet). AttE-Unet enhances the network's feature extraction capabilities through an attention mechanism and combines the strengths of deep learning with traditional image filtering algorithms, while substantially reducing computational and storage demands. Experimental results on the PanNuke dataset demonstrate that AttE-Unet maintains a high level of segmentation performance despite a significant reduction in model size, with the number of parameters and floating-point operations (FLOPS) reduced to 1.57% and 0.1% of the original model, respectively. Specifically, the F1 score and Intersection over Union (IoU) score are 91.7% and 89.3% of the original model's scores. Furthermore, deployment on an MCU consumes only 2.09 MB of Flash and 1.38 MB of RAM, highlighting the model's lightweight nature and its potential for practical deployment as a medical image segmentation solution on edge devices.

1. Introduction

Medical image segmentation technology plays a crucial role in the medical diagnosis process, particularly in the qualitative annotation and analysis of cell images, which significantly impacts a doctor’s judgment of a patient’s condition [1,2]. Accurate cell segmentation is crucial not only for academic research, such as quantifying cell morphology, tracking cell proliferation, and identifying disease-associated phenotypes, but also for facilitating advanced diagnostic tools within clinical settings. By automating the identification and delineation of cells, segmentation algorithms can significantly reduce the workload of pathologists, enhance diagnostic consistency, and support more rapid and precise medical decision-making. Moreover, as healthcare systems move toward greater digitalization and intelligent automation, cell segmentation becomes a pivotal component in developing robust computer-assisted diagnostic platforms. These platforms enable telemedicine and remote consultation, particularly beneficial for underserved regions where access to trained specialists and advanced equipment is limited. Integrating accurate and efficient cell segmentation into medical imaging workflows drives the transition toward smarter, more accessible, and more equitable healthcare solutions.
However, traditional cell labeling relies heavily on the doctor’s personal experience, making it time-consuming and susceptible to subjective bias [3]. As a result, automated medical image segmentation algorithms have emerged as an effective alternative, offering significant value in supporting diagnostic decisions. However, most medical image processing algorithms require substantial computing power and entail high deployment costs [4,5,6]. Therefore, designing a lightweight algorithm that can be deployed on edge devices for auxiliary diagnosis is of great practical importance, as it can help mitigate the scarcity of medical resources [7].
Medical image segmentation methods can be broadly categorized into two types: traditional methods and deep learning methods [8]. Traditional medical image segmentation approaches offer good interpretability and have low computational complexity, but they generally lack versatility and often fail to meet clinical needs when used independently [9,10]. In contrast, the use of deep learning methods to analyze medical imaging data has seen rapid growth [11,12]. Deep learning segmentation algorithms typically achieve higher accuracy and efficiency compared to traditional methods, making them the mainstream approach for medical image segmentation [13,14,15].
To date, convolutional neural networks (CNNs) and other deep learning techniques have been extensively applied to various medical image analysis tasks. These methods not only provide a high-performance computer-aided diagnosis (CAD) system framework but also offer solutions for other medical image processing subtasks [16,17]. Medical image processing algorithms can be divided into three technical routes: regression algorithms, target detection algorithms, and semantic segmentation algorithms. Among these, semantic segmentation algorithms implement an end-to-end approach that outputs pixel-level labels corresponding to the size of the input image. Key examples include U-Net [22] and its CNN-based variants [18,19,20,21], such as UNet++ [23], RU-Net [24], and Attention U-Net [25], as well as Transformer-inspired network structures like MedT [26,27]. These techniques are commonly evaluated on public benchmarks designed for specific clinical tasks. For instance, CVC-ClinicDB [28] is widely used for polyp segmentation, while the 2018 Data Science Bowl (DSB) [29] focuses on nuclei segmentation in microscopic images. ISIC-2018 [30] is employed for delineating skin lesion boundaries to aid early melanoma detection, SegPC-2021 [31] targets multi-class segmentation of Myeloma Plasma cells, and BraTS-2021 [32] is used for brain tumor segmentation, providing multi-class volumetric labels, including GD-enhancing tumor, invaded tissue, and necrotic core. Together with PanNuke [33] and MoNuSeg [34], these datasets collectively drive advancements in task-specific model development.
Based on the U-Net architecture, UNet++ introduces enhanced skip connections, effectively reducing the semantic gap between feature maps. Additionally, RU-Net incorporates the principles of recurrent convolutional neural networks and residual networks into the U-Net framework, significantly improving the network’s feature extraction capabilities. Similarly, Attention U-Net integrates an attention mechanism into the U-Net structure, which enhances the model’s ability to focus on relevant areas of the image, particularly in scenarios with complex backgrounds. The MedT network, which is based on the Transformer mechanism, demonstrates an improvement of 0.06% to 2.19% compared to traditional U-Net and its variants on public datasets. Their mean Intersection over Union (mIoU) scores range from approximately 0.6 to 0.9, with parameter sizes typically varying between 2.6 million and 35 million. However, although they have achieved excellent performance on cell segmentation tasks, these deep learning networks require a large number of parameters and high computational complexity, making them challenging to deploy on edge devices. Lightweight networks, on the other hand, require fewer computational and storage resources, enabling them to quickly and accurately process large volumes of medical image data. This capability is crucial for rapid clinical disease screening and holds significant practical value for medical research and clinical auxiliary diagnosis.
To address these challenges, this paper proposes a lightweight, attention-enhanced cell nucleus segmentation network for small medical datasets, named Attention-Enhanced U-Net (AttE-Unet). By proposing and introducing a lightweight attention-enhancing branch specifically designed to emphasize prior information, this network combines the strengths of deep learning and traditional image segmentation methods, compensating for the limited feature extraction capabilities of lightweight networks by incorporating prior information from traditional image filtering algorithms. Unlike conventional U-Net variants that heavily rely on large parameter sets, our approach leverages an easily interpretable, filter-based attention mechanism to capture critical cell boundary features. This strategy substantially reduces model parameters while preserving segmentation accuracy and cell integrity. The proposed method was trained and validated on the PanNuke cell segmentation public dataset and evaluated on the STM32 MCU [35,36].
The remainder of this paper is organized as follows: Section 2 introduces the proposed attention-enhanced cell nucleus segmentation network for small medical datasets, AttE-Unet; Section 3 presents the experimental setup, methods, and results; and Section 4 reports the findings and conclusions.

2. AttE-Unet Structure Introduction

In a previous study [37], we established AttE-Unet as a high-accuracy solution for medical cell segmentation. This paper transforms the architecture into a resource-efficient variant through systematic compression, enabling deployment on memory-limited medical edge devices and demonstrating its lightweight potential. As shown in Figure 1, the AttE-Unet is divided into two main branches: a lightweight feature encoding and decoding branch, and an edge attention enhancement branch. In the lightweight feature encoding and decoding branch, the original two convolutions per layer are replaced with one, reducing the number of channels in the feature maps at each level of the encoder. Depthwise separable convolution is also utilized to decrease the number of parameters and the computational complexity of the network. The attention enhancement branch supplements prior information to each layer of the feature encoding branch, enhancing its feature extraction capability through designed filters and fusion gates. In the cell segmentation task, a multi-stage Canny filter is utilized along with a learnable gated fusion (GF) unit to control the fusion ratio of edge information and features.

2.1. Lightweight Feature Encoding and Decoding Branch

To achieve effective initial disease screening while maintaining a certain level of prediction accuracy, reducing the input size of the model is beneficial for decreasing the parameter count. In this study, the input images were downsampled to 64 × 64 pixels, and the corresponding semantic labels were adjusted to the same size. In the Attention U-Net, each level of the encoder and decoder performs two convolution operations; in this study, these were reduced to a single convolution per level, and the number of channels in the feature maps at each encoder level was also reduced. Additionally, the convolutional block was replaced with a depthwise separable convolution to minimize the number of network parameters and the computational complexity.
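As a concrete illustration, the following is a minimal PyTorch sketch of such a depthwise separable convolution block; the class name and the use of batch normalization and ReLU are assumptions made for the example, since the paper does not specify these details. The trailing usage lines mirror the first encoder level in Table 1 (a 4-channel input mapped to 7 channels at 64 × 64 resolution).

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """One lightweight encoder/decoder convolution: a per-channel (depthwise)
    3 x 3 convolution followed by a 1 x 1 pointwise convolution mixing channels."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(out_channels)   # normalization and activation are assumed
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

# Example: first encoder level, 4 input channels (RGB + edge map) -> 7 channels
block1 = DepthwiseSeparableConv(4, 7)
features = block1(torch.randn(1, 4, 64, 64))   # output shape: (1, 7, 64, 64)
```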
To compensate for the reduction in feature extraction capability caused by the significant decrease in computation, an edge attention-enhanced branch was introduced. The feature map generated by the branch was concatenated with the corresponding encoder-level feature map. The essence of this innovation lies in replacing the original two data-driven deep filters in the feature extraction branch with one lightweight data-driven filter and one edge attention-based filter. These improvements leverage edge prior knowledge to enhance the model’s feature extraction capabilities, significantly reduce computational complexity, and ease the learning process on small medical datasets, thereby substantially improving the convergence of models trained on such datasets.
The lightweight feature encoding–decoding branch progressively encodes high-level semantic information from the original input image and decodes it step by step into a semantic label map of the same size as the input image. The encoder follows a specific process: after receiving the input image, depthwise separable convolutions are applied, extracting features using convolutional filters at each level. During each encoding stage, the number of channels is expanded, while max pooling is used to downscale the feature map. Additionally, the output from each level of the edge filtering module is concatenated with the corresponding encoder feature map during the encoding process. This integration serves as an additional channel, providing strong prior information that contributes to the gradually deepening encoding process.
The decoder consists of upsampling layers, depthwise separable convolution layers, and cross-layer fusion pathways controlled by attention fusion gates. The fusion gates have been proven to effectively fuse features across layers in various datasets. While this study retains the use of the soft attention mechanism, it introduces structural modifications to the feature extraction layers of the encoder–decoder. Specifically, the original two convolutional blocks per layer are reduced to one, and depthwise separable convolutions are adopted. The incorporation of strong prior edge features further supplements this reduction, significantly decreasing the number of network parameters while enhancing the model’s ability to combine features. The parameters of each network layer in the lightweight feature encoder and decoder are detailed in Table 1 and Table 2.
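For reference, the following is a minimal PyTorch sketch of an additive soft attention gate of the kind retained from Attention U-Net; the class name, channel arguments, and the assumption that the gating signal has already been upsampled to the skip feature's resolution are illustrative choices rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive soft attention gate (as in Attention U-Net): the decoder's gating
    signal g re-weights the encoder skip feature x before cross-layer fusion."""
    def __init__(self, g_channels: int, x_channels: int, inter_channels: int):
        super().__init__()
        self.w_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed to share the same spatial size (upsample g beforehand if not)
        attn = self.sigmoid(self.psi(self.relu(self.w_g(g) + self.w_x(x))))
        return x * attn    # attention coefficients in [0, 1] gate the skip features
```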

2.2. Attention Enhancement Branch

Compared to the U-Net, which concatenates feature maps directly between the encoder and decoder, the attention-based feature encoding–decoding branch is widely recognized for its stronger contextual understanding, enabling the encoder to better supply edge details to the decoder. However, attention-based network architectures come with high computational complexity, and reducing their parameter count can hinder the learning process of the attention mechanism, creating a challenge in balancing computational complexity and accuracy. Additionally, attention-based models are often validated on large-scale segmentation datasets, where they can more easily learn the long-range relationships between pixels. In contrast, medical imaging datasets are typically small, and learning attention-based knowledge from a limited number of samples is more difficult. Under these conditions, reducing network complexity and incorporating attention gates into the backbone can result in decreased accuracy, manifested as rough foreground edges and internal holes in segmentation results, especially on cell segmentation datasets.
To address this issue, an edge attention enhancement branch is proposed, with its output integrated as a strong prior into the feature extraction network. To maintain low computational complexity and algorithm interpretability, a traditional image edge detection filter is chosen for the edge attention enhancement module. The filter offers high interpretability, strong robustness, and low computational complexity, making it ideal for lightweight networks. Additionally, learnable feature map gated fusion units (GF) are introduced to control the amount of edge prior information fused into the backbone network, allowing the model to adapt to the varying distributions of different datasets.
In practical applications, the selection of edge filters should be based on the specific requirements of the task. In cell segmentation datasets, the difference between non-cellular structures and target cell structures in RGB space is minimal, making single-stage filtering algorithms less effective in resisting noise. Therefore, this study employs a multi-stage Canny filter as the edge detection filter, whose performance on the cell segmentation dataset is shown in Figure 2. The Canny filter offers advantages of strong robustness, high accuracy, well-established algorithms, and low computational complexity. The process consists of four steps: (1) applying a Gaussian filter to smooth the image and reduce noise, (2) using the Sobel operator to compute the gradient direction and detect edges, (3) applying non-maximum suppression to eliminate coarse edges and retain fine edges, and (4) using double-threshold edge linking to reduce noise interference. Afterward, the original image is processed by the Canny filter and passed through average pooling layers, resulting in feature maps with strong edge priors, each corresponding to one of the encoder feature levels in the feature encoding–decoding branch. The feature maps containing edge information are weighted by the gated fusion unit (GF) and concatenated with the corresponding encoder feature maps, thus integrating into the feature encoding–decoding process. The parameters of each layer of the edge attention enhancement network are shown in Table 3, and the process for feature map fusion in the edge detection branch is represented by Algorithm 1.
Algorithm 1. Feature Map Fusion Process of the Edge Attention Enhancement Branch
Input: Canny edge map x_0 ∈ R^(1 × H × W) computed from the original image
Output: Fused feature maps y_k, where k = 1, 2, 3, …
# Average pooling across height and width to obtain one edge feature map per encoder level
for k = 1, 2, 3, …
        for i = 0 to H/2^k − 1
                for j = 0 to W/2^k − 1
                        x[i, j, k] = mean of the 2 × 2 neighborhood at (2i, 2j) in x_(k−1)    # 2 × 2 average pooling, stride 2
                end for
        end for
end for
# Fusion of edge features with the gated fusion unit (GF)
for k = 1, 2, 3, …
        y[k] = α[k] · x[k]    # weighted fusion through the GF unit
end for
Definitions:
x_k ∈ R^(1 × H/2^k × W/2^k): edge feature map after average pooling at level k.
y_k ∈ R^(1 × H/2^k × W/2^k): fused feature map after modulation by the fusion gate (GF).
x[i, j, k]: pixel value at coordinate (i, j) in the feature map at level k.
y[i, j, k]: pixel value at coordinate (i, j) in the fused feature map at level k.
α[k]: fusion weight for level k, provided by the GF unit.
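To make the branch concrete, the following is a minimal Python sketch using OpenCV and PyTorch; the Canny thresholds (50, 150), the class name, the default of five pyramid levels, and the assignment of levels to encoder stages are assumptions made for illustration rather than values fixed by the paper.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAttentionBranch(nn.Module):
    """Edge attention enhancement branch sketch: multi-stage Canny edge map,
    repeated 2 x 2 average pooling, and one learnable fusion weight (GF) per level."""
    def __init__(self, num_levels: int = 5):
        super().__init__()
        # one learnable fusion weight alpha_k per encoder level (the GF unit)
        self.alpha = nn.Parameter(torch.ones(num_levels))

    def forward(self, image_uint8: np.ndarray):
        # image_uint8: H x W x 3 uint8 RGB image
        gray = cv2.cvtColor(image_uint8, cv2.COLOR_RGB2GRAY)
        edges = cv2.Canny(gray, 50, 150)               # multi-stage Canny filter (thresholds assumed)
        e = torch.from_numpy(edges).float() / 255.0    # normalize edge map to [0, 1]
        e = e.unsqueeze(0).unsqueeze(0)                # shape: 1 x 1 x H x W
        fused = []
        for k in range(len(self.alpha)):
            fused.append(self.alpha[k] * e)            # weighted by the GF unit at this level
            e = F.avg_pool2d(e, kernel_size=2)         # halve H and W for the next level
        return fused

# Example: a 64 x 64 input yields weighted edge maps at 64, 32, 16, 8, and 4 pixels
branch = EdgeAttentionBranch()
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # dummy RGB input
edge_maps = branch(img)
```

Each returned map would then be concatenated with the encoder feature map of matching resolution, as listed in the Catenate rows of Table 1.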
Compared with Attention U-Net, the lightweight structure of AttE-Unet effectively shifts pure data-driven deep learning towards incorporating additional prior information. This enhancement not only improves the feature extraction capability but also significantly reduces the parameter count and computational complexity of the model. As a result, AttE-Unet enhances the accuracy of cell segmentation while reducing model complexity, leading to faster convergence during training. Additionally, the reduced data requirements make the proposed model particularly suitable for medical applications, considering the limited sample sizes and the high cost and difficulty associated with medical data collection. Notably, the proposed network structure is highly adaptable, as the attention enhancement branch can be modified to include different filter modules or pretrained networks to suit a variety of tasks.

3. Results and Discussion

3.1. Experimental Setup

We employed the PanNuke public dataset to evaluate our proposed method. The dataset was split into training, validation, and testing sets in an 8:1:1 ratio. PanNuke comprises 481 images covering 19 different tissue types. The 481 cropped images (256 × 256 pixels) were sampled from over 20,000 whole-slide images originating from multiple sources and magnifications. Collectively, PanNuke contains 205,343 annotated nuclei, each with a corresponding instance segmentation mask. For our study, we selected cancer cells and lymphocytes as the positive class, while considering all other regions as background, thereby converting the multi-class instance segmentation annotations into a binary semantic segmentation task. This modification ensures a consistent comparison between our proposed method and baseline networks. To mitigate the limited size typically found in medical image datasets and to enhance model generalization, we applied several common data augmentation techniques with a probability-based approach on the training set. These techniques include geometric transformations, occlusion and noise, and brightness and contrast adjustments. Each original training sample was augmented up to 10 times, leading to a substantially larger and more diverse training set. Additionally, image pixels were normalized from the [0, 255] range to [0, 1], and the output feature maps of the proposed edge detection module were similarly normalized. This consistent scaling satisfies layer normalization requirements throughout the network.
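A possible augmentation pipeline along these lines is sketched below with the Albumentations library; the specific transforms, probabilities, and parameter values are illustrative assumptions, not the exact settings used in this study.

```python
import numpy as np
import albumentations as A

# Probability-based augmentations covering geometric transforms, occlusion/noise,
# and brightness/contrast changes; image and mask are transformed jointly.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=30, p=0.5),               # geometric transformation
    A.CoarseDropout(p=0.3),                  # occlusion
    A.GaussNoise(p=0.3),                     # noise
    A.RandomBrightnessContrast(p=0.5),       # brightness and contrast adjustment
])

# Dummy 256 x 256 RGB image and binary mask standing in for a PanNuke sample
image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
mask = np.random.randint(0, 2, (256, 256), dtype=np.uint8)

augmented = augment(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]
```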
Our experiments were conducted on a system equipped with four NVIDIA (Santa Clara, CA, USA) GeForce RTX 3090 GPUs, an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40 GHz, and 125 GB of RAM, using the PyTorch 2.6.0 deep learning framework. The dataset images were first resized to 64 × 64 pixels, and normalization was applied as a preprocessing step. For the training set, online data augmentation was conducted before feeding the data into the AttE-UNet model. We employed an initial learning rate of 0.001 and trained for a maximum of 200 epochs, using the Adam optimizer with a batch size of 40. The model was trained by computing the loss function and optimizing the parameters until a stopping criterion was reached: if the validation loss failed to decrease for five consecutive epochs, training was terminated to prevent overfitting and to maintain strong generalization on unseen data.
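The training procedure can be summarized by the sketch below; only the hyperparameters stated in the text (Adam, initial learning rate 0.001, up to 200 epochs, early-stopping patience of five epochs, binary cross-entropy loss as described in the following paragraphs) are taken from the paper, while the function signature and bookkeeping details are assumptions.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, max_epochs=200, patience=5, lr=1e-3):
    """Training loop sketch: Adam optimizer, binary cross-entropy loss, and early
    stopping when the validation loss does not improve for `patience` epochs."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCEWithLogitsLoss()   # assumes the model outputs raw logits
    best_val, epochs_without_improvement = float("inf"), 0

    for epoch in range(max_epochs):
        model.train()
        for images, masks in train_loader:   # masks: float tensors shaped like the output
            images, masks = images.to(device), masks.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            optimizer.step()

        model.eval()
        val_loss, n_batches = 0.0, 0
        with torch.no_grad():
            for images, masks in val_loader:
                images, masks = images.to(device), masks.to(device)
                val_loss += criterion(model(images), masks).item()
                n_batches += 1
        val_loss /= max(n_batches, 1)

        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break   # early stopping: no improvement for `patience` consecutive epochs
```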
For the test set, similar preprocessing was applied without data augmentation, after which the trained model parameters were used to generate output. The segmentation task in this study is a binary classification problem, where 0 represents the background and 1 represents the cells to be segmented. Therefore, binary cross-entropy between the predicted labels and the ground truth labels was used as the loss function. For model evaluation, the F1 score and Intersection over Union (IoU) were used as accuracy metrics, while FLOPS and the number of parameters were used to evaluate computational complexity. The F1 score was particularly employed to address class imbalance issues between positive and negative samples. IoU was used to measure the overlap between the predicted and ground truth boundaries.
The F1 score, defined as F1 = 2 × Precision × Recall / (Precision + Recall), where Precision = True Positives / (True Positives + False Positives) and Recall = True Positives / (True Positives + False Negatives), is a metric used to evaluate the accuracy of a model, particularly in classification and segmentation tasks. It is the harmonic mean of precision and recall, providing a balance between the two. The F1 score ranges from 0 to 1, where 1 indicates perfect precision and recall and 0 indicates the worst performance. Intersection over Union (IoU), also known as the Jaccard index, is a metric used to evaluate the accuracy of an object detector or segmentation model by comparing the predicted segmentation mask with the ground truth mask. The intersection is the area where the predicted segmentation and the ground truth overlap, while the union is the total area covered by both. The IoU is calculated as IoU = Area of Intersection / Area of Union, where IoU = 1 indicates perfect overlap between the predicted segmentation and the ground truth and IoU = 0 indicates no overlap.
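In code, both metrics can be computed from binary masks as in the short sketch below, assuming a hypothetical threshold of 0.5 for binarizing the predicted probabilities.

```python
import torch

def f1_and_iou(pred, target, threshold=0.5, eps=1e-7):
    """Compute F1 and IoU for binary segmentation.
    pred: predicted probabilities in [0, 1]; target: binary ground-truth mask."""
    pred_bin = (pred > threshold).float()
    tp = (pred_bin * target).sum()            # true positives
    fp = (pred_bin * (1 - target)).sum()      # false positives
    fn = ((1 - pred_bin) * target).sum()      # false negatives

    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)

    intersection = tp
    union = tp + fp + fn
    iou = intersection / (union + eps)
    return f1.item(), iou.item()

# Example with dummy tensors
pred = torch.rand(1, 1, 64, 64)                        # predicted probabilities
target = (torch.rand(1, 1, 64, 64) > 0.5).float()      # dummy ground-truth mask
f1, iou = f1_and_iou(pred, target)
```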

3.2. Algorithm Evaluation

The baseline networks used for comparison with the proposed lightweight AttE-UNet include the original Attention U-Net, U-Net, UNet++, and a lightweight version without the attention enhancement branch (referred to as Attention U-Net*). The results on the PanNuke public dataset are presented in Table 4. Although AttE-UNet shows slightly lower F1 and IoU scores compared to U-Net and UNet++, it still maintains a reasonable level of accuracy. The AttE-UNet achieves a drastic reduction in both FLOPS and parameter count, using less than 0.1% of the computational cost of U-Net and UNet++. Due to the lightweight feature encoding and decoding branches and the incorporation of edge attention information, AttE-UNet achieves approximately 90% of the performance of the original Attention U-Net, with an IoU of 0.654 and an F1 score of 0.763, while using only 0.1% of the FLOPS and 1.57% of the parameters. Although accuracy has decreased due to the lightweight modifications, AttE-UNet remains effective for cell segmentation and fulfills the requirements for preliminary medical image screening, as evidenced by the segmentation results. Given the substantial reduction in computational complexity, this trade-off is worthwhile. Furthermore, compared to the Attention U-Net*, which only reduces the complexity of the encoder–decoder without incorporating the edge attention enhancement, the proposed lightweight AttE-UNet improves the IoU score by 0.078 and the F1 score by 0.067, with virtually no increase in FLOPS or parameters. This demonstrates that the attention enhancement significantly outperforms a purely lightweight version of the Attention U-Net.
The MedT network, which is based on a Transformer backbone, achieves an IoU that is 7.6% lower than that of the proposed method. This demonstrates that the proposed approach, by retaining the advantages of CNNs and incorporating edge attention enhancement, achieves superior feature extraction capability compared to Transformer-based networks on small medical datasets. The underperformance of MedT can be attributed to its large number of model parameters, which makes it challenging for the model to converge effectively on small-scale medical datasets. Transformers typically require extensive training data to capture the relationships between features effectively, and the limited amount of data in small medical datasets is insufficient for this purpose, resulting in suboptimal performance.
From the PR curve in Figure 3, it can be observed that there is little difference between the precision and recall of non-lightweight networks. In terms of recall, which is particularly important in the medical field, the Attention UNet (AttUnet) performs significantly better. The PR curve of the proposed AttE-UNet lies between that of the lightweight Attention UNet* (AttUnet*) and the non-lightweight networks, as indicated by the green line in the figure. It is evident that simply lightweighting the Attention U-Net results in poor precision and recall, making it unsuitable for precise medical screening tasks. In contrast, the edge feature fusion module proposed in this study demonstrates a significant improvement in the feature extraction capability of the lightweight encoder–decoder, enhancing the overall performance of the model while maintaining low computational complexity.
Figure 4 illustrates a challenging sample from the dataset, repeated in the first row for ease of comparison. The second row shows Grad-CAM visualizations, using heatmaps to highlight the regions that the trained network focuses on in its intermediate layers. The third row presents the predicted segmentation maps, with red boxes marking the differences among the various networks’ segmentations. Finally, the fourth row displays the ground truth labels, providing a reference for evaluating the segmentation results.
Figure 4f demonstrates that the proposed AttE-UNet effectively integrates edge prior information. The Grad-CAM visualizations in the second row highlight the target cells, indicating that the model focuses on both cell edges and interiors. The redder regions in the heatmap signify greater confidence in the segmentation. The predicted masks in the third row are relatively complete, without any internal gaps. Although the lightweight design of AttE-UNet causes some cell mask edges to appear slightly unclear with occasional merging, the addition of edge attention effectively compensates for this by enhancing the model’s ability to recognize edge features. As a result, the predicted mask contours align well with the ground truth, with no false positives or missed segments. AttE-UNet is lightweight yet provides accurate qualitative segmentation, making it suitable for deployment on edge devices for preliminary disease screening.
Comparing Figure 4c,f, both the original Attention U-Net and AttE-UNet adopt attention-based feature encoding–decoding branches, allowing the model to learn global relationships between cells. Compared to the CNNs without attention mechanisms in Figure 4a,b, the Grad-CAM visualizations in the second row show that these attention-based models focus better on target cells, resulting in accurate predictions without misclassifications in the third row. This demonstrates the necessity of the attention-based feature encoding–decoding branch, which enhances feature extraction capability at a relatively low computational cost.
In contrast, the MedT model in Figure 4d produces fragmented predictions of cells, with minor linear breaks in the cell masks near the seams between two image blocks. This indicates that the small medical dataset used is insufficient for training the global attention mechanism of the Transformer. Given the high computational complexity and lower accuracy of Transformer networks, the CNN architecture adopted in this study is more suitable for edge device deployment.
Figure 4e visualizes the Attention U-Net* network, which is derived from AttE-UNet by removing the edge attention enhancement branch. Without the supplementary edge attention prior, Attention U-Net* demonstrates insufficient recognition of fine details at the boundaries between cell nuclei. It results in a misclassification of non-cellular structures as target cells on the left side of the predicted map, reflecting its weaker feature extraction capability. It shows that simply reducing convolution operations, shrinking feature map sizes, and downsizing the channel count to achieve only 0.1% of the original computational cost can lead to severe degradation of feature extraction performance. In contrast, the proposed edge attention enhancement branch significantly improves the feature extraction capability of lightweight networks.
The proposed model incorporates fusion gates to precisely control the multi-level integration of edge prior feature maps with the original backbone feature extraction network. It significantly mitigates the loss of accuracy often seen in lightweight models and enhances the efficient and precise utilization of small medical sample datasets. The lightweight design of this model makes it well suited for initial disease screening in medical imaging and favorable for deployment on edge devices, effectively addressing challenges related to limited computational resources.
Although reducing a network’s size typically compromises accuracy, our primary innovation lies in introducing the attention-enhancing branch designed to counteract this trade-off by leveraging prior boundary information. First, by incorporating a dedicated branch that captures edge information (e.g., via the Canny filter), the proposed approach preserves critical cell boundary details despite a drastically reduced network size, ensuring sufficient discriminative power. Second, our AttE-Unet uses only 0.1% of the FLOPS and 1.57% of the parameters of the original Attention U-Net but still achieves an IoU of 0.654 and F1 of 0.763, whereas a parameter-matched baseline (Attention U-Net*) reports IoU and F1 scores of just 0.588 and 0.696, respectively (see Figure 4 and Table 4). This significant reduction in parameters with minimal accuracy loss underscores the effectiveness of our boundary-aware attention mechanism in resource-constrained scenarios. Moreover, although we employ the Canny filter in this study, our architecture can integrate alternative filters or pre-trained models, enhancing its adaptability to a diverse range of medical segmentation tasks. Consequently, the main contribution of our work is not simply a smaller network but a smaller network structure equipped with a novel attention-enhancing mechanism that maintains competitive segmentation accuracy under limited computational resources.
We conducted an in-depth comparative analysis of our model, AttE-Unet, with two state-of-the-art models, LKCell-L [38] and CellViT-SAM-H [39], on the PanNuke dataset. Despite AttE-Unet’s parameter count of only 0.548 million, significantly lower than the 800 million parameters of CellViT-SAM-H and 163.84 million of LKCell-L, it achieved an F1 score of 76.3%, demonstrating robust performance. This result highlights AttE-Unet’s ability to maintain high segmentation accuracy while remaining lightweight. In contrast, although CellViT-SAM-H achieved the highest F1 score of 86.8%, its substantial parameter count may limit its practical applicability in resource-constrained environments. Our model strikes an effective balance between resource efficiency and execution efficiency, making it particularly suitable for real-world applications with limited computational resources.

3.3. Hardware Implementation and Evaluation

This study utilized the ultra-low-power STM32U595AJH3 microcontroller from STMicroelectronics (Geneva, Switzerland), based on the Arm Cortex-M33 core, as the hardware platform for deploying the proposed lightweight AttE-UNet in an embedded system. The STM32U595AJH3 operates at a frequency of up to 160 MHz, features a single-precision floating-point unit, and supports all Arm single-precision data processing and DSP instructions. Its high-speed memory includes 4 MB of flash and 2.5 MB of SRAM, supplemented by multiple memory interfaces, a range of peripherals, and graphical capabilities. These features make the hardware particularly well suited for deploying medical image segmentation algorithms. However, the large number of parameters in the original Attention U-Net makes it difficult to deploy directly on existing edge microcontrollers. To address this, all baseline networks were exported as ONNX files and evaluated using the X-CUBE-AI toolkit from STMicroelectronics to determine the minimum Flash and RAM usage required for deployment.
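As an illustration of this deployment path, the sketch below exports a model to ONNX with PyTorch before it is imported into X-CUBE-AI; the stand-in model, file name, tensor names, and opset version are assumptions, and the X-CUBE-AI analysis itself is performed with ST's tooling rather than shown here.

```python
import torch
import torch.nn as nn

# Stand-in for the trained AttE-UNet (the real model class is defined elsewhere);
# any nn.Module accepting a 1 x 3 x 64 x 64 tensor can be exported the same way.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
model.eval()

dummy_input = torch.randn(1, 3, 64, 64)
torch.onnx.export(
    model,
    dummy_input,
    "atte_unet.onnx",          # ONNX file subsequently imported into X-CUBE-AI
    input_names=["image"],
    output_names=["mask"],
    opset_version=13,          # assumed opset; adjust to the toolkit's requirements
)
```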
The hardware resource usage of the proposed lightweight AttE-UNet and the evaluation results of each baseline network are summarized in Table 5. The large number of parameters and the substantial computational resources required by MedT make it unsuitable for deployment on edge devices in this scenario. Hence, MedT is not included in Table 5 for comparison. On the STM32U595AJH3 microcontroller, the AttE-UNet requires only 2.09 MB of Flash and 1.38 MB of RAM. Compared to the original, non-lightweight Attention U-Net, it represents just 1.57% of the Flash usage and 1.99% of the RAM usage. Meanwhile, the proposed lightweight AttE-UNet demonstrates significantly higher resource efficiency compared to U-Net and UNet++. Its lightweight design achieves a drastic reduction in resource usage, with Flash and RAM occupancies reduced to less than 2% of those required by U-Net and UNet++. The minimal resource requirements fit well within the memory constraints of the microcontroller, making AttE-UNet highly suitable for deployment on most edge devices.
Pathology diagnosis often involves time-consuming steps—tissue fixation, staining, slide preparation, and expert manual interpretation. This can delay definitive diagnoses by 10 to 30 days, thereby postponing treatment decisions. Many hospitals, especially in regions with limited medical resources, face heavy workloads and a shortage of trained pathologists. A lightweight AttE-Unet automated solution that can be integrated into a microscope’s control system can offer real-time, on-site preliminary screening for cancer cells, significantly reducing the burden on medical staff. By analyzing stained tissue samples in near real-time, edge-deployed AttE-Unet could rapidly flag suspicious regions for closer examination. This early detection approach improves workflow efficiency and reduces the likelihood of missing critical cells, ultimately assisting physicians in making timely treatment decisions.

4. Conclusions

This study proposes a lightweight medical cell image segmentation network, AttE-Unet, based on edge attention enhancement. The AttE-Unet is composed of lightweight encoding and decoding branches, an attention enhancement branch, and fusion gates with learnable parameters. It aims to address the issues of feature learning difficulties and reduced accuracy in traditional segmentation networks after lightweight processing. Tests on the PanNuke dataset have validated the effectiveness of the AttE-Unet, which achieves 91.7% of the original Attention U-Net's F1 score and 89.3% of its IoU using only 0.1% of the FLOPS and 1.57% of the parameters. Specifically, it reaches an F1 score of 0.763 and an IoU of 0.654 with only 70 MFLOPS of computation and 0.548 M parameters, fulfilling the cell segmentation task. The AttE-Unet has been deployed on the STM32U595AJH3 microcontroller, consuming only 2.09 MB of Flash and 1.38 MB of RAM, demonstrating feasibility for edge device deployment.
Our contribution lies in proposing and introducing a lightweight attention-enhanced branch that guides the attention mechanism toward the critical boundary features of cells, thereby improving segmentation accuracy while maintaining an extremely small parameter footprint. This low computational footprint enables deployment on portable medical devices, making efficient and accurate cell segmentation accessible in resource-constrained environments where trained personnel and advanced equipment may be scarce. By integrating our lightweight segmentation network (AttE-Unet) directly into the control systems of devices such as microscopes, clinicians can perform real-time cell segmentation, thereby streamlining diagnostic workflows and reducing reliance on centralized servers, particularly in regions with limited healthcare resources.
Beyond segmentation, this approach can serve as the foundational step toward more sophisticated tasks, including classifying and identifying cancer cells and tissues based on morphological and pathological characteristics, which is our future study topic. Ultimately, this vision supports the development of an edge-based artificial intelligence system capable of assisting in cancer-related diagnoses, thereby improving patient outcomes in underdeveloped or remote regions.

Author Contributions

Conceptualization, Q.S. and S.X.; methodology, Q.S. and C.Y.; software, Y.Z., S.X. and Q.S.; validation, Y.Z., S.X. and Z.W.; formal analysis, K.M. and M.L.; investigation, Q.S. and S.X.; resources, Y.Z. and Z.W.; data curation, Q.S. and C.Y.; writing—original draft preparation, Q.S. and S.X.; writing—review and editing, K.M. and M.L.; visualization, Q.S. and S.X.; supervision, K.M.; project administration, M.L.; funding acquisition, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number: 52474270.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data derived from public domain resources: the data presented in this study are available in PanNuke at 10.1007/978-3-030-23937-4_2. These data were derived from the following resources available in the public domain: https://warwick.ac.uk/fac/cross_fac/tia/data/pannuke (accessed on 2 April 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shen, D.; Wu, G.; Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [Google Scholar] [PubMed]
  2. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [PubMed]
  3. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [PubMed]
  4. Vicar, T.; Balvan, J.; Jaros, J.; Jug, F.; Kolar, R.; Masarik, M.; Gumulec, J. Cell segmentation methods for label-free contrast microscopy: Review and comprehensive comparison. BMC Bioinform. 2019, 20, 360. [Google Scholar] [CrossRef]
  5. Wen, T.; Tong, B.; Liu, Y.; Pan, T.; Du, Y.; Chen, Y.; Zhang, S. Review of research on the instance segmentation of cell images. Comput. Methods Programs Biomed. 2022, 227, 107211. [Google Scholar]
  6. Huang, Q.; Zhang, W.; Chen, Y.; Chen, J.; Yang, Z. Review of cervical cell segmentation. Multimed. Tools Appl. 2024, 1–40. [Google Scholar] [CrossRef]
  7. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  8. Xu, Y.; Quan, R.; Xu, W.; Huang, Y.; Chen, X.; Liu, F. Advances in Medical Image Segmentation: A Comprehensive Review of Traditional, Deep Learning and Hybrid Approaches. Bioengineering 2024, 11, 1034. [Google Scholar] [CrossRef]
  9. Ding, L.; Goshtasby, A. On the Canny edge detector. Pattern Recognit. 2001, 34, 721–725. [Google Scholar]
  10. Iglovikov, V.; Shvets, A. Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation. arXiv 2018, arXiv:1801.05746. [Google Scholar]
  11. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273. [Google Scholar]
  12. Wang, J.; Zhu, H.; Wang, S.-H.; Zhang, Y.-D. A review of deep learning on medical image analysis. Mob. Netw. Appl. 2021, 26, 351–380. [Google Scholar] [CrossRef]
  13. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
  14. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  15. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  16. Wang, R.S.; Lei, T.; Cui, R.X.; Zhang, B.T.; Meng, H.Y.; Nandi, A.K. Medical image segmentation using deep learning: A survey. IET Image Process. 2022, 16, 1243–1267. [Google Scholar] [CrossRef]
  17. Liu, X.B.; Song, L.P.; Liu, S.; Zhang, Y.D. A Review of Deep-Learning-Based Medical Image Segmentation Methods. Sustainability 2021, 13, 1224. [Google Scholar] [CrossRef]
  18. Siddique, N.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V. U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications. IEEE Access 2021, 9, 82031–82057. [Google Scholar] [CrossRef]
  19. Xu, Q.; Ma, Z.; Na, H.; Duan, W. DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation. Comput. Biol. Med. 2023, 154, 106626. [Google Scholar] [CrossRef]
  20. Ghaznavi, A.; Rychtáriková, R.; Saberioon, M.; Štys, D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput. Biol. Med. 2022, 147, 105805. [Google Scholar] [CrossRef]
  21. Li, Z.; Zhang, H.; Li, Z.; Ren, Z. Residual-attention unet++: A nested residual-attention u-net for medical image segmentation. Appl. Sci. 2022, 12, 7149. [Google Scholar] [CrossRef]
  22. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  23. Zhou, Z.W.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J.M. UNet plus plus: A Nested U-Net Architecture for Medical Image Segmentation. In Proceedings of the 4th International Workshop on Deep Learning in Medical Image Analysis (DLMIA)/8th International Workshop on Multimodal Learning for Clinical Decision Support (ML-CDS), Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar]
  24. Alom, M.Z.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Nuclei Segmentation with Recurrent Residual Convolutional Neural Networks based U-Net (R2U-Net). In Proceedings of the IEEE National Aerospace and Electronics Conference (NAECON), Dayton, OH, USA, 23–26 July 2018; pp. 228–233. [Google Scholar]
  25. Oktay, O.; Schlemper, J.; Le Folgoc, L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  26. Xiao, H.; Li, L.; Liu, Q.; Zhu, X.; Zhang, Q. Transformers in medical image segmentation: A review. Biomed. Signal Process. Control 2023, 84, 104791. [Google Scholar] [CrossRef]
  27. Valanarasu, J.M.J.; Oza, P.; Hacihaliloglu, I.; Patel, V.M. Medical Transformer: Gated Axial-Attention for Medical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Electr Network, Strasbourg, France, 27 September–1 October 2021; pp. 36–46. [Google Scholar]
  28. Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, G.; Gil, D.; Rodríguez, C.; Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph. 2015, 43, 99–111. [Google Scholar] [CrossRef] [PubMed]
  29. Caicedo, J.C.; Goodman, A.; Karhohs, K.W.; Cimini, B.A.; Ackerman, J.; Haghighi, M.; Heng, C.; Becker, T.; Doan, M.; McQuin, C. Nucleus segmentation across imaging experiments: The 2018 Data Science Bowl. Nat. Methods 2019, 16, 1247–1253. [Google Scholar] [CrossRef]
  30. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
  31. Gupta, A.; Gupta, R.; Gehlot, S.; Goswami, S. Segpc-2021: Segmentation of Multiple Myeloma Plasma Cells in Microscopic Images, 2021. Available online: https://doi.org/10.21227/7np1-2q42 (accessed on 2 April 2025).
  32. Baid, U.; Ghodasara, S.; Mohan, S.; Bilello, M.; Calabrese, E.; Colak, E.; Farahani, K.; Kalpathy-Cramer, J.; Kitamura, F.C.; Pati, S. The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv 2021, arXiv:2107.02314. [Google Scholar]
  33. Gamper, J.; Alemi Koohbanani, N.; Benet, K.; Khuram, A.; Rajpoot, N. PanNuke: An open pan-cancer histology dataset for nuclei instance segmentation and classification. In Proceedings of the Digital Pathology: 15th European Congress, ECDP 2019, Warwick, UK, 10–13 April 2019; Proceedings 15. pp. 11–19. [Google Scholar]
  34. Sahasrabudhe, M.; Christodoulidis, S.; Salgado, R.; Michiels, S.; Loi, S.; André, F.; Paragios, N.; Vakalopoulou, M. Self-supervised nuclei segmentation in histopathological images using attention. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020; Proceedings, Part V 23. pp. 393–402. [Google Scholar]
  35. De Vita, F.; Nocera, G.; Bruneo, D.; Tomaselli, V.; Falchetto, M. On-device training of deep learning models on edge microcontrollers. In Proceedings of the 2022 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing & Communications (GreenCom) and IEEE Cyber, Physical & Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), Espoo, Finland, 22–25 August 2022; pp. 62–69. [Google Scholar]
  36. Falaschetti, L.; Bruschi, S.; Alessandrini, M.; Biagetti, G.; Crippa, P.; Turchetti, C. An U-Net Semantic Segmentation Vision System on a Low-Power Embedded Microcontroller Platform. Procedia Comput. Sci. 2023, 225, 4473–4482. [Google Scholar] [CrossRef]
  37. Zhou, Y.; Ma, K.; Sun, Q.; Wang, Z.; Liu, M. Edge-Guided Cell Segmentation on Small Datasets Using an Attention-Enhanced U-Net Architecture. Information 2024, 15, 198. [Google Scholar] [CrossRef]
  38. Cui, Z.; Yao, J.; Zeng, L.; Yang, J.; Liu, W.; Wang, X. LKCell: Efficient Cell Nuclei Instance Segmentation with Large Convolution Kernels. arXiv 2024, arXiv:2407.18054. [Google Scholar]
  39. Hörst, F.; Rempe, M.; Heine, L.; Seibold, C.; Keyl, J.; Baldini, G.; Ugurel, S.; Siveke, J.; Grünwald, B.; Egger, J. Cellvit: Vision transformers for precise cell segmentation and classification. Med. Image Anal. 2024, 94, 103143. [Google Scholar] [CrossRef]
Figure 1. Framework diagram of lightweight AttE-Unet network model for cell segmentation.
Figure 2. Effect of Canny filter algorithm on PanNuke cell segmentation public dataset.
Figure 3. PR curves of AttE-UNet and other baseline networks on the PanNuke dataset.
Figure 4. Cell segmentation results for challenging samples in the test set. A heatmap generated using Grad-CAM shows the results of the first convolution layer, highlighting the most critical parts of the image for the model to make a specific classification decision. The red boxes mark the differences among the various networks' segmentations. * refers to a lightweight version of Attention Unet without the addition of the attention enhancement branch.
Table 1. Parameters of each layer of the lightweight network encoder.
Network Layer | Number of Kernels/Stride/Size | Number of Input Channels | Size of Output Feature Map/Number of Channels
Input | - | 3 | 64 × 64/3
Block1_Catenate1 | - | 1 | 64 × 64/4
Block1_conv1 | 7/1/3 × 3 | 4 | 64 × 64/7
Max-pool1 | -/2/2 × 2 | 7 | 32 × 32/7
Block2_Catenate1 | - | 7 | 32 × 32/8
Block2_conv1 | 15/1/3 × 3 | 8 | 32 × 32/15
Max-pool2 | -/2/2 × 2 | 15 | 16 × 16/15
Block3_Catenate1 | - | 15 | 16 × 16/16
Block3_conv1 | 31/1/3 × 3 | 16 | 16 × 16/31
Max-pool3 | -/2/2 × 2 | 31 | 8 × 8/31
Block4_Catenate1 | - | 31 | 8 × 8/32
Block4_conv1 | 63/1/3 × 3 | 32 | 8 × 8/63
Max-pool4 | -/2/2 × 2 | 63 | 4 × 4/63
Block5_Catenate1 | - | 63 | 4 × 4/64
Block5_conv1 | 128/1/3 × 3 | 64 | 4 × 4/128
Table 2. Parameters of each layer of the lightweight network decoder.
Network Layer | Number of Kernels/Stride/Size | Number of Input Channels | Size of Output Feature Map/Number of Channels
Block6_up-conv1 | 64/1/2 × 2 | 128 | 8 × 8/64
Block6_Catenate1 | - | - | 8 × 8/128
Block6_conv1 | 64/1/3 × 3 | 128 | 8 × 8/64
Block7_up-conv1 | 32/1/2 × 2 | 64 | 16 × 16/32
Block7_Catenate1 | - | - | 16 × 16/64
Block7_conv1 | 32/1/3 × 3 | 64 | 16 × 16/32
Block8_up-conv1 | 16/1/2 × 2 | 32 | 32 × 32/16
Block8_Catenate1 | - | - | 32 × 32/32
Block8_conv1 | 16/1/3 × 3 | 32 | 32 × 32/16
Block9_up-conv1 | 64/1/2 × 2 | 16 | 64 × 64/8
Block9_Catenate1 | - | - | 64 × 64/16
Block9_conv1 | 64/1/3 × 3 | 16 | 64 × 64/8
Block9_conv2 | 2/1/3 × 3 | 8 | 64 × 64/2
Output | - | - | 64 × 64/-
Table 3. Parameters of each layer of the edge attention enhancement network.
Network Layer | Number of Kernels/Stride/Size | Number of Input Channels | Size of Output Feature Map/Number of Channels
Input | - | 1 | 64 × 64/1
Avg-pool 1 | 1/2/2 × 2 | 1 | 32 × 32/1
Avg-pool 2 | 1/2/2 × 2 | 1 | 16 × 16/1
Avg-pool 3 | 1/2/2 × 2 | 1 | 8 × 8/1
Avg-pool 4 | 1/2/2 × 2 | 1 | 4 × 4/1
Output | - | - | 4 × 4/1
Table 4. Quantitative analysis of the performance of each network on the test set.
Models | F1 | IoU | FLOPS (M) | Parameter (M)
U-Net | 0.820 | 0.718 | 65,526 | 34.52
UNet++ | 0.817 | 0.714 | 138,664 | 36.52
Attention U-Net | 0.832 | 0.732 | 66,636 | 34.87
Attention U-Net * | 0.696 | 0.588 | 69 | 0.543
MedT | 0.683 | 0.578 | 487,951 | 156.4
AttE-Unet | 0.763 | 0.654 | 70 | 0.548
* A lightweight version of Attention Unet without the addition of the attention enhancement branch.
Table 5. Hardware resource usage of each network.
Model | Minimum Flash Occupancy (MB) | Minimum RAM Occupancy (MB)
U-Net | 131.69 | 64.75
UNet++ | 139.70 | 232.62
Attention U-Net | 133.02 | 69.29
Attention U-Net * | 2.11 | 1.30
AttE-Unet | 2.09 | 1.38
* A lightweight version of Attention Unet without the addition of the attention enhancement branch.