Article

PRONOBIS: A Robotic System for Automated Ultrasound-Based Prostate Reconstruction and Biopsy Planning

1 Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, Ul. Ivana Lučića 5, 10000 Zagreb, Croatia
2 School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
3 Croatian Academy of Sciences and Arts, Trg Nikole Šubića Zrinskog 11, 10000 Zagreb, Croatia
4 RONNA Medical Ltd., Slavonska avenija 6, 10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Robotics 2025, 14(8), 100; https://doi.org/10.3390/robotics14080100
Submission received: 15 April 2025 / Revised: 7 July 2025 / Accepted: 19 July 2025 / Published: 22 July 2025
(This article belongs to the Section Sensors and Control in Robotics)

Abstract

This paper presents the PRONOBIS project, an ultrasound-only, robotically assisted, deep learning-based system for prostate scanning and biopsy treatment planning. The proposed system addresses the challenges of precise prostate segmentation, reconstruction, and inter-operator variability by performing fully automated prostate scanning, real-time CNN-transformer-based image processing, 3D prostate reconstruction, and biopsy needle position planning. Fully automated prostate scanning is achieved by using a robotic arm equipped with an ultrasound system. Real-time ultrasound image processing utilizes state-of-the-art deep learning algorithms with intelligent post-processing techniques for precise prostate segmentation. To create a high-quality prostate segmentation dataset, this paper proposes a deep learning-based medical annotation platform, MedAP. For precise segmentation of the entire prostate sweep, DAF3D and MicroSegNet models are evaluated, and additional image post-processing methods are proposed. Three-dimensional visualization and prostate reconstruction are performed by utilizing the segmentation results and robotic positional data, enabling robust, user-friendly biopsy treatment planning. The real-time sweep scanning and segmentation operate at 30 Hz, which enables a complete scan in 15 to 20 s, depending on the size of the prostate. The system is evaluated on prostate phantoms by reconstructing the sweep and performing dimensional analysis, which indicates 92% and 98% volumetric accuracy on the tested phantoms. Three-dimensional prostate reconstruction takes approximately 3 s and enables fast and detailed insight for precise biopsy needle position planning.

1. Introduction

Prostate cancer is the second most common malignant neoplasm in men worldwide and the fourth leading cause of cancer-related death [1]. Diagnosis relies on histopathological evaluation of prostate biopsies. These are typically performed following an elevated prostate-specific antigen (PSA) level and/or an abnormal digital rectal examination (DRE), even though most patients are asymptomatic. The prostate is anatomically divided into four main zones—peripheral, transitional, central, and anterior—with approximately 70% of prostate cancers arising in the peripheral zone [2].
Multiparametric MRI (mpMRI) is now recommended before biopsy to detect suspicious lesions, which are subsequently targeted using either cognitive or fusion approaches; current evidence suggests no significant difference in accuracy between these methods [3]. However, a key challenge remains: the precise targeting of lesions, as targeted prostate biopsy is highly operator-dependent and subject to significant inter-operator variability [4]. This variability raises concerns regarding the consistent application of the mpMRI diagnostic pathway and the risk of improper diagnosis.
To address these challenges, current initiatives aim to standardize the entire diagnostic workflow—including MRI acquisition, interpretation, biopsy planning, and execution—through robust quality assurance and control measures [4]. One proposed solution is the integration of robotic prostate platforms and artificial intelligence to minimize inter-operator bias and reduce the learning curve [5], although studies on this approach have yielded conflicting results [6].
Robotic biopsy techniques that combine transrectal ultrasound (TRUS) with MR imaging have been introduced. Although designs vary, these systems typically fuse mpMRI-identified lesions with TRUS images to enable a robotic arm to target the areas of interest accurately. The system determines the needle’s penetration angle and depth by positioning a needle guide equipped with a stop bar; the surgeon then manually inserts and fires the needle gun at the preset location and depth [7,8]. The targeted biopsy results have higher success rates than systematic biopsy [9,10]. Therefore, a novel approach that enables clinicians to pick the exact positions for biopsy needle impact exclusively via ultrasound is crucial. Incorporating targeted biopsy into the workflow requires an end-to-end software solution that enables biopsy procedure planning.
One of the challenges for robotic biopsy is the implementation of safety measures and the robustness of robots that operate in close proximity to the human body. By using a redundant 7-DOF robot arm, it is possible to optimize each trajectory to avoid joint limits and singularities [11], which is crucial for performing complex movements in space-restricted areas. Zhang et al. presented a design of a 7-DOF manipulator for a TRUS probe [12]. The evaluation of the manipulator indicated that the mechanism can reach any position in the workspace required for the biopsy procedure. A continuous-body robotic arm for prostate biopsy procedures in confined spaces has also been introduced [13]. Two sets of drive lines run through the robot and are fixed to the end nodes, and the end position is controlled by adjusting the angle of the servo motor. Experiments indicate that this design achieves a positioning error of 2.5 mm, a significant improvement for robotic biopsy procedures.
The iSR’obot Mona Lisa™ robotic platform for prostate biopsy (Biobot Surgical, Singapore) [14] allows clinicians to plan the biopsy procedure by fusing MRI and ultrasound images. Additionally, a novel Vector MRI/ultrasound fusion transperineal biopsy technique, utilizing electromagnetic needle tracking, has been developed [15]. Ipsen et al. [16] demonstrated 4D (volumetric) ultrasound image acquisition with motion compensation over longer periods of time. Their approach was evaluated on five volunteers over 30 min sessions with dynamic pressure adjustment at a target force of 10 N to determine its feasibility and safety.
A transperineal prostate biopsy approach that proposes robotically guided fusion of MRI and ultrasound images [17] has been introduced. After scanning the phantom used in the experimental phase, it creates a 3D model, enabling biopsy planning with an average error of 1.44 mm, which is below the clinical threshold required to distinguish cancerous from healthy tissue. Furthermore, the approach is similar to traditional biopsy approaches and can be adopted in a clinical setting. Another high-precision robot introduced by Stoianovici et al. is designed for transperineal prostate biopsy with an accuracy of 2.55 mm [18]. The promising accuracy results of targeting the exact positions for the biopsy needle using robots provide the foundation for our research.
A study of robotically assisted biopsy needle positioning resulted in an MRI-guided robot that achieved an average precision of 2.39 mm, compared to 3.71 mm with the manual approach [19]. Although a robot can eliminate human error, it introduces challenges such as robot positioning and pressure applied to the patient, which causes prostate deformation. A recent study addressed these challenges by introducing a robot that minimizes prostate deformation [20]; with a 4-DOF probe manipulator used for TRUS, needle targeting was achieved with a precision of 1 mm. To minimize prostate deformation, the approach involved three essential aspects: optimization of the biopsy trajectory, optimization of the order of biopsy points, and a prostate coordinate system developed by the team.
Ultrasound image segmentation plays a crucial role in ensuring adequate precision in prostate scanning, 3D reconstruction, and robot-assisted biopsy procedure planning. Inhomogeneities in the intensity of ultrasound images and weak object boundaries make segmentation difficult for conventional image processing methods [21], such as standard or adaptive thresholding, Otsu’s binarization [22], or contour detection, which do not fully meet the requirements for high accuracy, robustness, and real-time performance [23]. Deep learning-based approaches demonstrate superior accuracy and robustness due to their ability to learn local and global image features. Convolutional Neural Networks (CNNs) have become an important solution for medical image segmentation. One of the most significant CNN architectures for medical image segmentation is the U-Net model [24], an encoder–decoder network capable of detecting local image features and generating accurate high-resolution segmentations. The latest research utilizes the Transformer architecture for image processing. The Vision Transformer (ViT) architecture for classification [25] demonstrates remarkable performance and outperforms state-of-the-art CNN models. One of the most relevant algorithms that combines CNNs and ViT models for medical image segmentation is TransUNet [26]. The TransUNet hybrid architecture combines U-Net and ViT to eliminate the limitations of the separate solutions; specifically, it enables segmentation of images with significant inter-patient variations in object size, shape, and tissue texture. One of the current state-of-the-art segmentation models for micro-ultrasound prostate images is MicroSegNet [27], based on the TransUNet architecture with a modified annotation-guided binary cross-entropy (AG-BCE) loss function and multi-scale deep supervision (MSDS), which supervises segmentation at different decoder layers and ensures precise segmentation both globally and locally. The segmentation results of deep learning-based methods encourage their use in robotically operated scanning procedures.
This article introduces robotically assisted prostate scanning and biopsy treatment planning. Our method integrates robotics, ultrasound imaging, and deep learning-based algorithms for image analysis with the aim of generating prostate visualizations for biopsy needle placement planning. A key challenge addressed is achieving an ultrasound-only, deep learning-based scanning and reconstruction method that is comparable to MRI-US fusion methods [14]. Ultrasound-only prostate reconstruction and biopsy treatment planning is fast, cost-effective, and less physically demanding for the patient. It enables the development of an ultrasound-only 3D reconstruction procedure that can be easily integrated into existing clinical workflows and contributes to their standardization. This can be achieved by using micro-ultrasound as described in [28,29,30]. Fully automated, robot-assisted prostate scanning ensures positional tracking of sweep slices for precise prostate reconstruction. The use of deep learning algorithms with image post-processing enables real-time, precise segmentation across the whole prostate sweep. For precise segmentation performance, the model is trained on a hybrid dataset consisting of real prostate and phantom prostate images, annotated using our Medical Annotation Platform (MedAP), which uses zero-shot CNN architectures. To select the optimal deep learning algorithm as a base for the segmentation of medical ultrasound prostate images, the DAF3D and MicroSegNet architectures were trained and compared on a hybrid dataset, with MicroSegNet being chosen. The chosen model is enhanced with curvature evaluation post-processing in order to eliminate false positive segmentations. The 3D visualization and prostate reconstruction based on the segmentation results and robotic positional information are evaluated on two prostate phantoms (CIRS Prostate Training Phantom Model 070L and CIRS Tissue Equivalent Ultrasound Prostate Phantom Model 053L), achieving accurate dimensional and volumetric reconstructions. The 3D prostate reconstruction takes approximately 3 s and provides detailed insight for planning the biopsy needle position. It is envisioned that the proposed system will be integrated similarly to the RONNA robotic system for neuronavigation, which has already been clinically validated in brain biopsies and EVD placement [31,32].

2. Materials and Methods

The proposed robotic approach includes several components to create a comprehensive solution. The goal is to seamlessly integrate the robotic system into the existing prostate biopsy procedure while maintaining simplicity for the clinician and increasing patient comfort. The complete workflow of the procedure is shown in Figure 1.

2.1. Hardware and Software

The KUKA (Augsburg, Germany) LBR iiwa collaborative industrial robot is used for this research. Customized robotic systems, such as the Mona Lisa [14], offer significant design flexibility and a tailored approach to robot architecture and procedural flow, but they require extensive development. In contrast, using a KUKA LBR robotic arm significantly reduces development time and offers a lean, cost-effective, and easily deployable solution with proven performance in industrial and medical environments. At this stage of development, the P25 clinical ultrasound system and the BCL10-5 TRUS probe developed by SonoScape (Shenzhen, China) are used. The TRUS probe emits a 7.5 MHz signal and receives signals between 6 and 16 MHz. The recorded ultrasound image is 60 mm wide and has a depth of between 3 and 90 mm. The robot, with the TRUS probe attached, is shown in Figure 2 and Figure 3.
Robot Operating System 2 (ROS2) [33] is used for robot control and communication within the system components: robot, ultrasound, segmentation and visualization. ROS2 enables the integration of system components without compromising the integrity or functionality of existing elements. The integration between the robot and the ROS2 Humble framework is achieved through the use of LBR-Stack [34]. LBR-Stack takes KUKA’s network communication protocols and integrates them into the ROS2 framework to enable seamless use of these robots.
The proposed end-to-end software was developed using the Python programming language: version 3.10 was used for calibration, robot control, and segmentation, while version 3.12 was used for reconstruction and visualization as it provides a speed improvement. The deep learning algorithms used for segmentation require CUDA-enabled PyTorch 2.5.1. To implement the reconstruction and visualization, we used the OpenCV 4.10 and PyQt6 libraries. The complete solution runs on an HP Z4 workstation with an Intel Xeon W-2245 processor, 64 GB of RAM, and a Quadro RTX 5000 GPU with 16 GB of VRAM.
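As an illustration of this ROS2-based integration, the sketch below pairs ultrasound frames with robot joint states in a ROS2 Humble node using approximate time synchronization. The topic names and quality-of-service settings are assumptions for the example, not the project's actual interface.

```python
# Hedged sketch: pairing ultrasound frames with robot joint states in ROS2.
# Topic names ("/us/image_raw", "/lbr/joint_states") are illustrative assumptions.
import rclpy
from rclpy.node import Node
import message_filters
from sensor_msgs.msg import Image, JointState


class SweepRecorder(Node):
    def __init__(self):
        super().__init__("sweep_recorder")
        image_sub = message_filters.Subscriber(self, Image, "/us/image_raw")
        joints_sub = message_filters.Subscriber(self, JointState, "/lbr/joint_states")
        # Approximate time synchronization pairs each frame with the closest robot state.
        sync = message_filters.ApproximateTimeSynchronizer(
            [image_sub, joints_sub], queue_size=30, slop=0.01)
        sync.registerCallback(self.on_pair)

    def on_pair(self, image: Image, joints: JointState):
        # A real implementation would convert the image, run segmentation, and forward
        # the (mask, probe angle) pair to the reconstruction/visualization node.
        self.get_logger().info(
            f"frame at {image.header.stamp.sec}.{image.header.stamp.nanosec} "
            f"paired with {len(joints.position)} joint positions")


def main():
    rclpy.init()
    rclpy.spin(SweepRecorder())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```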

Robot Setup

After the TRUS probe is mounted on the robot, the tool center point (TCP) must be calibrated. Since the TRUS image needs to be calibrated with the robot’s TCP, a digital crosshair was created to position a 3D-printed cone in the center of the image, as shown in Figure 2b. The center of the image is aligned to the calibration cone from 4 different orientations, and the TCP position is then calculated using KUKA’s built-in calibration algorithm. The TCP calibration error with this method is 0.87 mm. Similarly, the orientation of the TCP frame is determined by aligning the cone at 3 points in the image plane to determine the X and Y axes, again using KUKA’s built-in algorithm.
After successful calibration, the TRUS probe is placed inside the specified prostate phantom using the robot’s hand-guidance mode. Starting the segmentation triggers the scanning process: the robot rotates the TRUS probe around its axis to acquire the US images. Once the segmentation process detects no prostate in the image, the algorithm sends the end signal and the robot reverses its rotational direction to scan the other half. Prostate images, fused with the synchronized robot arm data, are continuously published via ROS2 for visualization purposes. Once the entire prostate scan is complete, the program stops. A detailed image of the prostate scanning equipment and the prostate sweep scanning procedure can be found in Figure 3.
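A minimal sketch of this sweep logic is given below. The helper callables for probe rotation, frame grabbing, segmentation, and publishing are hypothetical placeholders, and the angular step and stopping criterion are assumptions rather than the system's actual parameters.

```python
# Hedged sketch of the sweep logic described above: rotate in one direction until the
# prostate disappears from the image, then reverse and scan the other half.
ANGLE_STEP_DEG = 0.5        # assumed angular increment per frame
EMPTY_LIMIT = 5             # assumed number of consecutive empty masks before stopping

def scan_sweep(rotate_probe, grab_frame, segment, publish_slice):
    for direction in (+1, -1):                 # first half of the sweep, then the other half
        empty_in_a_row = 0
        angle = 0.0
        while empty_in_a_row < EMPTY_LIMIT:
            angle += direction * ANGLE_STEP_DEG
            rotate_probe(angle)                # rotate the TRUS probe around its axis
            mask = segment(grab_frame())       # real-time segmentation of the current frame
            if mask.any():
                empty_in_a_row = 0
                publish_slice(mask, angle)     # fused with the robot angle for reconstruction
            else:
                empty_in_a_row += 1            # prostate no longer visible in the image
```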

2.2. Prostate Segmentation

Prostate segmentation processes a raw ultrasound image and outputs a binary segmentation mask. Our segmentation algorithm uses a hybrid CNN-ViT model to achieve robust image segmentation by combining the strengths of both approaches. It enables the automatic analysis of prostate ultrasound images by combining local feature extraction using CNNs and global context extraction using ViT. This combination was trained on a large, diverse and balanced dataset, and delivers state-of-the-art segmentation performance.
To evaluate the proposed neural networks, two metrics are used: the Dice score, commonly known as the Dice coefficient, defined in (1), and the Jaccard score, or Jaccard index, defined in (2). In binary image segmentation, $y$ represents the ground truth segmentation and $\tilde{y}$ represents the prediction of the segmentation model:

$$D(y, \tilde{y}) = \frac{2\,|y \cap \tilde{y}|}{|y| + |\tilde{y}|} \qquad (1)$$

$$J(y, \tilde{y}) = \frac{|y \cap \tilde{y}|}{|y \cup \tilde{y}|} \qquad (2)$$
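For reference, a minimal NumPy implementation of the two metrics, assuming binary masks of equal shape; the small epsilon avoids division by zero for empty masks.

```python
# Minimal sketch of the Dice and Jaccard metrics for binary segmentation masks.
import numpy as np

def dice_score(y: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    y, y_pred = y.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y, y_pred).sum()
    return (2.0 * intersection + eps) / (y.sum() + y_pred.sum() + eps)

def jaccard_score(y: np.ndarray, y_pred: np.ndarray, eps: float = 1e-7) -> float:
    y, y_pred = y.astype(bool), y_pred.astype(bool)
    intersection = np.logical_and(y, y_pred).sum()
    union = np.logical_or(y, y_pred).sum()
    return (intersection + eps) / (union + eps)
```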

2.2.1. MedAP

As part of the development of the segmentation training dataset, we developed the Medical Annotation Platform (https://github.com/CRTA-Lab/MedAP) (accessed on 18 July 2025), an advanced annotation framework designed for semi-automatic segmentation of medical images, as shown in Figure 4. MedAP uses state-of-the-art zero-shot segmentation models to achieve fast and accurate extraction of objects from medical images. The platform combines deep learning models with human annotation expertise in cases where the deep learning models have difficulties segmenting complex structures, such as granular-appearing tissue in medical ultrasound images.
The zero-shot segmentation model used for MedAP is Meta AI’s Segment Anything Model (SAM) [35], a state-of-the-art transformer-based model designed for segmenting objects in images and videos with minimal user input. The model represents an innovative approach to the segmentation process by utilizing various standard prompt-driven annotations. SAM was selected for use in MedAP due to its versatility in input options such as positive and negative point input, bounding box input, and free-form text.
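For illustration, the snippet below shows how point and bounding-box prompts are passed to SAM through the publicly available segment_anything package. The checkpoint path, image file, and prompt coordinates are placeholder values, not MedAP's actual configuration.

```python
# Hedged sketch of prompt-driven zero-shot segmentation with Meta AI's SAM.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; the ViT-H weights are one of the released SAM variants.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam.to("cuda"))

image = cv2.cvtColor(cv2.imread("us_slice.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),   # positive click inside the prostate (illustrative)
    point_labels=np.array([1]),            # 1 = foreground, 0 = background
    box=np.array([150, 100, 500, 400]),    # optional bounding-box prompt (illustrative)
    multimask_output=False,
)
cv2.imwrite("us_slice_mask.png", (masks[0] * 255).astype(np.uint8))
```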
The MedAP annotation pipeline is shown in Figure 5. The process starts with the import of prostate ultrasound images, followed by MedAP-assisted annotation using standard prompts or manual tools to ensure accurate segmentation and high dataset quality. The annotation platform stores each original image and segmentation mask pair together with the image metadata, including the image parameters and the probe angle information required for 3D reconstruction. The created dataset is used to evaluate two deep learning segmentation models, DAF3D and MicroSegNet, on prostate phantom images in order to determine the most suitable solution for robust segmentation. DAF3D and MicroSegNet are state-of-the-art ultrasound segmentation models with specialized architectures; they are described in the following subsections and evaluated on the prostate phantom segmentation task, which is crucial for performing our prostate phantom reconstruction.

2.2.2. Deep Attentive Features for Prostate Segmentation (DAF3D)

DAF3D is a 3D CNN with built-in attention modules that utilize the information encoded in the different layers of the CNN called multilevel features [36]. The feature extraction part of the network is based on the 3D ResNeXt architecture [37]. The network also uses 3D atrous spatial pyramid pooling for resampling the attentive features at different rates to achieve a more accurate representation of the prostate. The model uses a hybrid loss function that combines binary cross-entropy loss and Dice loss to preserve the edge details while achieving a compact segmentation result. The model achieves values of 0.90 and 0.82 for the Dice score and the Jaccard score, respectively. After the segmentation step, the predicted masks are saved and their contours are plotted on the original image.

2.2.3. MicroSegNet

The MicroSegNet model [27], designed for micro-ultrasound prostate segmentation, introduced the annotation-guided binary cross-entropy (AG-BCE) loss, which allows attention to be focused on regions that are difficult to segment. The model also introduces multi-scale deep supervision, which enables robust performance independent of the appearance of the object in the image. Although MicroSegNet achieved a Dice score of 0.939 and a Hausdorff distance of 2.02 mm, it struggles to delineate the edges of the prostate and to produce empty masks when the prostate is not present in the image. To address these challenges, MicroSegNet is combined with an image post-processing algorithm that eliminates false positive segmentation results. The post-processing includes a filtering algorithm that evaluates the circularity of the segmentation mask contours, checks the mask area at the prostate edges, and eliminates outliers, as described in Algorithm 1. This post-processing step is required for full prostate sweep segmentation due to frequent false positive model predictions on the edges, caused by unusual prostate shapes and indistinct outer contours.
Algorithm 1 Post-processing procedure for segmentation images
Require: List of segmentation masks M, probe angles A, angle threshold θ, area threshold T_a, circularity threshold T_c
Ensure: Filtered segmentation masks M′
for each (m_i, a_i) in (M, A) do
    if |a_i| > θ then
        area ← computeArea(m_i)
        if area > T_a then
            m_i ← emptyMask()
        end if
    end if
    Circularity ← computeCircularity(m_i)
    if Circularity < T_c then
        m_i ← emptyMask()
    end if
    Append m_i to M′
end for
return M′
The state-of-the-art model results on prostate sweeps, with edge and background images included, are primarily achieved by curvature filtering on the edges, i.e., by calculating the roundness/circularity of the detected shape contours. The contour shape analysis calculates the circularity as

$$\mathrm{Circularity} = \frac{4\pi \cdot \mathrm{Area}}{\mathrm{Perimeter}^2} \qquad (3)$$

where a value of 1.0 corresponds to a perfectly circular shape, Area is the contour area, and Perimeter is the contour perimeter. The objective is to detect contours with a circularity score lower than 0.6 (60%), the experimentally determined threshold. If defects, manifested as insufficient circularity, are detected, the algorithm discards the problematic images. The filtered images are then used for 3D reconstruction and prostate visualization, required for further treatment planning.
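A possible Python implementation of Algorithm 1 with OpenCV is sketched below. The angle and area thresholds are unspecified tuning parameters and appear only as named arguments, while the circularity threshold of 0.6 follows the value reported in the text.

```python
# Hedged sketch of the post-processing filter (Algorithm 1) using OpenCV.
import math
import cv2
import numpy as np

def filter_sweep(masks, angles, angle_thr, area_thr, circ_thr=0.6):
    """Replace implausible edge/outlier masks with empty masks; returns the filtered sweep."""
    filtered = []
    for mask, angle in zip(masks, angles):
        m = (mask > 0).astype(np.uint8)
        # Near the sweep edges, masks exceeding the area threshold are treated as false positives.
        if abs(angle) > angle_thr and cv2.countNonZero(m) > area_thr:
            m = np.zeros_like(m)
        # Discard non-circular contours: Circularity = 4*pi*Area / Perimeter^2, Eq. (3).
        contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area, perimeter = cv2.contourArea(c), cv2.arcLength(c, closed=True)
            if perimeter > 0 and 4.0 * math.pi * area / perimeter ** 2 < circ_thr:
                m = np.zeros_like(m)
                break
        filtered.append(m)
    return filtered
```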
As part of our research, a Graphical User Interface (GUI) that enables intuitive prostate procedure planning was developed in collaboration with clinicians (T.H., T.K., T.Z.), who indicated the basic requirements and useful tools for our software. The application shows the reconstructed prostate and allows the clinician to select the biopsy target locations. The clinician can select an ultrasound slice with suspected lesions and determine the exact target for biopsy. After the target(s) are selected, they are displayed on the reconstructed 3D visualization model.

3. Results

3.1. Model Performance Evaluation

To achieve real-time ultrasound prostate scanning and segmentation, which is crucial for 3D visualization and biopsy treatment planning, existing deep learning solutions must first be compared based on their segmentation performance. After a thorough literature review focusing on the Dice score coefficient and Jaccard index, two state-of-the-art models, DAF3D and MicroSegNet, were evaluated on a dataset that combines the open-source MicroSegNet dataset (https://zenodo.org/records/10475293) (accessed on 18 July 2025), consisting of 75 prostate sweeps ranging from 40 to 60 slices per sweep, and our dataset of six sweeps, consisting of CIRS 053L and CIRS 070L prostate phantoms with up to 300 slices per sweep (https://www.kaggle.com/datasets/lukaiktar/crta-pronobis-prostate-phantom-ultrasound-dataset) (accessed on 18 July 2025). Each model was trained on the five uniform hybrid dataset splits to analyze the performance on unbiased image sets with different augmentations. The best performing model was then trained on the entire dataset and used for real-time segmentation.
The DAF3D and MicroSegNet models were evaluated on a hybrid dataset, consisting of 55 training and 20 test sweeps of the real prostate from the open-source MicroSegNet dataset as well as 3 sweeps from the 053L phantom and 3 sweeps from the 070L phantom. Since the phantom sweeps consist of 300 slices, they were randomly divided into smaller sets and added to the MicroSegNet dataset. The resulting dataset was split five times to obtain five folds with the same train, test, and validation splits (80%–10%–10%), making sure to avoid data leakage between training, testing, and validation sets in each fold. Each fold consisted of 18,054 original and augmented images. Only training data were subjected to augmentation. The MicroSegNet model was trained on 128 × 128 pixel images for 30 epochs with batch size 8, and adaptive learning rate initialized to 0.001. The DAF3D model was trained on the same folds with the same image size, for 50 epochs, adaptive batch size, and mixed precision training in order to reduce computational costs. In addition, a learning rate scheduler was used so that the learning rate was reduced once the Dice score reached a plateau to achieve optimal model performance. The different training parameters were derived as the optimal hyperparameter values from the respective research papers [27,36]. The training results for all folds are presented in Table 1, alongside the p-values obtained via Wilcoxon signed-rank test.
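The per-fold p-values in Table 1 come from the Wilcoxon signed-rank test with a Bonferroni-corrected threshold (see the table caption). A minimal sketch, assuming paired per-image scores for the two models are available as arrays:

```python
# Hedged sketch of the per-fold statistical comparison between DAF3D and MicroSegNet.
from scipy.stats import wilcoxon

def compare_models(scores_daf3d, scores_msn, alpha=0.05, n_metrics=2):
    # Bonferroni correction for testing two metrics (Dice and Jaccard): 0.05 / 2 = 0.025.
    threshold = alpha / n_metrics
    stat, p_value = wilcoxon(scores_daf3d, scores_msn)
    return p_value, p_value < threshold   # significant if below the corrected threshold
```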
MicroSegNet performed better on the hybrid dataset, so it was selected for further training and use. The selected model was fine-tuned on the complete hybrid dataset consisting of 2152 real prostate images and 3042 phantom prostate images, further enhanced with 5194 augmented images. The augmentation procedure includes random vertical and horizontal flipping, random rotations in the range of ±10°, and random addition of Gaussian noise. The model was trained on images of size 224 × 224 pixels for 30 epochs with a batch size of 8 and an adaptive learning rate of 0.001. A patch size of 16 pixels and a weight of 4 for the hard regions were used. The model, enhanced with post-processing, achieved a Dice score of 0.943988 and a Jaccard score of 0.886911. Examples of segmentation results for prostate phantoms 053L and 070L are shown in Figure 6. The overall processing speed of the system is 30 Hz, which is considered real-time performance for transformer-based segmentation.
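Below is a minimal sketch of the described augmentation pipeline (random flips, rotations within ±10°, additive Gaussian noise), written with the albumentations library purely for illustration; the authors' actual implementation is not specified in the text.

```python
# Hedged sketch of training-time augmentation applied jointly to image and mask.
import albumentations as A

train_augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=10, p=0.5),   # random rotations in the range of +/-10 degrees
    A.GaussNoise(p=0.5),         # random additive Gaussian noise
    A.Resize(224, 224),          # fine-tuning resolution reported in the text
])

# Applied to the ultrasound image and its segmentation mask together so that the
# geometric transforms stay consistent between the two:
# augmented = train_augment(image=image, mask=mask)
```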
The segmentation results are saved together with the positional data of the robot arm in DICOM format and are available for further analysis, e.g., for mask contour extraction and 3D visualization. An intuitive and detailed 3D prostate visualization, enriched with the list of recorded 2D slices, is required for biopsy treatment planning.

3.2. Prostate Reconstruction

After segmentation of each ultrasound image, the prostate gland is reconstructed using the contours of the segmented masks. As described in Section 2, the robotic arm provides the probe’s angular position synchronized with each acquired ultrasound frame. Since the probe rotates around its axis, the slices used for reconstruction and visualization are not parallel. The 3D reconstruction is performed by transforming each segmented slice from its coordinate frame (assumed to lie in the xy plane) to the visualization frame xyz using a transformation matrix. The transformation of a vector a into the visualization frame is shown in Equation (4). The transformation matrix, denoted by $T_r^i$, consists of translation and rotation components, represented in Equation (6), and the combined transformation matrix is expressed in Equation (5). The visualization frame is aligned with the initial slice frame, which is located in the center of the prostate, in accordance with standard probe placement in clinical settings. In these equations, the symbol ϕ denotes the angular offset of the probe from the central slice, and r represents the probe radius, defined as the distance between the center of rotation and the beginning of the image acquisition area.
$$a' = T_r^i \cdot a \qquad (4)$$

$$T_r^i = T \cdot R \qquad (5)$$

$$T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & r\cos(\phi) \\ 0 & 0 & 1 & r\sin(\phi) \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(\phi) & -\sin(\phi) & 0 \\ 0 & \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (6)$$
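A NumPy sketch of Equations (4)–(6) is shown below. It assumes contour points expressed in the slice plane (z = 0) and uses the standard sign convention for the rotation about the probe axis; the list of contours and angles in the usage comment is hypothetical.

```python
# Hedged sketch: transform segmented contour points from each slice frame into the
# visualization frame using the probe angle phi and probe radius r (Equations (4)-(6)).
import numpy as np

def slice_to_world(points_xy, phi_rad, r):
    """points_xy: (N, 2) contour points in the slice plane; returns (N, 3) world points."""
    T = np.array([[1, 0, 0, 0],
                  [0, 1, 0, r * np.cos(phi_rad)],
                  [0, 0, 1, r * np.sin(phi_rad)],
                  [0, 0, 0, 1.0]])
    R = np.array([[1, 0, 0, 0],
                  [0, np.cos(phi_rad), -np.sin(phi_rad), 0],
                  [0, np.sin(phi_rad),  np.cos(phi_rad), 0],
                  [0, 0, 0, 1.0]])
    Tr = T @ R                                        # combined transformation, Eq. (5)
    homogeneous = np.column_stack([points_xy,
                                   np.zeros(len(points_xy)),
                                   np.ones(len(points_xy))])
    return (Tr @ homogeneous.T).T[:, :3]              # a' = Tr * a, Eq. (4)

# Example (hypothetical inputs): accumulate the whole-sweep point cloud.
# point_cloud = np.vstack([slice_to_world(c, np.deg2rad(a), r)
#                          for c, a in zip(contours, angles)])
```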
The presented method is used to calculate the point cloud representation of the outer surface of the prostate gland. The ultrasound sweep and the corresponding prostate reconstruction allow the clinician to analyze the suspected lesions and mark the biopsy targets, as shown in Figure 7. This approach is tested and evaluated on prostate phantoms by comparing the 3D-reconstructed phantom dimensions with the specification dimensions of the CIRS 053L and CIRS 070L phantoms. The specifications of the CIRS 070L phantom do not include the dimensions, so its reconstruction is evaluated based on the specified volume only. The reconstruction was tested on 10 phantom sweeps, each performed with the same contact force applied perpendicular to the phantom rectal wall along the y-direction of the image: 6.5 N for CIRS 053L and 9 N for CIRS 070L. The sweeps did not incorporate any force control, but the force measured through joint torque estimation deviated by only about 0.2 N due to the strictly rotational motion of the probe. The reconstructed point clouds are used to generate a surface approximation in MeshLab, which is then measured and compared with the datasheet specifications to evaluate the reconstruction accuracy. For the dimensional measurements, we used the outer dimensions of the bounding box aligned to the xyz coordinate frame, and the volume was calculated using SolidWorks software (SolidWorks 2020 SP3). After 10 measurements, the average dimensions of the CIRS 053L prostate phantom, given in Table 2, were 55.9 × 42.9 × 37.3 mm with a volume of 54,058.2 mm³, compared to the specified phantom dimensions of 50 × 45 × 40 mm and volume of 53 cm³. The difference in volume of the reconstructed models is within 2% of the specified volume. There are larger differences in the dimensions, mainly due to the deformations caused by the presence of the probe, but also influenced by inaccuracies in probe calibration and segmentation as well as differences in measurement methodology. The average volume of CIRS 070L, shown in Table 3, was 53,217.6 mm³, with a specified volume of 49 cm³. This phantom shows a larger volume difference than the first one, most probably due to the unclear prostate dimensions used for the datasheet volume calculation. The presented robotically assisted, ultrasound-only prostate scanning and 3D reconstruction, evaluated on our phantoms, showed results comparable to the volumetric and dimensional results of MR-ultrasound solutions [17].
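As a scripted illustration of this measurement step, the sketch below derives bounding-box dimensions and a mesh volume directly from the reconstructed point cloud with Open3D. The paper itself used MeshLab and SolidWorks for this step, so this is an assumed alternative, and the alpha-shape parameter is an arbitrary placeholder.

```python
# Hedged sketch: bounding-box dimensions and surface-mesh volume from the point cloud.
import numpy as np
import open3d as o3d

def measure_reconstruction(point_cloud: np.ndarray, alpha: float = 5.0):
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(point_cloud))
    dims = pcd.get_axis_aligned_bounding_box().get_extent()   # length, width, height [mm]
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)
    volume = mesh.get_volume() if mesh.is_watertight() else None  # [mm^3], only if closed
    return dims, volume
```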

4. Discussion

This paper presents a robotically assisted proof-of-concept system for prostate scanning and biopsy treatment planning. It uses a robot arm with an attached TRUS probe for real-time scanning, followed by deep learning-based segmentation required for 3D reconstruction of the prostate. Accurate prostate visualization enables thorough analysis and planning of the path for the biopsy needle. In contrast to hybrid methods that combine MR and ultrasound imaging, our system uses only ultrasound. Although ultrasound-only approaches have drawbacks such as reduced image quality and non-parallel scanning, the introduction of micro-ultrasound yields results comparable to MRI-US solutions.
Scanning and reconstruction of the prostate, evaluated with the CIRS Prostate Training Phantom Model 070L and the CIRS Tissue Equivalent Ultrasound Prostate Phantom Model 053L, results in a volumetric accuracy of 92% and 98% for 070L and 053L, respectively. In this work, a hybrid dataset of open-source prostate sweeps and our prostate phantom sweeps is used to enable fully automated real-time scanning and segmentation of the prostate at 30 Hz. To create a balanced dataset, we developed MedAP, a medical annotation platform that uses zero-shot segmentation models. The state-of-the-art ultrasound segmentation models were evaluated, and the best algorithm was enhanced by curvature evaluation post-processing to remove outliers at the prostate edges. A Dice score of 0.944 and a Jaccard score of 0.887 were achieved.
To facilitate the selection of target points for prostate needle path planning, we developed a graphical user interface to assist the clinician in examining the prostate and selecting the exact target points for biopsy. The end-to-end software system presented was conceptualized, designed, and validated with valuable insights from the urological clinicians for whom the system is intended. Our work to date provides the basis for a more sophisticated approach aimed at addressing the challenges of patient movement, as illustrated in [38], as well as prostate deformation, especially deformation caused by tissue inflammation following biopsy needle penetration. Future work should also validate the precision and accuracy of the biopsy by measuring the distance between the target and the site where the sample is taken.
Another important consideration for the future is to automate the process of targeting locations, i.e., the lesions and other areas of interest. Furthermore, the next step is to consider human–robot interaction in surgical scenarios to facilitate intuitive interaction with the system [39]. Also, one of the objectives is to implement a system for optimal robot positioning to avoid joint limits and singularities and facilitate simpler robot operation. These steps ensure a seamless transition from the research phase to a clinical environment, which is the final goal of this project.

Author Contributions

Conceptualization, M.Š., F.Š., B.Š., T.H., T.K. and B.J.; methodology, M.M., L.M., J.J. and L.Š.; software, M.M., L.M., J.J. and L.Š.; validation, M.M., L.M., J.J., L.Š., F.Š., B.Š. and M.Š.; formal analysis, M.M., L.M., J.J. and L.Š.; investigation, M.Š., T.Z., M.M., L.M., J.J. and L.Š.; resources, M.Š., F.Š., B.Š., T.H., T.K. and T.Z.; data curation, M.M., L.M., J.J. and L.Š.; writing—original draft preparation, M.M., L.M., J.J., L.Š. and T.Z.; writing—review and editing, M.M., L.M., J.J., L.Š., T.Z., F.Š., B.Š., M.Š. and B.Ć.; visualization, M.M., L.M., J.J. and L.Š.; supervision, M.Š.; project administration, M.Š. All authors have read and agreed to the published version of the manuscript.

Funding

This project has been funded by the European Union—NextGenerationEU through the Recovery and Resilience Facility. The authors would like to acknowledge the Croatian Science Foundation through the project PRONOBIS—Robotically navigated prostate biopsy (project code: NPOO.C3.2.R3-I1.04.0181).

Informed Consent Statement

Not applicable.

Data Availability Statement

Relevant datasets and models can be found at phantom-US-dataset and dataset (accessed on 18 July 2025).

Acknowledgments

We acknowledge academician Željko Kaštelan and academician Bojan Jerbić, who initiated the PRONOBIS project idea.

Conflicts of Interest

Bojan Jerbić is employed by RONNA Medical Ltd. and he is a member of the Croatian Academy of Sciences and Arts. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSA	Prostate-Specific Antigen
DRE	Digital Rectal Examination
MRI	Magnetic Resonance Imaging
mpMRI	Multiparametric MRI
DOF	Degree of Freedom
ROS	Robot Operating System
TRUS	Transrectal Ultrasound
CNN	Convolutional Neural Network
ViT	Vision Transformer
AG-BCE	Annotation-Guided Binary Cross-Entropy
MSDS	Multi-Scale Deep Supervision
TCP	Tool Center Point
MedAP	Medical Annotation Platform
SAM	Segment Anything
DICOM	Digital Imaging and Communications in Medicine
NIfTI	Neuroimaging Informatics Technology Initiative
DAF3D	Deep Attentive Features for Prostate Segmentation
GUI	Graphical User Interface

References

  1. Filho, A.M.; Laversanne, M.; Ferlay, J.; Colombet, M.; Piñeros, M.; Znaor, A.; Parkin, D.M.; Soerjomataram, I.; Bray, F. The GLOBOCAN 2022 cancer estimates: Data sources, methods, and a snapshot of the cancer burden worldwide. Int. J. Cancer 2025, 156, 1336–1346. [Google Scholar] [CrossRef]
  2. McNeal, J.E.; Redwine, E.A.; Freiha, F.S.; Stamey, T.A. Zonal distribution of prostatic adenocarcinoma. Correlation with histologic pattern and direction of spread. Am. J. Surg. Pathol. 1988, 12, 897–906. [Google Scholar] [CrossRef]
  3. Wegelin, O.; van Melick, H.H.E.; Hooft, L.; Bosch, J.L.H.R.; Reitsma, H.B.; Barentsz, J.O.; Somford, D.M. Comparing Three Different Techniques for Magnetic Resonance Imaging-targeted Prostate Biopsies: A Systematic Review of In-bore versus Magnetic Resonance Imaging-transrectal Ultrasound fusion versus Cognitive Registration. Is There a Preferred Technique? Eur. Urol. 2017, 71, 517–531. [Google Scholar] [CrossRef]
  4. Barrett, T.; de Rooij, M.; Giganti, F.; Allen, C.; Barentsz, J.O.; Padhani, A.R. Quality checkpoints in the MRI-directed prostate cancer diagnostic pathway. Nat. Rev. Urol. 2023, 20, 9–22. [Google Scholar] [CrossRef]
  5. Patel, M.I.; Muter, S.; Vladica, P.; Gillatt, D. Robotic-assisted magnetic resonance imaging ultrasound fusion results in higher significant cancer detection compared to cognitive prostate targeting in biopsy naive men. Transl. Androl. Urol. 2020, 9, 601–608. [Google Scholar] [CrossRef] [PubMed]
  6. Rouvière, O.; Jaouen, T.; Baseilhac, P.; Benomar, M.L.; Escande, R.; Crouzet, S.; Souchon, R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts?—A systematic review. Diagn. Interv. Imaging 2023, 104, 221–234. [Google Scholar] [CrossRef] [PubMed]
  7. Maris, B.; Tenga, C.; Vicario, R.; Palladino, L.; Murr, N.; De Piccoli, M.; Calanca, A.; Puliatti, S.; Micali, S.; Tafuri, A.; et al. Toward autonomous robotic prostate biopsy: A pilot study. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1393–1401. [Google Scholar] [CrossRef]
  8. Wetterauer, C.; Trotsenko, P.; Matthias, M.O.; Breit, C.; Keller, N.; Meyer, A.; Brantner, P.; Vlajnic, T.; Bubendorf, L.; Winkel, D.J.; et al. Diagnostic accuracy and clinical implications of robotic assisted MRI-US fusion guided target saturation biopsy of the prostate. Sci. Rep. 2021, 11, 20250. [Google Scholar] [CrossRef] [PubMed]
  9. Lee, A.Y.; Yang, X.Y.; Lee, H.J.; Law, Y.M.; Huang, H.H.; Lau, W.K.; Lee, L.S.; Ho, H.S.; Tay, K.J.; Cheng, C.W.; et al. Multiparametric MRI-ultrasonography software fusion prostate biopsy: Initial results using a stereotactic robotic-assisted transperineal prostate biopsy platform comparing systematic vs. targeted biopsy. BJU Int. 2020, 126, 568–576. [Google Scholar] [CrossRef]
  10. Porpiglia, F.; De Luca, S.; Passera, R.; Manfredi, M.; Mele, F.; Bollito, E.; De Pascale, A.; Cossu, M.; Aimar, R.; Veltri, A. Multiparametric-Magnetic Resonance/Ultrasound Fusion Targeted Prostate Biopsy Improves Agreement Between Biopsy and Radical Prostatectomy Gleason Score. Anticancer Res. 2016, 36, 4833–4840. [Google Scholar] [CrossRef]
  11. Chou, W.; Liu, Y. An Analytical Inverse Kinematics Solution with the Avoidance of Joint Limits, Singularity and the Simulation of 7-DOF Anthropomorphic Manipulators. Trans. FAMENA 2024, 48, 117–132. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Liang, D.; Sun, L.; Guo, X.; Jiang, J.; Zuo, S.; Zhang, Y. Design and experimental study of a novel 7-DOF manipulator for transrectal ultrasound probe. Sci. Prog. 2020, 103, 0036850420970366. [Google Scholar] [CrossRef] [PubMed]
  13. Duan, H.; Zhang, Y.; Liu, H. Continuous Body Type Prostate Biopsy Robot for Confined Space Operation. IEEE Access 2023, 11, 113667–113677. [Google Scholar] [CrossRef]
  14. Ho, H.; Yuen, J.S.P.; Mohan, P.; Lim, E.W.; Cheng, C.W.S. Robotic transperineal prostate biopsy: Pilot clinical study. Urology 2011, 78, 1203–1208. [Google Scholar] [CrossRef]
  15. Fletcher, P.; De Santis, M.; Ippoliti, S.; Orecchia, L.; Charlesworth, P.; Barrett, T.; Kastner, C. Vector Prostate Biopsy: A Novel Magnetic Resonance Imaging/Ultrasound Image Fusion Transperineal Biopsy Technique Using Electromagnetic Needle Tracking Under Local Anaesthesia. Eur. Urol. 2023, 83, 249–256. [Google Scholar] [CrossRef]
  16. Ipsen, S.; Wulff, D.; Kuhlemann, I.; Schweikard, A.; Ernst, F. Towards automated ultrasound imaging—robotic image acquisition in liver and prostate for long-term motion monitoring. Phys. Med. Biol. 2021, 66, 094002. [Google Scholar] [CrossRef]
  17. Wang, W.; Pan, B.; Fu, Y.; Liu, Y. Development of a transperineal prostate biopsy robot guided by MRI-TRUS image. Int. J. Med. Robot. Comput. Assist. Surg. 2021, 17, e2266. [Google Scholar] [CrossRef]
  18. Stoianovici, D.; Kim, C.; Petrisor, D.; Jun, C.; Lim, S.; Ball, M.W.; Ross, A.; Macura, K.J.; Allaf, M. MR Safe Robot, FDA Clearance, Safety and Feasibility Prostate Biopsy Clinical Trial. IEEE/ASME Trans. Mechatron. 2017, 22, 115–126. [Google Scholar] [CrossRef]
  19. Tilak, G.; Tuncali, K.; Song, S.E.; Tokuda, J.; Olubiyi, O.; Fennessy, F.; Fedorov, A.; Penzkofer, T.; Tempany, C.; Hata, N. 3T MR-guided in-bore transperineal prostate biopsy: A comparison of robotic and manual needle-guidance templates: Robotic Template for MRI-Guided Biopsy. J. Magn. Reson. Imaging 2015, 42, 63–71. [Google Scholar] [CrossRef]
  20. Lim, S.; Jun, C.; Chang, D.; Petrisor, D.; Han, M.; Stoianovici, D. Robotic Transrectal Ultrasound Guided Prostate Biopsy. IEEE Trans. Biomed. Eng. 2019, 66, 2527–2537. [Google Scholar] [CrossRef]
  21. Li, X.; Li, C.; Fedorov, A.; Kapur, T.; Yang, X. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges. Med. Phys. 2016, 43, 3090–3103. [Google Scholar] [CrossRef]
  22. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  23. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  25. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021. [Google Scholar] [CrossRef]
  26. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021. [Google Scholar] [CrossRef]
  27. Jiang, H.; Imran, M.; Muralidharan, P.; Patel, A.; Pensa, J.; Liang, M.; Benidir, T.; Grajo, J.R.; Joseph, J.P.; Terry, R.; et al. MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images. Comput. Med. Imaging Graph. 2024, 112, 102326. [Google Scholar] [CrossRef]
  28. Avolio, P.P.; Lughezzani, G.; Paciotti, M.; Maffei, D.; Uleri, A.; Frego, N.; Hurle, R.; Lazzeri, M.; Saita, A.; Guazzoni, G.; et al. The use of 29 MHz transrectal micro-ultrasound to stratify the prostate cancer risk in patients with PI-RADS III lesions at multiparametric MRI: A single institutional analysis. Urol. Oncol. Semin. Orig. Investig. 2021, 39, 832.e1–832.e7. [Google Scholar] [CrossRef]
  29. Kinnaird, A.; Luger, F.; Cash, H.; Ghai, S.; Urdaneta-Salegui, L.F.; Pavlovich, C.P.; Brito, J.; Shore, N.D.; Struck, J.P.; Schostak, M.; et al. Microultrasonography-Guided vs MRI-Guided Biopsy for Prostate Cancer Diagnosis: The OPTIMUM Randomized Clinical Trial. JAMA 2025, 333, 1679–1687. [Google Scholar] [CrossRef]
  30. Sountoulides, P.; Pyrgidis, N.; Polyzos, S.A.; Mykoniatis, I.; Asouhidou, E.; Papatsoris, A.; Dellis, A.; Anastasiadis, A.; Lusuardi, L.; Hatzichristou, D. Micro-Ultrasound-Guided vs Multiparametric Magnetic Resonance Imaging-Targeted Biopsy in the Detection of Prostate Cancer: A Systematic Review and Meta-Analysis. J. Urol. 2021, 205, 1254–1262. [Google Scholar] [CrossRef]
  31. Dlaka, D.; Švaco, M.; Chudy, D.; Jerbić, B.; Šekoranja, B.; Šuligoj, F.; Vidaković, J.; Romić, D.; Raguž, M. Frameless stereotactic brain biopsy: A prospective study on robot-assisted brain biopsies performed on 32 patients by using the RONNA G4 system. Int. J. Med. Robot. Comput. Assist. Surg. MRCAS 2021, 17, e2245. [Google Scholar] [CrossRef]
  32. Raguž, M.; Dlaka, D.; Orešković, D.; Kaštelančić, A.; Chudy, D.; Jerbić, B.; Šekoranja, B.; Šuligoj, F.; Švaco, M. Frameless stereotactic brain biopsy and external ventricular drainage placement using the RONNA G4 system. J. Surg. Case Rep. 2022, 2022, rjac151. [Google Scholar] [CrossRef]
  33. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef]
  34. Huber, M.; Mower, C.E.; Ourselin, S.; Vercauteren, T.; Bergeles, C. LBR-Stack: ROS 2 and Python Integration of KUKA FRI for Med and IIWA Robots. J. Open Source Softw. 2024, 9, 6138. [Google Scholar] [CrossRef]
  35. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 3992–4003. [Google Scholar] [CrossRef]
  36. Wang, Y.; Dou, H.; Hu, X.; Zhu, L.; Yang, X.; Xu, M.; Qin, J.; Heng, P.A.; Wang, T.; Ni, D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Trans. Med. Imaging 2019, 38, 2768–2778. [Google Scholar] [CrossRef]
  37. Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995. [Google Scholar] [CrossRef]
  38. Suligoj, F.; Heunis, C.M.; Sikorski, J.; Misra, S. RobUSt—An Autonomous Robotic Ultrasound System for Medical Imaging. IEEE Access 2021, 9, 67456–67465. [Google Scholar] [CrossRef]
  39. Zhang, J.; Li, Y.; Hu, F.; Chen, P.; Zhang, H.; Song, L.; Yu, Y. Human-Robot Interaction of a Craniotomy Robot Based on Fuzzy Model Reference Learning Control. Trans. FAMENA 2024, 48, 155–171. [Google Scholar] [CrossRef]
Figure 1. Proposed workflow of robotically guided prostate biopsy. The robot scans the prostate by rotating the ultrasound probe around its axis. The ultrasound images are recorded on the PC and analyzed in real time. Using the segmentation masks, a 3D visualization is created and displayed on the GUI. Using GUI commands, the user can choose target coordinates for the biopsy needle placement on the individual slice images. The target coordinates are sent to the robot so it can place the aiming reticle attached to the probe in position for the clinician to take a biopsy sample at the selected location.
Figure 2. Calibration of the US probe on the robot. (a) US probe mounted on the robot submerged in water for the calibration. (b) US image of the calibration cone with the aiming reticle for the calibration of the TCP.
Figure 3. Ultrasound scanning procedure. A robot-mounted US probe is inserted into the prostate phantom. By rotating the probe around its axis, the robot can perform a scan of the whole prostate.
Figure 4. MedAP—Medical Annotation Platform, examples. (a) CIRS 053L—easily visible edges that can be segmented with SAM. (b) CIRS 070L—partially indistinct edges on the right that are manually annotated using polygons.
Figure 5. MedAP Workflow for training and testing dataset creation. The image shows complete data flow with standard prompting options (point/points, bounding box, polygon) and segmentation results.
Figure 6. Segmentation results. (ac) are phantom 1 images and (df) are phantom 2 images. (a,d) show original images, (b,e) show segmentation results while (c,f) show segmentation masks.
Figure 7. Graphical User Interface. The program is divided into three different interactive visualizations. In the top left corner, a reconstructed 3D model of the prostate gland is shown. The green cylinders represent biopsy targets picked by the clinician. The clinician can zoom in/out and rotate the model (using mouse and keyboard) to examine it. Next to the model is a small representation of the male human body, to check the prostate orientation. The bottom part contains all the US images recorded during the sweep, while the right top corner shows the selected US image on which the clinician can select an area from which a sample will be taken via biopsy.
Table 1. Comparison of segmentation performance for the two models: DAF3D and MicroSegNet (MSN). Since two different metrics were used, we applied the Bonferroni correction to the standard p-value threshold of 0.05, resulting in an adjusted p-value threshold of 0.05/2 = 0.025. The folds for which the p-value for both Dice and Jaccard scores is lower than the adjusted threshold are denoted by an asterisk *.
Fold No. | Dice (DAF3D)  | Dice (MSN)    | p-Value | Jaccard (DAF3D) | Jaccard (MSN) | p-Value
1        | 0.905 ± 0.035 | 0.948 ± 0.030 | 0.084   | 0.829 ± 0.590   | 0.905 ± 0.027 | 0.065
2        | 0.898 ± 0.060 | 0.953 ± 0.020 | 0.027   | 0.820 ± 0.095   | 0.911 ± 0.039 | 0.027
3        | 0.899 ± 0.048 | 0.942 ± 0.046 | 0.084   | 0.820 ± 0.076   | 0.909 ± 0.048 | 0.027
4 *      | 0.906 ± 0.041 | 0.950 ± 0.014 | 0.002   | 0.831 ± 0.063   | 0.908 ± 0.025 | 0.002
5 *      | 0.903 ± 0.040 | 0.964 ± 0.015 | 0.006   | 0.826 ± 0.067   | 0.930 ± 0.030 | 0.010
Table 2. Reconstruction performance over 10 sweeps of the CIRS 053L phantom.
             | Length [mm] | Width [mm]  | Height [mm] | Volume [mm³]
Measured     | 55.9 ± 0.47 | 42.9 ± 0.42 | 37.3 ± 0.61 | 54,058.2 ± 652.4
Ground truth | 50          | 45          | 40          | 53,000
Table 3. Reconstruction performance over 10 sweeps of the CIRS 070L phantom. Dimensional ground truth data are not specified.
             | Length [mm] | Width [mm]  | Height [mm] | Volume [mm³]
Measured     | 58.0 ± 0.16 | 43.9 ± 0.43 | 37.4 ± 0.19 | 53,217.6 ± 546.6
Ground truth | -           | -           | -           | 49,000
