Article

Ultrasound Imaging with Flexible Array Transducer for Pancreatic Cancer Radiation Therapy

1 Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21287, USA
2 Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
3 Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21287, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Cancers 2023, 15(13), 3294; https://doi.org/10.3390/cancers15133294
Submission received: 4 May 2023 / Revised: 2 June 2023 / Accepted: 19 June 2023 / Published: 22 June 2023
(This article belongs to the Special Issue Radiation Therapy for Pancreatic Cancer)

Simple Summary

Ultrasound (US) imaging has been widely used for tumor tracking in image-guided radiotherapy. The quality of US images acquired with conventional probes depends highly on user proficiency and anatomical changes, which has severely hindered US-based organ motion monitoring for pancreatic cancer. The flexible array transducer is a novel and promising solution that addresses these limitations of conventional US probes. At the same time, its flexible geometry makes image reconstruction and delay-and-sum (DAS) beamforming very challenging: inaccuracy in the delay calculation results in defocused and distorted US images. In this study, we propose a novel shape estimation method for the flexible array transducer to enhance abdominal motion monitoring.

Abstract

Pancreatic cancer, with a 3-year survival rate below 10%, is one of the deadliest cancer types and can greatly benefit from enhanced radiotherapy (RT). Organ motion monitoring helps spare normal tissue from high radiation doses and, in turn, enables dose escalation to the target, which has been shown to improve the effectiveness of RT by doubling and tripling post-RT survival rates. The flexible array transducer is a novel and promising solution that addresses the limitations of conventional ultrasound (US) probes. We propose a novel shape estimation method for the flexible array transducer that uses two sequential algorithms: (i) an optical tracking-based algorithm that uses the coordinates of optical markers attached to the probe at specific positions to estimate the array shape in real time and (ii) a fully automatic shape optimization algorithm that searches for the array shape that yields the highest-quality reconstructed image. We conducted phantom and in vivo experiments to evaluate the estimated array shapes and the accuracy of the reconstructed US images. The proposed method reconstructed US images with a low full-width-at-half-maximum (FWHM) of the point scatterers, a correct aspect ratio of the cyst, and a high matching score with the ground truth. Our results demonstrate that the proposed methods reconstruct high-quality ultrasound images with significantly less defocusing and distortion than those reconstructed without any correction. Specifically, the automatic optimization method reduced the array shape estimation error to less than a half-wavelength of the transmitted wave, resulting in high-quality reconstructed images.

1. Introduction

Organ and tumor movement during radiotherapy (RT) treatment delivery has an adverse impact on the effectiveness of RT and its clinical outcome [1]. To compensate for tumor motion and its related uncertainty, a safety margin is added to the target region as the standard of care. This expansion of the irradiated target region exposes the surrounding organs at risk (OARs) to radiation dose [2,3]. Moreover, this unwanted OAR irradiation limits dose escalation. For instance, with a 3-year survival rate below 10% [4], pancreatic cancer is a devastating disease that can greatly benefit from dose escalation: studies have shown that dose-escalated RT for pancreatic cancer doubles and triples the 2-year and 3-year survival rates, respectively [5,6,7,8,9,10,11,12,13]. Real-time motion tracking is the most effective method for motion compensation [2]; it allows RT to be delivered with higher precision by adjusting the radiation beam to the tumor location [14] and, thus, enables dose escalation [15,16,17,18,19,20,21,22].
The major cause of intrafraction OAR and target motion is respiration. Respiratory-related tumor motion in pancreatic cancer can reach up to 35 mm [23]. This considerably high motion uncertainty can be greatly mitigated by real-time tumor positioning. Tumor positioning methods can be divided into two major categories: direct and indirect [24]. Direct methods use imaging systems to track the tumor position directly; such systems include kilovoltage X-ray imaging-based methods [25,26,27,28], magnetic resonance (MR) imaging-based approaches [29], electromagnetic sensing systems [30,31], and ultrasound-based techniques [32,33,34,35]. In indirect position tracking, the motion trajectory is tracked using a surrogate that is highly correlated with the tumor motion, for instance, thoracoabdominal surface displacement measured with optical surface trackers. By tracking and analyzing the positions of markers placed on the skin, the external surface motion can be obtained [36,37]. Although indirect methods are non-invasive, increasing their accuracy requires an accurate model that converts the motion of the body surface to that of the internal organ [2,36].
Direct tumor tracking methods are highly accurate, but X-ray imaging exposes patients to an extra radiation dose, electromagnetic sensing requires the implantation of invasive markers, the associated costs are high, and MR imaging cannot be retrofitted to existing RT treatment machines. On the other hand, ultrasound-based direct approaches have several advantages [19,33,38]: (i) low cost, (ii) non-invasiveness, (iii) the ability to enhance images with contrast agents, and (iv) easy integration with the current standard of care. Ultrasound (US) tumor tracking has been successfully applied to prostate intrafraction motion tracking and is commercially available [36,37]. Even though studies have shown promising results for US-based abdominal intrafraction motion tracking [33,39,40,41], its clinical application has been highly hindered. This is mainly due to the need for a special holder for abdominal sites: the holder must be compatible with the treatment setup, and the probe and holder must be placed so that they do not block the radiation beam [33]. Moreover, conventional US probes are rigid and operator-dependent [42] and require contact pressure that may change the anatomy on each day of radiotherapy delivery and, thus, cause dose deviations from the original treatment plan, which was based on the patient anatomy under the initial ultrasound probe pressure and location [43,44,45].
The flexible array transducer is a promising design that can address the user dependency and induced anatomical changes of conventional US transducers [46,47,48]. The flexible array transducer is a thin US transducer that can be attached to the body surface and adapt to different geometries [49,50]. In recent years, many studies have shown the application of flexible array transducers in biomedical imaging [50,51]. Our group has been investigating the application of the flexible US probe for real-time tumor tracking during radiotherapy for both abdominal and head-and-neck cancers. Although the flexible probe is wearable and takes the shape of the subject's body, image reconstruction and ultrasound beamforming are very challenging due to its flexible geometry.
The delay-and-sum (DAS) beamformer is commonly used for B-mode image reconstruction from radiofrequency (RF) channel data. DAS requires an accurate array shape to calculate the time-of-flight (ToF) between the elements and the focal point and, accordingly, apply proper time delays to each channel [52]. As a result, the use of DAS with a flexible array transducer is very challenging, as the array shape is generally unknown at any given point in time and changes with the body surface. For a 5 MHz probe, a half-wavelength error in element position, or 0.154 mm, can cause wrongly delayed signals to be summed in opposite phase and introduce signal loss [53]. Therefore, if the array shape is not correctly defined, the reconstructed US image is significantly defocused and distorted [54].
Numerous approaches have been proposed to address the beamforming problem. De Oliveira et al. [55] and Boerkamp et al. [56] developed flexible array transducers with strain gauges and piezoelectric sensors bonded to their surfaces for detecting local curvature. Although these sensors could estimate curvature with reasonable accuracy, the additional hardware increases costs and limits the number of elements. Chang et al. [57] and Noda et al. [58] proposed mathematical model-based shape estimation algorithms to achieve optimal image quality using flexible array transducers. However, the estimation time for both algorithms is relatively long, resulting in a frame rate that is too low for real-time imaging. Moreover, Huang et al. [59] and Noda et al. [60] developed methods to directly reconstruct B-mode images or estimate array geometry from radio frequency (RF) data of flexible array transducers using deep neural networks (DNNs). Despite their efforts, both methods failed to reduce the array shape error to less than half the wavelength of the transmitted wave and suffered from blurry reconstructed images.
In this study, we propose a novel shape-estimation method for a flexible array transducer using a sequential approach. The optical-based shape-estimation algorithm uses an optical tracking system to collect the spatial coordinates of the array and estimate its shape, while the shape optimization algorithm further refines the estimated shape by searching for the array shape that reconstructs the beamformed image with the highest quality, without any external device. We conducted phantom and in vivo experiments and evaluated the accuracy of the estimated array shapes and reconstructed ultrasound images.

2. Materials and Methods

2.1. Optical-Based Shape Estimation Algorithm

The infrared (IR) optical tracker can emit and detect the reflected IR light from the passive marker spheres and triangulate the real-time spatial coordinates of the spheres in 3D space [61]. Several passive sphere markers were fixed on the flexible probe surface and their spatial coordinates were collected. The coordinate system for specifying the flexible array element positions is shown in Figure 1. The center positions of all the array elements are specified with x and z coordinates and azimuth angles (angle from the normal to the positive z axis), while the y coordinates are always equal to 0. However, as the spheres were manually mounted, there would be potential position errors on the y axis. Therefore, we applied the principal component analysis (PCA) to the x-y-z coordinates of the spheres and extracted the first two principal components [62]. The extracted components could be considered as the x and z coordinates of the flexible array.
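As an illustration of this projection step, the following is a minimal MATLAB sketch (not the authors' implementation) that reduces hypothetical 3D marker coordinates to in-plane (x, z) coordinates via an SVD-based PCA; the matrix markerXYZ and its values are assumed for demonstration only.

```matlab
% A minimal sketch of the projection step described above: reduce the 3D
% marker-sphere coordinates to in-plane (x, z) coordinates with an SVD-based
% PCA. The matrix markerXYZ and its values are hypothetical examples.
markerXYZ = [10.2 1.1  0.3;
             20.4 0.9  2.1;
             30.1 1.0  4.8;
             40.3 1.2  8.9;
             50.0 0.8 14.2];              % M-by-3 sphere centers from the tracker [mm]

mu        = mean(markerXYZ, 1);           % centroid of the markers
centered  = markerXYZ - mu;               % center the data
[~, ~, V] = svd(centered, 'econ');        % columns of V are the principal axes
xz        = centered * V(:, 1:2);         % scores along the first two components
xMarkers  = xz(:, 1);                     % in-plane coordinate along the array
zMarkers  = xz(:, 2);                     % in-plane coordinate across the array
```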
As the abdominal surface shape of the human is similar to an arc, the shape of the flexible array was first simplified as a concave circular arc with a specific radius. With the x–z coordinates of all the spheres, a circle can be fitted to the data points based on the least-squares fitting algorithm developed by Pratt [63], and the radius $R_{fit}$ of this circle is estimated by minimizing the following function:

$$R_{fit} = \underset{R_{fit}}{\arg\min} \sum_{i=1}^{M} \frac{\left( (X_i - A)^2 + (Z_i - B)^2 - R_{fit}^2 \right)^2}{R_{fit}^2}$$

where $M$ is the total number of spheres, $(X_i, Z_i)$ are the coordinates of the spheres, and $(A, B)$ is the estimated center of the fitted circle. As the optical tracking system measures the coordinates of the spheres' centers, the real radius of the flexible array $R$ can be calculated as follows:

$$R = R_{fit} - R_{sphere} - thickness$$

where $R_{sphere}$ is the radius of the passive marker spheres and $thickness$ is the thickness of the flexible array transducer. Therefore, the center position $(x_k, z_k)$ and the azimuth angle $\alpha_k$ of the $k$-th array element can be calculated as follows:

$$x_k = R \cdot \sin\!\left( \frac{2k - K - 1}{2} \cdot \frac{p}{R} \right)$$

$$z_k = R - R \cdot \cos\!\left( \frac{2k - K - 1}{2} \cdot \frac{p}{R} \right)$$

$$\alpha_k = \frac{K + 1 - 2k}{2} \cdot \frac{p}{R}$$

where $K$ is the total number of elements, $p$ is the pitch, and the center of the array is set at $(0, 0)$. In this way, the array shape can be fully described by $x_k$, $z_k$, and $\alpha_k$.
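The following minimal MATLAB sketch illustrates these circular-arc formulas; the radius, pitch, and element count used here are illustrative assumptions rather than the transducer's actual parameters (listed in Table 1).

```matlab
% A minimal sketch of the arc-shape formulas above: given a fitted array
% radius R, pitch p, and element count K (illustrative values), compute each
% element's center position and azimuth angle.
R = 60;                                   % fitted array radius [mm] (assumed)
p = 0.3;                                  % element pitch [mm] (assumed)
K = 128;                                  % number of elements (assumed)

k      = 1:K;
theta  = (2*k - K - 1)/2 * (p/R);         % arc angle of the k-th element center
xk     = R .* sin(theta);                 % lateral element positions
zk     = R - R .* cos(theta);             % axial element positions
alphak = (K + 1 - 2*k)/2 * (p/R);         % azimuth angles (note alphak = -theta)
```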

2.2. DAS Beamforming

With knowledge of the array shape, DAS beamforming can be performed to reconstruct the beamformed image. Specifically, the time-of-flight $\tau_{ToF}$ from the $t$-th transmitting element to the $r$-th receiving element through the focal point $(x_f, z_f)$ is formulated as follows:

$$\tau_{ToF}(t, r, x_f, z_f) = \frac{\sqrt{(x_f - x_t)^2 + (z_f - z_t)^2} + \sqrt{(x_f - x_r)^2 + (z_f - z_r)^2}}{c}$$

where $c$ is a constant speed of sound. Then, the RF channel data can be properly delayed and summed. The focal point in the beamformed image $I(x_f, z_f)$ is reconstructed as follows:

$$I(x_f, z_f) = \sum_{t=1}^{T} \sum_{r=1}^{R} \sum_{n=1}^{N} RF(t, r, n) \cdot \delta_{t,r}(n)$$

where $T$ is the number of transmitting elements, $R$ is the number of receiving elements, $N$ is the length of the received samples, and $RF(t, r, n)$ represents the $n$-th sample of the RF channel data for the $t$-th transmitting element and $r$-th receiving element. $\delta_{t,r}(n)$ is the Kronecker delta function for data extraction, expressed as follows [58]:

$$\delta_{t,r}(n) = \begin{cases} 1, & \text{if } \left| n - \tau_{ToF}(t, r, x_f, z_f) \cdot f_s \right| \leq \frac{1}{2} \\ 0, & \text{if } \left| n - \tau_{ToF}(t, r, x_f, z_f) \cdot f_s \right| > \frac{1}{2} \end{cases}$$

where $f_s$ is the sampling frequency. In this way, the beamformed image is reconstructed from the RF channel data based on the estimated array shape.
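To make the delay computation concrete, the sketch below beamforms a single focal point from hypothetical multistatic RF channel data; the variable names and the nearest-sample rounding that stands in for the Kronecker delta are assumptions consistent with the formulas above, not the authors' code.

```matlab
% A minimal sketch of the delay-and-sum step above for one focal point.
% RF is assumed to be a T-by-R-by-N array of channel data, xe/ze hold the
% element center positions, fs is the sampling frequency, c the speed of sound.
function val = dasPixel(RF, xe, ze, xf, zf, fs, c)
    [T, Rn, N] = size(RF);
    val = 0;
    for t = 1:T
        dTx = hypot(xf - xe(t), zf - ze(t));          % transmit path length
        for r = 1:Rn
            dRx = hypot(xf - xe(r), zf - ze(r));      % receive path length
            tof = (dTx + dRx) / c;                    % two-way time of flight
            n   = round(tof * fs);                    % nearest sample (Kronecker delta)
            if n >= 1 && n <= N
                val = val + RF(t, r, n);              % sum the properly delayed sample
            end
        end
    end
end
```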

2.3. Shape Optimization Algorithm

As mentioned, a half-wavelength error in element position will result in significant defocusing in the reconstructed image. However, in clinical practice, the spatial resolution of the optical tracking system is normally larger than the half-wavelength of the wave transmitted by the flexible array transducer (e.g., 0.25 mm accuracy for the NDI Polaris Spectra System). Therefore, the optical-based shape estimation algorithm may not achieve the expected array shape accuracy. To further improve the accuracy, we developed a shape optimization algorithm that describes the array shape with a more detailed model and searches for the optimal shape that reconstructs ultrasound images with the highest quality.

2.3.1. Array Shape Model

A more detailed mathematical model was designed to describe the flexible array shape. As the center-to-center distance between two successive elements is fixed at the pitch $p$, the array shape can be fully described with $K-1$ parameters $P = \{\Delta\alpha_1, \Delta\alpha_2, \ldots, \Delta\alpha_{K-1}\}$, where $\Delta\alpha_k$ is the external angle between the $k$-th element and the $(k+1)$-th element. Therefore, the azimuth angle $\alpha_k$ of the $k$-th array element can be calculated as follows:

$$\alpha_k = \begin{cases} 0, & \text{if } k = 1 \\ \sum_{i=1}^{k-1} \Delta\alpha_i, & \text{if } 1 < k \leq K \end{cases}$$

With the knowledge of the azimuth angles $\{\alpha_1, \alpha_2, \ldots, \alpha_K\}$ and the pitch $p$, the center position $(x_k, z_k)$ of the $k$-th array element can be calculated as follows:

$$x_k = \begin{cases} 0, & \text{if } k = 1 \\ p \cdot \sum_{i=1}^{k-1} \cos\alpha_i, & \text{if } 1 < k \leq K \end{cases}$$

$$z_k = \begin{cases} 0, & \text{if } k = 1 \\ p \cdot \sum_{i=1}^{k-1} \sin\alpha_i, & \text{if } 1 < k \leq K \end{cases}$$

where the azimuth angle of the first element is set to 0, and the center position of the first element is set at $(0, 0)$. Then, DAS beamforming can be performed to reconstruct the beamformed image $I$ from the RF channel data based on this array shape model.
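A minimal MATLAB sketch of this shape model, assuming the parameter vector is passed as a numeric array of external angles, is given below; the function name is hypothetical.

```matlab
% A minimal sketch of the detailed shape model above: convert the K-1 external
% angles (the parameter vector P) into element azimuth angles and center
% positions, given the pitch p.
function [xk, zk, alphak] = shapeFromAngles(dAlpha, p)
    K         = numel(dAlpha) + 1;
    alphak    = [0, cumsum(dAlpha(:)')];             % alpha_1 = 0, then cumulative sums
    xk        = zeros(1, K);
    zk        = zeros(1, K);
    xk(2:end) = p * cumsum(cos(alphak(1:end-1)));    % x_k = p * sum_{i<k} cos(alpha_i)
    zk(2:end) = p * cumsum(sin(alphak(1:end-1)));    % z_k = p * sum_{i<k} sin(alpha_i)
end
```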

2.3.2. Evaluation of Beamformed Image

An evaluation metric for the beamformed image $I$ was selected to indicate the accuracy of the estimated array shape. Inspired by the maximum entropy (MEM) image reconstruction algorithm [64], studies on transducer array shape estimation [58], and autofocusing of synthetic-aperture radar (SAR) images [65,66], we used the Shannon image entropy as the metric for evaluating the quality of beamformed images [67]. The beamformed image $I$ was normalized to $I_{norm}$, and its entropy $H$ can be calculated with the following function:

$$H(I_{norm}) = -\sum_{i=1}^{M} p_i \log_2 p_i$$
where $p_i$ represents the probability of observing the $i$-th possible outcome of the beamformed image $I_{norm}$. Entropy measures the information content of an image: entropies near 0 indicate images with little or no information, while larger entropies indicate greater information content. However, there is a conflict between different studies: MEM maximizes the entropy to optimize the reconstructed image, while SAR autofocusing minimizes the entropy to achieve the best focus. To use the entropy correctly in our study, we tested the relationship between the array shape accuracy and the beamformed image entropy.
When performing the test scans on different imaging targets, the flexible array transducer was set in a flat shape to make the ground truth of the array shape known. Then, different array shape assumptions were randomly generated and used for reconstructing the beamformed image. The mean absolute errors between the assumptions and ground truth were plotted versus the entropy of their corresponding beamformed images. Figure 2 shows an example of the scatter plot of mean absolute errors of the array shape and entropy scores of the corresponding reconstructed images from 1000 random shape assumptions.
The results show that the entropy of the beamformed image tends to be larger when the array shape error is smaller. Therefore, we adopted the principle that the beamformed image using an array shape of higher accuracy has a larger entropy. A more in-depth discussion of the effect of entropy in ultrasound images is presented in the Discussion (Section 4).
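For reference, a minimal MATLAB sketch of the image-entropy metric is given below; the histogram-based probability estimate and the 256-bin choice are assumptions, since the paper does not specify how the pixel-value probabilities were computed.

```matlab
% A minimal sketch of the entropy metric above: Shannon entropy of a
% normalized beamformed image, with the pixel-value distribution estimated
% from a histogram (256 bins is an assumption, not from the paper).
function H = imageEntropy(I)
    Inorm  = abs(I) ./ max(abs(I(:)));        % normalize to [0, 1]
    counts = histcounts(Inorm(:), 256);       % empirical distribution of pixel values
    prob   = counts / sum(counts);
    prob   = prob(prob > 0);                  % drop empty bins (0*log2(0) taken as 0)
    H      = -sum(prob .* log2(prob));        % Shannon entropy in bits
end
```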

2.3.3. Array Shape Optimization

The circular-arc array shape estimated using the optical-based shape estimation algorithm is used as the initial shape assumption for further optimization. The optimal shape parameters $P = \{\Delta\alpha_1, \Delta\alpha_2, \ldots, \Delta\alpha_{K-1}\}$ are globally searched to achieve the maximum entropy of the corresponding beamformed image. The following global maximization problem is solved:

$$\hat{P} = \underset{P}{\arg\max} \; f(P), \quad lb \leq P \leq ub$$

where $f(P)$ is the entropy evaluation function of the beamformed image reconstructed based on $P$, and $lb$ and $ub$ are the lower and upper bounds of the shape parameters. The optimization is terminated when the stopping criterion is met. An overview of the shape optimization algorithm is presented in Figure 3. Details of the optimization solver, bounds, and stopping condition are described in Section 2.5.3.
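A minimal sketch of how this objective can be wrapped for a numerical solver is shown below; beamformWithShape is a hypothetical helper that combines the shape model and DAS beamforming over the imaging grid, and the sign flip reflects the common convention that solvers minimize.

```matlab
% A minimal sketch of the objective f(P) above, written for a minimizer:
% maximizing the image entropy is equivalent to minimizing its negative.
% beamformWithShape is a hypothetical helper (shape model + DAS over the grid);
% imageEntropy is sketched in Section 2.3.2. RF, pitch, fs, c are assumed given.
negEntropy = @(P) -imageEntropy(beamformWithShape(RF, P, pitch, fs, c));
```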

2.4. Scan Conversion

With the knowledge of the azimuth angle and center position of each element, the angle and origin position of each transmission scan-line can be determined. Scan conversion is performed to transform the polar-coordinate ultrasound data into Cartesian-coordinate data. Bilinear interpolation is applied for the scan conversion task. The method uses four adjacent data points in the beamformed image, $\{I(x_l, z_l), I(x_l, z_u), I(x_u, z_l), I(x_u, z_u)\}$, to compute the value of the needed pixel $S(x, z)$, which can be expressed as follows [68,69]:

$$T_1 = \frac{x_u - x}{x_u - x_l} \cdot I(x_l, z_l) + \frac{x - x_l}{x_u - x_l} \cdot I(x_u, z_l)$$

$$T_2 = \frac{x_u - x}{x_u - x_l} \cdot I(x_l, z_u) + \frac{x - x_l}{x_u - x_l} \cdot I(x_u, z_u)$$

$$S(x, z) = \frac{z_u - z}{z_u - z_l} \cdot T_1 + \frac{z - z_l}{z_u - z_l} \cdot T_2$$

where $T_1$ and $T_2$ are the intermediate values for computing the pixel value $S(x, z)$. Then, the imaging region is calculated based on the scan-line profile, and a binary mask is applied to the interpolated image to obtain the scan-converted ultrasound image. Figure 4 illustrates the profile of each scan-line and the corresponding scan-converted ultrasound image.
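For illustration, a minimal MATLAB sketch of this interpolation for one output pixel is given below; in practice, a built-in routine such as interp2 with the 'linear' method would typically be applied over the whole grid.

```matlab
% A minimal sketch of the bilinear interpolation above for one output pixel.
% Ill, Iul, Ilu, Iuu correspond to I(x_l,z_l), I(x_u,z_l), I(x_l,z_u), I(x_u,z_u);
% names are illustrative.
function s = bilinearSample(x, z, xl, xu, zl, zu, Ill, Iul, Ilu, Iuu)
    t1 = (xu - x)/(xu - xl)*Ill + (x - xl)/(xu - xl)*Iul;   % interpolate along x at z_l
    t2 = (xu - x)/(xu - xl)*Ilu + (x - xl)/(xu - xl)*Iuu;   % interpolate along x at z_u
    s  = (zu - z)/(zu - zl)*t1 + (z - zl)/(zu - zl)*t2;     % interpolate along z
end
```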

2.5. Experiments

2.5.1. Optical-Based Shape Estimation Experiments

The NDI Polaris Spectra System (Northern Digital Inc., Waterloo, ON, Canada), rigid body tools, and passive marker spheres were used for optical-based shape estimation. Five passive marker spheres were attached to the back surface of a flexible array transducer (made by Hitachi and Japan Probe, Yokohama, Japan); the parameters of the transducer are listed in Table 1. All the spheres were aligned with the transducer array, and the first and last spheres were positioned at the two ends of the array. The spatial coordinates of all the spheres were collected using the optical tracker, and the proposed shape estimation algorithm was implemented in MATLAB (version R2020b, MathWorks, Natick, MA, USA) to estimate the array shape.
Three cylinder-shaped objects with known radii $R_{object}$ were used to evaluate the accuracy of the shape estimation algorithm. The experimental set-up is shown in Figure 5. The flexible array transducer was attached to the surface of each cylinder-shaped object, and the radius of the transducer array $R_{array}$ was estimated with the proposed algorithm.
The flexible array transducer was also attached to the surface of the abdominal region of an ABDFAN Abdominal Ultrasound Phantom (Kyoto Kagaku Co., Kyoto, Japan) in the lateral direction. The cross-sectional X-ray image of the phantom and transducer was captured using a Cios Alpha C-arm imaging system (Siemens Healthineers, Erlangen, Germany). Then, the transducer array was segmented from the X-ray image and fitted to a curve. This segmented array shape was considered the ground truth and compared with the optical-based estimated and optimized array shapes.

2.5.2. RF Data Acquisition and Image Reconstruction

The flexible array transducer, connected to a Vantage System (Verasonics Inc., Kirkland, WA, USA), was used to scan a CIRS General Purpose Ultrasound Phantom (Computerized Imaging Reference Systems Inc., Norfolk, VA, USA), the liver area of an ABDFAN Abdominal Ultrasound Phantom (Kyoto Kagaku Co., Kyoto, Japan), and a healthy volunteer under Johns Hopkins Institutional Review Board approval. Ultrasound gel was used to create a random curvature for the scanning surface of the flexible array transducer, as depicted in Figure 6. In addition, a linear array transducer (ATL L7-4 38 mm, Philips Healthcare, Cambridge, MA, USA) was used to scan the same imaging region. The parameters of the linear array transducer are shown in Table 2, and the speed of sound was set as uniform at 1540 m/s. The RF channel data were acquired by transmitting acoustic signals and receiving the backscattered signals.
The DAS beamforming, scan conversion, and image post-processing were implemented in MATLAB. The RF channel data and the estimated array shape parameters were used to reconstruct the "corrected" ultrasound images. In addition, the RF channel data were wrongly delayed and reconstructed under the assumption that the flexible array was flat and linear; these "uncorrected" results represent the images reconstructed without any array shape correction. The RF channel data from the linear array transducer were also reconstructed as the ground truth, against which the "corrected" and "uncorrected" results of the flexible array transducer were evaluated and compared.

2.5.3. Shape Optimization Algorithm Implementation

The shape optimization algorithm was implemented in MATLAB with the "surrogateopt" function, a global solver for time-consuming objective functions. To achieve faster convergence, the initial shape parameters $P_0$ were set to the circular-arc array shape estimated by the optical-based estimation algorithm. In addition, to keep the array shape smooth and continuous, the lower bounds $lb$ and upper bounds $ub$ were set as follows:

$$lb = P_0 - \pi/720, \quad ub = P_0 + \pi/720$$

The bounding values were set based on the abdominal surface shape of the human body. The shape optimization was terminated after 200 iterations. The array shape parameters $P = \{\Delta\alpha_1, \Delta\alpha_2, \ldots, \Delta\alpha_{K-1}\}$ were optimized to achieve the maximum entropy of the beamformed image.
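A minimal sketch of this set-up, assuming the negative-entropy objective from Section 2.3.3, could look as follows; the exact solver options used by the authors are not reported, so the settings below are assumptions.

```matlab
% A minimal sketch of the optimization set-up described above. negEntropy is
% the negative-entropy objective sketched in Section 2.3.3; P0 is the
% optical-based circular-arc estimate. surrogateopt and optimoptions belong to
% MATLAB's Global Optimization Toolbox; the option values are assumptions.
lb   = P0 - pi/720;                          % lower bounds on the external angles
ub   = P0 + pi/720;                          % upper bounds on the external angles
opts = optimoptions('surrogateopt', ...
        'InitialPoints', P0, ...             % start the search from the arc estimate
        'MaxFunctionEvaluations', 200);      % stop after 200 evaluations
Popt = surrogateopt(negEntropy, lb, ub, opts);
```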

2.6. Evaluation Metrics

As defocusing and distortion are the two major problems of ultrasound images reconstructed without knowledge of the array shape, the following metrics were used to evaluate the quality and accuracy of the results. For the CIRS phantom results, the defocusing effect was assessed by evaluating the lateral full-width-at-half-maximum (FWHM) of all the point scatterers, while the image distortion was evaluated by the aspect ratio of the cyst. The aspect ratio is the ratio of the lateral and axial axes of the cyst; an ellipse with an aspect ratio of 1 is a circle:

$$Aspect\ Ratio = \frac{Lateral\ Axis}{Axial\ Axis}$$

For the ABDFAN phantom and liver scan results, the cysts, blood vessels, and muscles were manually segmented. The image distortion was evaluated by computing the Sørensen–Dice coefficient (Dice score) [70], Jaccard similarity coefficient (Jaccard index) [71], and Hausdorff distance [72] between the segmentations of the flexible array transducer results and the ground truth (linear array transducer results). Before the evaluation, rigid registration was performed on the segmentations to eliminate the translational and rotational differences between the results and the ground truth. For $X$ and $Y$ being two non-empty segmentation sets, the Dice score $DSC$, Jaccard index $J$, and Hausdorff distance $HD$ can be computed as follows:

$$DSC = \frac{2|X \cap Y|}{|X| + |Y|}, \quad J = \frac{|X \cap Y|}{|X \cup Y|}$$

$$HD(X, Y) = \max_{x \in X} \left\{ \min_{y \in Y} \left\{ d(x, y) \right\} \right\}$$

where $x$ and $y$ are points in the sets $X$ and $Y$, respectively, and $d(x, y)$ is the Euclidean distance between points $x$ and $y$. In addition, the contrast-to-noise ratio (CNR) and generalized CNR (GCNR) [73] were used to evaluate the quality of the reconstructed images. The CNR and GCNR are calculated from the inner and outer regions defined in the figures:

$$CNR = 20 \log_{10} \left( \frac{|\mu_{out} - \mu_{in}|}{\sqrt{\sigma_{out}^2 + \sigma_{in}^2}} \right)$$

where $\mu_{in}$ and $\mu_{out}$ are the mean intensities, and $\sigma_{in}$ and $\sigma_{out}$ are the standard deviations of the inner and outer regions, respectively. The GCNR is defined as:

$$GCNR = 1 - \int \min \left\{ p_{out}(x), p_{in}(x) \right\} dx$$

where $p_{in}(x)$ and $p_{out}(x)$ are the probability density functions of the signal in the inner and outer regions, and $x$ is the pixel intensity. If $GCNR = 0$, the distributions overlap entirely; if $GCNR = 1$, the distributions are completely separated, which represents high contrast.
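The following minimal MATLAB sketch computes the Dice score, Jaccard index, CNR, and GCNR from hypothetical segmentation masks and region intensities; it is an illustration of the definitions above, not the evaluation code used in the study.

```matlab
% A minimal sketch of the overlap and contrast metrics above. maskA and maskB
% are hypothetical logical segmentation masks; inner and outer are column
% vectors of pixel intensities from the two regions. The Hausdorff distance is
% omitted for brevity; the 256-bin GCNR estimate is an assumption.
function [dsc, jac, cnr, gcnr] = evalMetrics(maskA, maskB, inner, outer)
    inter = nnz(maskA & maskB);
    dsc   = 2*inter / (nnz(maskA) + nnz(maskB));             % Dice score
    jac   = inter / nnz(maskA | maskB);                      % Jaccard index

    cnr   = 20*log10(abs(mean(outer) - mean(inner)) / ...
            sqrt(var(outer) + var(inner)));                   % CNR in dB

    edges = linspace(min([inner; outer]), max([inner; outer]), 257);
    pIn   = histcounts(inner, edges, 'Normalization', 'probability');
    pOut  = histcounts(outer, edges, 'Normalization', 'probability');
    gcnr  = 1 - sum(min(pIn, pOut));                          % overlap-based GCNR
end
```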

3. Results

3.1. Array Shape Estimation and Optimization

The ground truth radii of the cylinder-shaped objects $R_{object}$ and the estimated radii of the flexible array $R_{array}$ from the optical-based shape estimation algorithm are listed in Table 3. The absolute errors of the radii are 0.20 mm for object 1, 0.73 mm for object 2, and 1.55 mm for object 3. The cross-sectional X-ray image of the ABDFAN phantom and the flexible array transducer is shown in Figure 7; the black line on the surface of the phantom is the array. Figure 8 depicts the comparison of the array shape results from the different methods, where the X-ray segmented shape (red line) is considered the ground truth. The mean absolute error of the element positions between the ground truth and the optical-based estimated shape is 0.3604 mm, while that of the optimized shape is 0.1488 mm. The half-wavelength of the transmitted wave is 0.15 mm; therefore, the shape optimization algorithm achieves the accuracy required for reconstructing optimal ultrasound images.

3.2. CIRS Phantom Results

The uncorrected, optical-based estimation, and optimization results of the CIRS phantom are shown in Figure 9. The results illustrate that, without any correction of the array shape, the reconstructed image has strong defocusing and distortion, while both proposed algorithms can correct these effects. The lateral FWHM of the point scatterers at different depths is plotted in Figure 9d. The averaged lateral FWHM of the uncorrected, estimation, and optimization results is 6.04 mm, 2.42 mm, and 2.75 mm, respectively. The aspect ratio, CNR, and GCNR of the center hyperechoic cyst and the second-left anechoic cyst are listed in Table 4. Both corrected results have significantly lower distortion and higher contrast than the uncorrected result. Specifically, for the cysts in the CIRS phantom, the optimization result has a more accurate shape, clearer boundary, and higher contrast than the optical-based estimation result. Therefore, the shape optimization algorithm is believed to have the overall best performance in estimating the array shape.

3.3. ABDFAN Phantom and Liver Scan Results

The uncorrected, optical-based estimation, and optimization results of the ABDFAN phantom and liver scan are shown in Figure 10 and Figure 11. The ground truth images from the linear array transducer are shown in Figure 10d and Figure 11d, the same regions are cropped from the flexible array transducer results, and examples of the uncorrected results are depicted in Figure 10e and Figure 11e. The uncorrected results have significant distortions compared with the ground truth and the corrected results. To quantitatively analyze the distortion, the cysts, blood vessels, and muscles were segmented as shown in Figure 12, and the Dice score, Jaccard index, and Hausdorff distance between the results and the ground truth were evaluated and listed in Table 5, Table 6 and Table 7. The results show that both the estimation and optimization algorithms can correct the distortions of the reconstructed image, and there is no significant difference between the two algorithms. The CNR and GCNR of the center cyst in the ABDFAN phantom and the large blood vessel in the liver scan are also listed in Table 5, Table 6 and Table 7. In conclusion, the images reconstructed by both algorithms have overall higher accuracy and contrast than the uncorrected images, and the optimization algorithm has a slightly better performance in estimating the array shape.

4. Discussion

The evaluation results showed that the proposed optical-based shape estimation algorithm could successfully estimate the array shape of the flexible array transducer with reasonable accuracy and reconstruct ultrasound images with significantly less defocusing and distortion than those without any shape correction. The proposed shape optimization algorithm could further improve the accuracy of the array shape estimation. The mean absolute error of the element positions was less than a half-wavelength of the transmitted wave, which means the algorithm can reconstruct ultrasound images with optimal accuracy and quality. In addition, the computation time of the shape estimation algorithm was less than 0.01 s, while that of the shape optimization algorithm for 200 iterations was about 1000 s. Therefore, the proposed shape estimation algorithm can achieve real-time ultrasound imaging with acceptable accuracy and frame rate. For specific frames that require higher accuracy and quality, the shape optimization algorithm can be used to further improve the reconstructed images.
The Shannon entropy is a measure of the information and uncertainty in random variables [67]. Hughes first used entropy for analyzing ultrasound signals and indicated that it can quantitatively characterize changes in the microstructure of scattering media [74,75]. Tsui et al. applied the entropy of ultrasound backscattered signals to the assessment of multiple diseases [76,77]. These studies concluded that increasing the scatterer concentration generates a stronger effect of constructive wave interference and leads to a larger backscattered amplitude. In this condition, various echo amplitudes exist, and the signal uncertainty and unpredictability (entropy) increase. Noda et al. proposed the assumption that the beam-summed image using an array shape of higher accuracy would have a smaller entropy [58]; however, there was no in-depth discussion of this assumption. To the best of our knowledge, there is no study evaluating the ultrasound beamforming process with entropy.
As the RF channel data are delayed and summed based on the array shape to form the beamformed image, errors in the array shape will make the delayed RF data sum in opposite phase, which causes significant signal loss. This leads to smaller backscattered amplitudes, and the entropy decreases. In addition to testing the correlation between the array shape accuracy and the beamformed image entropy, we also tested the effect of entropy by maximizing and minimizing the entropy in our shape optimization algorithm and comparing the results. Figure 13 shows the reconstructed images using the initial shape, the maximum-entropy-optimized shape, and the minimum-entropy-optimized shape. The results illustrate that the array shape optimized by minimizing the entropy is worse than the initial shape. Therefore, we believe that the beamformed ultrasound image using an array shape of higher accuracy has a larger entropy.
Here, we presented a shape estimation approach that consists of two sequential methods to estimate the array shape and reconstruct B-mode images acquired by the flexible array transducer. One potential application of our method is to improve the ultrasound image reconstruction technique for in vivo calculation of tumor shape and size [78]. With the current developments in nanomedicine, it has shown great potential for the diagnosis and treatment of many diseases [79]. Nanocarriers can actively and precisely target the tumor by binding to receptors overexpressed by cancer cells [80]. However, tumor size and shape might be an important consideration when designing nanocarriers [79]. The method presented in this paper may increase the accuracy of tumor shape and size estimation, which, in turn, can increase the efficiency of nanocarrier design. Future studies are warranted to demonstrate the effectiveness of our method for shape estimation.
There are some limitations to this study. First, the computation time of the shape optimization algorithm is long. As explained in the Methods, the Kronecker delta function $\delta_{t,r}(n)$ was used for reconstructing the beamformed image $I$. As $\delta_{t,r}(n)$ is not differentiable with respect to the shape parameters $P$, our shape optimization algorithm was implemented with a global solver that does not use the gradient with respect to $P$, which significantly increases the computation time compared with gradient descent-based optimizers. Inspired by the previous literature [58], a future direction of this study will be replacing the Kronecker delta function with a function that is differentiable with respect to $P$. Second, the optical-based estimation algorithm relies on external devices, including the infrared optical tracker and passive marker spheres, which may be occluded during radiotherapy; the optical tracking system should be properly set up in advance to avoid any interference. In addition, the speed of sound was set to be homogeneous in this study. Changes in the speed of sound will influence the ToF to the focal points, which has the same effect as errors in the array shape. Therefore, the proposed shape optimization algorithm may simultaneously correct the array shape and the heterogeneous speed of sound. Future work is needed to study how a heterogeneous speed of sound may affect the performance of the proposed algorithms.

5. Conclusions

In this study, we proposed a novel shape estimation approach that consists of two sequential methods to estimate the array shape and reconstruct B-mode images acquired by a flexible array transducer. The optical-based shape estimation algorithm used the optical tracking system to collect the spatial coordinates of the array and estimate the array shape as a circular arc with a specific radius. The shape optimization algorithm searched for the array shape that maximized the entropy of its beamformed image. The evaluation results showed that the estimation algorithm could reconstruct ultrasound images with significantly less defocusing and distortion than those without any correction, while the optimization algorithm could further reduce the array shape error to less than a half-wavelength of the transmitted wave and improve the accuracy and quality of the reconstructed images. Therefore, the proposed algorithms have the potential to enable high-quality real-time ultrasound imaging with the flexible array transducer. Additionally, as no holder is required for probe fixation, the real-time reconstruction enables US-based gastrointestinal organ motion monitoring during radiotherapy.

Author Contributions

Conceptualization, X.H. and K.D.; methodology, X.H., H.H., and K.D.; software, X.H. and K.D.; validation, X.H. and K.D.; formal analysis, X.H. and K.D.; investigation, X.H. and K.D.; resources, K.D.; data curation, X.H. and K.D.; writing—original draft preparation, X.H., H.H., and K.D.; writing—review and editing, X.H., H.H., K.D., D.C., Z.F., J.L., and M.A.L.B.; visualization, X.H. and K.D.; supervision, K.D.; project administration, K.D.; funding acquisition, K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Cancer Institute of the National Institutes of Health [award number R37CA229417]. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Johns Hopkins University on 5 September 2015.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

The authors would like to thank the staff of the Carnegie Center for Surgical Innovation at Johns Hopkins University for their valuable assistance, Ralph Hubbell at the Center for Leadership Education at Johns Hopkins University for the English language consultation, and the anonymous reviewers for their helpful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han-Oh, S.; Hill, C.; Kang-Hsin Wang, K.; Ding, K.; Wright, J.L.; Alcorn, S.; Meyer, J.; Herman, J.; Narang, A. Geometric Reproducibility of Fiducial Markers and Efficacy of a Patient-Specific Margin Design Using Deep Inspiration Breath Hold for Stereotactic Body Radiation Therapy for Pancreatic Cancer. Adv. Radiat. Oncol. 2021, 6, 100655. [Google Scholar] [CrossRef] [PubMed]
  2. Ting, L.-L.; Chuang, H.-C.; Liao, A.-H.; Kuo, C.-C.; Yu, H.-W.; Tsai, H.-C.; Tien, D.-C.; Jeng, S.-C.; Chiou, J.-F. Tumor Motion Tracking Based on a Four-Dimensional Computed Tomography Respiratory Motion Model Driven by an Ultrasound Tracking Technique. Quant. Imaging Med. Surg. 2020, 10, 26. [Google Scholar] [CrossRef] [PubMed]
  3. Engelsman, M.; Damen, E.M.F.; De Jaeger, K.; van Ingen, K.M.; Mijnheer, B.J. The Effect of Breathing and Set-up Errors on the Cumulative Dose to a Lung Tumor. Radiother. Oncol. 2001, 60, 95–105. [Google Scholar] [CrossRef]
  4. Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer Statistics, 2023. CA Cancer J. Clin. 2023, 73, 17–48. [Google Scholar] [CrossRef] [PubMed]
  5. Berger, A.C.; Garcia, M., Jr.; Hoffman, J.P.; Regine, W.F.; Abrams, R.A.; Safran, H.; Konski, A.; Benson, A.B., 3rd; MacDonald, J.; Willett, C.G. Postresection CA 19-9 Predicts Overall Survival in Patients with Pancreatic Cancer Treated with Adjuvant Chemoradiation: A Prospective Validation by RTOG 9704. J. Clin. Oncol. 2008, 26, 5918–5922. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Bruynzeel, A.M.E.; Lagerwaard, F.J. The Role of Biological Dose-Escalation for Pancreatic Cancer. Clin. Transl. Radiat. Oncol. 2019, 18, 128–130. [Google Scholar] [CrossRef] [Green Version]
  7. Zaorsky, N.G.; Lehrer, E.J.; Handorf, E.; Meyer, J.E. Dose Escalation in Stereotactic Body Radiation Therapy for Pancreatic Cancer: A Meta-Analysis. Am. J. Clin. Oncol. 2019, 42, 46–55. [Google Scholar] [CrossRef]
  8. Zhu, X.; Cao, Y.; Su, T.; Zhu, X.; Ju, X.; Zhao, X.; Jiang, L.; Ye, Y.; Cao, F.; Qing, S.; et al. Failure Patterns and Outcomes of Dose Escalation of Stereotactic Body Radiotherapy for Locally Advanced Pancreatic Cancer: A Multicenter Cohort Study. Ther. Adv. Med. Oncol. 2020, 12, 1758835920977155. [Google Scholar] [CrossRef]
  9. Rudra, S.; Jiang, N.; Rosenberg, S.A.; Olsen, J.R.; Roach, M.C.; Wan, L.; Portelance, L.; Mellon, E.A.; Bruynzeel, A.; Lagerwaard, F.; et al. Using Adaptive Magnetic Resonance Image-Guided Radiation Therapy for Treatment of Inoperable Pancreatic Cancer. Cancer Med. 2019, 8, 2123–2132. [Google Scholar] [CrossRef]
  10. Kerdsirichairat, T.; Narang, A.K.; Thompson, E.; Kim, S.-H.; Rao, A.; Ding, K.; Shin, E.J. Feasibility of Using Hydrogel Spacers for Borderline-Resectable and Locally Advanced Pancreatic Tumors. Gastroenterology 2019, 157, 933–935. [Google Scholar] [CrossRef] [Green Version]
  11. Rao, A.D.; Shin, E.J.; Meyer, J.; Thompson, E.L.; Fu, W.; Hu, C.; Fishman, E.K.; Weiss, M.; Wolfgang, C.; Burkhart, R.; et al. Evaluation of a Novel Absorbable Radiopaque Hydrogel in Patients Undergoing Image Guided Radiation Therapy for Borderline Resectable and Locally Advanced Pancreatic Adenocarcinoma. Pract. Radiat. Oncol. 2020, 10, e508–e513. [Google Scholar] [CrossRef] [PubMed]
  12. Hooshangnejad, H.; Han-Oh, S.; Shin, E.J.; Narang, A.; Rao, A.D.; Lee, J.; McNutt, T.; Hu, C.; Wong, J.; Ding, K. Demonstrating the Benefits of Corrective Intra-Operative Feedback in Improving the Quality of Duodenal Hydrogel Spacer Placement. Med. Phys. 2022, 49, 4794–4803. [Google Scholar] [CrossRef] [PubMed]
  13. Hooshangnejad, H.; Youssefian, S.; Narang, A.; Shin, E.J.; Rao, A.D.; Han-Oh, S.; McNutt, T.; Lee, J.; Hu, C.; Wong, J.; et al. Finite Element-Based Personalized Simulation of Duodenal Hydrogel Spacer: Spacer Location Dependent Duodenal Sparing and a Decision Support System for Spacer-Enabled Pancreatic Cancer Radiation Therapy. Front. Oncol. 2022, 12, 833231. [Google Scholar] [CrossRef]
  14. Schweikard, A.; Shiomi, H.; Adler, J. Respiration Tracking in Radiosurgery. Med. Phys. 2004, 31, 2738–2741. [Google Scholar] [CrossRef] [Green Version]
  15. Hooshangnejad, H.; Chen, Q.; Feng, X.; Zhang, R.; Ding, K. DeepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. arXiv 2023, arXiv:2301.11085. [Google Scholar] [CrossRef]
  16. Han, D.; Hooshangnejad, H.; Chen, C.-C.; Ding, K. A Beam-Specific Optimization Target Volume for Stereotactic Proton Pencil Beam Scanning Therapy for Locally Advanced Pancreatic Cancer. Adv. Radiat. Oncol. 2021, 6, 100757. [Google Scholar] [CrossRef]
  17. Rao, A.D.; Feng, Z.; Shin, E.J.; He, J.; Waters, K.M.; Coquia, S.; DeJong, R.; Rosati, L.M.; Su, L.; Li, D.; et al. A Novel Absorbable Radiopaque Hydrogel Spacer to Separate the Head of the Pancreas and Duodenum in Radiation Therapy for Pancreatic Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2017, 99, 1111–1120. [Google Scholar] [CrossRef] [Green Version]
  18. Feng, Z.; Rao, A.D.; Cheng, Z.; Shin, E.J.; Moore, J.; Su, L.; Kim, S.-H.; Wong, J.; Narang, A.; Herman, J.M.; et al. Dose Prediction Model for Duodenum Sparing With a Biodegradable Hydrogel Spacer for Pancreatic Cancer Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 651–659. [Google Scholar] [CrossRef] [Green Version]
  19. Feng, Z.; Hooshangnejad, H.; Shin, E.J.; Narang, A.; Lediju Bell, M.A.; Ding, K. The Feasibility of Haar Feature-Based Endoscopic Ultrasound Probe Tracking for Implanting Hydrogel Spacer in Radiation Therapy for Pancreatic Cancer. Front. Oncol. 2021, 11, 759811. [Google Scholar] [CrossRef] [PubMed]
  20. Hooshangnejad, H.; Youssefian, S.; Guest, J.K.; Ding, K. FEMOSSA: Patient-Specific Finite Element Simulation of the Prostate–Rectum Spacer Placement, a Predictive Model for Prostate Cancer Radiotherapy. Med. Phys. 2021, 48, 3438–3452. [Google Scholar] [CrossRef] [PubMed]
  21. Rao, A.D.; Shin, E.J.; Beck, S.E.; Garrett, C.; Kim, S.-H.; Lee, N.J.; Liapi, E.; Wong, J.; Herman, J.; Narang, A.; et al. Demonstration of Safety and Feasibility of Hydrogel Marking of the Pancreas-Duodenum Interface for Image Guided Radiation Therapy (IGRT) in a Porcine Model: Implications in IGRT for Pancreatic Cancer Patients. Int. J. Radiat. Oncol. Biol. Phys. 2018, 101, 640–645. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  22. Kim, S.-H.; Ding, K.; Rao, A.; He, J.; Bhutani, M.S.; Herman, J.M.; Narang, A.; Shin, E.J. EUS-Guided Hydrogel Microparticle Injection in a Cadaveric Model. J. Appl. Clin. Med. Phys. 2021, 22, 83–91. [Google Scholar] [CrossRef] [PubMed]
  23. Minn, A.Y.; Schellenberg, D.; Maxim, P.; Suh, Y.; McKenna, S.; Cox, B.; Dieterich, S.; Xing, L.; Graves, E.; Goodman, K.A.; et al. Pancreatic Tumor Motion on a Single Planning 4D-CT Does Not Correlate With Intrafraction Tumor Motion During Treatment. Am. J. Clin. Oncol. 2009, 32, 364–368. [Google Scholar] [CrossRef] [PubMed]
  24. Keall, P.J.; Mageras, G.S.; Balter, J.M.; Emery, R.S.; Forster, K.M.; Jiang, S.B.; Kapatoes, J.M.; Low, D.A.; Murphy, M.J.; Murray, B.R. The Management of Respiratory Motion in Radiation Oncology Report of AAPM Task Group 76 a. Med. Phys. 2006, 33, 3874–3900. [Google Scholar] [CrossRef] [PubMed]
  25. Poulsen, P.R.; Cho, B.; Sawant, A.; Ruan, D.; Keall, P.J. Dynamic MLC Tracking of Moving Targets with a Single KV Imager for 3D Conformal and IMRT Treatments. Acta Oncol. 2010, 49, 1092–1100. [Google Scholar] [CrossRef]
  26. Worm, E.S.; Høyer, M.; Fledelius, W.; Nielsen, J.E.; Larsen, L.P.; Poulsen, P.R. On-Line Use of Three-Dimensional Marker Trajectory Estimation from Cone-Beam Computed Tomography Projections for Precise Setup in Radiotherapy for Targets with Respiratory Motion. Int. J. Radiat. Oncol. Biol. Phys. 2012, 83, e145–e151. [Google Scholar] [CrossRef] [PubMed]
  27. Han-Oh, S.; Ding, K.; Song, D.; Narang, A.; Wong, J.; Rong, Y.; Bliss, D. Feasibility Study of Fiducial Marker Localization Using Microwave Radar. Med. Phys. 2021, 48, 7271–7282. [Google Scholar] [CrossRef] [PubMed]
  28. Hooshangnejad, H.; Ding, K. Feasibility of Planning-CT-Free Rapid Workflow for Stereotactic Body Radiotherapy: Removing the Need for Planning CT by AI-Driven, Intelligent Prediction of Body Deformation. In Proceedings of the Medical Imaging 2022: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, CA, USA, 20–23 February 2022; SPIE: Bellingham, WA, USA; Volume 12034, pp. 500–507. [Google Scholar]
  29. Hu, P.; Li, X.; Liu, W.; Yan, B.; Xue, X.; Yang, F.; Ford, J.C.; Portelance, L.; Yang, Y. Dosimetry Impact of Gating Latency in Cine Magnetic Resonance Image Guided Breath-Hold Pancreatic Cancer Radiotherapy. Phys. Med. Biol. 2022, 67, 055008. [Google Scholar] [CrossRef] [PubMed]
  30. Hansen, R.; Ravkilde, T.; Worm, E.S.; Toftegaard, J.; Grau, C.; Macek, K.; Poulsen, P.R. Electromagnetic Guided Couch and Multileaf Collimator Tracking on a TrueBeam Accelerator. Med. Phys. 2016, 43, 2387–2398. [Google Scholar] [CrossRef]
  31. Lewis, B.; Guta, A.; Shin, J.; Ji, Z.; Kim, J.S.; Kim, T. Evaluating Motion of Pancreatic Tumors and Anatomical Surrogates Using Cine MRI in 0.35T MRgRT under Free Breathing Conditions. J. Appl. Clin. Med. Phys. 2023, 24, e13930. [Google Scholar] [CrossRef]
  32. Ding, K.; Zhang, Y.; Sen, H.; Lediju Bell, M.; Goldstein, S.; Kazanzides, P.; Iordachita, I.; Wong, J. SU-E-J-114: Towards Integrated CT and Ultrasound Guided Radiation Therapy Using A Robotic Arm with Virtual Springs. Med. Phys. 2014, 41, 182. [Google Scholar] [CrossRef]
  33. Su, L.; Iordachita, I.; Zhang, Y.; Lee, J.; Ng, S.K.; Jackson, J.; Hooker, T.; Wong, J.; Herman, J.M.; Sen, H.T.; et al. Feasibility Study of Ultrasound Imaging for Stereotactic Body Radiation Therapy with Active Breathing Coordinator in Pancreatic Cancer. J. Appl. Clin. Med. Phys. 2017, 18, 84–96. [Google Scholar] [CrossRef] [Green Version]
  34. Huang, P.; Su, L.; Chen, S.; Cao, K.; Song, Q.; Kazanzides, P.; Iordachita, I.; Lediju Bell, M.A.; Wong, J.W.; Li, D.; et al. 2D Ultrasound Imaging Based Intra-Fraction Respiratory Motion Tracking for Abdominal Radiation Therapy Using Machine Learning. Phys. Med. Biol. 2019, 64, 185006. [Google Scholar] [CrossRef] [PubMed]
  35. Şen, H.T.; Bell, M.A.L.; Zhang, Y.; Ding, K.; Wong, J.; Iordachita, I.; Kazanzides, P. System Integration and Preliminary In-Vivo Experiments of a Robot for Ultrasound Guidance and Monitoring during Radiotherapy. In Proceedings of the 2015 International Conference on Advanced Robotics (ICAR), Istanbul, Turkey, 10 September 2015; pp. 53–59. [Google Scholar]
  36. Baroni, G.; Riboldi, M.; Spadea, M.F.; Tagaste, B.; Garibaldi, C.; Orecchia, R.; Pedotti, A. Integration of Enhanced Optical Tracking Techniques and Imaging in IGRT. J. Radiat. Res. 2007, 48, A61–A74. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Bert, C.; Metheany, K.G.; Doppke, K.; Chen, G.T.Y. A Phantom Evaluation of a Stereo-vision Surface Imaging System for Radiotherapy Patient Setup. Med. Phys. 2005, 32, 2753–2762. [Google Scholar] [CrossRef] [PubMed]
  38. Feng, Z.; Sun, E.; China, D.; Huang, X.; Hooshangnejad, H.; Gonzalez, E.; Bell, M.; Ding, K. A Novel Ultrasound Beamformer with the Align-Peak-Response for Flexible Array Transducers. In Medical Physics; Wiley: Hoboken, NJ, USA, 2022; Volume 49, p. E298. [Google Scholar]
  39. Şen, H.T.; Bell, M.A.L.; Zhang, Y.; Ding, K.; Boctor, E.; Wong, J.; Iordachita, I.; Kazanzides, P. System Integration and in Vivo Testing of a Robot for Ultrasound Guidance and Monitoring during Radiotherapy. IEEE Trans. Biomed. Eng. 2016, 64, 1608–1618. [Google Scholar] [CrossRef] [PubMed]
  40. Schlosser, J.; Salisbury, K.; Hristov, D. Telerobotic System Concept for Real-time Soft-tissue Imaging during Radiotherapy Beam Delivery. Med. Phys. 2010, 37, 6357–6367. [Google Scholar] [CrossRef]
  41. Lediju Bell, M.A.; Sen, H.T.; Iordachita, I.; Kazanzides, P.; Wong, J. In Vivo Reproducibility of Robotic Probe Placement for a Novel Ultrasound-Guided Radiation Therapy System. J. Med. Imaging 2014, 1, 25001. [Google Scholar] [CrossRef] [Green Version]
  42. Daft, C.M.W. Conformable Transducers for Large-Volume, Operator-Independent Imaging. In Proceedings of the 2010 IEEE International Ultrasonics Symposium (IUS), San Diego, CA, USA, 11–14 October 2010. [Google Scholar]
  43. Tu, P.; Qin, C.; Guo, Y.; Li, D.; Lungu, A.J.; Wang, H.; Chen, X. Ultrasound Image Guided and Mixed Reality-Based Surgical System With Real-Time Soft Tissue Deformation Computing for Robotic Cervical Pedicle Screw Placement. IEEE Trans. Biomed. Eng. 2022, 69, 2593–2603. [Google Scholar] [CrossRef]
  44. Şen, H.T.; Cheng, A.; Ding, K.; Boctor, E.; Wong, J.; Iordachita, I.; Kazanzides, P. Cooperative Control with Ultrasound Guidance for Radiation Therapy. Front. Robot. AI 2016, 3, 49. [Google Scholar] [CrossRef] [Green Version]
  45. Craft, D.; Süss, P.; Bortfeld, T. The Tradeoff Between Treatment Plan Quality and Required Number of Monitor Units in Intensity-Modulated Radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2007, 67, 1596–1605. [Google Scholar] [CrossRef] [PubMed]
  46. China, D.; Feng, Z.; Hooshangnejad, H.; Sforza, D.; Vagdargi, P.; Bell, M.A.L.; Uneri, A.; Ding, K. Real-Time Element Position Tracking of Flexible Array Transducer for Ultrasound Beamforming. In Proceedings of the Medical Imaging 2023: Ultrasonic Imaging and Tomography, San Diego, CA, USA, 19–24 February 2023; SPIE: Bellingham, WA, USA; Volume 12470, pp. 36–43. [Google Scholar]
  47. Zhang, J.; Wiacek, A.; Feng, Z.; Ding, K.; Bell, M.A.L. Comparison of Flexible Array with Laparoscopic Transducer for Photoacoustic-Guided Surgery. In Proceedings of the Photons Plus Ultrasound: Imaging and Sensing 2023, San Francisco, CA, USA, 28 January–3 February 2023; SPIE: Bellingham, WA, USA; Volume 12379, pp. 205–212. [Google Scholar]
  48. Zhang, J.; Wiacek, A.; González, E.; Feng, Z.; Ding, K.; Bell, M.A.L. A Flexible Array Transducer for Photoacoustic-Guided Surgery. In Proceedings of the 2022 IEEE International Ultrasonics Symposium (IUS), Venice, Italy, 10–13 October 2022; IEEE: Piscataway, NJ, USA; pp. 1–4. [Google Scholar]
Figure 1. Coordinate system of the flexible array transducer.
Figure 2. Scatter plot of mean absolute errors of the array shape and entropy scores of the corresponding reconstructed images from 1000 random shape assumptions.
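Figure 2 shows that the entropy of the reconstructed image tracks the array-shape error, which is what allows an entropy score to serve as the objective for the shape search. The following is a minimal sketch of one common image-entropy score, assuming the Shannon entropy of the normalized envelope-intensity histogram; the function name and bin count are illustrative, not the paper's exact implementation.

```python
import numpy as np

def image_entropy(envelope, n_bins=256):
    """Shannon entropy of a beamformed envelope image.

    A sharper, better-focused image typically concentrates its intensity
    histogram, so lower entropy can be used as a focus criterion.
    """
    img = np.abs(envelope).astype(np.float64)
    img = img / (img.max() + 1e-12)          # normalize intensities to [0, 1]

    counts, _ = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()                # probability mass function
    p = p[p > 0]                             # drop empty bins to avoid log(0)

    return float(-np.sum(p * np.log2(p)))
```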
Figure 3. An overview of the proposed shape optimization algorithm.
Figure 4. Profile of each scan-line and the corresponding scan converted ultrasound image.
Figure 5. Experimental set-up for evaluating the array shape.
Figure 6. Experimental set-up for the CIRS phantom scan.
Figure 7. X-ray image of the flexible array transducer with the ABDFAN phantom.
Figure 8. Comparison of the estimated array shapes from different methods.
Figure 9. Reconstructed images of the CIRS phantom (a) without array shape correction, (b) with the optical-based estimated shape, and (c) with the optimized shape, and (d) line plot of the lateral FWHM of the point scatterers.
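Figure 9d summarizes lateral resolution as the full-width-at-half-maximum (FWHM) of each point scatterer. Below is a minimal sketch of how such an FWHM can be measured from a lateral intensity profile; the linear-interpolation approach and function name are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def lateral_fwhm(profile, pixel_spacing_mm):
    """Estimate the FWHM (in mm) of a single peak in a lateral profile.

    `profile` is a 1-D array of envelope intensities across the point
    target; `pixel_spacing_mm` is the lateral sample spacing.
    """
    profile = np.asarray(profile, dtype=np.float64)
    peak_idx = int(np.argmax(profile))
    half_max = profile[peak_idx] / 2.0

    # Walk left from the peak to the first sample below half maximum,
    # then interpolate linearly for a sub-pixel crossing.
    left = peak_idx
    while left > 0 and profile[left] > half_max:
        left -= 1
    left_x = left + (half_max - profile[left]) / (profile[left + 1] - profile[left])

    # Same on the right side.
    right = peak_idx
    while right < len(profile) - 1 and profile[right] > half_max:
        right += 1
    right_x = right - 1 + (half_max - profile[right - 1]) / (profile[right] - profile[right - 1])

    return (right_x - left_x) * pixel_spacing_mm
```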
Figure 10. Reconstructed images of the ABDFAN phantom (a) without array shape correction, (b) with the optical-based estimated shape, and (c) with the optimized shape; (d) ground truth image from the linear array transducer; and (e) the reconstructed image cropped to the same region as the ground truth.
Figure 11. Reconstructed images of the liver scan (a) without array shape correction, (b) with the optical-based estimated shape, and (c) with the optimized shape; (d) ground truth image from the linear array transducer; and (e) the reconstructed image cropped to the same region as the ground truth.
Figure 12. Comparison of the segmentations between the flexible array transducer results and the ground truth for the (a) ABDFAN phantom and (b) liver scan. Green regions represent the ground truth, pink regions represent the reconstructed results, white regions represent the common regions, and the red arrows represent the Hausdorff distances.
Figure 13. Reconstructed images with the initial shape, the maximum entropy-optimized shape, and the minimum entropy-optimized shape.
Table 1. Flexible Array Transducer Parameters.
Parameters            Value
Number of Elements    128
Center Frequency      5 MHz
Element Width         0.8 mm
Element Pitch         1.0 mm
Element Length        10 mm
Thickness             1.5 mm
TX Elements           64
RX Elements           128
Table 2. Linear Array Transducer Parameters.
Parameters            Value
Transducer Model      ATL L7-4
Number of Elements    128
Center Frequency      5 MHz
Element Pitch         0.298 mm
Element Length        7 mm
TX Elements           64
RX Elements           128
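Tables 1 and 2 list the parameters that enter the delay-and-sum geometry: the element pitch fixes the nominal element positions, and the center frequency fixes the acoustic wavelength (about 0.31 mm at 5 MHz, assuming 1540 m/s in soft tissue). The sketch below illustrates these relationships; the speed-of-sound constant and the helper function are illustrative assumptions rather than the authors' code.

```python
import numpy as np

# Illustrative constants; the 1540 m/s soft-tissue speed of sound is an
# assumption, while pitch and center frequency follow Tables 1 and 2.
SPEED_OF_SOUND = 1540.0      # m/s
CENTER_FREQ = 5.0e6          # Hz
PITCH_FLEX = 1.0e-3          # m, flexible array element pitch
N_ELEMENTS = 128

# Acoustic wavelength at the center frequency (~0.31 mm at 5 MHz).
wavelength = SPEED_OF_SOUND / CENTER_FREQ

# Nominal (flat) element x-positions, centered about the array midpoint.
# When the array bends, each element also acquires a depth offset, which is
# what the shape-estimation step must recover before delay-and-sum focusing.
x_elements = (np.arange(N_ELEMENTS) - (N_ELEMENTS - 1) / 2.0) * PITCH_FLEX

def one_way_delay(x_elem, z_elem, xf, zf, c=SPEED_OF_SOUND):
    """One-way propagation delay from an element at (x_elem, z_elem)
    to a focal point (xf, zf), in seconds."""
    return np.hypot(xf - x_elem, zf - z_elem) / c
```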
Table 3. Comparison of the Ground Truth and the Optical-based Estimated Radius.
            R_object      R_array
Object 1    110.00 mm     110.20 mm
Object 2    72.50 mm      71.77 mm
Object 3    65.00 mm      63.45 mm
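Table 3 compares each object's known radius with the radius recovered from the optically tracked marker positions. A minimal sketch of one way to obtain such a radius is given below, assuming the tracked 2-D marker coordinates are fitted with an algebraic least-squares circle (a Kåsa-style fit); this is an illustration, not necessarily the exact fitting procedure used in the paper.

```python
import numpy as np

def fit_circle_radius(points):
    """Algebraic least-squares circle fit (Kåsa method).

    `points` is an (N, 2) array of marker coordinates in the array plane;
    returns (center_x, center_y, radius).
    """
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]

    # Solve x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) in the least-squares sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)

    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Example: points sampled from a 110 mm circle should recover ~110 mm.
theta = np.linspace(0.2, 1.0, 8)
demo = np.column_stack([110.0 * np.cos(theta), 110.0 * np.sin(theta)])
print(fit_circle_radius(demo)[2])  # ~110.0
```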
Table 4. Evaluation Results of the Hyperechoic Cyst in the CIRS Phantom.
                            Aspect Ratio    CNR        GCNR
Uncorrected                 1.68            1.26 dB    0.62
Optical-based estimation    1.08            3.69 dB    0.77
Optimization                1.00            4.44 dB    0.81
Table 5. Evaluation Results of the Anechoic Cyst in the CIRS Phantom.
                            Aspect Ratio    CNR         GCNR
Uncorrected                 1.53            −8.51 dB    0.28
Optical-based estimation    1.08            −8.51 dB    0.26
Optimization                1.03            −6.92 dB    0.29
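Tables 4 and 5 report the contrast-to-noise ratio (CNR) and generalized CNR (GCNR) between a cyst region and the surrounding background. Below is a minimal sketch of both metrics computed from two pixel samples; the region-selection step, function names, and bin count are assumptions for illustration.

```python
import numpy as np

def cnr_db(target, background):
    """Contrast-to-noise ratio in dB between two envelope-pixel samples."""
    target, background = np.ravel(target), np.ravel(background)
    num = np.abs(target.mean() - background.mean())
    den = np.sqrt(target.var() + background.var())
    return 20.0 * np.log10(num / den)

def gcnr(target, background, n_bins=256):
    """Generalized CNR: 1 minus the overlap of the two intensity histograms."""
    target, background = np.ravel(target), np.ravel(background)
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    bins = np.linspace(lo, hi, n_bins + 1)

    p_t, _ = np.histogram(target, bins=bins)
    p_b, _ = np.histogram(background, bins=bins)
    p_t = p_t / p_t.sum()
    p_b = p_b / p_b.sum()

    overlap = np.minimum(p_t, p_b).sum()
    return 1.0 - overlap
```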
Table 6. Evaluation Results of the ABDFAN Phantom.
                Dice    Jaccard    Hausdorff    CNR        GCNR
Uncorrected     0.87    0.77       6.73 mm      2.78 dB    0.71
Estimation      0.95    0.90       1.66 mm      2.92 dB    0.75
Optimization    0.95    0.91       1.50 mm      3.86 dB    0.78
Table 7. Evaluation Results of the Liver Scan.
                Dice    Jaccard    Hausdorff    CNR          GCNR
Uncorrected     0.93    0.87       3.49 mm      −13.11 dB    0.18
Estimation      0.95    0.90       2.92 mm      −11.08 dB    0.20
Optimization    0.96    0.92       2.40 mm      −9.83 dB     0.22
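Tables 6 and 7 compare segmentations from the flexible-array images against the linear-array ground truth using the Dice coefficient, Jaccard index, and Hausdorff distance (see Figure 12). Below is a minimal sketch of these overlap metrics on binary masks; the symmetric Hausdorff variant shown and the helper names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Dice and Jaccard coefficients between two boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

def hausdorff_mm(mask_a, mask_b, pixel_spacing_mm):
    """Symmetric Hausdorff distance (in mm) between two masks, taken here
    over all foreground pixel coordinates with isotropic pixel spacing."""
    pts_a = np.argwhere(mask_a) * pixel_spacing_mm
    pts_b = np.argwhere(mask_b) * pixel_spacing_mm

    # Pairwise distances between the two point sets.
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(axis=-1))

    # Max over both directed Hausdorff distances.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```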
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
