Article

Photoacoustic-MR Image Registration Based on a Co-Sparse Analysis Model to Compensate for Brain Shift

by Parastoo Farnia, Bahador Makkiabadi, Maysam Alimohamadi, Ebrahim Najafzadeh, Maryam Basij, Yan Yan, Mohammad Mehrmohammadi and Alireza Ahmadian
1 Medical Physics and Biomedical Engineering Department, Faculty of Medicine, Tehran University of Medical Sciences (TUMS), Tehran 1417653761, Iran
2 Research Centre of Biomedical Technology and Robotics (RCBTR), Imam Khomeini Hospital Complex, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
3 Brain and Spinal Cord Injury Research Center, Neuroscience Institute, Tehran University of Medical Sciences (TUMS), Tehran 1419733141, Iran
4 Department of Biomedical Engineering, Wayne State University, Detroit, MI 48201, USA
5 Barbara Ann Karmanos Cancer Institute, Detroit, MI 48201, USA
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(6), 2399; https://doi.org/10.3390/s22062399
Submission received: 19 October 2021 / Revised: 16 November 2021 / Accepted: 18 November 2021 / Published: 21 March 2022
(This article belongs to the Section Biomedical Sensors)

Abstract

Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of the current imaging modalities, accurate brain shift compensation continues to be a challenging task. In this study, the application of intra-operative photoacoustic imaging and the registration of intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. Here, the co-sparse analysis model is proposed for photoacoustic-MR image registration, as it can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators that are learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The results of the phantom data show about a 63% improvement in target registration error in comparison with the commonly used normalized mutual information method. The results suggest that intra-operative photoacoustic images could become a promising tool when brain shift invalidates pre-operative MRI.

1. Introduction

Maximal safe resection of brain tumors in eloquent regions is optimally performed under image-guided surgery systems [1,2]. The accuracy of image-guided neurosurgery systems is drastically affected by intra-operative tissue deformation, called brain shift. Brain shift is a dynamic and complex spatiotemporal phenomenon that occurs after craniotomy and invalidates the patient's pre-operative images [3,4]. Brain shift, also referred to as brain deformation, results from a wide variety of biological, physical, and surgical causes and occurs in both cortical and deep brain structures [2,5,6,7]. Brain shift calculation and compensation methods are based on updating the pre-operative images with regard to the intra-operative tissue deformation. These methods fall into two main categories: biomechanical models and intra-operative imaging approaches. Biomechanical model-based approaches are computationally demanding and time-consuming; however, they can be highly accurate [8,9,10]. Their main drawback is that the tissue deformation occurring during intra-operative neurosurgical procedures is difficult to model accurately in real time and is thus often not considered [2]. As a result, most recent studies have focused on intra-operative imaging, including intra-operative computed tomography (CT) [11], magnetic resonance imaging (MRI) [12,13,14], fluorescence-guided surgery [15], and ultrasound (US) imaging [16,17,18] during neurosurgery. In fact, interventional imaging systems are becoming an integral part of modern neurosurgery, updating the patient's coordinate system during surgery through the registration of intra-operative with pre-operative images [19]. However, each of these modalities has well-known limitations [20]. Radiation exposure and low spatial resolution in CT, the requirement for an expensive MR-compatible operating room and the time-consuming nature of MRI, the limited imaging depth of fluorescence imaging, and the poor quality of US images are the major challenges of the common intra-operative imaging modalities [21].
Recently, hybrid imaging modalities such as photoacoustic (PA) imaging have gained considerable interest for various applications, including the differential diagnosis of pathologies [22,23], depicting tissue vasculature [24], oral health [25,26], and image-guided surgery [27,28,29]. PA imaging is a non-ionizing hybrid method that combines optical and ultrasound imaging based on the PA effect: the formation of sound waves following pulsed light absorption in a medium [30,31,32]. PA imaging inherits the high imaging contrast of optical imaging as well as the spatial and temporal resolution of US imaging [33,34,35,36,37]. During PA image acquisition, the tissue is illuminated by short laser pulses, which are absorbed by endogenous (or exogenous) chromophores and generate ultrasound emission through thermoelastic expansion. Endogenous chromophores such as hemoglobin provide a strong PA signal due to their high optical absorption coefficients, which in turn reveals crucial structural information [30,38]. One of the main advantages of PA imaging is its ability to visualize the blood vessel meshwork of brain tissue, which is considered the main landmark during neurosurgery [39,40]. PA imaging has also demonstrated potential for use during image-guided interventions [41,42,43]. As a result, PA imaging, as a noninvasive intra-operative modality, could enable the real-time visualization of regions of interest, including the vessel meshwork, during neurosurgery.
Finally, the registration of intra-operative PA images with pre-operative MR images of brain tissue could enable real-time compensation of brain shift.
Many investigations have tried to overcome the limitations of multimodal image registration algorithms for brain shift compensation. Nevertheless, finding a single satisfactory solution is challenging due to the complex and unpredictable nature of brain deformation during neurosurgery [44]. So far, most studies have focused on the registration of intra-operative US with pre-operative MR images. Major findings were reported by Reinertsen et al. [45], Chen et al. [46], and Farnia et al. [47] using feature-based registration methods. However, the extraction of corresponding features in two different modalities is an issue that directly affects the accuracy of these methods. Among intensity-based methods, the different contrast mechanisms of US and MRI lead to the failure of common similarity measures such as mutual information [48,49]. Effective solutions have been proposed by Wein et al. [50], Coupé et al. [51], Rivaz et al. [52,53], and Machado et al. [54] for multimodal image registration, each of which faces its own limitations.
Recently, multimodal image registration based on a sparse representation of images has attracted enormous interest. The main idea of image registration based on sparse representation lies in the fact that different images can be represented as a combination of a few atoms of an over-complete dictionary [55]. The sparse coefficients therefore describe the salient features of the images. Generally, over-complete dictionaries can be constructed via two different approaches. In the first, a standard fixed transform is used as the over-complete dictionary; fixed dictionaries such as the discrete cosine transform, wavelets, and curvelets have been used for multi-modal image registration [19,56,57]. Fixed dictionaries benefit from simplicity and fast implementation but are not customized to different types of data. In the second approach, the over-complete dictionary is constructed via learning methods, among which the K-singular value decomposition (K-SVD) method has been widely used for image registration [58]. Some studies have used synthesis sparse models for multimodal image registration [59]. However, a learned dictionary includes a large number of atoms, which increases the computational complexity of multi-modal image registration and is not suitable for the real-time compensation of brain shift.
The analysis sparse model, also called the co-sparse analysis model, represents a powerful alternative to the synthesis sparse representation approach that reduces computational time [60]. Co-sparse analysis models can yield richer feature representations and, consequently, better image registration results in real-time processes [61,62]. A few studies have addressed multi-modal image registration via a co-sparse analysis model, but none of them were in the medical field. Kiechle et al. proposed an analysis model in a joint co-sparsity setup for depth and intensity images [63]. Han et al. utilized the analysis sparse model for remote sensing images [64], and Gao et al. used it to register multi-focus noisy images with higher-quality images [65]. In our previous work, we applied an analysis sparse model to US-MR image registration to compensate for brain shift [66].
To date, few studies have investigated PA-MR image registration. Ren et al. proposed a PA-MR image registration method based on mutual information to yield more insight into physiology and pathophysiology [67]. Gehrung et al. proposed the co-registration of PA and MR images of murine tumor models for the assessment of tumor physiology [68]. However, these studies addressed rigid registration problems and did not focus on the intra-operative application of PA imaging, and therefore did not face complicated brain deformation.
To the best of our knowledge, this study is the first in which PA-MR image registration is used to compensate for the complicated brain shift phenomenon. The co-sparse analysis model is proposed for PA-MR image registration, as it is able to capture the interdependency of the two modalities. The algorithm works by minimizing the mapping transform using a pair of analysis operators learned by the alternating direction method of multipliers (ADMM).

2. Materials and Methods

2.1. Brain-Mimicking Phantom Data

To assess the performance of the multi-modal image registration algorithm in compensating for brain shift, a phantom that mimics brain tissue was prepared. The phantom was made of polyvinyl alcohol cryogel (PVA-C), which has been successfully used for mimicking brain tissue in previous studies [19]. The PVA-C material has also been applied in the fabrication of phantoms for ultrasound, MRI, and, recently, PA imaging [69]. A solution of 10% by weight PVA in water was used to form the PVA-C, which solidifies through a freeze–thaw process. The dimensions of the phantom were approximately 150 × 40 mm, with a curved top surface mimicking the shape of a head, as shown in Figure 1a. Two plastic tubes with inside diameters of 1.2 and 1.4 mm were inserted randomly into the mold before the freeze–thaw cycle to simulate blood vessels. Figure 1b shows the 3D model of the phantom, including the random vessels. Two types of chromophores, copper sulfate pentahydrate (CuSO4(H2O)5) and human blood (1:100 dilution), were used to fill the embedded vessels before PA imaging (Figure 1c).
To acquire MR images of the phantom before any deformation, the phantom was scanned on a Siemens 1.5 Tesla scanner with standard T1- and T2-weighted protocols. Pulse-sequence parameters were TR = 600 ms, TE = 10 ms, and Ec = 1/1 27.8 kHz for the T1-weighted scan and TR = 8.6 ms, TE = 3.2 ms, TI = 450 ms, and Ec = 1/1 31.3 kHz for the T2-weighted scan, with a 1 mm slice thickness, full phantom coverage, and a 1 mm isotropic resolution.
PA images were acquired using an ultrasound scanner (Vantage 128, Verasonics Inc., Kirkland, WA, USA) with a 128-element linear-array US transducer (L11-4v, Verasonics Inc., Kirkland, WA, USA) operating in a frequency range between 4 and 9 MHz. A tunable Nd:YAG/OPO nanosecond pulsed laser (Phocus Core system, OPOTEK Inc., Carlsbad, CA, USA), with a pulse repetition rate of 10 Hz at wavelengths of 700, 800, and 900 nm, was used to illuminate the phantom. The scan resolution was 1 mm, and the laser fluence was ~1 mJ/cm² (Figure 2). Notably, we used frame averaging for de-noising and spectral un-mixing as the image reconstruction algorithm to obtain high-quality PA images.
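As an illustration of the de-noising step, the following is a minimal sketch of frame averaging; the NumPy implementation and the (n_frames, H, W) stack layout are our assumptions, not the acquisition code of the Vantage system.

```python
import numpy as np

def average_frames(frames):
    """Average a stack of repeated PA acquisitions of shape (n_frames, H, W).

    Assuming zero-mean noise that is uncorrelated across laser shots,
    averaging N frames improves the SNR by roughly sqrt(N).
    """
    frames = np.asarray(frames, dtype=np.float64)
    return frames.mean(axis=0)
```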

2.2. Murine Brain Data

For further evaluation of the proposed image registration method, we used the ex vivo mouse brain data provided by Ren et al. in a previous study [67]. After removal of the skull, the whole mouse brain was embedded in 3% agar in phosphate-buffered saline and imaged ex vivo. T2-weighted MR images of the mouse brain were acquired with a 2D spin-echo sequence with TR = 2627.7 ms, TE = 36 ms, a slice thickness of 0.7 mm, a field of view of 20 × 20 mm, and a scanning time of 12.36 min. For PA imaging, laser excitation pulses of 9 ns were delivered at six wavelengths (680, 715, 730, 760, 800, and 850 nm) in the coronal orientation with a field of view of 20 × 20 mm, a step size of 0.3 mm along the horizontal direction, and a scan time of 20 min. To validate these data, five natural anatomical landmarks were manually selected as registration targets (Figure 3).

2.3. Inducing Brain Deformation

The proposed algorithm was designed to compensate for brain deformation during neurosurgery. Since brain deformation is a complicated non-linear transformation, it is challenging to implement physically on the phantom or the mouse brain data. To evaluate our registration algorithm, we therefore induced brain deformation numerically by applying pre-defined pixel shifts to the images. For this purpose, we used pre-operative and intra-operative MR images of brain tissue, with the intra-operative MR image considered the gold standard. The deformation matrix was obtained by mono-modal registration of these images using the residual complexity algorithm [70] (Figure 4). The obtained brain deformation matrix was then applied to the PA images of the brain phantom and the mouse brain data.
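For illustration, the sketch below shows one way to warp an image with a dense displacement field via backward mapping; the per-pixel field layout and the SciPy-based interpolation are assumptions for this example, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(image, dx, dy):
    """Warp `image` with a dense per-pixel displacement field (dx, dy).

    Backward mapping: the output value at (y, x) is sampled from the
    input at (y + dy[y, x], x + dx[y, x]) with linear interpolation.
    """
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + dy, xx + dx])  # (2, H, W) sampling coordinates
    return map_coordinates(image, coords, order=1, mode="nearest")
```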

2.4. PA-MR Image Registration Framework

The workflow for automatic multi-modal image registration to compensate for brain deformation is shown in Figure 5. After preparing the two data sets, the brain-mimicking phantom data and the murine brain data, pre-deformation MR images were set as reference images and pre-deformation PA images as float images. A real brain deformation matrix, obtained by registering intra-operative and pre-operative patient MR images using the residual complexity method, was then applied to the PA images to generate deformed PA images. Next, the MR image and the deformed PA image were registered using the proposed method based on joint co-sparse analysis. Finally, the image registration results were evaluated and visualized for brain shift calculation. To evaluate the registration algorithm, the root mean square error (RMSE) was calculated for the phantom and mouse image registrations, and the target registration error (TRE) was calculated for the defined targets in the phantom and mouse brain data. Furthermore, we used the Hausdorff distance (HD) between the PA and MR images. The HD between two point sets is defined as
$$\mathrm{HD}(I_{\mathrm{PA}}, I_{\mathrm{MR}}) = \max\left\{ \max_{a \in I_{\mathrm{PA}}} \min_{b \in I_{\mathrm{MR}}} d(a, b),\ \max_{b \in I_{\mathrm{MR}}} \min_{a \in I_{\mathrm{PA}}} d(a, b) \right\}$$
where $d(\cdot,\cdot)$ is the Euclidean distance between point locations; a smaller value of HD indicates a better alignment of the boundaries. To avoid the effect of outliers [71], we used the 95% HD instead of the maximum HD.
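The sketch below illustrates the three evaluation metrics, assuming point sets are given as (N, 2) arrays in millimeters; reading the RMSE as an error over corresponding points is our interpretation of the values reported in mm.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rmse(pts_moved, pts_ref):
    """RMSE over corresponding point displacements (mm)."""
    d = np.linalg.norm(pts_moved - pts_ref, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def tre(targets_moved, targets_ref):
    """Mean target registration error over matched landmarks (mm)."""
    return float(np.mean(np.linalg.norm(targets_moved - targets_ref, axis=1)))

def hd95(pts_pa, pts_mr):
    """Symmetric 95th-percentile Hausdorff distance (mm).

    The 95% quantile replaces the maximum of the HD definition above
    to suppress the effect of outliers.
    """
    d = cdist(pts_pa, pts_mr)                      # pairwise Euclidean distances
    return max(np.percentile(d.min(axis=1), 95),   # each PA point -> nearest MR point
               np.percentile(d.min(axis=0), 95))   # each MR point -> nearest PA point
```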

2.5. Co-Sparse Analysis Model

An image patch, vectorized as $x \in \mathbb{R}^n$, can be approximated via a sparse representation: a linear combination of a few elements (named atoms) of an over-complete dictionary matrix $D \in \mathbb{R}^{n \times k}$ ($n \ll k$):
$$x \approx D\alpha \tag{1}$$
where $\alpha \in \mathbb{R}^k$ is a sparse vector with only a few non-zero elements. The sparse coefficients describe the salient features of the images. Therefore, the sparse representation can be found by solving the following optimization problem:
$$\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|x - D\alpha\|_2 \leq \varepsilon \tag{2}$$
Here, $\|\alpha\|_0$ is the zero norm of $\alpha$, i.e., the number of non-zero entries of the vector. In this synthesis view, a redundant dictionary synthesizes the signal from a few atoms.
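To make the synthesis view concrete, the following toy example recovers a sparse code with orthogonal matching pursuit as a greedy surrogate for Equation (2); the random dictionary and the scikit-learn solver are illustrative choices, not the setup used in this paper.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
n, k = 64, 256                                  # signal size, dictionary size (n << k)
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms

alpha = np.zeros(k)                             # ground-truth 5-sparse code
alpha[rng.choice(k, size=5, replace=False)] = rng.standard_normal(5)
x = D @ alpha                                   # synthesize the signal: x = D alpha

alpha_hat = orthogonal_mp(D, x, n_nonzero_coefs=5)   # greedy surrogate for Eq. (2)
print(np.linalg.norm(x - D @ alpha_hat))        # near zero when recovery succeeds
```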
There is also another representation of an image, based on the co-sparse analysis model [60]. This alternative assumes that for a signal of interest $x \in \mathbb{R}^n$, there exists an analysis operator $\Omega \in \mathbb{R}^{k \times n}$ such that the analyzed vector $\alpha = \Omega x$ is sparse. The rows of $\Omega$ represent filters that provide sparse responses, and the indices of the filters with zero response determine the subspace to which the signal belongs. This subspace is the intersection of all hyperplanes to which these filters are normal vectors, and therefore the information of the signal is encoded in its zero responses. The index set of the zero entries of $\Omega x$ is called the co-support of $x$:
$$\mathrm{cosupp}(\Omega x) := \{\, j \mid (\Omega x)_j = 0 \,\} \tag{3}$$
As their key property, analysis sparse models put an emphasis on the zeros of the analysis representation rather than the non-zeros of the sparse representation of the signal. These zeros encode the low-dimensional subspace to which the signal belongs. Consequently, analysis operator learning procedures find a suitable operator $\Omega$ for the signals $x_i$ as follows:
$$\Omega^{*} \in \arg\min_{\Omega} \sum_{i} \|\Omega x_i\|_0 \tag{4}$$
where $\Omega^{*}$ is the optimized operator. To relax the co-sparsity assumption, the log-square function, a suitable approximation of the zero norm for large values of $\nu$, is used:
$$g(\alpha) := \sum_{k} \log\left(1 + \nu \alpha_k^2\right) \tag{5}$$
where $\nu$ is a positive weight. Therefore, Equation (4) can be rewritten as
$$\Omega^{*} \in \arg\min_{\Omega} \sum_{i} g(\Omega x_i) \tag{6}$$
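The following snippet, a sketch under our own naming, evaluates the co-support of Equation (3) and the log-square surrogate of Equation (5) for a given operator.

```python
import numpy as np

def cosupport(omega, x, tol=1e-8):
    """Co-support of x w.r.t. operator omega, Eq. (3): indices j with
    (omega @ x)_j = 0, up to a numerical tolerance."""
    return np.flatnonzero(np.abs(omega @ x) <= tol)

def g(alpha, nu=1e4):
    """Log-square sparsity surrogate of Eq. (5): sum_k log(1 + nu*alpha_k^2).
    For large nu this approaches a scaled count of the non-zero entries."""
    return float(np.sum(np.log1p(nu * np.asarray(alpha) ** 2)))
```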
One should consider three main constraints on $\Omega^{*}$ to avoid trivial solutions [72]; a numerical sketch of the two penalty terms follows the list:
  • The rows of $\Omega^{*}$ have unit Euclidean norm, i.e., $\Omega^{*}$ lies on the oblique manifold.
  • The operator $\Omega^{*}$ has full rank, i.e., it has the maximal number of linearly independent rows, which is encouraged by maximizing
$$h(\Omega^{*}) = \frac{1}{n \log n} \log\det\!\left(\frac{1}{k}\,\Omega^{*T} \Omega^{*}\right) \tag{7}$$
  • The rows of the operator $\Omega^{*}$ are not trivially linearly dependent, which is encouraged by maximizing
$$r(\Omega^{*}) = \sum_{k < l} \log\!\left(1 - (\omega_k^{T} \omega_l)^2\right)$$
where $\omega_k$ denotes the $k$-th row of $\Omega^{*}$.

2.6. Multi-Modal Image Registration Algorithm

In this study, we formulated the multimodal image registration problem in terms of a co-sparse analysis model. Different co-sparse models can be used in multimodal image registration approaches [73]. In our approach, a joint analysis co-sparse model (JACSM) is proposed for the registration of PA and MR images. The JACSM assumes that different signals from different sensors of the same scene form an ensemble. The signals in an ensemble include a common sparse component, shared between all of them, and an innovation component that represents their individual differences [74].
Consider two images, $I_{\mathrm{PA}}$ and $I_{\mathrm{MR}}$, provided through PA and MR imaging, respectively, of the brain-mimicking phantom as the input data. The interdependency of the two image modalities was modeled via the JACSM, considering the common sparse components. This image pair has a co-sparse representation with an appropriate pair of analysis operators $(\Omega_{\mathrm{PA}}, \Omega_{\mathrm{MR}}) \in \mathbb{R}^{k \times n_{\mathrm{PA}}} \times \mathbb{R}^{k \times n_{\mathrm{MR}}}$. With the structures of the images encoded in their co-supports according to Equation (3), there is a pair of analysis operators such that the intersection of the co-supports of $\Omega_{\mathrm{PA}} I_{\mathrm{PA}}$ and $\Omega_{\mathrm{MR}} I_{\mathrm{MR}}$ is large. In particular, we attempted to learn this pair of co-sparse analysis operators for the two image modalities.
On the other hand, the PA and MR images should be matched with a transformation T such that
$$I_{\mathrm{MR}}(Tx) \approx I_{\mathrm{PA}}(x), \quad \text{for all pixel coordinates } x \tag{8}$$
where $x$ denotes homogeneous pixel coordinates in the PA image. The goal of the multi-modal image registration problem in this approach is to optimize $T$ using the pair of analysis operators $(\Omega_{\mathrm{PA}}, \Omega_{\mathrm{MR}})$. We consider that, for the optimal transformation, a coupled sparsity measure is minimized. Thus, combining Equation (6) with the constraints of Equation (7), we search for $T^{*}$ such that
$$T^{*} \in \arg\min \frac{1}{N} \sum_{i=1}^{N} g\!\left( \left[\Omega_{\mathrm{PA}} I_{\mathrm{PA}}\right](i),\ \left[\Omega_{\mathrm{MR}} I_{\mathrm{MR}}(Tx)\right](i) \right) - \kappa \left[ h(\Omega^{*}_{\mathrm{MR}}) + h(\Omega^{*}_{\mathrm{PA}}) \right] - \mu \left[ r(\Omega^{*}_{\mathrm{MR}}) + r(\Omega^{*}_{\mathrm{PA}}) \right] \tag{9}$$
To tackle the problem of Equation (9), we employed the ADMM; that is, the analysis operators were learned by optimizing the JACSM via the ADMM. The ADMM is a well-suited solver for convex problems, breaking the main problem into smaller sub-problems of the form
$$\min\ f(x) + g(y), \quad \text{s.t.} \quad Z := Ax + By - c = 0 \tag{10}$$
where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^m$, $A \in \mathbb{R}^{p \times n}$, and $B \in \mathbb{R}^{p \times m}$. The augmented Lagrangian for Equation (10) can be written as
$$L_{\rho}(x, y, \lambda) = f(x) + g(y) + \lambda^{T} Z + \frac{\rho}{2} \|Z\|_2^2 \tag{11}$$
where $\rho > 0$ is a penalty parameter and $\lambda$ is the Lagrange multiplier. Equation (11) is solved over three steps: the x-minimization and y-minimization, each of which splits into $N$ separate problems, followed by an update of the multiplier $\lambda$:
$$\begin{aligned} x^{k+1} &:= \arg\min_{x}\ L_{\rho}(x, y^{k}, \lambda^{k}),\\ y^{k+1} &:= \arg\min_{y}\ L_{\rho}(x^{k+1}, y, \lambda^{k}),\\ \lambda^{k+1} &:= \lambda^{k} + \rho\,(A x^{k+1} + B y^{k+1} - c). \end{aligned} \tag{12}$$
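The three-step pattern of Equation (12) is illustrated below on a deliberately simple problem, the lasso in scaled dual form; this sketches only the ADMM splitting, not the authors' operator-learning solver.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (closed-form y-minimization)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||y||_1  s.t.  x - y = 0,
    following the x-step / y-step / multiplier-step of Eq. (12)
    with the scaled dual variable u = lambda / rho."""
    n = A.shape[1]
    x, y, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (y - u))   # x-minimization
        y = soft_threshold(x + u, lam / rho)            # y-minimization
        u = u + x - y                                   # multiplier update
    return y
```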

3. Results and Discussion

To implement the proposed image registration algorithm, 20,000 pairs of square patches of size 7 × 7 pixels were randomly selected from the images in the training set. In our experiments, patch sizes of 3, 5, 7, 9, and 11 pixels were tested: in our experience, a small patch size causes an over-smoothing effect, while a larger patch size leads to more computation. Based on our results, the patch size of 7 × 7 was therefore selected to balance the two effects.
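A sketch of this patch-sampling step, under our own function names and assuming two pre-aligned training images, could look as follows.

```python
import numpy as np

def sample_patch_pairs(img_a, img_b, n_pairs=20000, patch=7, seed=0):
    """Draw co-located patch pairs (here 7x7, the size selected above)
    from two registered training images of the same scene."""
    rng = np.random.default_rng(seed)
    h, w = img_a.shape
    ys = rng.integers(0, h - patch + 1, size=n_pairs)
    xs = rng.integers(0, w - patch + 1, size=n_pairs)
    pa = np.stack([img_a[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    pb = np.stack([img_b[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    return pa, pb   # each of shape (n_pairs, patch*patch)
```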
The performance of the JACSM-based registration method was evaluated using a phantom with simulated vessels and ex vivo mouse brain data with anatomical landmarks. In Figure 6, the performance of the proposed registration method for PA-MR, US-MR, and MR-MR images on the phantom data is shown and compared. For further evaluation, the results of our proposed method were also compared to the commonly used normalized mutual information (NMI) registration method. In the first row, the MR image and its corresponding US and PA images are shown. Dashed yellow circles mark the same field of view in the three modalities (MRI, US, and PA), and the corresponding structures used to calculate the target registration error are labeled 1 to 3 in the three imaging modalities. The brain deformation field was applied to the images in the first row, and the second row shows the deformed MR, US, and PA images. As shown in Figure 6d–f, the labeled targets were displaced by the induced deformation. The images in the third and last rows show the results of registering the deformed MR, US, and PA images (second row) to the original MRI before deformation (Figure 6a) using the NMI (Figure 6g–i) and JACSM (Figure 6j–l) algorithms, respectively. The registration of the original MR to the deformed MR image (Figure 6j) was used as a gold standard to evaluate the proposed algorithm, and the registration result of the deformed PA image (Figure 6l) was compared to that of the deformed ultrasound image (Figure 6k), US being the commonly used intra-operative modality for brain shift compensation. As the last row shows, the MR-MR registration was the most accurate, followed by the PA-MR and then the US-MR registration. The blue arrows in the last row indicate the surface of the phantom: it was matched accurately by the MR-MR registration, acceptably by the PA-MR registration, and worst by the US-MR registration. Comparing the blue arrows in the third and last rows, it is clear that our proposed algorithm was more accurate than NMI. Also, the white arrows in Figure 6i,l show the PA-MR registration results for the vessels located at depth within the phantom.
To quantitatively evaluate the proposed registration method, the RMSE, TRE, and HD for the PA-MR, US-MR, and MR-MR image registrations were calculated and are shown in Table 1. In total, we used 23 phantom data sets. The accuracy of the MR-MR registration was considered the gold standard. The algorithms were implemented in MATLAB and tested on an Intel Core i7 3.2 GHz CPU with 8 GB RAM.
The results of the phantom study showed that the PA-MR image registration had a better RMSE, TRE, and HD by about 60%, 65%, and 59%, respectively, compared to the US-MR image registration as a common imaging modality for brain shift compensation. On the other hand, the proposed method reached an RMSE of about 0.73 mm, which is acceptable in comparison with the MR-MR image registration as a gold standard, with an RMSE of about 0.62 mm. The proposed method improved the results of RMSE and TRE by about 60% and 63% (on average) compared to NMI.
For further evaluation of the proposed method, the ex vivo mouse brain data were used. In Figure 7, the performance of the JACSM-based method for PA-MR image registration on the mouse brain data is shown and compared with the MR-MR image registration. Figure 7a,b show the MR and PA images of the mouse brain before any deformation, respectively. The PA image after applying the non-linear deformation is shown in Figure 7c, and the registration result of the deformed PA and original MR images of the mouse brain is shown in Figure 7d. The registration of the MR images before and after deformation is shown in Figure 7e as the gold standard. Finally, in Figure 7f, the means of the RMSE, TRE, and HD of the PA-MR image registration over all mouse brain data are compared to the results of the MR-MR image registration.
The results from the ex vivo mouse brain also demonstrated the ability of the proposed registration method to recover non-linear deformation, with mean RMSE, TRE, and HD of 1.13, 0.98, and 0.85 mm, respectively. These results are acceptable compared to those of the MR-MR registration as a gold standard, with an RMSE, TRE, and HD of about 0.98, 0.85, and 0.77 mm, respectively. In fact, with only about a 15% increase in RMSE, intra-operative PA imaging could serve as a real-time alternative to intra-operative MR imaging. Additionally, with a 60% improvement in registration accuracy, PA imaging could be an alternative to intra-operative ultrasound imaging. In general, it cannot be concluded that PA imaging replaces US imaging, since it provides less structural and anatomical information. However, for brain shift calculation in neurosurgery, the blood vessel meshwork of brain tissue is the main landmark during surgery, and it is better visualized using PA imaging.
Taking a closer look at the comparison between the synthesis and analysis models: the synthesis model contains very few low-dimensional subspaces and an increasingly large number of subspaces of higher dimension, whereas the analysis model includes a combinatorial number of low-dimensional subspaces and fewer high-dimensional ones. Co-sparse analysis models can yield richer feature representations, and joint co-sparse analysis models additionally consider the common sparse components of signals from different sensors. Therefore, the JACSM-based registration method was found to be well suited for multi-modal image registration. Despite the promising results obtained for multimodal image registration based on the joint co-sparse analysis model, there are limitations to adopting our approach for other multimodal medical image registrations. The joint co-sparse analysis model is based on local assumptions and thus fails where large areas of one modality are unavailable (such as a gap in one of the modalities). We may be able to overcome this limitation by developing a co-sparse analysis model for each modality separately and proposing an optimized cost function in the co-sparse space; we will pursue this in future work.
It is noteworthy that the quality of the PA images also affects the registration accuracy. In our previous works, we focused on improving the quality of PA images using advanced methods for image de-noising [75] and image reconstruction [21]. Recently, there has also been growing interest in low-fluence photoacoustic imaging systems, such as LED-based systems, for guiding real-time interventions [76,77,78]. The development of methods to improve the quality of LED-based PA images [79], together with advantages such as high frame rates and low cost, makes it plausible for PA imaging to achieve a higher SNR than the system used here for the purpose of brain shift compensation.

4. Conclusions

There has been growing interest in intra-operative imaging approaches that update pre-operative images with real-time data when tissue deformation occurs during surgery. In particular, accurate and real-time brain shift compensation remains a challenging problem during neurosurgery. For the first time, in this study we proposed the application of PA imaging as an interventional solution during neurosurgery, in combination with pre-operative modalities such as MRI, to track brain deformation. The accurate combination of PA and MR images, however, requires a real-time and robust image registration algorithm: accurate registration of intra-operative PA images with pre-operative MR images of brain tissue is what allows brain deformation to be calculated and compensated. In this study, the JACSM-based registration, which can capture the interdependency of the two modalities, was proposed for PA-MR image registration. The proposed algorithm works by minimizing the mapping transform using a pair of analysis operators in the PA and MR images, learned by the ADMM. The algorithm was tested on two data sets, phantom data and mouse brain data, and the results showed more accurate performance for PA imaging than for US imaging in brain shift calculation. Furthermore, the proposed method showed about a 60% improvement in TRE in comparison with the common NMI registration method. Co-sparse analysis models can yield richer feature representations and better accuracy for medical image registration in real-time processes, which is crucial for compensating brain shift during neurosurgery. Finally, with this JACSM-based registration, intra-operative PA imaging could become a promising tool when brain shift invalidates pre-operative MRI.

Author Contributions

The authors made a group effort for this research. The conceptualization and study design were done by P.F. and B.M. The methodology and algorithm were implemented by P.F. under the supervision of B.M. The phantom was designed and made by E.N. under the supervision of M.A. The MR images were provided by E.N. The PA and ultrasound data were provided by M.B. and Y.Y. under the supervision of M.M. The original draft was written by P.F. and E.N. The manuscript was revised by M.M. and A.A. All work was done under the supervision of A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge Ruiqing Ni from the University of Zurich and ETH Zurich for providing mouse brain data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Orringer, D.A.; Golby, A.; Jolesz, F. Neuronavigation in the surgical management of brain tumors: Current and future trends. Expert Rev. Med. Devices 2012, 9, 491–500.
  2. Gerard, I.J.; Kersten-Oertel, M.; Petrecca, K.; Sirhan, D.; Hall, J.A.; Collins, D.L. Brain shift in neuronavigation of brain tumors: A review. Med. Image Anal. 2017, 35, 403–420.
  3. Xiao, Y.; Rivaz, H.; Chabanas, M.; Fortin, M.; Machado, I.; Ou, Y.; Heinrich, M.P.; Schnabel, J.A.; Zhong, X.; Maier, A. Evaluation of MRI to ultrasound registration methods for brain shift correction: The CuRIOUS2018 Challenge. IEEE Trans. Med. Imaging 2019, 39, 777–786.
  4. Gerard, I.J.; Kersten-Oertel, M.; Hall, J.A.; Sirhan, D.; Collins, D.L. Brain Shift in Neuronavigation of Brain Tumors: An Updated Review of Intra-Operative Ultrasound Applications. Front. Oncol. 2021, 10, 3390.
  5. Mitsui, T.; Fujii, M.; Tsuzaka, M.; Hayashi, Y.; Asahina, Y.; Wakabayashi, T. Skin shift and its effect on navigation accuracy in image-guided neurosurgery. Radiol. Phys. Technol. 2011, 4, 37–42.
  6. Hill, D.L.; Maurer, C.R.; Maciunas, R.J.; Barwise, J.A.; Fitzpatrick, J.M.; Wang, M.Y. Measurement of intraoperative brain surface deformation under a craniotomy. Neurosurgery 1998, 43, 514–526.
  7. Hammoud, M.A.; Ligon, B.L.; Elsouki, R.; Shi, W.M.; Schomer, D.F.; Sawaya, R. Use of intraoperative ultrasound for localizing tumors and determining the extent of resection: A comparative study with magnetic resonance imaging. J. Neurosurg. 1996, 84, 737–741.
  8. Škrinjar, O.; Nabavi, A.; Duncan, J. Model-driven brain shift compensation. Med. Image Anal. 2002, 6, 361–373.
  9. Wittek, A.; Kikinis, R.; Warfield, S.K.; Miller, K. Brain shift computation using a fully nonlinear biomechanical model. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 583–590.
  10. Miga, M.I.; Sun, K.; Chen, I.; Clements, L.W.; Pheiffer, T.S.; Simpson, A.L.; Thompson, R.C. Clinical evaluation of a model-updated image-guidance approach to brain shift compensation: Experience in 16 cases. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1467–1474.
  11. Grunert, P.; Müller-Forell, W.; Darabi, K.; Reisch, R.; Busert, C.; Hopf, N.; Perneczky, A. Basic principles and clinical applications of neuronavigation and intraoperative computed tomography. Comput. Aided Surg. 1998, 3, 166–173.
  12. Nimsky, C.; Ganslandt, O.; Cerny, S.; Hastreiter, P.; Greiner, G.; Fahlbusch, R. Quantification of, visualization of, and compensation for brain shift using intraoperative magnetic resonance imaging. Neurosurgery 2000, 47, 1070–1080.
  13. Kuhnt, D.; Bauer, M.H.; Nimsky, C. Brain shift compensation and neurosurgical image fusion using intraoperative MRI: Current status and future challenges. Crit. Rev. Biomed. Eng. 2012, 40, 175–185.
  14. Clatz, O.; Delingette, H.; Talos, I.-F.; Golby, A.J.; Kikinis, R.; Jolesz, F.A.; Ayache, N.; Warfield, S.K. Robust nonrigid registration to capture brain shift from intraoperative MRI. IEEE Trans. Med. Imaging 2005, 24, 1417–1427.
  15. Valdés, P.A.; Fan, X.; Ji, S.; Harris, B.T.; Paulsen, K.D.; Roberts, D.W. Estimation of brain deformation for volumetric image updating in protoporphyrin IX fluorescence-guided resection. Stereotact. Funct. Neurosurg. 2010, 88, 1–10.
  16. Trobaugh, J.W.; Richard, W.D.; Smith, K.R.; Bucholz, R.D. Frameless stereotactic ultrasonography: Method and applications. Comput. Med. Imaging Graph. 1994, 18, 235–246.
  17. Roche, A.; Pennec, X.; Rudolph, M.; Auer, D.; Malandain, G.; Ourselin, S.; Auer, L.M.; Ayache, N. Generalized correlation ratio for rigid registration of 3D ultrasound with MR images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 567–577.
  18. Koivukangas, J.; Ylitalo, J.; Alasaarela, E.; Tauriainen, A. Three-dimensional ultrasound imaging of brain for neurosurgery. Ann. Clin. Res. 1986, 18, 65–72.
  19. Farnia, P.; Ahmadian, A.; Shabanian, T.; Serej, N.D.; Alirezaie, J. Brain-shift compensation by non-rigid registration of intra-operative ultrasound images with preoperative MR images based on residual complexity. Int. J. Comput. Assist. Radiol. Surg. 2015, 10, 555–562.
  20. Bayer, S.; Maier, A.; Ostermeier, M.; Fahrig, R. Intraoperative imaging modalities and compensation for brain shift in tumor resection surgery. Int. J. Biomed. Imaging 2017, 2017, 1–18.
  21. Farnia, P.; Mohammadi, M.; Najafzadeh, E.; Alimohamadi, M.; Makkiabadi, B.; Ahmadian, A. High-quality photoacoustic image reconstruction based on deep convolutional neural network: Towards intra-operative photoacoustic imaging. Biomed. Phys. Eng. Express 2020, 6, 045019.
  22. Pramanik, M.; Ku, G.; Li, C.; Wang, L.V. Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography. Med. Phys. 2008, 35, 2218–2223.
  23. Mehrmohammadi, M.; Joon Yoon, S.; Yeager, D.; Emelianov, S.Y. Photoacoustic imaging for cancer detection and staging. Curr. Mol. Imaging 2013, 2, 89–105.
  24. Najafzadeh, E.; Ghadiri, H.; Alimohamadi, M.; Farnia, P.; Mehrmohammadi, M.; Ahmadian, A. Application of multi-wavelength technique for photoacoustic imaging to delineate tumor margins during maximum-safe resection of glioma: A preliminary simulation study. J. Clin. Neurosci. 2019, 70, 242–246.
  25. Arabpou, S.; Najafzadeh, E.; Farnia, P.; Ahmadian, A.; Ghadiri, H.; Akhoundi, M.S.A. Detection of Early Stages Dental Caries Using Photoacoustic Signals: The Simulation Study. Front. Biomed. Technol. 2019, 6, 35–40.
  26. Moore, C.; Bai, Y.; Hariri, A.; Sanchez, J.B.; Lin, C.-Y.; Koka, S.; Sedghizadeh, P.; Chen, C.; Jokerst, J.V. Photoacoustic imaging for monitoring periodontal health: A first human study. Photoacoustics 2018, 12, 67–74.
  27. Yan, Y.; John, S.; Ghalehnovi, M.; Kabbani, L.; Kennedy, N.A.; Mehrmohammadi, M. Photoacoustic Imaging for Image-guided Endovenous Laser Ablation Procedures. Sci. Rep. 2019, 9, 1–10.
  28. Petrova, E.; Brecht, H.; Motamedi, M.; Oraevsky, A.; Ermilov, S.A. In vivo optoacoustic temperature imaging for image-guided cryotherapy of prostate cancer. Phys. Med. Biol. 2018, 63, 064002.
  29. Eddins, B.; Bell, M.A.L. Design of a multifiber light delivery system for photoacoustic-guided surgery. J. Biomed. Opt. 2017, 22, 041011.
  30. Wang, L.V.; Hu, S. Photoacoustic tomography: In vivo imaging from organelles to organs. Science 2012, 335, 1458–1462.
  31. Wang, L.V.; Yao, J. A practical guide to photoacoustic tomography in the life sciences. Nat. Methods 2016, 13, 627.
  32. Attia, A.B.E.; Balasundaram, G.; Moothanchery, M.; Dinish, U.; Bi, R.; Ntziachristos, V.; Olivo, M. A review of clinical photoacoustic imaging: Current and future trends. Photoacoustics 2019, 16, 100144.
  33. Beard, P. Biomedical photoacoustic imaging. Interface Focus 2011, 1, 602–631.
  34. Rosencwaig, A.; Gersho, A. Theory of the photoacoustic effect with solids. J. Appl. Phys. 1976, 47, 64–69.
  35. Zackrisson, S.; Van De Ven, S.; Gambhir, S. Light in and sound out: Emerging translational strategies for photoacoustic imaging. Cancer Res. 2014, 74, 979–1004.
  36. Xu, M.; Wang, L.V. Photoacoustic imaging in biomedicine. Rev. Sci. Instrum. 2006, 77, 041101.
  37. Farnia, P.; Najafzadeh, E.; Hariri, A.; Lavasani, S.N.; Makkiabadi, B.; Ahmadian, A.; Jokerst, J.V. Dictionary learning technique enhances signal in LED-based photoacoustic imaging. Biomed. Opt. Express 2020, 11, 2533–2547.
  38. Hoelen, C.; De Mul, F.; Pongers, R.; Dekker, A. Three-dimensional photoacoustic imaging of blood vessels in tissue. Opt. Lett. 1998, 23, 648–650.
  39. Raumonen, P.; Tarvainen, T. Segmentation of vessel structures from photoacoustic images with reliability assessment. Biomed. Opt. Express 2018, 9, 2887–2904.
  40. Najafzadeh, E.; Ghadiri, H.; Alimohamadi, M.; Farnia, P.; Mehrmohammadi, M.; Ahmadian, A. Evaluation of multi-wavelengths LED-based photoacoustic imaging for maximum safe resection of glioma: A proof of concept study. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1053–1062.
  41. Karthikesh, M.S.; Yang, X. Photoacoustic image-guided interventions. Exp. Biol. Med. 2019, 245, 330–341.
  42. Han, S.H. Review of photoacoustic imaging for imaging-guided spinal surgery. Neurospine 2018, 15, 306–322.
  43. Kubelick, K.P.; Emelianov, S.Y. A Trimodal Ultrasound, Photoacoustic and Magnetic Resonance Imaging Approach for Longitudinal Post-operative Monitoring of Stem Cells in the Spinal Cord. Ultrasound Med. Biol. 2020, 46, 3468–3474.
  44. Iversen, D.H.; Wein, W.; Lindseth, F.; Unsgård, G.; Reinertsen, I. Automatic intraoperative correction of brain shift for accurate neuronavigation. World Neurosurg. 2018, 120, e1071–e1078.
  45. Reinertsen, I.; Descoteaux, M.; Siddiqi, K.; Collins, D.L. Validation of vessel-based registration for correction of brain shift. Med. Image Anal. 2007, 11, 374–388.
  46. Chen, S.J.-S.; Reinertsen, I.; Coupé, P.; Yan, C.X.; Mercier, L.; Del Maestro, D.R.; Collins, D.L. Validation of a hybrid Doppler ultrasound vessel-based registration algorithm for neurosurgery. Int. J. Comput. Assist. Radiol. Surg. 2012, 7, 667–685.
  47. Farnia, P.; Ahmadian, A.; Khoshnevisan, A.; Jaberzadeh, A.; Serej, N.D.; Kazerooni, A.F. An efficient point based registration of intra-operative ultrasound images with MR images for computation of brain shift; A phantom study. In Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 8074–8077.
  48. Arbel, T.; Morandi, X.; Comeau, R.M.; Collins, D.L. Automatic non-linear MRI-ultrasound registration for the correction of intra-operative brain deformations. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 913–922.
  49. Ji, S.; Hartov, A.; Roberts, D.; Paulsen, K. Mutual-information-corrected tumor displacement using intraoperative ultrasound for brain shift compensation in image-guided neurosurgery. In Proceedings of the SPIE 6918, Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling, San Diego, CA, USA, 17 March 2008; p. 69182H.
  50. Wein, W.; Ladikos, A.; Fuerst, B.; Shah, A.; Sharma, K.; Navab, N. Global registration of ultrasound to MRI using the LC2 metric for enabling neurosurgical guidance. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 34–41.
  51. Coupé, P.; Hellier, P.; Morandi, X.; Barillot, C. 3D rigid registration of intraoperative ultrasound and preoperative MR brain images based on hyperechogenic structures. J. Biomed. Imaging 2012, 2012, 1.
  52. Rivaz, H.; Karimaghaloo, Z.; Collins, D.L. Self-similarity weighted mutual information: A new nonrigid image registration metric. Med. Image Anal. 2014, 18, 343–358.
  53. Rivaz, H.; Chen, S.J.-S.; Collins, D.L. Automatic deformable MR-ultrasound registration for image-guided neurosurgery. IEEE Trans. Med. Imaging 2015, 34, 366–380.
  54. Machado, I.; Toews, M.; George, E.; Unadkat, P.; Essayed, W.; Luo, J.; Teodoro, P.; Carvalho, H.; Martins, J.; Golland, P. Deformable MRI-ultrasound registration using correlation-based attribute matching for brain shift correction: Accuracy and generality in multi-site data. NeuroImage 2019, 202, 116094.
  55. Zhang, Q.; Liu, Y.; Blum, R.S.; Han, J.; Tao, D. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. Inf. Fusion 2018, 40, 57–75.
  56. Farnia, P.; Ahmadian, A.; Shabanian, T.; Serej, N.D.; Alirezaie, J. A hybrid method for non-rigid registration of intra-operative ultrasound images with pre-operative MR images. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 5562–5565.
  57. Farnia, P.; Makkiabadi, B.; Ahmadian, A.; Alirezaie, J. Curvelet based residual complexity objective function for non-rigid registration of pre-operative MRI with intra-operative ultrasound images. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1167–1170.
  58. Huang, K.; Aviyente, S. Sparse representation for signal classification. In Proceedings of the Advances in Neural Information Processing Systems, San Francisco, CA, USA, 30 November–3 December 1992; pp. 609–616.
  59. Roozgard, A.; Barzigar, N.; Verma, P.; Cheng, S. 3D-SCoBeP: 3D medical image registration using sparse coding and belief propagation. Int. J. Diagn. Imaging 2014, 2, 54.
  60. Nam, S.; Davies, M.E.; Elad, M.; Gribonval, R. The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 2013, 34, 30–56.
  61. Zhou, N.; Jiang, H.; Gong, L.; Xie, X. Double-image compression and encryption algorithm based on co-sparse representation and random pixel exchanging. Opt. Lasers Eng. 2018, 110, 72–79.
  62. Kiechle, M.; Hawe, S.; Kleinsteuber, M. A joint intensity and depth co-sparse analysis model for depth map super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1545–1552.
  63. Kiechle, M.; Habigt, T.; Hawe, S.; Kleinsteuber, M. A bimodal co-sparse analysis model for image processing. Int. J. Comput. Vis. 2015, 114, 233–247.
  64. Han, C.; Zhang, H.; Gao, C.; Jiang, C.; Sang, N.; Zhang, L. A Remote Sensing Image Fusion Method Based on the Analysis Sparse Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 439–453.
  65. Gao, R.; Vorobyov, S.A.; Zhao, H. Image fusion with cosparse analysis operator. IEEE Signal Process. Lett. 2017, 24, 943–947.
  66. Farnia, P.; Najafzadeh, E.; Ahmadian, A.; Makkiabadi, B.; Alimohamadi, M.; Alirezaie, J. Co-sparse analysis model based image registration to compensate brain shift by using intra-operative ultrasound imaging. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 17–21 July 2018; pp. 1–4.
  67. Ren, W.; Skulason, H.; Schlegel, F.; Rudin, M.; Klohs, J.; Ni, R. Automated registration of magnetic resonance imaging and optoacoustic tomography data for experimental studies. Neurophotonics 2019, 6, 025001.
  68. Gehrung, M.; Tomaszewski, M.; McIntyre, D.; Disselhorst, J.; Bohndiek, S. Co-Registration of Optoacoustic Tomography and Magnetic Resonance Imaging Data from Murine Tumour Models. Photoacoustics 2020, 18, 100147.
  69. Surry, K.; Austin, H.; Fenster, A.; Peters, T. Poly(vinyl alcohol) cryogel phantoms for use in ultrasound and MR imaging. Phys. Med. Biol. 2004, 49, 5529.
  70. Myronenko, A.; Song, X. Intensity-based image registration by minimizing residual complexity. IEEE Trans. Med. Imaging 2010, 29, 1882–1891.
  71. Ou, Y.; Akbari, H.; Bilello, M.; Da, X.; Davatzikos, C. Comparative evaluation of registration algorithms in different brain databases with varying difficulty: Results and insights. IEEE Trans. Med. Imaging 2014, 33, 2039–2065.
  72. Hawe, S.; Kleinsteuber, M.; Diepold, K. Analysis operator learning and its application to image reconstruction. IEEE Trans. Image Process. 2013, 22, 2138–2150.
  73. Cai, S.; Kang, Z.; Yang, M.; Xiong, X.; Peng, C.; Xiao, M. Image denoising via improved dictionary learning with global structure and local similarity preservations. Symmetry 2018, 10, 167.
  74. Zhang, Q.; Fu, Y.; Li, H.; Zou, J. Dictionary learning method for joint sparse representation-based image fusion. Opt. Eng. 2013, 52, 057006.
  75. Najafzadeh, E.; Farnia, P.; Lavasani, S.N.; Basij, M.; Yan, Y.; Ghadiri, H.; Ahmadian, A.; Mehrmohammadi, M. Photoacoustic image improvement based on a combination of sparse coding and filtering. J. Biomed. Opt. 2020, 25, 106001.
  76. Manwar, R.; Hosseinzadeh, M.; Hariri, A.; Kratkiewicz, K.; Noei, S.; Avanaki, M.R.N. Photoacoustic signal enhancement: Towards utilization of low energy laser diodes in real-time photoacoustic imaging. Sensors 2018, 18, 3498.
  77. Singh, M.K.A. LED-Based Photoacoustic Imaging: From Bench to Bedside; Springer Nature: Singapore, 2020.
  78. Agrawal, S.; Kuniyil Ajith Singh, M.; Johnstonbaugh, K.; Han, D.C.; Pameijer, C.R.; Kothapalli, S.-R. Photoacoustic imaging of human vasculature using LED versus laser illumination: A comparison study on tissue phantoms and in vivo humans. Sensors 2021, 21, 424.
  79. Hariri, A.; Alipour, K.; Mantri, Y.; Schulze, J.P.; Jokerst, J.V. Deep learning improves contrast in low-fluence photoacoustic imaging. Biomed. Opt. Express 2020, 11, 3360–3373.
Figure 1. Brain-mimicking phantom design and fabrication. (a) The dimensions of the phantom were about 150 × 40 mm. (b) A 3D model of the phantom, including two simulated vessels with 1.2 and 1.4 mm inside diameters inserted randomly into the phantom. (c) A cross-section of the phantom with the vessels filled using two different contrast agents, CuSO4(H2O)5 and human blood.
Figure 2. Schematic of the PA imaging setup, which includes a tunable pulsed laser and a programmable ultrasound data acquisition system.
Figure 3. Ex vivo head of mouse data: (a) MR image and (b) PA image. Five registration targets are shown in red and blue markers in (a,b), respectively, to assess the performance of the registration algorithm [67].
Figure 4. (a) Pre-operative MR image, (b) intra-operative MR image, and (c) the brain deformation field obtained by registration of the intra-operative and pre-operative MR images using the residual complexity method.
Figure 5. The workflow for automatic multi-modal image registration to compensate for brain deformation. MR and PA images including pre-defined targets were set as a reference and float images, respectively. After applying brain deformation on PA images, registration of MR and deformed PA was conducted and evaluated.
Figure 6. The results of multi-modal image registration of phantom data. First row: original image of phantom data before deformation from three different modalities: (a) MRI, (b) US, and (c) PA; second row: deformed images of (d) MRI, (e) US, and (f) PA. The third row shows the results of registered images of (g) MR-MR, (h) US-MR, and (i) PA-MR using the NMI algorithm. The last row shows the results of registered images of (j) MR-MR, (k) US-MR, and (l) PA-MR using JACSM. The blue arrows in the third and last rows represent the surface of the phantom in different modalities. Blue arrows A are related to the surface of the phantom in original MR images and blue arrows B are related to the surface of the phantom in deformed MR, deformed US, and deformed PA images. White arrows in (i,l) show that the PA-MR registration results for vessels were located in the depth of the phantom.
Figure 7. The results of multi-modal image registration of mouse brain data: (a) MRI, (b) PA image, (c) PA image after applying non-linear deformation, and (d) registration of the deformed PA and MRI of the mouse data. The registration of the MR images before and after deformation is shown in (e) as a gold standard. Panel (f) shows the mean RMSE, TRE, and HD of the PA-MR image registration for all mouse brain data.
Table 1. Evaluation of the proposed registration methods on phantom data (mean ± std, in mm; TRE computed over 3 targets).

| Multimodal Registration | Method | RMSE (mm) | TRE (mm) | HD (mm) |
|---|---|---|---|---|
| MR-MR | JACSM | 0.62 ± 0.04 | 0.32 ± 0.03 | 0.21 ± 0.03 |
| MR-MR | NMI | 0.98 ± 0.09 | 0.51 ± 0.04 | 0.46 ± 0.07 |
| US-MR | JACSM | 1.17 ± 0.13 | 0.96 ± 0.08 | 0.51 ± 0.03 |
| US-MR | NMI | 1.87 ± 0.15 | 1.58 ± 0.11 | 1.23 ± 0.13 |
| PA-MR | JACSM | 0.73 ± 0.05 | 0.58 ± 0.04 | 0.32 ± 0.04 |
| PA-MR | NMI | 1.18 ± 0.09 | 0.96 ± 0.08 | 0.68 ± 0.05 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
