Diffusion Maps for Multimodal Registration
Abstract
Multimodal image registration is a difficult task because of the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to these intensity variations. However, such similarity measures are computationally expensive and often fail to capture the geometry and associated dynamics of the images. Another approach is to transform the images into a common space where the modalities can be compared directly. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data are transformed into a new set of canonical coordinates that reflect their geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as the similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground truth, for both rigid and non-rigid registration. The results show that the proposed approach achieves higher accuracy than the conventional approach based on mutual information.
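To make the idea concrete, the diffusion-map step can be sketched as follows: build a Gaussian affinity matrix over feature vectors (e.g., local intensity patches from one image), normalize it into a Markov matrix, and take its leading non-trivial eigenvectors, scaled by powers of the eigenvalues, as the new canonical coordinates. This is a minimal NumPy sketch of a generic diffusion-map embedding, not the paper's implementation; the function name, the kernel bandwidth `eps`, and the diffusion time `t` are illustrative assumptions.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_coords=2, t=1):
    """Embed rows of X (n_samples x n_features) into diffusion coordinates.

    A generic diffusion-map sketch (hypothetical helper, not the paper's code):
    Gaussian kernel -> Markov normalization -> spectral embedding.
    """
    # Pairwise squared Euclidean distances between feature vectors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / eps)                      # Gaussian affinity matrix

    # Symmetric conjugate of the Markov matrix P = D^{-1} W, for a
    # numerically stable eigendecomposition with eigh
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]             # sort eigenvalues descending
    vals, vecs = vals[order], vecs[:, order]

    # Right eigenvectors of P; drop the trivial constant eigenvector
    psi = vecs / np.sqrt(d)[:, None]
    return psi[:, 1:n_coords + 1] * (vals[1:n_coords + 1] ** t)
```

Once both modalities are embedded this way, corresponding points should lie close together in the diffusion space, so a plain Euclidean distance between embedded images can drive the registration, in the spirit of the abstract.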
Share & Cite This Article
MDPI and ACS Style
Piella, G. Diffusion Maps for Multimodal Registration. Sensors 2014, 14, 10562-10577.

AMA Style
Piella G. Diffusion Maps for Multimodal Registration. Sensors. 2014; 14(6):10562-10577.

Chicago/Turabian Style
Piella, Gemma. 2014. "Diffusion Maps for Multimodal Registration." Sensors 14, no. 6: 10562-10577.