DocumentCode :
3748513
Title :
Unsupervised Cross-Modal Synthesis of Subject-Specific Scans
Author :
Raviteja Vemulapalli;Hien Van Nguyen;Shaohua Kevin Zhou
Author_Institution :
Center for Automation Research, UMIACS, University of Maryland, College Park, MD, USA
fYear :
2015
Firstpage :
630
Lastpage :
638
Abstract :
Recently, cross-modal synthesis of subject-specific scans has been receiving significant attention from the medical imaging community. Though various synthesis approaches have been introduced in the recent past, most of them are either tailored to a specific application or proposed for the supervised setting, i.e., they assume the availability of training data from the same set of subjects in both source and target modalities. However, collecting multiple scans from each subject is undesirable. Hence, to address this issue, we propose a general unsupervised cross-modal medical image synthesis approach that works without paired training data. Given a source modality image of a subject, we first generate multiple target modality candidate values for each voxel independently using cross-modal nearest neighbor search. Then, we select the best candidate values jointly for all the voxels by simultaneously maximizing a global mutual information cost function and a local spatial consistency cost function. Finally, we use coupled sparse representation for further refinement of the synthesized images. Our experiments on generating T1-MRI brain scans from T2-MRI and vice versa demonstrate that the synthesis capability of the proposed unsupervised approach is comparable to various state-of-the-art supervised approaches in the literature.
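The candidate-generation step described in the abstract can be illustrated with a minimal sketch: for each source-modality patch, find its nearest neighbors in a source-modality patch dictionary and take the corresponding target-modality values as per-voxel candidates. This is an assumption-laden toy (the helper `knn_candidates`, the brute-force distance computation, and the toy data are hypothetical, not the authors' implementation, which also performs the joint candidate selection and sparse-representation refinement described above).

```python
import numpy as np

def knn_candidates(src_patches, dict_src, dict_tgt, k=3):
    """For each query patch, return the k target-modality values whose
    source-modality dictionary patches are nearest in Euclidean distance.
    (Hypothetical helper illustrating cross-modal nearest neighbor search.)"""
    # Squared Euclidean distance between every query and dictionary patch
    d2 = ((src_patches[:, None, :] - dict_src[None, :, :]) ** 2).sum(axis=-1)
    idx = np.argsort(d2, axis=1)[:, :k]   # indices of the k closest patches
    return dict_tgt[idx]                  # shape: (n_voxels, k) candidates

# Toy example: dictionary of 10 paired patches, 4 query voxels
rng = np.random.default_rng(0)
dict_src = rng.normal(size=(10, 3))       # source-modality patches
dict_tgt = rng.normal(size=10)            # paired target-modality intensities
queries = dict_src[:4] + 0.01 * rng.normal(size=(4, 3))  # near-duplicates
cands = knn_candidates(queries, dict_src, dict_tgt, k=3)
print(cands.shape)  # (4, 3): three candidate values per voxel
```

In the full method, one candidate per voxel would then be chosen jointly over the image rather than independently, which is what the global mutual-information and local spatial-consistency costs handle.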
Keywords :
"Image generation","Biomedical imaging","Nearest neighbor searches","Image resolution","Training data","Training","Magnetic resonance imaging"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Computer Vision (ICCV)
Electronic_ISSN :
2380-7504
Type :
conf
DOI :
10.1109/ICCV.2015.79
Filename :
7410436