Author_Institution :
Machine Learning Group, Tech. Univ. Berlin, Berlin, Germany
Abstract :
Learning to build universal decoders for brain-computer interfaces (BCI) is a great challenge (see [9], [2], [8] for recent reviews on machine learning for BCI). In multimodal imaging, the modes are usually taken to be different types of imaging devices such as EEG, NIRS, or fMRI (see e.g. [1], [7], [3], [4]). However, we can also interpret different subjects as imaging modalities and thus obtain a zero-training decoder (cf. [5], [6]) from a database of subjects. Even data from several experiments on the same subject can be seen as instantiations of multiple modes. This change of view opens up various research directions (e.g. [3], [4], [10], [11], [12]). The talk will expand on recent multimodal analysis techniques such as SPoC ([3], [4]). Furthermore, we will discuss nonstationarities (cf. [13], [10]) that often occur in neuroscience, e.g. between a subject's training and testing sessions in brain-computer interfacing (e.g. [10], [11], [12]). We show that such changes can be very similar across subjects, and thus can be reliably estimated from the data of other users and used to construct an invariant feature space ([11]). These insights can be consolidated into a broader theoretical framework based on beta divergences ([12]). We show not only that this framework achieves a significant increase in performance, but also that the extracted change patterns allow for a neurophysiologically meaningful interpretation.
Keywords :
biomedical equipment; brain-computer interfaces; learning (artificial intelligence); medical image processing; BCI; EEG; NIRS; beta divergences; change patterns; fMRI; imaging devices; invariant feature space; machine learning; multimodal analysis techniques; multimodal imaging; neurophysiology; universal decoders; zero-training decoder; decoding; electroencephalography; imaging; neuroscience; training; multimodal data analysis