Innovative Approaches to a Hybrid Non-Invasive BCI System
ECE Assistant Professor Sarah Ostadabbas, in collaboration with the University of Rhode Island, was awarded a $500K NSF grant for “A Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface.”
Abstract Source: NSF
Major advances in non-invasive brain-computer interfaces (BCIs) have enriched the lives of persons with certain disabilities by providing them with alternative means of communication. However, current systems rely heavily on unimodal techniques, which limit both their performance and our understanding of the integrated neural dynamics essential to properly explain multiscale neural functions. To address this issue, researchers have proposed hybrid (multimodal) BCIs, but attempts to date to exploit the complementary benefits of multiple modalities through simple combinations (e.g., concatenating the feature sets of two neuroimaging modalities) have yielded only incremental advances; generalizable, data-driven computational approaches that fuse multimodal signals to efficiently and simultaneously extract complementary information from multiple signals of interest remain lacking. This research will explore an innovative hybrid non-invasive BCI system that capitalizes on the complementary physiological features obtainable from electrical and hemodynamic neural signals, measured by electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) respectively, with the help of a graph-based data fusion framework. Project outcomes will include novel signal processing pipelines and will lay the foundation for practical BCI techniques for mainstream user applications. In addition to the project’s potential societal impacts, the team will focus on broadening participation in STEM and will engage students from K-12 through the graduate level.
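To make the critique concrete: the “simple combination” baseline the abstract contrasts against amounts to stacking the feature vectors from each modality side by side, with no modeling of cross-modal structure. A minimal illustration with hypothetical feature dimensions (not part of the project itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial feature vectors from each modality.
n_trials = 10
eeg_features = rng.standard_normal((n_trials, 64))    # e.g., band powers per EEG channel
fnirs_features = rng.standard_normal((n_trials, 16))  # e.g., mean HbO change per fNIRS channel

# Naive "simple combination" fusion: concatenate along the feature axis.
# Any cross-modal dependencies are left for the downstream classifier
# to discover on its own, which is what limits this baseline.
fused = np.concatenate([eeg_features, fnirs_features], axis=1)
```

A graph-based framework, by contrast, would model relationships between channels and modalities explicitly rather than leaving them implicit in a flat vector.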
The research will involve three main thrusts. First, a novel graph-theoretical multimodal data fusion framework will be developed to systematically capture the complex topological features of hybrid patterns and user intentions during a dual-task interaction that concurrently modulates the electrical and hemodynamic responses of interest. Because the two modalities offer inherently complementary attributes in both spatiotemporal resolution and information content, the framework will aim to capture the corresponding complementary, synergistic topological features hidden in the EEG and fNIRS signals for high-level abstraction of user intentions. Second, the framework will be evaluated with non-communicative individuals in real-world settings by optimizing its parameters and selecting the channels that carry the highest mutual information. Finally, a conceptually new hybrid subspace-based filter will be proposed to maximize the distance between two classes of hybrid data and enhance classification performance.
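The abstract does not specify the subspace filter, but a well-known precedent in BCI is the Common Spatial Patterns (CSP) filter, which finds projections that maximize the variance of one class relative to the other via whitening followed by an eigendecomposition. A minimal NumPy sketch under that assumption (the project’s own filter is described as conceptually new, so this is only an analogue; all data shapes here are hypothetical):

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Common Spatial Patterns: spatial filters that maximize the
    variance of class 1 while minimizing that of class 2 (and vice
    versa). X1, X2: arrays of shape (trials, channels, samples)."""
    def mean_cov(X):
        # Averaged, trace-normalized spatial covariance over trials.
        covs = [np.cov(trial) for trial in X]
        return np.mean([c / np.trace(c) for c in covs], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Whitening transform from the composite covariance.
    d, U = np.linalg.eigh(C1 + C2)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
    # Eigenvectors of the whitened class-1 covariance; the extreme
    # eigenvalues mark the most class-discriminative directions.
    _, B = np.linalg.eigh(P @ C1 @ P.T)
    W = B.T @ P  # each row is one spatial filter
    idx = np.r_[:n_filters, -n_filters:0]
    return W[idx]

# Toy two-class "hybrid" trials: (trials, channels, samples).
rng = np.random.default_rng(1)
X1 = rng.standard_normal((20, 8, 128))
X2 = 2.0 * rng.standard_normal((20, 8, 128))

W = csp_filters(X1, X2)  # (4, 8): two filters per class
projected = W @ X1[0]    # one trial projected onto the subspace
```

After projection, the log-variance of each filtered signal is a standard discriminative feature for a downstream classifier; a hybrid variant would apply this idea to combined EEG/fNIRS data.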