Innovative Approaches to a Hybrid Non-Invasive BCI System
ECE Assistant Professor Sarah Ostadabbas, in collaboration with the University of Rhode Island, was awarded a $500K NSF grant for “A Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface.”
Giving a voice — literally — to those in need
Being able to communicate with others around us is a core part of being human. It’s something many of us take for granted. But it can be a daily struggle for the nearly 1 million people in the U.S. with certain neurological conditions that limit their ability to communicate, and it’s a significant public health issue.
Thanks to the hard work of researchers and patients alike, advances in non-invasive brain-computer interfaces (BCIs) have enabled many people to regain the ability to communicate. Tools like BCI spellers, where people use their brain activity to control an on-screen keyboard and produce spoken words, have literally given people a voice.
These tools aren’t perfect, however. They can be slow and inaccurate, and they require fine eye-gaze control, which is not practical for many people. Improving these interfaces could significantly improve the lives of many people, and that’s exactly what ECE Assistant Professor Sarah Ostadabbas, in collaboration with the University of Rhode Island, aims to do with her recent $500,000 NSF grant for a “Graph-Based Data Fusion Framework Towards Guiding A Hybrid Brain-Computer Interface.”
BCIs in focus
“Brain-computer interfaces are among the most powerful and most impactful augmented cognition systems,” explains Ostadabbas, the director of the Augmented Cognition Lab at Northeastern. “Even apparently simple problems in BCIs have a complex web of interconnected elements with significant engineering and science implications, which means that a successful project has to be a collaborative one.”
The most common way BCIs communicate with the brain is through electroencephalography (EEG), thanks to its fast temporal response, cost-effectiveness, and portability. However, EEG also has qualities that make it less than ideal for the complex tasks practical use demands: it is highly non-stationary, it has a low signal-to-noise ratio, it lacks task-associated patterns in certain patient conditions, and it requires fine eye-gaze control, which isn’t practical for many people who lack motor control.
Functional near-infrared spectroscopy (fNIRS) addresses all four of these shortcomings. fNIRS captures activity at the cortical surface by shining near-infrared light through the skull, giving it higher spatial resolution than EEG (about 1 cm vs. 3 cm), although it suffers from poor temporal resolution (seconds vs. milliseconds for EEG). Together, EEG and fNIRS are the only non-invasive, mutually compatible, and portable brain imaging techniques that work in real-time BCIs and that measure two different, but essential, types of neural activity: electrical and hemodynamic.
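One concrete reason fusing the two modalities is non-trivial is this mismatch in temporal resolution: EEG updates hundreds of times per second, while the hemodynamic signal fNIRS measures unfolds over seconds. A minimal sketch of putting both streams on a common per-second grid — the function name, sampling rates, and feature choices here are illustrative assumptions, not details from the project:

```python
import numpy as np

def align_to_trial(eeg, eeg_fs, fnirs, fnirs_fs):
    """Illustrative only: summarize each signal per 1-second bin so that
    fast EEG features and slow fNIRS features line up trial by trial.

    eeg, fnirs : 1-D arrays (single channel, for simplicity)
    eeg_fs, fnirs_fs : integer sampling rates in Hz
    """
    # Use only the seconds covered by both recordings
    n_sec = min(len(eeg) // eeg_fs, len(fnirs) // fnirs_fs)
    eeg_bins = eeg[: n_sec * eeg_fs].reshape(n_sec, eeg_fs)
    fnirs_bins = fnirs[: n_sec * fnirs_fs].reshape(n_sec, fnirs_fs)
    # Crude feature choices: signal power for the fast electrical signal,
    # mean level for the slow hemodynamic signal
    return eeg_bins.var(axis=1), fnirs_bins.mean(axis=1)
```

Real pipelines would account for the several-second hemodynamic lag rather than binning naively, but the sketch shows why any fusion framework must reconcile the two time scales before features can be combined.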
Single modal vs. multimodal
Within the past few years, there has been growing interest in improving the performance of BCI systems by integrating fNIRS and high-speed EEG-based systems to overcome single modal limitations.
This gets to the heart of Assistant Professor Ostadabbas’s research.
“Similar to the concept of hybrid cars, where fuel efficiency is maximized by relying on two types of engines, hybrid BCIs have a specific goal of increasing general stability and enhancing performance,” she explains.
Many studies suggest fNIRS as a complement to EEG and a potential control signal for future BCIs, but to date, no BCI study has systematically fused the two different signals — considering their different spatial-temporal dynamics and information contents — to extract complementary, rich information from both modalities. As a result, there have been only incremental advances in hybrid BCIs so far, and the field lacks an obvious “ground truth.”
“Using cutting-edge multidisciplinary techniques, in this study we proposed a novel hybrid fNIRS-EEG-based BCI to bridge the gap between end-users’ communication needs and the true capabilities of BCIs,” Ostadabbas says. “Our proposed system is guided by an innovative graph-theoretical multimodal data fusion (GT-MMDF) framework and a new dual-task user interface.”
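The GT-MMDF framework itself is the subject of the funded research, but the general idea behind graph-based fusion can be sketched: treat every EEG and fNIRS channel as a node in one graph, connect nodes whose trial-wise features co-vary, and use topological measures of that graph (here, simply node degree) as fused features. Everything below — function name, correlation threshold, feature choice — is a hypothetical illustration, not the authors' pipeline:

```python
import numpy as np

def graph_features(eeg_feats, fnirs_feats, thresh=0.5):
    """Hypothetical sketch of graph-based multimodal fusion.

    eeg_feats, fnirs_feats : arrays of shape (n_trials, n_channels),
    one feature value per trial and channel.
    Returns the fused adjacency matrix and a simple topological
    feature vector (node degree) over the joint channel set.
    """
    # Joint node set: EEG channels first, then fNIRS channels
    x = np.hstack([eeg_feats, fnirs_feats])
    r = np.corrcoef(x, rowvar=False)          # channel-by-channel correlation
    adj = (np.abs(r) >= thresh).astype(float) # keep only strong links
    np.fill_diagonal(adj, 0.0)                # no self-loops
    degree = adj.sum(axis=1)                  # degree as a topological feature
    return adj, degree
```

The appeal of this representation is that cross-modal edges (an EEG node linked to an fNIRS node) capture exactly the complementary structure that simple feature concatenation ignores.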
Putting it into practice
Another hurdle this research hopes to clear is making the technology usable for those who need it most. Traditionally, most hybrid BCI studies have tested their protocol on healthy participants, which doesn’t reflect many of the challenges actual users of the technology face.
But recent studies by Ostadabbas’s collaborator at the University of Rhode Island, Professor Yalda Shahriari (including one in 2019), showed that fNIRS-EEG can be applied to patients with ALS and late-stage locked-in state (LIS). Specifically, they observed that incorporating fNIRS into communication-based BCIs can substantially enhance BCI classification performance for patients in late-stage LIS. This was attributed to the fact that fNIRS doesn’t depend on fine eye-gaze control and is, therefore, an ideal signal for patients who have lost their gaze control.
Getting everyone involved
It’s no secret that women and minorities have historically been underrepresented in the engineering field. This is problematic for many reasons, including the fact that it silences many voices and potentially misses the best ideas.
Beyond helping people with communication challenges, Ostadabbas also sees her research as an opportunity to get a more diverse group of people interested and involved in STEM (science, technology, engineering, and math).
“As reported in various studies, the students who participate in these types of collaborative programs gain a much broader understanding of engineering and are much more likely to stay in the engineering major,” she explains. “I believe we can create a unique opportunity to draw women and underrepresented minorities from multidisciplinary backgrounds into the STEM field. This way, not only will the students benefit from the experience, but so will the research — and, ultimately, the patients.”
“From what I’ve seen, the best research comes from labs that have a mixture of people from different origins and backgrounds,” Ostadabbas concludes. “As we move deeper into the 21st century, we must continue embracing this diversity, as well as get better at bringing women into STEM. Missing out on the contributions of 50% of our population is not the way to continue leading the world in scientific contributions.”
Abstract Source: NSF
Major advances in non-invasive brain-computer interfaces (BCIs) have enriched the lives of persons with certain disabilities by providing them with alternative means of communication. However, current systems rely heavily on unimodal techniques that limit both their performance and our understanding of the integrated neural dynamics essential to properly explain multiscale neural functions. To address this issue, hybrid (multimodal) BCIs have been proposed, but attempts to date to utilize the complementary benefits of multiple modalities through simple combinations (e.g., concatenation of feature sets from two neuroimaging modalities) have yielded only incremental advances; generalizable computational data-driven approaches for the fusion of multimodal signals to efficiently and simultaneously extract complementary information from multiple signals of interest remain lacking. This research will explore an innovative approach to a hybrid non-invasive BCI system that capitalizes on the complementary physiological features that can be obtained from electrical and hemodynamic neural signals using EEG and fNIRS, respectively, with the help of a graph-based data fusion framework. Project outcomes will include novel signal processing pipelines and lay the foundation for practical BCI techniques for mainstream user applications. In addition to the project’s potential societal impacts, the team will focus on broadening participation in STEM and will also engage students from K-12 through the graduate level.
The research will involve three main thrusts. A novel graph theoretical multimodal data fusion framework will be developed to systematically capture complex topological features of hybrid patterns and user intentions during a dual-task interaction that concurrently modulates electrical and hemodynamic responses of interest. Because multimodal techniques create inherently complementary attributes in terms of both spatiotemporal resolution and information content, the framework will aim to capture the corresponding complementary synergistic topological features from the complex hybrid patterns hidden in EEG and fNIRS signals for the high-level abstraction of user intentions. The framework will be evaluated on non-communicative individuals by optimizing parameters and channels containing the highest mutual information, in real-world settings. Finally, a conceptually new hybrid subspace-based filter will be proposed to maximize the distance between two classes of hybrid data and enhance classification performance.
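The proposed hybrid subspace-based filter is new and unpublished, but the classical idea it builds on — finding spatial projections that maximize the variance separation between two classes of trials — is the well-known common spatial patterns (CSP) construction. A hedged numpy sketch of that standard technique, for intuition only (the project's actual filter operates on hybrid EEG-fNIRS data and is not reproduced here):

```python
import numpy as np

def csp_filters(cov_a, cov_b):
    """Standard CSP: spatial filters maximizing the variance ratio
    between two classes, given their channel covariance matrices.

    cov_a, cov_b : symmetric (n_channels, n_channels) covariances.
    Returns a matrix whose rows are spatial filters, ordered so the
    first row most favors class A over class B.
    """
    # Whiten the composite covariance: P (cov_a + cov_b) P^T = I
    d, u = np.linalg.eigh(cov_a + cov_b)
    p = (u / np.sqrt(d)).T
    # In the whitened space, eigenvectors of class A's covariance
    # simultaneously diagonalize both classes
    s_a = p @ cov_a @ p.T
    w, v = np.linalg.eigh(s_a)            # eigenvalues ascending
    order = np.argsort(w)[::-1]           # descending: top filters favor class A
    return v[:, order].T @ p
```

Applying the top filter to trials of each class and comparing filtered variances yields a highly discriminative feature — the same "maximize the distance between two classes" goal the abstract describes, here in its classical single-modality form.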