Self-Powered Sensorial ‘Skin’ Is the Future of Motion and Gesture Recognition

Understanding human activity is of critical importance to preserving health and improving healthcare, as well as to enhancing how we experience our living environment. Advances in this area could make it possible to detect when an elderly person falls, when their gait changes, or when early signs of mental distress appear, among many other practical scenarios in healthcare, interactive computing, and infrastructural and environmental monitoring.

Groundbreaking research on flexible photodetectors with computational powers by Canek Fuentes-Hernandez, associate professor, electrical and computer engineering (ECE), and Gregory D. Abowd, dean of the College of Engineering and professor of ECE, has been published in npj Flexible Electronics.

The researchers developed a new approach to motion and gesture recognition using arrays of thin, flexible organic photodetectors distributed in space. These arrays, called computational photodetectors, are designed to recognize human activity and operate with low latency (fast) using only power harvested from ambient light with a solar cell. They constitute a self-powered sensorial skin that sits on the surface of everyday objects, seamlessly augmenting our experience and our ability to sense relevant information from the environment.

The research posits that deploying a wireless network of these computational photosensors on everyday surfaces could replace conventional vision-based systems, such as video cameras, offering more flexibility and privacy while drawing less power.

The work combines Abowd’s interest in broadening the definition of what a computer can be with Fuentes-Hernandez’s expertise in organic and flexible electronics design. The autonomous sensors in their research harvest energy from the environment and can understand gestures and motion in space by sensing changes in light, such as detecting whether a box with a certain orientation is moved into or out of a shelf, or recognizing a swipe across a tabletop.

“Our first iteration of this light-sensing photodetector network was rather traditional in a pixelated array, like in a charge-coupled device (CCD) camera that captures and reads information pixel by pixel over time to reconstruct the shadow cast by an object onto the sensor,” says Fuentes-Hernandez. “We ran into the problem that when you increase the number of pixels you capture, the photosensors need more power to store, process, and communicate data—meaning a large solar cell was needed.”
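A rough, illustrative comparison of raw readout data rates shows why pixel-by-pixel capture scales poorly. The pixel counts, bit depth, and frame rate below are assumptions made for the sake of the arithmetic, not figures from the paper:

```python
# Illustrative only (assumed numbers): the data a pixelated readout must
# store, process, and communicate grows linearly with pixel count, while a
# parallel-connected array reads out like a single pixel.
BITS_PER_SAMPLE = 8   # assumed ADC bit depth
FRAME_RATE_HZ = 30    # assumed capture rate

for n_pixels in (16, 256, 4096):
    pixelated = n_pixels * BITS_PER_SAMPLE * FRAME_RATE_HZ  # bits/s, per-pixel readout
    shared = BITS_PER_SAMPLE * FRAME_RATE_HZ                # bits/s, single shared channel
    print(f"{n_pixels:5d} pixels: {pixelated:8,d} b/s vs. {shared} b/s shared")
```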

When faced with this power problem, the researchers knew they needed a better way to scale up the system. They decided to give each photodetector a unique signature, so that when a shadow is cast on a sensor, it produces a distinct signal that can be read out through just two terminals. The array is connected in parallel and read as a single time series, like one pixel, which means a very small solar cell can power all of the sensors in the system.
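The paper’s exact encoding scheme isn’t described here, but the idea can be sketched with a toy model: assume each detector imprints a unique carrier frequency on its photocurrent, so that a shadow over any one detector appears as a dip in its band within the single summed readout.

```python
import numpy as np

# Toy model (an assumption for illustration, not the paper's actual encoding):
# each parallel-connected photodetector imprints a unique carrier frequency on
# its photocurrent. The shared two-terminal output is the sum of all currents,
# and a shadow over one detector appears as a dip in that detector's band.

FS = 2000                        # sampling rate of the shared readout, Hz (assumed)
SIGNATURES = [50, 80, 110, 140]  # one carrier frequency per detector, Hz (assumed)
t = np.arange(0, 1.0, 1 / FS)    # one-second observation window

def array_current(shadowed):
    """Summed two-terminal current; `shadowed` maps detector index -> True."""
    total = np.zeros_like(t)
    for i, f in enumerate(SIGNATURES):
        gain = 0.2 if shadowed.get(i, False) else 1.0  # a shadow attenuates light
        total += gain * np.sin(2 * np.pi * f * t)
    return total

def decode(signal):
    """Recover per-detector illumination from one time series via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
    freqs = np.fft.rfftfreq(len(signal), 1 / FS)
    return [spectrum[np.argmin(np.abs(freqs - f))] for f in SIGNATURES]

# A shadow over detector 2 shows up as a weak 110 Hz band, even though the
# whole array is read out like a single pixel: ~[1.0, 1.0, 0.2, 1.0]
print(decode(array_current({2: True})))
```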

“In addition to movement, we can also now derive meaningful physical properties like velocity and direction because we know and control how the sensors are distributed on a surface,” says Fuentes-Hernandez. “The biggest impact of the research is that we have a way to distribute photodetectors on an everyday object and interpret motions and gestures that happen over the top of them without the power problem.”
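Because the sensor layout is known, recovering those physical properties reduces to simple geometry. A minimal sketch, assuming each detector reports the time a shadow edge passes over it:

```python
import numpy as np

# Minimal sketch (assumed setup): detectors at known positions on a surface
# each report the time a shadow edge passes over them. Fitting position
# against time recovers the speed and direction of the motion.

positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]])  # meters
arrival_times = np.array([0.00, 0.25, 0.50, 0.75])                      # seconds

# Fit x(t) and y(t) as straight lines; the slopes are the velocity components.
vx = np.polyfit(arrival_times, positions[:, 0], 1)[0]
vy = np.polyfit(arrival_times, positions[:, 1], 1)[0]

speed = np.hypot(vx, vy)                    # 0.40 m/s in this example
direction = np.degrees(np.arctan2(vy, vx))  # 0 deg: a left-to-right swipe
print(f"speed={speed:.2f} m/s, direction={direction:.0f} deg")
```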

Fuentes-Hernandez and Abowd’s photodetector array can interpret motions such as swiping and walking, and can track the position of an object in space, which could have impacts on industries ranging from manufacturing to occupational therapy.

“These sensors could be used on the tip of a cane to help people with vision impairments move throughout a space more easily; they could be placed in nursing homes to offer better response to dangerous slips and falls, without compromising patients’ privacy like video cameras could,” says Fuentes-Hernandez. “This is the tip of the iceberg in the kinds of applications we’re exploring at Northeastern for this technology.”

Abowd is thinking even further into the future, wondering what it could be like if objects themselves were imbued with these computational and sensing properties.

“There are lots of devices on the market right now that track info about a person’s health—steps taken, identifying exercise, and so on—but they are limited in terms of what they can interpret,” says Abowd. “What if our clothes could understand how they’re being operated by the human who’s wearing them? Or what if in order to better monitor buildings or bridges for vibration stress, we created wood and steel that has computational abilities? The opportunities are endless.”


Abstract:

Understanding human activity is of critical importance to preserving health and improving healthcare, as well as to improving how we experience our living environment. Examples include detecting when an elderly person falls, when their gait changes, or when early signs of mental distress appear, and increasing the awareness of human activity around people with blindness or other disabilities, among many other practical scenarios in healthcare, interactive computing, and infrastructural and environmental monitoring. Through collaborations at Northeastern University, the team is actively pursuing solutions to many of these problems.

To date, video cameras and computer vision algorithms can perform many of these tasks, but they inherently invade the user’s privacy and require a lot of energy to operate, to communicate, and to process the large amounts of data contained in video. In addition, they can be easily occluded, and overall, they are expensive to operate and maintain.

This paper reports on a new approach to achieve motion and gesture recognition using arrays of thin and flexible organic photodetectors distributed in space. These arrays are designed to recognize human activity, and they operate with low latency (fast) using only power harvested from the environment. Hence, they constitute a self-powered sensorial skin that sits on the surface of the objects on which they are deployed and consequently seamlessly augments our experience and ability to sense relevant information from the environment. These arrays are called computational photodetectors, and they will be the cornerstone of many of our research activities aimed at solving societal problems.
