Neuroscience tomography. — © AFP
The deepfake phenomenon has generally received a bad press, having been misused in political campaigns and, more controversially, to recreate deceased movie stars. However, a more practical use of the technology appears to be emerging. Dubbed ‘deepfaking the mind’, the approach could form the basis for improving brain-computer interfaces (BCIs) for people with disabilities.
Scientists at the University of Southern California have shown how generative adversarial networks (GANs) can be used to improve brain-computer interfaces for people with disabilities. In this first wave of the technology’s development, the researchers used artificial intelligence to generate synthetic brain activity data in the form of signals called spike trains, which were then fed into machine-learning algorithms to improve the usability of brain-computer interfaces.
Until now, GANs have been best known for creating deepfake videos and photorealistic human faces. BCI systems, by contrast, work by analyzing a person’s brain signals and translating that neural activity into commands, allowing the user to control digital devices such as computer cursors using only their thoughts.
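For readers who want a concrete picture, the sketch below shows the core decoding idea in a few lines of Python: a linear readout fitted to map binned neural spike counts onto a two-dimensional cursor velocity. The random data, dimensions and simple linear model are illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal, illustrative sketch of BCI decoding (not the authors' decoder):
# map binned spike counts from many neurons to a 2-D cursor velocity
# using a linear readout fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 500 time bins x 96 recorded neurons,
# paired with the cursor velocity (vx, vy) observed in each bin.
spike_counts = rng.poisson(lam=3.0, size=(500, 96)).astype(float)
cursor_velocity = rng.normal(size=(500, 2))

# Fit the linear decoder W so that spike_counts @ W approximates velocity.
W, *_ = np.linalg.lstsq(spike_counts, cursor_velocity, rcond=None)

# At run time, a new bin of spike counts becomes a movement command.
new_bin = rng.poisson(lam=3.0, size=(1, 96)).astype(float)
predicted_velocity = new_bin @ W
print("decoded cursor velocity:", predicted_velocity)
```

In practice, collecting enough paired neural and movement data to fit such a decoder reliably is exactly the bottleneck the researchers set out to ease.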
Such technology can improve the quality of life for people with motor dysfunction or paralysis. While some types of BCI are available, it has proved challenging to make these systems fast and robust enough for the real world. This is because BCIs need large amounts of neural data and long periods of training, calibration and learning.
Furthermore, the technology is user-specific and has to be trained from scratch for each person, which slows adoption further.
These limitations led the researchers to adopt an alternative approach: synthetic neurological data (that is, artificially computer-generated data) that can “stand in” for data obtained from the real world.
This is where GANs come in. Through a trial-and-error contest between two neural networks, a GAN can create a virtually unlimited number of new samples that resemble its training data, whether photographs or spike trains.
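The sketch below illustrates that trial-and-error process in simplified form: a generator learns to produce fake spike-count vectors while a discriminator learns to tell them apart from real recordings. The architecture, sizes and placeholder data are assumptions made for illustration and do not reproduce the published spike synthesizer.

```python
# Illustrative GAN sketch for neural data (assumptions: spike activity is
# represented as binned counts for 96 neurons; "real" data here is random
# Poisson noise standing in for recorded spike trains).
import torch
import torch.nn as nn

N_NEURONS, LATENT, BATCH = 96, 32, 64

generator = nn.Sequential(
    nn.Linear(LATENT, 128), nn.ReLU(),
    nn.Linear(128, N_NEURONS), nn.Softplus(),   # non-negative "spike counts"
)
discriminator = nn.Sequential(
    nn.Linear(N_NEURONS, 128), nn.ReLU(),
    nn.Linear(128, 1),                          # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # Placeholder for one session of recorded spike counts.
    real = torch.poisson(torch.full((BATCH, N_NEURONS), 3.0))

    # 1) Train the discriminator to tell real bins from generated ones.
    fake = generator(torch.randn(BATCH, LATENT)).detach()
    loss_d = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake), torch.zeros(BATCH, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator (the "trial and error").
    fake = generator(torch.randn(BATCH, LATENT))
    loss_g = bce(discriminator(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Once trained, synthetic spike data can be sampled in unlimited amounts.
synthetic_spikes = generator(torch.randn(1000, LATENT))
```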
In a study demonstrating the potential, the researchers trained a deep-learning spike synthesizer on a single session of data recorded from a monkey reaching for an object. They then used the synthesizer to generate large amounts of similar (‘fake’) neural data.
The researchers next combined the synthesized data with small amounts of new real data — either from the same monkey on a different day, or from a different monkey — to train a BCI.
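Conceptually, that combination step can be pictured as follows: the GAN output supplies the bulk of the training set, while a small real sample anchors the decoder to the new session or animal. The shapes and the simple least-squares decoder here are stand-ins, not the study’s method.

```python
# Hypothetical sketch of mixing a large synthetic set with a small amount
# of newly recorded real data before fitting the BCI decoder.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 96

# Large synthetic set from the trained generator (random stand-in values).
synth_spikes = rng.poisson(3.0, size=(20_000, n_neurons)).astype(float)
synth_kinematics = rng.normal(size=(20_000, 2))

# Small amount of real data from a new day or a different animal.
real_spikes = rng.poisson(3.0, size=(500, n_neurons)).astype(float)
real_kinematics = rng.normal(size=(500, 2))

# Train on the combined set: the synthetic bulk provides statistical
# structure, the small real sample adapts the decoder to the new session.
X = np.vstack([synth_spikes, real_spikes])
Y = np.vstack([synth_kinematics, real_kinematics])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
```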
By this measure of success, the GAN-synthesized neural data improved the BCI’s overall training speed by up to 20 times. This paves the way for further research toward an improved system for people with disabilities.
The research appears in Nature Biomedical Engineering, titled “Rapid adaptation of brain–computer interfaces to new neuronal ensembles or participants via generative modelling.”