
Traditionally, robots have been configured to perform tasks through explicit programming. They have been taught the intricacies of how humans communicate so that they can respond accordingly. This process is not only tedious but also highly error-prone. In safety-critical applications especially, the accuracy of robots is of paramount importance. This creates the need to control robots quickly and effectively, and brainwaves may be a way out.
MIT researchers are now working on reducing the errors robots make by supervising them with human brain and muscle activity. The system can detect in real time when a person notices the robot making an error, and correct the robot immediately based on hand gestures.
The work is reported in a paper from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), led by PhD candidate Joseph DelPreto along with Daniela Rus, Andres F Salazar-Gomez, Stephanie Gil, Ramin M Hasani, and Boston University Professor Frank H Guenther.
A Continuation Of Earlier Efforts
Traditionally, brain-controlled robotics required humans to think in a prescribed way so that computers could recognise their signals. This training process was quite challenging and often resulted in errors. In an effort to make the experience more natural, MIT researchers had earlier created a feedback system that allowed people to correct robot mistakes instantly, with nothing more than their brains.
That system used only an EEG monitor to record brain activity, detecting whether a person noticed an error as the robot performed a task.
While this work yielded commendable results, it was limited to simple binary-choice activities, and the system could not recognise secondary errors in real time.
What Are Researchers Doing Now?
To overcome the limitations of the previous model and expand its scope to multiple-choice tasks, the researchers have extended their approach to use EMG alongside EEG, the two bio-signals that capture electrical muscle and brain activity respectively.
In the experimental setup, an autonomous robot performs a task while a human supervisor, wearing an EEG cap and EMG electrodes, observes it. The human mentally judges whether the robot is performing the task correctly and uses hand gestures to correct the robot’s trajectory when necessary. An EEG classifier detects the presence of error-related brain signals, while an EMG classifier analyses muscle activity to identify the gestures.
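The paper’s actual implementation is not reproduced here, but the control flow it describes can be sketched in Python. In this minimal illustrative sketch, the stub classifiers and all names and thresholds (detect_error_potential, classify_gesture, supervise_step) are assumptions standing in for the trained EEG and EMG models:

```python
# Illustrative sketch only: the stubs below stand in for the trained
# EEG and EMG classifiers described in the paper; all names and
# thresholds are assumptions, not the authors' code.
import random


def detect_error_potential(eeg_window):
    """Hypothetical EEG classifier: flags an error-related brain
    signal, produced when the human notices the robot erring."""
    return sum(eeg_window) / len(eeg_window) > 0.5  # placeholder threshold


def classify_gesture(emg_window):
    """Hypothetical EMG classifier: maps muscle activity to a
    corrective hand gesture, or None if no gesture is detected."""
    activation = sum(emg_window) / len(emg_window)
    if activation > 0.6:
        return "right"
    if activation < 0.4:
        return "left"
    return None


def supervise_step(target, eeg_window, emg_window):
    """One supervision step: if the EEG channel signals an error,
    consult the EMG gesture channel to redirect the robot's target."""
    if detect_error_potential(eeg_window):
        gesture = classify_gesture(emg_window)
        if gesture == "left":
            return target - 1
        if gesture == "right":
            return target + 1
    return target  # no intervention: keep the robot's current target


# Toy trial: random samples stand in for live bio-signal streams.
eeg = [random.random() for _ in range(64)]
emg = [random.random() for _ in range(64)]
print(supervise_step(1, eeg, emg))
```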
The paper describes how a trial ends: “Once the robot reaches the selected target, either with or without intervention, it pauses briefly to indicate completion and then returns to its starting position. This concludes a single trial.” The experiment had a total of 40 trials spread across two hours.
“This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback,” said CSAIL Director Daniela Rus. “By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
EEG signals are not always reliably detectable, while EMG signals can be difficult to map to motions any more specific than “move left or right.” Merging the two, however, allows for more robust bio-sensing and produces more effective results.
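One simple way to picture this fusion, offered purely as a hedged sketch and not as the authors’ method: treat the noisy EEG error signal as a gate and the EMG gesture as the direction, so a correction is issued only when both channels fire. The function name and threshold below are assumptions:

```python
def fuse(eeg_error_confidence, emg_gesture, threshold=0.6):
    """Hypothetical fusion rule: issue a correction only when the EEG
    channel is confident an error occurred AND the EMG channel yields
    a concrete gesture. The 0.6 threshold is an assumed value."""
    if eeg_error_confidence >= threshold and emg_gesture is not None:
        return emg_gesture
    return None


print(fuse(0.8, "left"))  # 'left' -> both channels agree, correct the robot
print(fuse(0.4, "left"))  # None   -> EEG not confident an error occurred
print(fuse(0.9, None))    # None   -> no gesture detected, do nothing
```

Requiring agreement between two imperfect detectors is what makes the merged signal more robust than either channel alone.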
“What’s great about this approach is that there’s no need to train users to think in a prescribed way,” says DelPreto. “The machine adapts to you, and not the other way around.”
For the project, the team used “Baxter,” a humanoid robot from Rethink Robotics.
The Way Ahead
As the paper reports, there was an average of 156 trials per experiment, with a success rate of more than 90 percent. Choosing randomly, the robot picked the correct target in 69.5 percent of the trials; once EEG and EMG controls were applied, it chose the correct target in 97.3 percent of the trials.
The paper says:
“In most cases, the optimal number of gestures required was also detected by the system even though the subjects were not instructed to minimize gestures. The few cases in which the final target was incorrect were typically due to no gesture being detected.”
The experiments have been successful so far, and the system also works on people it has never seen before. This suggests it does not need to be trained on each user, and could therefore be deployed in real-world settings with minimal hassle.
The team hopes the system will one day be useful for the elderly and for workers with language disorders or limited mobility. By offering hope of overcoming the constraints of machines, this experiment also points toward more intuitive human-robot interaction in the future.