Whatever you think in your head can be translated into real words. Wondering how? Read on to find out.
The human mind is a complex thing; not every thought or reaction can be summarised and understood, and emotions make it more complex still. Not everybody is comfortable speaking their mind, and many people keep most things to themselves. But what if somebody told you your mind can be read, and that whatever you think in your head can be translated into real words?
Led by an Indian-origin student, MIT scientists have developed a wearable device named Alter Ego and an associated computing system that can actually transcribe the words that users say in their heads.
How does it read words in your head?
The device picks up the neuromuscular signals in the jaw and face that are triggered by internal verbalisations when a person says something in their head.
These signals are undetectable to the human eye, but the machine-learning system, which has been trained to correlate particular signals with particular words, can transcribe them.
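To make the idea concrete, here is a minimal, hypothetical sketch (not the MIT team's actual code) of how a trained model might map neuromuscular feature vectors to words. The nearest-centroid rule, the three-channel features, and all signal values are assumptions for illustration only:

```python
# Illustrative sketch: a nearest-centroid classifier that maps averaged
# "neuromuscular" feature vectors to subvocalised words.
# All signal values are made-up numbers for demonstration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled_signals):
    """labelled_signals: dict mapping word -> list of feature vectors."""
    return {word: centroid(vecs) for word, vecs in labelled_signals.items()}

def transcribe(model, signal):
    """Pick the word whose centroid is nearest to the incoming signal."""
    return min(model, key=lambda w: distance_sq(model[w], signal))

# Toy training data: three-channel feature vectors per subvocalised word.
training = {
    "yes": [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1]],
    "no":  [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}
model = train(training)
print(transcribe(model, [0.95, 0.15, 0.12]))  # nearest the "yes" centroid
```

In practice a system like this would use far richer signal processing, but the core step is the same: learn a mapping from signal patterns to vocabulary items, then pick the best match for each new signal.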
How does the device give words to internal speech?
Alter Ego includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they do not obstruct the ear canal, the headphones allow the system to convey information to the user without interrupting a conversation or otherwise interfering with the user's auditory experience.
The device is part of a complete silent-computing system that lets the user undetectably pose, and receive answers to, difficult computational problems.
The researchers built a prototype of the wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
The researchers said the device would allow a person to interact with computing devices without having to physically type into them.
However, subvocalisation as a computer interface is largely unexplored. The researchers' first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals.
The researchers conducted experiments in which participants were asked to subvocalise a series of words four times, with an array of 16 electrodes at different facial locations each time.
They wrote code to analyse the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalised words.
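As an illustration of what such an analysis might look like, this hypothetical sketch scores each electrode channel by how well it separates two subvocalised words (between-class distance over within-class spread) and keeps the top-scoring channels. The scoring rule and all readings are assumptions, not the researchers' published method:

```python
# Hypothetical electrode-selection step: rank channels by a Fisher-style
# separability score and keep the best ones. Readings are invented.
from statistics import mean, pstdev

def channel_score(word_a_readings, word_b_readings):
    """Separation of class means relative to within-class spread."""
    spread = pstdev(word_a_readings) + pstdev(word_b_readings)
    return abs(mean(word_a_readings) - mean(word_b_readings)) / (spread + 1e-9)

def select_channels(recordings_a, recordings_b, keep=7):
    """recordings_*: one list of repeated readings per electrode channel."""
    scores = [channel_score(a, b) for a, b in zip(recordings_a, recordings_b)]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:keep]

# Four channels, two trials each, for two different subvocalised words.
word_a = [[1.0, 1.1], [0.5, 0.5], [0.2, 0.8], [0.9, 1.0]]
word_b = [[0.1, 0.2], [0.5, 0.5], [0.3, 0.7], [0.0, 0.1]]
print(select_channels(word_a, word_b, keep=2))  # channels 0 and 3 separate best
```

With 16 channels and `keep=7`, the same procedure would mirror the narrowing-down from 16 candidate electrode locations to the seven reliable ones described above.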
They also compiled data on a few computational tasks with limited vocabularies -- about 20 words each.
One was arithmetic, in which the user subvocalised large addition or multiplication problems; another was a chess application, in which the user reported moves using the standard chess numbering system.
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customising the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations.
In that study, the system had an average transcription accuracy of about 92 per cent.
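For clarity, word-level transcription accuracy of the kind quoted above can be computed as the fraction of subvocalised words the system transcribes correctly; the example words below are invented:

```python
# Minimal sketch of word-level transcription accuracy: compare each
# transcribed word against what the user actually subvocalised.

def transcription_accuracy(reference, hypothesis):
    """Fraction of positions where the transcribed word matches the reference."""
    matches = sum(r == h for r, h in zip(reference, hypothesis))
    return matches / len(reference)

said = ["3", "plus", "5", "times", "2"]   # what the user subvocalised
heard = ["3", "plus", "5", "nine", "2"]   # what the system transcribed
print(transcription_accuracy(said, heard))  # 4 of 5 words correct -> 0.8
```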
However, the system's performance should improve with more training data, which could be collected during its ordinary use.