Researchers Create Sonar Glasses That Translate Silent Speech Into Text
Ruidong Zhang has developed EchoSpeech, sonar-equipped glasses that can "hear" their wearer's silent speech: slight facial articulation is enough, with no sound required.
The project builds on his earlier work, which mounted cameras on wireless earbuds. The glasses form factor proved more convenient: the user does not need to face a camera or wear anything in the ear. Data from the small speakers and microphones mounted on the frames is transmitted wirelessly to a smartphone, where artificial-intelligence algorithms process it.
On first use, EchoSpeech requires calibration so the AI can learn the features of the user's facial movements. A few minutes are enough, during which the person is asked, for example, to read out a few numbers. Once calibration is complete, the system's accuracy reaches 95%. Offloading processing to the smartphone keeps the glasses compact and unobtrusive, gives them up to 10 hours of battery life, and ensures that all information stays on the phone, whose performance is sufficient to process the data locally.
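The calibrate-then-recognize flow described above can be illustrated with a deliberately simplified sketch. This is not the EchoSpeech model (the real system uses a deep-learning pipeline over sonar echo data); it is a hypothetical nearest-centroid classifier over made-up "echo profile" feature vectors, just to show the idea of learning per-user reference patterns from a short calibration session and then matching new inputs against them. All function names and data here are illustrative assumptions.

```python
import math
from collections import defaultdict

# Hypothetical sketch only: calibration averages the echo profiles recorded
# while the user reads out known labels (e.g. digits); recognition then picks
# the label whose calibrated centroid is nearest to a new echo profile.

def calibrate(samples):
    """Build a per-label mean profile from (label, profile) calibration pairs."""
    sums = {}
    counts = defaultdict(int)
    for label, profile in samples:
        if label not in sums:
            sums[label] = [0.0] * len(profile)
        sums[label] = [s + x for s, x in zip(sums[label], profile)]
        counts[label] += 1
    return {label: [s / counts[label] for s in vec] for label, vec in sums.items()}

def recognize(model, profile):
    """Return the calibrated label whose centroid is closest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(model[label], profile))

# Toy calibration session: two labels, two noisy samples each.
samples = [
    ("one", [0.9, 0.1, 0.0]), ("one", [1.1, 0.0, 0.1]),
    ("two", [0.0, 0.9, 1.0]), ("two", [0.1, 1.1, 0.9]),
]
model = calibrate(samples)
print(recognize(model, [1.0, 0.05, 0.05]))  # matches the "one" centroid
```

The same structure also explains why a few minutes of calibration suffice: the system only needs enough samples per label to form stable reference patterns for that particular face.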