Earable Computing: Upcoming Technology


CSL's Systems and Networking Research Group (SyNRG) is defining a new area of mobile technology it calls "earable computing." The team believes that earphones will be the next major milestone in wearable devices, and that new hardware, software, and apps will all emerge in this space.

[Image: quantum-computer-3679893_1280.webp]

Instead, the group believes that future earphones will continuously sense human behavior, run acoustic augmented reality, let Alexa and Siri whisper just-in-time information, track user motion and health, and offer seamless security, among many other capabilities.

Research questions in earable computing draw from a wide range of fields, including acoustics, signal processing, embedded systems, communications, and machine learning. The SyNRG team is at the forefront of developing new algorithms while also experimenting with real earphone platforms and live users.

Computer science PhD student Zhijian Yang, along with other members of the SyNRG team, including fellow students Yu-Lin Wei and Liz Li, has been leading the way. They published a series of papers in the area, starting with one on the topic of noise cancellation published at ACM SIGCOMM 2018. Recently, the group had three earphone-related papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom): one on sensing facial expressions, one on acoustic augmented reality, and one on voice localization with earphones.

The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones can sense facial and in-mouth activities such as teeth movements and taps, enabling hands-free communication with smartphones. In addition, a variety of medical conditions manifest in teeth chattering, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team plans to focus on analyzing facial muscle movements and emotions through the earphone sensors.
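To make the idea more concrete, here is a minimal sketch of how a tooth-tap event might be picked out of an earphone's vibration signal using band-pass filtering and energy thresholding. This is only an illustration under assumed parameters (sampling rate, frequency band, thresholds), not the EarSense algorithm itself.

```python
# Illustrative sketch only: a generic tap-event detector over a vibration signal,
# not the EarSense method. The signal source (an earphone IMU or in-ear microphone),
# sampling rate, frequency band, and thresholds are all assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000  # assumed sampling rate of the earphone vibration channel (Hz)

def detect_tooth_taps(signal, fs=FS, band=(20.0, 200.0), threshold=3.0):
    """Return sample indices where short high-energy bursts (candidate taps) begin."""
    # Band-pass the raw signal to isolate the band where tap vibrations are assumed to live.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, signal)

    # Short-time energy over 20 ms windows.
    win = max(1, int(0.02 * fs))
    energy = np.convolve(filtered ** 2, np.ones(win) / win, mode="same")

    # Flag windows whose energy exceeds `threshold` times the median background level.
    background = np.median(energy) + 1e-12
    above = energy > threshold * background

    # Keep only rising edges so each burst is reported once.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Demo with synthetic data: low-level noise plus two injected "tap" bursts.
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(5 * FS)
for start in (FS, 3 * FS):
    x[start:start + 50] += np.hanning(50) * np.sin(2 * np.pi * 80 * np.arange(50) / FS)

print(detect_tooth_taps(x))  # expect detections near samples 1000 and 3000
```

A real system would of course need to separate taps from chewing, speech, and head motion, but the same pipeline shape (filter, compute short-time energy, threshold) is a common starting point for this kind of event detection.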

The third paper, Voice Localization Using Nearby Wall Reflections, investigates the use of algorithms to detect the direction a sound is coming from. This means that when Alice and Bob are talking, Bob's earphones would be able to tune into the direction Alice's voice is arriving from.
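As a rough illustration of what voice localization means, the sketch below estimates a sound's direction from the time difference of arrival between two microphones using cross-correlation, a textbook far-field approach. The microphone spacing, sampling rate, and test signal are assumptions, and this is not the wall-reflection algorithm described in the paper.

```python
# Illustrative sketch only: a textbook two-microphone direction-of-arrival estimate,
# not the paper's wall-reflection technique. Mic spacing, sampling rate, and the
# synthetic test signal are assumptions.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING = 0.18      # assumed distance between the two earphone microphones (m)
FS = 48000              # assumed audio sampling rate (Hz)

def estimate_direction(left, right, fs=FS, spacing=MIC_SPACING):
    """Estimate the source angle in degrees (0 = straight ahead) from two mic channels."""
    # Cross-correlate the channels; the peak lag is the time difference of arrival in samples.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # negative lag: sound reached the left mic first
    tdoa = lag / fs                           # seconds

    # Far-field model: sin(theta) = tdoa * c / spacing.
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Demo: delay the right channel by 5 samples, simulating a source slightly to the left.
rng = np.random.default_rng(1)
src = rng.standard_normal(FS // 10)
delay = 5
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])

print(f"estimated angle: {estimate_direction(left, right):.1f} degrees")  # negative = left of center
```

The interesting part of the actual research is doing this robustly with the few, closely spaced microphones available on earphones, which is where exploiting nearby wall reflections comes in.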

"We've been working on cell phones and computer use for 10 years," Wei said. "We have a lot of information to explain this emerging world of efficient computer."

Haitham Hassanieh, an assistant professor in ECE, was also involved in the study. The team is sponsored by NSF and NIH, as well as companies like Nokia and Google.
