Listen up: augmented reality is coming to your ears

Headphones and earpieces that filter out unwanted noise and deliver notifications straight to our ears are the future of augmented reality

Music is a joy - until you start learning how to play. Then - for most of us - it immediately becomes difficult and demoralising. Unless, that is, you practise with Amped, an app that algorithmically transforms bum notes into tuneful ones.

Created by Finnish-Swedish startup Zoundio, Amped deconstructs harmony and chord structure to blend users' playing with existing tracks. The first instrument it's designed for is the electric guitar. Plug in, follow the lessons, and even if the reality is hesitant or jarring, it sounds good through your headphones.


Mistakes on Amped aren't erased completely; when you miss a beat or a chord, you'll hear a discordant note. But because the overall result sounds good rather than bad, you hear what you wish you sounded like, so your motivation doesn't fall away. Using Amped feels counterintuitive, even faintly sinful. (After all, isn't learning meant to be hard?) But here's the thing: it works. In fact, it's great.
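Zoundio hasn't published how Amped actually works. As a rough, hypothetical sketch of the general idea, though, you can think of it as quantising each note you play to the nearest tone of the backing chord, while letting larger misses through so that mistakes stay audible. The chord, tolerance and note values below are purely illustrative:

```python
# Hypothetical sketch only: Zoundio has not published Amped's algorithm.
# Idea illustrated: snap a played MIDI pitch to the nearest tone of the
# current backing chord, but leave large misses audible so mistakes
# aren't erased completely.

C_MAJOR = {0, 4, 7}  # pitch classes of a C major chord (C, E, G)

def correct_pitch(played_midi: int, chord_pitch_classes=C_MAJOR, tolerance=1) -> int:
    """Return the corrected MIDI note, or the original if it is too far off."""
    # Chord tones in the octaves around the played note.
    candidates = [
        octave * 12 + pc
        for octave in range(played_midi // 12 - 1, played_midi // 12 + 2)
        for pc in chord_pitch_classes
    ]
    nearest = min(candidates, key=lambda note: abs(note - played_midi))
    # Within tolerance: blend into the track. Beyond it: keep the bum note.
    return nearest if abs(nearest - played_midi) <= tolerance else played_midi

print(correct_pitch(61))  # C#4 (61) snaps to C4 (60)
print(correct_pitch(58))  # A#3 (58) is two semitones from any chord tone: left audible
```

A real system would of course work on raw audio rather than MIDI note numbers, and would track the chord as the song progresses.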

Augmented reality is most often described as a digital overlay on physical reality. Its true promise, however, is not technical, but sensory. As its root meaning of "increase" or "expand" suggests, augmentation digitises our senses, giving us virtual powers in the physical world. From Google Glass to Microsoft's HoloLens, the focus so far has been on vision. Now augmented aural reality (AAR) is here and, in the near future, ready to use.

Take Nura, a pair of "in-ear and over-ear" headphones that, in summer 2016, raised $1.88 million (£1.3m) on Kickstarter. Hearing varies vastly from person to person; by measuring the minute vibrations of the inner ear (an adaptation of the test used to screen babies for deafness), Nura adjusts the audio it plays to fit each wearer's earprint. They are, in effect, a hearing aid for the hearing.
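Nura's processing is proprietary, but the broad principle is a personalised equaliser: boost the frequency bands a particular listener's ears respond to weakly. A minimal sketch, with an entirely made-up earprint:

```python
# Hypothetical sketch: Nura's actual processing is proprietary. The broad idea
# is a personalised equaliser - boost the frequency bands a listener's inner
# ear responds to weakly, as measured by its vibration test.
import numpy as np

def apply_earprint(samples: np.ndarray, sample_rate: int, earprint: dict) -> np.ndarray:
    """Scale each frequency band of a mono signal by the wearer's gain curve."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    gains = np.ones_like(freqs)
    for (low_hz, high_hz), gain in earprint.items():
        gains[(freqs >= low_hz) & (freqs < high_hz)] = gain
    return np.fft.irfft(spectrum * gains, n=len(samples))

# A made-up earprint: this wearer hears treble poorly, so boost it.
earprint = {(20, 500): 1.0, (500, 4000): 1.1, (4000, 20000): 1.6}
tone = np.sin(2 * np.pi * 6000 * np.arange(48000) / 48000)  # 6 kHz test tone
boosted = apply_earprint(tone, 48000, earprint)
print(round(boosted.max() / tone.max(), 2))  # roughly 1.6x louder
```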

Other aural AR devices go further than correction. San Francisco-based Doppler Labs' 20p-sized wireless Here One earbuds can isolate specific sounds, allowing the listener to cancel out the noise of a building site, for instance, or focus on a conversation in a busy restaurant, a process Doppler calls adaptive filtering.
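Doppler hasn't detailed its filters, but a classic textbook building block for this kind of adaptive filtering is the least-mean-squares (LMS) algorithm, which learns to subtract a reference noise signal from the sound reaching the ear. A minimal sketch, with synthetic stand-ins for the voice and the noise:

```python
# Hypothetical sketch: Doppler Labs has not published its filters. A standard
# building block for adaptive filtering is the least-mean-squares (LMS)
# algorithm, which learns to subtract a reference noise signal (e.g. the drone
# of a building site) from the sound reaching the ear.
import numpy as np

def lms_cancel(primary: np.ndarray, noise_ref: np.ndarray,
               taps: int = 32, mu: float = 0.01) -> np.ndarray:
    """Return the primary signal with the correlated noise adaptively removed."""
    weights = np.zeros(taps)
    output = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        window = noise_ref[n - taps:n][::-1]   # most recent reference samples
        estimate = weights @ window            # filter's guess of the noise
        error = primary[n] - estimate          # what's left: the wanted sound
        weights += 2 * mu * error * window     # nudge weights to reduce error
        output[n] = error
    return output

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 8000))  # stand-in for a voice
noise = rng.normal(size=8000)
mixed = speech + 0.8 * noise                               # what the mic hears
cleaned = lms_cancel(mixed, noise)
print(np.std(mixed - speech) > np.std(cleaned - speech))   # True: less noise left
```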

The company's previous earbuds, Here Active Listening, were designed for live music and had to be adjusted via smartphone, but even so, when I tried them, the effect was impressive: they picked up the bass in a badly equipped club far better than natural hearing. Shipping in February after delays, Here One promises to go a step further. "They're actually meant for everywhere, not just places where you'd wear headphones," says Doppler Labs CEO and co-founder Noah Kraft.

This technology's insight is psychological, or, to be more precise, psychoacoustic. The study of sound perception has long aided technologists: it's what allowed the designers of the MP3 to strip out most of a recording's data while leaving a sound that's coherent to the human ear. But rapid gains in computing power and understanding are driving fresh advances in the field. "Psychoacoustics helps us detangle what's happening in the physical world from what's in our minds," says Nura co-founder Kyle Slater. Case in point: his doctoral work on hearing impairments, which led to the realisation that headphones could communicate through touch as well as sound.
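The perceptual trick the MP3 leans on is masking: a loud tone drowns out quieter components at nearby frequencies, so a coder can throw those components away without the ear noticing. A toy illustration of that idea (nothing like the real codec):

```python
# Toy illustration, not the real MP3 codec: a loud tone "masks" quieter tones
# at nearby frequencies, so a perceptual coder can discard the masked ones.
import numpy as np

SAMPLE_RATE = 8000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

def is_masked(signal: np.ndarray, probe_hz: float,
              mask_ratio: float = 0.1, bandwidth_hz: float = 200.0) -> bool:
    """True if the component at probe_hz is drowned out by a louder neighbour."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    probe = spectrum[np.argmin(np.abs(freqs - probe_hz))]
    nearby = spectrum[np.abs(freqs - probe_hz) < bandwidth_hz]
    return probe < mask_ratio * nearby.max()

loud = np.sin(2 * np.pi * 440 * t)                 # loud 440 Hz tone
quiet_near = 0.02 * np.sin(2 * np.pi * 500 * t)    # quiet tone 60 Hz away
quiet_far = 0.02 * np.sin(2 * np.pi * 2000 * t)    # quiet tone far away
signal = loud + quiet_near + quiet_far

print(is_masked(signal, 500))    # True: masked by the loud 440 Hz tone, can be dropped
print(is_masked(signal, 2000))   # False: nothing nearby masks it, must be kept
```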

Using psychoacoustics in software is very much a work in progress. "Understanding acoustic signals is still at the beginning," says Michael Breidenbrücker, co-founder of Last.fm, who has been working on augmented aural products since the 1990s. "There are very basic acoustic problems we still don't understand." Computers can be trained to see that a person standing behind a car is distinct from the vehicle, but when the same overlap occurs acoustically - a bell ringing over speech, say - the system is flummoxed. "We are using simple algorithms," says Breidenbrücker.

Even with these shortcomings, augmented headsets have a huge advantage when it comes to wearability. AR glasses have not changed fundamentally since Google's "glassholes" were assaulted in public in the summer of 2014. By contrast, if you're reading this at work or in a public place, chances are you'll be surrounded by headphone-wearing people leaking muffled beats. We are less precious about our ears. Consider this: humans actually wore Bluetooth headsets in public.


The point of AAR, its proponents say, is mindfulness. "It's about being present," says Kraft. "We think about it as a way in which we can optimise the world." But AAR is also a commercial effort, in which the ear is described as "the next platform in computing". Kraft discusses targeted promotions for opted-in mall shoppers, or announcements directed at certain sections of stadia. "The ear is the last mile to the brain," notes Breidenbrücker. "Whoever owns that last mile will be in a very powerful position."

With AI voice assistants - Siri, Google Assistant, Amazon's Alexa et al - increasing in power and prominence, the prospect of controlling our devices by voice alone grows closer. Imagine: then we'll be distracted not by our phones but by our headsets, pushing a stream of notifications directly into our ears.

Should in-ear alerts prove too disruptive even for Silicon Valley, the mere act of acoustic filtering is still a dystopian prospect. Listening is democratic, because it is in most cases passive; it takes what it is given without care or favour. AAR presents a processed version of the world, loud noises sanded down, strident voices washed away. Social media already cuts us off from different social and political views. Will AR extend this principle into the physical world? Hear no evil, then - once the glasses arrive - see no evil?

Perhaps. But AAR brings possibilities as well as worries. One augmented aural device due to arrive in 2017 promises translation in near real time. That's the claim of Waverly Labs, a New York-based startup that raised more than $2 million on Indiegogo for Pilot, its language decoder. The questions surrounding the device - there have been no live tests, and many in the industry doubt it will ever come to market - suggest such assertions should be taken with a pinch of salt, but the prospect is there all the same.
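Waverly Labs hasn't described Pilot's internals. Generically, though, any in-ear translator chains three stages - speech recognition, machine translation and speech synthesis - and each one adds latency, which is why "near real time" is the honest claim. A skeleton of that pipeline, with deliberately toy stand-ins for every stage:

```python
# Hypothetical sketch: Waverly Labs has not described Pilot's internals.
# A generic in-ear translator chains speech recognition, machine translation
# and speech synthesis. The stages below are toy stand-ins, not real models.
from dataclasses import dataclass

@dataclass
class AudioChunk:
    text_heard: str     # stand-in for a buffer of microphone samples

def recognise(chunk: AudioChunk) -> str:
    """Stub ASR: a real system would decode audio into source-language text."""
    return chunk.text_heard

def translate(text: str, source: str, target: str) -> str:
    """Stub MT: a real system would call a translation model."""
    toy_dictionary = {("es", "en"): {"hola": "hello", "amigo": "friend"}}
    words = [toy_dictionary[(source, target)].get(w, w) for w in text.split()]
    return " ".join(words)

def synthesise(text: str) -> bytes:
    """Stub TTS: a real system would render audio for the earpiece."""
    return text.encode("utf-8")

def pipeline(chunk: AudioChunk, source: str = "es", target: str = "en") -> bytes:
    return synthesise(translate(recognise(chunk), source, target))

print(pipeline(AudioChunk("hola amigo")))   # b'hello friend'
```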

"Real-time translation will happen in the next two to three years," says Kraft. "This can be done," adds Breidenbrücker. Once it is, we'll all be able to speak to each other. Hopefully we'll like what we hear.

This article was originally published by WIRED UK