My Bionic Quest for Boléro

He's been haunted by Ravel's masterpiece since he lost his hearing. A deaf man's pursuit of the perfect audio upgrade.

With one listen, I was hooked. I was a 15-year-old suburban New Jersey nerd, racked with teenage lust but too timid to ask for a date. When I came across Boléro among the LPs in my parents' record collection, I put it on the turntable. It hit me like a neural thunderstorm, titanic and glorious, each cycle building to a climax and waiting but a beat before launching into the next.

I had no idea back then of Boléro's reputation as one of the most famous orchestral recordings in the world. When it was first performed at the Paris Opera in 1928, the 15-minute composition stunned the audience. Of the French composer, Maurice Ravel, a woman in attendance reportedly cried out, "He's mad … he's mad!" One critic wrote that Boléro "departs from a thousand years of tradition."

I sat in my living room alone, listening. Boléro starts simply enough, a single flute accompanied by a snare drum: da-da-da-dum, da-da-da-dum, dum-dum, da-da-da-dum. The same musical clause repeats 17 more times, each cycle adding instruments, growing louder and more insistent, until the entire orchestra roars in an overpowering finale of rhythm and sound. Musically, it was perfect for my ear. It had a structure that I could easily grasp and enough variation to hold my interest.

It took a lot to hold my interest; I was nearly deaf at the time. In 1964, my mother contracted rubella while pregnant with me. Hearing aids allowed me to understand speech well enough, but most music was lost on me. Boléro was one of the few pieces I actually enjoyed. A few years later, I bought the CD and played it so much it eventually grew pitted and scratched. It became my touchstone. Every time I tried out a new hearing aid, I'd check to see if Boléro sounded OK. If it didn't, the hearing aid went back.

And then, on July 7, 2001, at 10:30 am, I lost my ability to hear Boléro - and everything else. While I was waiting to pick up a rental car in Reno, I suddenly thought the battery in my hearing aid had died. I replaced it. No luck. I switched hearing aids. Nothing.

I got into my rental car and drove to the nearest emergency room. For reasons that are still unknown, my only functioning ear had suffered "sudden-onset deafness." I was reeling, trying to navigate in a world where the volume had been turned down to zero.

But there was a solution, a surgeon at Stanford Hospital told me a week later, speaking slowly so I could read his lips. I could have a computer surgically installed in my skull. A cochlear implant, as it is known, would trigger my auditory nerves with 16 electrodes that snaked inside my inner ear. It seemed drastic, and the $50,000 price tag was a dozen times more expensive than a high-end hearing aid. I went home and cried. Then I said yes.

For the next two months, while awaiting surgery, I was totally deaf except for a thin trickle of sound from my right ear. I had long since become accustomed to not hearing my own voice when I spoke. It happened whenever I removed my hearing aid. But that sensation was as temporary as waking up without my glasses. Now, suddenly, the silence wasn't optional. At my job as a technical writer in Silicon Valley, I struggled at meetings. Using the phone was out of the question.

In early September, the surgeon drilled a tunnel through an inch and a half of bone behind my left ear and inserted the 16 electrodes along the auditory nerve fibers in my cochlea. He hollowed a well in my skull about the size of three stacked quarters and snapped in the implant.

When the device was turned on a month after surgery, the first sentence I heard sounded like "Zzzzzz szz szvizzz ur brfzzzzzz?" My brain gradually learned how to interpret the alien signal. Before long, "Zzzzzz szz szvizzz ur brfzzzzzz?" became "What did you have for breakfast?" After months of practice, I could use the telephone again, even converse in loud bars and cafeterias. In many ways, my hearing was better than it had ever been. Except when I listened to music.

I could hear the drums of Boléro just fine. But the other instruments were flat and dull. The flutes and soprano saxophones sounded as though someone had clapped pillows over them. The oboes and violins had become groans. It was like walking color-blind through a Paul Klee exhibit. I played Boléro again and again, hoping that practice would bring it, too, back to life. It didn't.

The implant was embedded in my head; it wasn't some flawed hearing aid I could just send back. But it was a computer. Which meant that, at least in theory, its effectiveness was limited only by the ingenuity of software engineers. As researchers learn more about how the ear works, they continually revise cochlear implant software. Users await new releases with all the anticipation of Apple zealots lining up for the latest Mac OS.

About a year after I received the implant, I asked one implant engineer how much of the device's hardware capacity was being used. "Five percent, maybe." He shrugged. "Ten, tops."

I was determined to use that other 90 percent. I set out on a crusade to explore the edges of auditory science. For two years I tugged on the sleeves of scientists and engineers around the country, offering myself as a guinea pig for their experiments. I wanted to hear Boléro again.

Helen Keller famously said that if she had to choose between being deaf and being blind, she'd be blind, because while blindness cut her off from things, deafness cut her off from people. For centuries, the best available hearing aid was a horn, or ear trumpet, which people held to their ears to funnel in sound. In 1952, the first transistor hearing aid was developed. It worked by blasting amplified sound into a damaged ear. However, it (and the more advanced models that followed) could help only if the user had some residual hearing ability, just as glasses can help only those who still have some vision. Cochlear implants, on the other hand, bypass most of the ear's natural hearing mechanisms. The device's electrodes directly stimulate nerve endings in the ear, which transmit sound information to the brain. Since the surgery can eliminate any remaining hearing, implants are approved for use only in people who can't be helped by hearing aids. The first modern cochlear implants went on the market in Australia in 1982, and by 2004 approximately 82,500 people worldwide had been fitted with one.

When technicians activated my cochlear implant in October 2001, they gave me a pager-sized processor that decoded sound and sent it to a headpiece that clung magnetically to the implant underneath my skin (see sidebar: "Reprogramming the Inner Ear"). The headpiece contained a radio transmitter, which sent the processor's data to the implant at roughly 1 megabit per second. Sixteen electrodes curled up inside my cochlea strobed on and off to stimulate my auditory nerves. The processor's software gave me eight channels of auditory resolution, each representing a frequency range. The more channels the software delivers, the better the user can distinguish between sounds of different pitches.

Eight channels isn't much compared with the capacity of a normal ear, which has the equivalent of 3,500 channels. Still, eight works well enough for speech, which doesn't have much pitch variation. Music is another story. The lowest of my eight channels captured everything from 250 hertz (about middle C on the piano) to 494 hertz (close to the B above middle C), making it nearly impossible for me to distinguish among the 11 notes in that range. Every note that fell into a particular channel sounded the same to me.
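
The arithmetic is easy to verify. Here's a minimal sketch in Python of the kind of banding involved; the 250-to-494 hertz band is the one from my own frequency map, while the higher band edges are illustrative guesses rather than the implant's real table.

# A minimal sketch of why a coarse filter bank collapses pitch:
# every frequency falling inside one band drives the same electrode.
# The 250-494 Hz band is the one described above; the higher band
# edges are illustrative guesses, not the implant's actual map.

BAND_EDGES = [250, 494, 975, 1925, 3800, 7500]  # Hz (partial, illustrative)

def channel_for(freq_hz):
    """Return the index of the band a frequency falls into, or None."""
    for ch in range(len(BAND_EDGES) - 1):
        if BAND_EDGES[ch] <= freq_hz < BAND_EDGES[ch + 1]:
            return ch
    return None

# Equal-tempered notes from middle C (C4) up to the B above it:
note_names = ["C4", "C#4", "D4", "D#4", "E4", "F4",
              "F#4", "G4", "G#4", "A4", "A#4", "B4"]
for i, name in enumerate(note_names):
    freq = 261.63 * 2 ** (i / 12)
    print(f"{name:>4} {freq:7.1f} Hz -> channel {channel_for(freq)}")
# Every line prints "channel 0": all of these notes land on the same
# electrode and arrive as one indistinguishable pitch.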

So in mid-2002, nine months after activation, I upgraded to a program called Hi-Res, which gave me 16 channels - double the resolution! An audiologist plugged my processor into her laptop and uploaded the new code. I suddenly had a better ear, without surgery. In theory, I would now be able to distinguish among tones five notes apart instead of 11.
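
The claim checks out with a little math: doubling the channel count splits each band at its geometric midpoint, roughly halving its width in musical terms. A quick sketch, using the lowest band from above:

import math

def semitone_span(lo_hz, hi_hz):
    """Width of a frequency band, measured in equal-tempered semitones."""
    return 12 * math.log2(hi_hz / lo_hz)

print(round(semitone_span(250, 494), 1))   # ~11.8 semitones in one band

mid = math.sqrt(250 * 494)                 # geometric midpoint, ~351 Hz
print(round(semitone_span(250, mid), 1))   # ~5.9 semitones
print(round(semitone_span(mid, 494), 1))   # ~5.9 semitones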

I eagerly plugged my Walkman into my processor and turned it on. Boléro did sound better. But after a day or two, I realized that "better" still wasn't good enough. The improvement was small, like being in that art gallery again and seeing only a gleam of pink here, a bit of blue there. I wasn't hearing the Boléro I remembered.

At a cochlear implant conference in 2003, I heard Jay Rubinstein, a surgeon and researcher at the University of Washington, say that it took at least 100 channels of auditory information to make music pleasurable. My jaw dropped. No wonder. I wasn't even close.

A year later, I met Rubinstein at another conference, and he mentioned that there might be ways to bring music back to me. He told me about something called stochastic resonance; studies suggested that my music perception might be aided by deliberately adding noise to what I hear. He took a moment to give me a lesson in neural physiology. After a neuron fires, it goes dormant for a fraction of a second while it resets. During that phase, it misses any information that comes along. When an electrode zaps thousands of neurons at once, it forces them all to go dormant, making it impossible for them to receive pulses until they reset. That synchrony means I miss bits and pieces of information.

Desynchronizing the neurons, Rubinstein explained, would guarantee that they're never all dormant simultaneously. And the best way to get them out of sync is to beam random electrical noise at them. A few months later, Rubinstein arranged a demonstration.
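
The underlying principle can be shown in miniature with a toy simulation - not Rubinstein's algorithm, just the textbook stochastic-resonance effect, with made-up parameters. A detector that fires only above a fixed threshold misses a faint signal entirely; add a little random noise, and the signal's peaks start crossing the line.

import math
import random

THRESHOLD = 1.0
random.seed(42)

def threshold_crossings(noise_sigma, samples=10_000):
    """Count how often a faint sine wave plus noise exceeds the threshold."""
    count = 0
    for t in range(samples):
        signal = 0.8 * math.sin(2 * math.pi * t / 100)  # never reaches 1.0
        if signal + random.gauss(0, noise_sigma) > THRESHOLD:
            count += 1
    return count

for sigma in (0.0, 0.2, 0.5):
    print(f"noise sigma = {sigma}: {threshold_crossings(sigma)} crossings")
# With no noise the detector stays silent; with a little noise it fires
# mostly near the sine wave's peaks, so the faint rhythm gets through.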

An audiologist at the University of Iowa working with Rubinstein handed me a processor loaded with the stochastic-resonance software. The first thing I heard was a loud whoosh - the random noise. It sounded like a cranked-up electric fan. But in about 30 seconds, the noise went away. I was puzzled. "You've adapted to it," the technician told me. The nervous system can habituate to any kind of everyday sound, but it adjusts especially quickly to noise with no variation. Stochastic-resonance noise is so content-free that the brain tunes it out in seconds.

In theory, the noise would add just enough energy to incoming sound to make faint details audible. In practice, everything I heard became rough and gritty. My own voice sounded vibrato, mechanical, and husky - even a little querulous, as if I were perpetually whining.

We tried some quick tests to take my newly programmed ear out for a spin. It performed slightly better in some ways, slightly worse in others - but there was no dramatic improvement. The audiologist wasn't surprised. She told me that, in most cases, a test subject's brain will take weeks or even months to make sense of the additional information. Furthermore, the settings she chose were only an educated guess at what might work for my particular physiology. Everyone is different. Finding the right setting is like fishing for one particular cod in the Atlantic.

The university loaned me the processor to test for a few months. As soon as I was back in the hotel, I tried my preferred version of Boléro, a 1982 recording conducted by Charles Dutoit with the Montréal Symphony Orchestra. It sounded different, but not better. Sitting at my keyboard, I sighed a little and tapped out an email thanking Rubinstein and encouraging him to keep working on it.

Music depends on low frequencies for its richness and mellowness. The lowest-pitched string on a guitar vibrates at 83 hertz, but my Hi-Res software, like the eight-channel model, bottoms out at 250 hertz. I do hear something when I pluck a string, but it's not actually an 83-hertz sound. Even though the string is vibrating 83 times per second, portions of it are vibrating faster, giving rise to higher-frequency notes called harmonics. The harmonics are what I hear.
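
The arithmetic of what survives is simple. A sketch, assuming an ideal string whose harmonics sit at whole-number multiples of the fundamental:

FUNDAMENTAL = 83   # Hz, the lowest string on a guitar
FLOOR = 250        # Hz, where my software's coverage begins

harmonics = [FUNDAMENTAL * n for n in range(1, 9)]
audible = [f for f in harmonics if f >= FLOOR]
print(harmonics)   # [83, 166, 249, 332, 415, 498, 581, 664]
print(audible)     # [332, 415, 498, 581, 664]
# Even the third harmonic, at 249 Hz, falls just under the floor;
# only the fourth and above get through.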

The engineers haven't gone below 250 hertz because the world's low-pitched sounds - air conditioners, engine rumbles - interfere with speech perception. Furthermore, increasing the total frequency range means decreasing resolution, because each channel has to accommodate more frequencies. Since speech perception has been the main goal during decades of research, the engineers haven't given much thought to representing low frequencies. Until Philip Loizou came along.

Loizou and his team of postdocs at the University of Texas at Dallas are trying to figure out ways to give cochlear implant users access to more low frequencies. A week after my frustratingly inconclusive encounter with stochastic resonance, I traveled to Dallas and asked Loizou why the government would give him a grant to develop software that increases musical appreciation. "Music lifts up people's spirits, helps them forget things," he told me in his mild Greek accent. "The goal is to have the patient live a normal life, not to be deprived of anything."

Loizou was trying to negotiate a trade-off: narrowing the low-frequency channels while widening the higher-frequency ones. But his theories only hinted at which specific configurations might work best, so he was systematically testing a range of settings to see which gave the best results.

The team's software ran only on a desktop computer, so on my visit to Dallas I had to be plugged directly into the machine. After a round of testing, a postdoc assured me, they would run Boléro through their software and pipe it into my processor via Windows Media Player.

I spent two and a half days hooked up to the computer, listening to endless sequences of tones - none of it music - in a windowless cubicle. Which of two tones sounded lower? Which of two versions of "Twinkle, Twinkle, Little Star" was more recognizable? Did this string of notes sound like a march or a waltz? It was exacting, high-concentration work - like taking an eye exam that lasted for two days. My responses produced reams of data that they would spend hours analyzing.

Forty minutes before my cab back to the airport was due, we finished the last test and the postdoc fired up the programs he needed to play Boléro. Some of the lower pitches I'd heard in the previous two days had sounded rich and mellow, and I began thinking wistfully about those bassoons and oboes. I felt a rising sense of anticipation and hope.

I waited while the postdoc tinkered with the computer. And waited. Then I noticed the frustrated look of a man trying to get Windows to behave. "I do this all the time," he said, half to himself. Windows Media Player wouldn't play the file.

I suggested rebooting and sampling Boléro through a microphone. But the postdoc told me he couldn't do that in time for my plane. A later flight wasn't an option; I had to be back in the Bay Area. I was crushed. I walked out of the building with my shoulders slumped. Scientifically, the visit was a great success. But for me, it was a failure. On the flight home, I plugged myself into my laptop and listened sadly to Boléro with Hi-Res. It was like eating cardboard.

It's June 2005, a few weeks after my visit to Dallas, and I'm ready to try again. A team of engineers at Advanced Bionics, one of three companies in the world that make bionic ears, is working on a new software algorithm for so-called virtual channels. I hop on a flight to their Los Angeles headquarters, my CD player in hand.

My implant has 16 electrodes, but the virtual-channels software will make my hardware act as though there were actually 121. Manipulating the flow of electricity to target neurons between adjacent electrodes creates the illusion of seven new electrodes in each gap, similar to the way an audio engineer can make a sound appear to emanate from between two speakers. Jay Rubinstein had told me two years ago that it would take at least 100 channels to create good music perception. I'm about to find out if he's right.
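
Here's a rough sketch of the counting behind that number, assuming the textbook current-steering scheme in which two adjacent electrodes fire together with complementary current fractions. The eight-step subdivision is what gets from 16 electrodes to 121 sites; Advanced Bionics' real algorithm is surely more involved.

ELECTRODES = 16
STEPS = 8   # subdivisions of each gap: 1 real site + 7 virtual ones

sites = []
for e in range(ELECTRODES - 1):      # each adjacent pair (e, e+1)
    for step in range(STEPS):
        alpha = step / STEPS         # fraction of current on electrode e+1
        # Weights (1 - alpha, alpha) steer the perceived stimulation
        # site from electrode e toward electrode e+1.
        sites.append((e, e + 1, 1 - alpha, alpha))
sites.append((ELECTRODES - 1, None, 1.0, 0.0))   # the last real electrode

print(len(sites))   # 121 addressable stimulation sites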

I'm sitting across a desk from Gulam Emadi, an Advanced Bionics researcher. He and an audiologist are about to fit me with the new software. Leo Litvak, who has spent three years developing the program, comes in to say hello. He's one of those people of whom others often say, "If Leo can't do it, it probably can't be done." And yet it would be hard to find a more modest person. Were it not for his clothes, which mark him as an Orthodox Jew, he would simply disappear in a roomful of people. Litvak tilts his head and smiles hello, shyly glances at Emadi's laptop, and sidles out.

At this point, I'm rationing my emotions like Spock. Hi-Res was a disappointment. Stochastic resonance remains a big if. The low-frequency experiment in Dallas was a bust. Emadi dinks with his computer and hands me my processor with the new software in it. I plug it into myself, plug my CD player into it, and press Play.

Boléro starts off softly and slowly, meandering like a breeze through the trees. Da-da-da-dum, da-da-da-dum, dum-dum, da-da-da-dum. I close my eyes to focus, switching between Hi-Res and the new software every 20 or 30 seconds by thumbing a blue dial on my processor.

My God, the oboes d'amore do sound richer and warmer. I let out a long, slow breath, coasting down a river of sound, waiting for the soprano saxophones and the piccolos. They'll come in around six minutes into the piece - and it's only then that I'll know if I've truly got it back.

As it turns out, I couldn't have chosen a better piece of music for testing new implant software. Some biographers have suggested that Boléro's obsessive repetition is rooted in the neurological problems Ravel had started to exhibit in 1927, a year before he composed the piece. It's still up for debate whether he had early-onset Alzheimer's, a left-hemisphere brain lesion, or something else.

But Boléro's obsessiveness, whatever its cause, is just right for my deafness. Over and over the theme repeats, allowing me to listen for specific details in each cycle.

At 5:59, the soprano saxophones leap out bright and clear, arcing above the snare drum. I hold my breath.

At 6:39, I hear the piccolos. For me, the stretch between 6:39 and 7:22 is the most Boléro of Boléro, the part I wait for each time. I concentrate. It sounds … right.

Hold on. Don't jump to conclusions. I backtrack to 5:59 and switch to Hi-Res. That heart-stopping leap has become an asthmatic whine. I backtrack again and switch to the new software. And there it is again, that exultant ascent. I can hear Boléro's force, its intensity and passion. My chin starts to tremble.

I open my eyes, blinking back tears. "Congratulations," I say to Emadi. "You have done it." And I reach across the desk with absurd formality and shake his hand.

There's more technical work to do, more progress to be made, but I'm completely shattered. I keep zoning out and asking Emadi to repeat things. He passes me a box of tissues. I'm overtaken by a vast sensation of surprise. I did it. For years I pestered researchers and asked questions. Now I'm running 121 channels and I can hear music again.

That evening, in the airport, sitting numbly at the gate, I listen to Boléro again. I'd never made it through more than three or four minutes of the piece on Hi-Res before getting bored and turning it off. Now, I listen to the end, following the narrative, hearing again its holy madness.

I pull out the Advanced Bionics T-shirt that the team gave me and dab at my eyes.

During the next few days I walk around in a haze of disbelief, listening to Boléro over and over to prove to myself that I really am hearing it again. But Boléro is just one piece of music. Jonathan Berger, head of Stanford's music department, tells me in an email, "There's not much of interest in terms of structure - it's a continuous crescendo, no surprises, no subtle interplay between development and contrast."

So now it's time to try out music with sophistication, innovation, grace, and depth. But I don't know where to begin. I need an expert with first-rate equipment, a huge music collection, and the ability to pick just the right pieces for my newly reprogrammed ear. I put the question to craigslist - "Looking for a music geek." Within hours, I hear from Tom Rettig, a San Francisco music producer.

In his studio, Rettig plays me Ravel's String Quartet in F Major and Philip Glass' String Quartet no. 5. I listen carefully, switching between the old software and the new. Both compositions sound enormously better on 121 channels. But when Rettig plays music with vocals, I discover that having 121 channels hasn't solved all my problems. While the crescendos in Dulce Pontes' Canção do Mar sound louder and clearer, I hear only white noise when her voice comes in. Rettig figures that relatively simple instrumentals are my best bet - pieces where the instruments don't overlap too much - and that flutes and clarinets work well for me. Cavalcades of brass tend to overwhelm me and confuse my ear.

And some music just leaves me cold: I can't even get through Kraftwerk's Tour de France. I wave impatiently to Rettig to move on. (Later, a friend tells me it's not the software - Kraftwerk is just dull. It makes me think that for the first time in my life I might be developing a taste in music.)

Listening to Boléro more carefully in Rettig's studio reveals other bugs. The drums sound squeaky - how can drums squeak? - and in the frenetic second half of the piece, I still have trouble separating the instruments.

After I get over the initial awe of hearing music again, I discover that it's harder for me to understand ordinary speech than it was before I went to virtual channels. I report this to Advanced Bionics, and my complaint is met by a rueful shaking of heads. I'm not the first person to say that, they tell me. The idea of virtual channels is a breakthrough, but the technology is still in the early stages of development.

But I no longer doubt that incredible things can be done with that unused 90 percent of my implant's hardware capacity. Tests conducted a month after my visit to Advanced Bionics show that my ability to discriminate among notes has improved considerably. With Hi-Res, I was able to identify notes only when they were at least 70 hertz apart. Now, I can hear notes that are only 30 hertz apart. It's like going from being able to tell the difference between red and blue to being able to distinguish between aquamarine and cobalt.

My hearing is no longer limited by the physical circumstances of my body. While my friends' ears will inevitably decline with age, mine will only get better.

Michael Chorost (michael@chorost.com) is the author of Rebuilt: How Becoming Part Computer Made Me More Human.
The author spent years tweaking the software on his cochlear implant. (Credits: CT scan, Valley Radiology; Matt Hoyle)

Sidebar: Reprogramming the Inner Ear (credit: Bryan Christie)
