The neuroscience of screwing up

This article was taken from the February issue of Wired UK magazine.

It all started with the sound of static. In May 1964, two astronomers at Bell Labs, Arno Penzias and Robert Wilson, were using a radio telescope in suburban New Jersey to search the far reaches of space. Their aim was to make a detailed survey of radiation in the Milky Way, which would allow them to map those vast tracts of the universe devoid of bright stars. This meant Penzias and Wilson needed a receiver that was exquisitely sensitive, able to eavesdrop on all the emptiness. And so they had retrofitted an old radio telescope, installing amplifiers and a calibration system to make the signals coming from space just a little bit louder. But they made the scope too sensitive. Whenever Penzias and Wilson aimed their dish at the sky, they picked up a persistent background noise, a static that interfered with all of their observations. It was an incredibly annoying technical problem.

At first, they assumed the noise was man-made, an emanation from nearby New York City. But when they pointed their telescope straight at Manhattan, the static didn't increase. Another possibility was that the sound was due to fallout from recent nuclear bomb tests in the upper atmosphere. But that didn't make sense either, since the level of interference remained constant.

And then there were the pigeons: a pair of birds was roosting in the receiver, leaving a trail of what they later described as "white dielectric material". The scientists evicted the pigeons and scrubbed away their mess, but the static remained, as loud as ever.

For the next year, Penzias and Wilson tried to ignore the noise, concentrating on observations that didn't require cosmic silence or perfect precision. They insulated the steel joints with aluminium tape, kept the receiver as clean as possible and hoped a shift in the weather might clear up the interference. They waited for the seasons to change, and then change again, but the noise always remained, making it impossible to find the faint radio echoes they were looking for. Their telescope was a failure.

Kevin Dunbar is a researcher who studies how scientists study things -- how they fail and succeed. In the early 90s, he began an unprecedented research project: observing four biochemistry labs at Stanford University. Philosophers have long theorised about how science happens, but Dunbar wanted to get beyond theory. He wasn't satisfied with abstract models of the scientific method -- that seven-step process we drum into students -- or the dogmatic faith scientists place in logic and objectivity. Dunbar knew scientists often don't think the way the textbooks say they are supposed to.

He suspected that all those philosophers of science -- from Aristotle to Karl Popper -- had missed something important about what goes on in the lab. (As the physicist Richard Feynman famously quipped, "Philosophy of science is about as useful to scientists as ornithology is to birds.") So Dunbar decided to launch an in-vivo investigation, attempting to learn from the messiness of real experiments.

He ended up spending the next year staring at post-docs and test tubes: the researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analysing the data. "I'm not sure I appreciated what I was getting myself into," Dunbar says. "I asked for complete access, and I got it. But there was just so much to keep track of."

Dunbar came away from his in-vivo studies with an unsettling insight: science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) "The scientists had these elaborate theories about what was supposed to happen," Dunbar says. "But the results kept contradicting their theories. It wasn't uncommon for someone to spend a month on a project and then discard all their data because the data didn't make sense." Perhaps they hoped to see a specific protein but it wasn't there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: the scientists were looking for X but they found Y.

Dunbar was fascinated by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which "everything but the most esoteric detail of the result is known in advance".)

However, when experiments were observed up close -- and Dunbar interviewed the scientists about even the most trifling details -- this idealised version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn't work and data that couldn't be replicated and simple studies riddled with anomalies. "These weren't sloppy people," Dunbar says. "They were working in some of the finest labs in the world. But experiments rarely tell us what we think they're going to tell us. That's the dirty secret of science."

How did the researchers cope with all this unexpected data? How did they deal with so much failure? Dunbar realised the vast majority of people in the lab followed the same basic strategy.

First, they would blame the method. The surprising finding was classified as a mere mistake; perhaps a machine malfunctioned or an enzyme had gone stale. "The scientists were trying to explain away what they didn't understand," Dunbar says. "It's as if they didn't want to believe it."

The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved.

But the weirdness usually remained.

This is when things get interesting. According to Dunbar, even after scientists had generated their "error" multiple times -- it was a consistent inconsistency -- they might fail to follow it up. "Given the amount of unexpected data in science, it's just not feasible to pursue everything," Dunbar says. "People have to pick and choose what's interesting and what's not, but they often choose badly." And so the result was tossed aside. The scientists had discovered a new fact, but they called it a failure.

The reason we're so resistant to anomalous information -- the real reason researchers automatically assume that every unexpected result is a stupid mistake -- is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we edit our reality, searching for evidence that confirms what we already believe. Although we pretend we're empiricists -- our views dictated by nothing but the facts -- we're actually blinkered when it comes to information that contradicts our theories. The problem with science, then, isn't that most experiments fail -- it's that most failures are ignored.

As he tried to understand further how people deal with dissonant data, Dunbar conducted experiments of his own. In one 2003 study, he had undergraduates at Dartmouth College, New Hampshire, watch a couple of short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo's metal balls all landed at the exact same time -- a refutation of Aristotle, who claimed that heavier objects fell faster.

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. (Intuitively, we're all Aristotelians.) They found the two balls falling at the same rate to be deeply unrealistic, despite the fact that it's how objects actually behave.

Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics students the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex (ACC), a collar of tissue located in the centre of the brain. The ACC is typically associated with the perception of errors and contradictions; neuroscientists often refer to it as part of the "Oh shit!" circuit.

So far, so obvious. Most undergrads are scientifically illiterate. But Dunbar also conducted the experiment with physics students. As expected, their education enabled them to see the error, and for them it was the inaccurate video that triggered the ACC. But there's another region of the brain that can be activated as we go about editing reality. It's called the dorsolateral prefrontal cortex, or DLPFC. It's located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that don't square with our preconceptions. For scientists, it's a problem.

When physics students saw the video with the aberrant balls, their DLPFCs kicked into gear and they deleted the image from their consciousness. In most contexts, this act of editing is an essential cognitive skill. (When the DLPFC is damaged, people often struggle to pay attention as they can't filter out irrelevant stimuli.) However, when it comes to noticing anomalies, an efficient prefrontal cortex can be a serious liability. The DLPFC is constantly censoring the world, erasing facts from our experience. If the ACC is the "Oh shit!" circuit, the DLPFC is the Delete key. When the ACC and DLPFC "turn on together, people aren't just noticing that something doesn't look right," Dunbar says. "They're also inhibiting that information."

The lesson is that not all data is created equal in our mind's eye. When it comes to interpreting our experiments, we see what we want to see and disregard the rest. The physics students, for instance, didn't watch the video and wonder whether Galileo might be wrong. They put their trust in theory, tuning out whatever it couldn't explain. Belief, in other words, is a kind of blindness.

But this research raises an obvious question: if humans -- scientists included -- cling to their beliefs, why is science so successful? How do our theories ever change? How do we learn to reinterpret a failure to see the answer?

This was the challenge facing Penzias and Wilson as they tinkered with their radio telescope. Their background noise was still inexplicable, but it was getting harder to ignore, if only because it was always there. After a year of trying to erase the static, after assuming it was just a mechanical malfunction, an irrelevant artifact, or pigeon droppings, Penzias and Wilson began exploring the possibility that it was real. Perhaps it was everywhere for a reason.

In 1918, sociologist Thorstein Veblen was commissioned by a popular magazine devoted to American Jewry to write an essay on how Jewish "intellectual productivity" would be changed if Jews were given a homeland. At the time, Zionism was becoming a potent political movement, and the magazine editor assumed Veblen would make the obvious argument: a Jewish state would lead to an intellectual boom, as Jews would no longer be held back by institutional anti-Semitism. But Veblen, always the provocateur, turned the premise on its head. He argued instead that the scientific achievements of Jews -- at the time, Albert Einstein was about to win the Nobel Prize and Sigmund Freud was a best-selling author -- were due largely to their marginal status. In other words, persecution wasn't holding the Jewish community back -- it was pushing it forward.

The reason, according to Veblen, was that Jews were perpetual outsiders, which filled them with a "sceptical animus". Because they had no vested interest in "the alien lines of gentile inquiry", they were able to question everything. Just look at Einstein, who did much of his most radical work as a lowly patent clerk in Bern, Switzerland. According to Veblen's logic, if Einstein had got tenure at an elite German university, he would have become just another physics professor with a vested interest in the space-time status quo. He would never have noticed the anomalies that led him to develop the theory of relativity.

Predictably, Veblen's essay proved controversial, and not just because he was a Lutheran from Wisconsin. The magazine editor was not pleased; Veblen could be read as an apologist for anti-Semitism. But his larger point is crucial: there are advantages to thinking on the margin. When we look at a problem from the outside, we're more likely to notice what doesn't work.

Instead of shunting the unexpected aside with our "Oh shit!" circuit and Delete key, we take the mistake seriously. A new theory emerges from the ashes of our surprise.

Modern science is populated by expert insiders, schooled in narrow disciplines. Researchers have all studied the same thick textbooks, which make the world of fact seem settled. This led Kuhn, the philosopher of science, to argue that the only scientists capable of acknowledging the anomalies -- and thus shifting paradigms and starting revolutions -- are "either very young or very new to the field". In other words, they are classic outsiders, naive and untenured. They aren't inhibited from noticing failures that point toward new possibilities.

But Dunbar, who had spent all those years watching Stanford scientists struggle and fail, realised that the romantic narrative of the brilliant, perceptive newcomer left something out. After all, most scientific change isn't dramatic; revolutions are rare.

Instead, the epiphanies of science often come from researchers safely ensconced on the inside. "These aren't Einstein figures, working from the outside," Dunbar says. "These are the guys with big grants." How do they overcome failure-blindness?

Although the scientific process is typically seen as a lonely pursuit, Dunbar found most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn't the presentation -- it was the debate that followed.

Dunbar observed that the sceptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they'd previously ignored. A single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.

But not every lab meeting was equally effective. Dunbar tells the story of two labs that both ran into the same experimental problem: the proteins they were trying to measure were sticking to a filter, making it impossible to analyse the data. "One of the labs was full of people from different backgrounds," Dunbar says. "They had biochemists and molecular biologists and geneticists and students in medical school." The other lab, in contrast, was made up of E. coli experts. Dunbar watched how each of these labs dealt with their protein problem. The E. coli group spent several weeks methodically testing various fixes. "It was extremely inefficient," Dunbar says. "They eventually solved it, but they wasted a lot of valuable time."

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then potential answers began to emerge. "After another ten minutes of talking, the protein problem was solved," Dunbar says. "They made it look easy."

When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix forced the scientists to rely on metaphors and analogies to express themselves. (Unlike the E. coli group, the second lab lacked a specialised language.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions.

Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins.

This is why other people are so helpful: they shock us out of our cognitive box. "I saw this happen all the time," Dunbar says. "A scientist would be trying to describe their approach, and they'd be getting a little defensive, and then they'd get this quizzical look on their face. It was like they'd finally understood what was important."

What turned out to be so important, of course, was the experimental error that felt like a failure. The answer had been there all along. It's not until we talk to a colleague or translate our idea into an analogy that we glimpse the meaning in our mistake. Bob Dylan, in other words, was right: there's no success quite like failure.

For the radio astronomers, the breakthrough was a casual conversation with an outsider. Penzias had been referred by a colleague to Robert Dicke, a Princeton scientist trained not in astrophysics but nuclear physics. He was best known for his work on radar systems during World War II. Dicke had since become interested in applying his radar technology to astronomy and a then-strange theory called the big bang, which postulated that the cosmos had started with a primordial explosion. Such a blast would have been so massive, Dicke argued, that it would have littered the entire universe with cosmic shrapnel, the residual radiation of genesis. (This proposal was first made in 1948 by physicists George Gamow, Ralph Alpher and Robert Herman, although it had been largely forgotten by the astronomical community.) The problem for Dicke was that he couldn't find this residue using standard telescopes, so he was planning to build his own dish less than an hour's drive south of the Bell Labs one.

Then, in early 1965, Penzias picked up the phone and called Dicke. He wanted to know if the renowned radar and radio telescope expert could help explain the persistent noise bedevilling them.

Dicke's reaction was instantaneous: "Boys, we've been scooped!" he said. Someone else had found what he'd been searching for: radiation left over from the beginning of the universe. It had been a frustrating process for Penzias and Wilson. They'd been consumed by the problem and had spent too much time cleaning up pigeon shit -- but they had found an explanation for the static. Their failure was the answer to a different question.

And all that frustration paid off: in 1978, they received the Nobel Prize for physics.

My greatest mistakes

Six luminaries reveal how their stumbles and missteps paved the way to success

Mike Tyson

Boxer "I never lost a fight to another fighter, but I lost plenty to myself. I'm happy to have experienced ups and downs. My mentor, Cus D'Amato, used to say, 'Adversity will make the strong stronger and the weak weaker.' Alexander the Great, Genghis Khan, Machiavelli: insecurities allowed them to be great men. I've faced greater adversity than many of the great fighters. I put myself at the head of the list because of what my eyes have seen and my heart has endured."

Bill Clinton

Former United States President "When I was young, I often lost school elections, in part because I was in the band and not a star athlete. Then, when I didn't come out on top in music contests, losing was even more painful. My mother taught me not to feel sorry for myself. She said I had good health, a good mind and good friends, so I should just count my blessings and do more with them. When I was defeated for reelection as governor [of Arkansas] in 1980, there didn't seem to be much future for me in politics. I was probably the youngest ex-governor in US history. But if I hadn't been defeated, I probably would have never become president. It was a near-death experience, but it forced me to be more sensitive and to understand that if people think you've stopped listening, you're sunk."

Meg Whitman

Former CEO of eBay "When I joined eBay in January 1998, the company was growing rapidly -- revenue was jumping 70 percent every month. We continued to upgrade the site, but we didn't invest enough to keep up. If you think your site's traffic is going to grow 150 percent a year, be prepared for 250 percent growth. At 5pm on June 10, 1999, the site went down; we had corrupted the entire back-end database of the trading platform. I called my assistant and said, 'I think we are going to be here for a little while. We need cots, sleeping bags, toiletries and towels.' She organised sleeping quarters in the conference rooms. I ended up staying there for most of the summer.

By the end of it, I realised we needed a new CTO. A headhunter recommended Maynard Webb at Gateway. I called his boss and said, 'I need to have Maynard Webb now.' He was so stunned that he was just like, 'OK!' It took us the better part of six months to get the site rebuilt. But we ultimately engineered a system that is up 99.999 percent of the time. You learn a lot from a near-death experience like that."

Terry Gilliam

Writer and film director "When I was in my junior year of college, I was a counsellor at a summer camp in the San Bernardino Mountains of California. The campers came from Beverly Hills, and they were kids of the greats: Danny Kaye's daughter, Hedy Lamarr's son, William Wyler's son. I was their drama coach, and I'd never done drama in my life. The camp is eight weeks long, and in the sixth week we had a parents' day. I was charged with putting on Alice in Wonderland for the event. I had grand ideas: we had to have great costumes, good sets, all the Tweedledumming and Tweedledeeing. But nothing came together. So a couple of days before the parents came, I pulled the plug. The people who ran the camp were horrified and I became a figure of hate. There was this terrible feeling of getting that close to Olympus and then failing so utterly and totally. It's probably the thing in my life that's given me the most nightmares.

It's left me with the only real scars that I carry around inside. I think that's why I foolishly march on in projects today. I will go until the whole thing crashes -- but it won't be my fault this time."

Jason Kilar

CEO of Hulu "I used to run the DVD business at Amazon.com. In the autumn of 2000, we wanted to find out if we could lower prices and still make money. So we decided to do a test: 50 percent of the people who wanted a DVD on Amazon were given the everyday price; the other half were given a lower, test price. Our intentions were good but in hindsight it was very foolish. Some shoppers noticed the different prices and thought they were being discriminated against.

The whole thing blew up on the internet and we had TV crews outside our headquarters. I emailed Jeff Bezos and he summoned me to a conference room. He wanted to know what had happened, why it had happened and what was the best thing we could do at that point. The next morning he appeared on the CBS Early Show and explained everything. It was a defining moment for me. I learned that perception can quickly become reality: it was important not to look like we were hiding anything and to talk to our customers about what had occurred."

Nick Denton

Founder of Gawker Media "In 2004, while vacationing in Brazil, I learned that Jason Calacanis had set up a rival blog network and hired away Peter Rojas, the editor of our gadget site, Gizmodo. There I was, a mogul at leisure, stopping by an internet café only to find that one of my top editors had been secretly working on a copycat site. It was the business equivalent of a kick in the balls and a sucky way to end a vacation. Engadget soon overtook Gizmodo. But I'm grateful to Calacanis. I had been taking it easy, and he roused the competitor in me. Gizmodo now has some ten times the traffic it had then; it's eclipsing Engadget. I always say to myself: it's never as bad (or as good) as it seems at the time."

