The Shallows

I've got a review of The Shallows, a new book by Nicholas Carr on the internet and the brain, in the NY Times:

Socrates started what may have been the first technology scare. In the "Phaedrus," he lamented the invention of books, which "create forgetfulness" in the soul. Instead of remembering for themselves, Socrates warned, new readers were blindly trusting in "external written characters." The library was ruining the mind.

Needless to say, the printing press only made things worse. In the 17th century, Robert Burton complained, in "The Anatomy of Melancholy," of the "vast chaos and confusion of books" that make the eyes and fingers ache. By 1890, the problem was the speed of transmission: one eminent physician blamed "the pelting of telegrams" for triggering an outbreak of mental illness. And then came radio and television, which poisoned the mind with passive pleasure. Children, it was said, had stopped reading books. Socrates would be pleased.

In "The Shallows: What the Internet Is Doing to Our Brains," the technology writer Nicholas Carr extends this anxiety to the 21st century. The book begins with a melodramatic flourish, as Carr recounts the pleas of the supercomputer HAL in "2001: A Space Odyssey." The machine is being dismantled, its wires unplugged: "My mind is going," HAL says. "I can feel it."

For Carr, the analogy is obvious: The modern mind is like the fictional computer. "I can feel it too," he writes. "Over the last few years, I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory." While HAL was silenced by its human users, Carr argues that we are sabotaging ourselves, trading away the seriousness of sustained attention for the frantic superficiality of the Internet. As Carr first observed in his much discussed 2008 article in The Atlantic, "Is Google Making Us Stupid?," the mere existence of the online world has made it much harder (at least for him) to engage with difficult texts and complex ideas. "Once I was a scuba diver in a sea of words," Carr writes, with typical eloquence. "Now I zip along the surface like a guy on a Jet Ski."

Much of Carr's argument revolves around neuroscience, as he argues that our neural plasticity means we quickly become mirrors to our mediums; the brain is an information-processing machine that's shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I'm a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)

As I make clear in the review, I was not entirely convinced by Carr's arguments:

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind. For instance, a comprehensive 2009 review of studies published on the cognitive effects of video games found that gaming led to significant improvements in performance on various cognitive tasks, from visual perception to sustained attention. This surprising result led the scientists to propose that even simple computer games like Tetris can lead to "marked increases in the speed of information processing." One particularly influential study, published in Nature in 2003, demonstrated that after just 10 days of playing Medal of Honor, a violent first-person shooter game, subjects showed dramatic increases in visual attention and memory.

Carr's argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a "book-like text." Interestingly, this brain area underlies the precise talents, like selective attention and deliberate analysis, that Carr says have vanished in the age of the Internet. Google, in other words, isn't making us stupid -- it's exercising the very mental muscles that make us smarter.

This doesn't mean that the rise of the Internet won't lead to the loss of important mental talents; every technology comes with trade-offs. Look, for instance, at literacy itself: when children learn to decode letters, they usurp large chunks of the visual cortex previously devoted to object recognition. The end result is that literate humans are less able to "read" the details of the natural world.

On his blog, Carr disagrees with me:

I was startled to find him claim that "the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind." I think that's incorrect, even while I'm happy to acknowledge that brain studies are imprecise and can be interpreted in different ways (and that the definition of what's "good for the mind" will vary from person to person).

As evidence, Carr refers to a very interesting review by Patricia Greenfield, a developmental psychologist at UCLA. The problem with the review is that, while it covers many important topics (from the Flynn effect to the tradeoffs involved in multitasking), it only discusses a single study that actually looked at the cognitive effects of the internet.

This was tested in a communication studies class where students were generally encouraged to use their laptops during lectures, in order to explore lecture topics in greater detail on the Internet and in library databases. Half of the students were allowed to keep their laptops open, while the other half (randomly assigned) had to close their laptops. Students in the closed laptop condition recalled significantly more material in a surprise quiz after class than did students in the open laptop condition. Although these results may be obvious, many universities appear to be unaware of the learning decrement produced by multitasking when they wire classrooms with the intention of improving learning.

Now this is a compelling finding, and I agree with Professor Greenfield that it should lead colleges to reconsider having the internet in the lecture hall. (Although it's also worth noting that the students in the internet cohort didn't get lower grades in the class.) But as the paper itself makes clear, this was not a study about the cognitive effects of the world wide web. (After all, most of us don't surf the web while listening to a professor.) Instead, the experiment was designed to explore the hazards of multitasking:

The work here explored the effects of engaging in multiple tasks simultaneously on traditional outcome measures of performance. While methodologically the procedures employed in the present study differ somewhat from those of the classic divided attention paradigm, the essence of those procedures has been preserved, and the resulting performance decrement obtained. In two studies, students performing multiple tasks performed significantly poorer on immediate measures of memory for the to-be-learned content.

Given this paucity of evidence, I think it's far too soon to be drawing firm conclusions about the negative effects of the web. Furthermore, as I note in the review, the majority of experiments that have looked directly at the effects of the internet, video games and online social networking have actually found significant cognitive benefits. Video games improve visual attention and memory, Facebook users have more friends (in real life, too) and preliminary evidence suggests that surfing the web "engages a greater extent of neural circuitry...[than] reading text pages."

Now these studies are all imperfect and provisional. (For one thing, it's not easy to play with Google while lying still in a brain scanner.) But they certainly don't support the hypothesis that the internet, as Carr writes, is turning us into "mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory."

To get around these problematic findings, Carr spends much of the book dwelling on the costs of multitasking. Here he is on much firmer scientific ground: the brain is a bounded machine, which is why talking on the phone makes us more likely to crash the car. (Interestingly, video games seem to improve our ability to multitask.) This isn't a new idea - Herbert Simon was warning about the poverty of attention fifty years ago - although I have little doubt that the internet makes it slightly easier for us to multitask while working and reading. (Personally, I multitask more while watching television than while online.)

But even here the data is complicated. Some studies, for instance, have found that distraction encourages unconscious processing, which leads to improved decisions in complex situations. (In other words, the next time you're faced with a really difficult choice, you might want to study the information and then multitask on the web for a few hours.) Other studies have found that temporary distractions can increase creativity, at least when it comes to solving difficult creative puzzles. Finally, there is a growing body of evidence on the benefits of mind wandering, which is what happens when the spotlight of attention begins to shift inwards. Does this mean we should always be distracted? Of course not. But it does suggest that focused attention is not always ideal. The larger lesson, I think, is that we should be wary of privileging certain types of thinking over others. The mind is a pluralistic machine.

One last note: Carr makes many important, timely and eloquent points about the cultural losses that accrue with the arrival of new technologies. (This seems like an apt place to add that Carr is an awesome writer; The Shallows is full of graceful prose.) I'm a literary snob, and I have a weakness for dense novels and modernist poetry. I do worry, like Carr, that the everywhereness of the internet (and television before that) is making it harder for people to disappear down the wormhole of difficult literature. This is largely because the book is a quiet medium, and leaves much of the mind a bit bored. (This helps explain why many mind-wandering paradigms give undergrads readings from War and Peace; Tolstoy is great for triggering daydreams, which suggests that literature doesn't always lead to the kind of sustained attention that Carr desires.)

But this cultural argument doesn't require brain scans and lab studies. One doesn't need to name-drop neural plasticity in order to hope that we will always wrestle with the challenging texts of Auden, Proust and even Tolstoy. Carr and I might disagree about the science, but I think we both agree that the act of engaging with literature is an essential element of culture. (It might not be "good" for my brain, but it's certainly good for the mind.) We need Twitter and The Waste Land.

UPDATE: Nick Carr posts a typically thoughtful reply in the comments, and I reply to his reply.