I've got a review of The Shallows, a new book by Nicholas Carr on the internet and the brain, in the NY Times:
Much of Carr's argument revolves around neuroscience, as he argues that our neural plasticity means that we quickly become mirrors to our mediums; the brain is an information-processing machine that's shaped by the kind of information it processes. And so we get long discussions of Eric Kandel, Aplysia and the malleability of brain cells. (Having worked in the Kandel lab for several years, I'm a big fan of this research program. I just never expected the kinase enzymes of sea slugs to be applied to the internet.)
As I make clear in the review, I was not entirely convinced by Carr's arguments:
On his blog, Carr disagrees with me:
As evidence, Carr refers to a very interesting review by Patricia Greenfield, a developmental psychologist at UCLA. The problem with the review is that, while it covers many important topics (from the Flynn effect to the tradeoffs involved in multitasking), it only discusses a single study that actually looked at the cognitive effects of the internet.
Now this is a compelling finding, and I agree with Professor Greenfield that it should lead colleges to reconsider having the internet in the lecture hall. (Although it's also worth noting that the students in the internet cohort didn't get lower grades in the class.) But as the paper itself makes clear, this was not a study about the cognitive effects of the world wide web. (After all, most of us don't surf the web while listening to a professor.) Instead, the experiment was designed to explore the hazards of multitasking:
Given this paucity of evidence, I think it's far too soon to be drawing firm conclusions about the negative effects of the web. Furthermore, as I note in the review, the majority of experiments that have looked directly at the effects of the internet, video games and online social networking have actually found significant cognitive benefits. Video games improve visual attention and memory, Facebook users have more friends (in real life, too) and preliminary evidence suggests that surfing the web "engages a greater extent of neural circuitry...[than] reading text pages."
Now these studies are all imperfect and provisional. (For one thing, it's not easy to play with Google while lying still in a brain scanner.) But they certainly don't support the hypothesis that the internet, as Carr writes, is turning us into "mere signal-processing units, quickly shepherding disjointed bits of information into and then out of short-term memory."
To get around these problematic findings, Carr spends much of the book dwelling on the costs of multitasking. Here he is on much firmer scientific ground: the brain is a bounded machine, which is why talking on the phone makes us more likely to crash the car. (Interestingly, video games seem to improve our ability to multitask.) This isn't a new idea - Herbert Simon was warning about the poverty of attention fifty years ago - although I have little doubt that the internet makes it slightly easier for us to multitask while working and reading. (Personally, I multitask more while watching television than while online.)
But even here the data is complicated. Some studies, for instance, have found that distraction encourages unconscious processing, which leads to improved decisions in complex situations. (In other words, the next time you're faced with a really difficult choice, you might want to study the information and then multitask on the web for a few hours.) Other studies have found that temporary distractions can increase creativity, at least when it comes to solving difficult creative puzzles. Finally, there is a growing body of evidence on the benefits of mind wandering, which is what happens when the spotlight of attention begins to shift inwards. Does this mean we should always be distracted? Of course not. But it does suggest that focused attention is not always ideal. The larger lesson, I think, is that we should be wary of privileging certain types of thinking over others. The mind is a pluralistic machine.
One last note: Carr makes many important, timely and eloquent points about the cultural losses that accrue with the arrival of new technologies. (This seems like an apt place to add that Carr is an awesome writer; The Shallows was full of graceful prose.) I'm a literary snob, and I have a weakness for dense novels and modernist poetry. I do worry, like Carr, that the everywhereness of the internet (and television before that) is making it harder for people to disappear down the wormhole of difficult literature. This is largely because the book is a quiet medium, and leaves much of the mind a bit bored. (This helps explain why many mind wandering paradigms give undergrads readings from War and Peace; Tolstoy is great for triggering daydreams, which suggests that literature doesn't always lead to the kind of sustained attention that Carr desires.)
But this cultural argument doesn't require brain scans and lab studies. One doesn't need to name-drop neural plasticity in order to hope that we will always wrestle with the challenging texts of Auden, Proust and even Tolstoy. Carr and I might disagree about the science, but I think we both agree that the act of engaging with literature is an essential element of culture. (It might not be "good" for my brain, but it's certainly good for the mind.) We need both Twitter and The Waste Land.
UPDATE: Nick Carr posts a typically thoughtful reply in the comments, and I reply to his reply.