When the news hit that a photographer was suing BuzzFeed for $3.6 million for reusing one of his images, some on the internet reacted with fear and horror -- because many of those people, and websites, are notoriously loose with reusing images, and they like to hide behind the blithe view that it's all "fair use."
These debates about the bounds of fair use will always be important, but they obscure a very unfair dynamic that is squeezing artists -- and turning the web into a battleground between humans and machines. The trouble is that in many cases today, there's no human artist, writer, or editor creating what we see on the web. Some algorithm assembled the photos, and it's enjoying a nice little loophole. The machines sail on past the rules about copyright because the law lets those companies blame any infringement on the chaos of the internet. It's a system that tilts the playing field against the human artists who write, edit, or illustrate.
In other words, the battle over fair use is unfair to anyone who plays by the old rules and shares revenue with the artists, because human creatives can't compete with automated services that don't share at all.
#### Peter Wayner
##### About
Peter Wayner is the author of *Disappearing Cryptography* (published by Morgan Kaufmann, now part of Elsevier) and *Free for All* (published by HarperBusiness), as well as a number of e-books. He recently wrote [*Attention Must Be Paid, But For $800?*](http://www.attentionmustbepaidbook.com/) and another short [book](http://www.futureridebook.com/) exploring the coming changes from autonomous cars. Wayner has contributed to *The New York Times*, *InfoWorld*, and other publications. He lives in Baltimore.
I’m not a practicing lawyer, but I can speak from my personal experience with fair use while putting together *Attention Must Be Paid, But For $800?*, a short economic history comparing two productions of *Death of a Salesman*. The book used the years when the first and latest productions reached Broadway -- 1949 and 2012 -- as a way to understand just how life and our economy had changed, and a friend suggested that adding photos from those two productions would really bring the manuscript to life.
While websites can invoke murky notions that the law is different in cyberspace, the law on books is well understood. If I included photos, I needed to share my royalties with the photographers or risk a punitive copyright lawsuit. As a creative worker, I understood sharing with the photographers. And the pictures would really add depth to the book.
After working through the often byzantine licensing matrices of major photo archives, I found the pictures would cost about $300-$600 per image -- adding 20 images would easily add about $10,000 to the book budget. Would this be worth it? Would more people buy an illustrated book? An informal marketing survey suggested it wasn't worth it; one friend told me flat out that if he wanted the pictures, he would just go to Google. And he was right: All the photos were there.
The automated machines have me and the photographers beat. Aggregators -- listmakers, search engines, online curation boards, content farms, and other sites -- can scrape images from the web and claim that posting them is fair use. (BuzzFeed claims that what it does is "transformative," allowing it to call its lists a new creation.)
We already know these companies make a profit on the ads. What's less obvious is that the algorithms they use are acting less and less like a card catalog for the web and more and more like an author. In other words, the machine isn't just a dumb hunk of silicon: It's a living creator. It's less like a dull machine and more like a fully functional, content-producing Terminator.
>The algorithms are acting less like a card catalog for the web and more like an author. It's a living creator.
Anyone who searches for "Death of a Salesman" gets results with a nice sidebar filled with a few facts and some images that Google scraped from websites under fair use. In this way, the company can do things that I, a lowly human, can't. And while I would have had to pay $10,000, it can "get" the images for free.
The market therefore punishes the people who try to do the right thing by the photographers. If I raised the price of my book to pay for the images, even more people would choose the book "written" by Google's computers.
Is there recourse? Well, if the algorithm violates a copyright, owners can fill out DMCA takedown forms. But it’s an onerous process that can’t match the scale of the breach, because it pits human against machine. The aggregators’ machines scrape the web day and night, but humans must fill out the forms in their waking hours.
So what if we turned the model on its head? What if the researchers at these companies could improve their bots enough for the algorithms to make intelligent decisions about fair use? If their systems can organize the web and drive cars, surely they are capable of shouldering some of the responsibility for making smart decisions about fair use.
Such tools could help identify blogs or websites that borrow too aggressively from other sites. The search engines that are crawling the net could then use that information to flag sites that cross the line from fair use into plagiarism. Google, for example, already has tools that find music in videos uploaded to YouTube, and then shares the revenue with the creators.
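No such fair-use crawler exists today, but the core of one is not exotic. A minimal sketch, assuming nothing beyond standard word "shingling": break each page into overlapping word sequences and flag a page when too large a share of its text also appears on another site. The function names and the 0.6 threshold here are illustrative choices, not any real product's API.

```python
from typing import Set

def shingles(text: str, k: int = 5) -> Set[str]:
    # Break normalized text into overlapping k-word "shingles".
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_ratio(candidate: str, source: str, k: int = 5) -> float:
    # Fraction of the candidate's shingles that also appear in the source.
    cand, src = shingles(candidate, k), shingles(source, k)
    if not cand:
        return 0.0
    return len(cand & src) / len(cand)

def flag_if_plagiarized(candidate: str, source: str, threshold: float = 0.6) -> bool:
    # A crawler could flag pages whose copied share exceeds a tuned threshold,
    # marking the line between quotation and wholesale copying.
    return overlap_ratio(candidate, source) > threshold
```

A real system would compare against millions of pages at once (typically with hashed shingles and an index), but the judgment it automates is the same: how much of this page originated somewhere else?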
>Fair use is unfair when it pits humans against machines.
The fair-use algorithms could also honor what the artist wants -- for instance, some artists want to be copied. In these cases, a markup language that enumerates just how much the artist wants to encourage fair use could help provide that choice. That way, those who want rampant copying could encourage it while those who want to maintain exclusivity could dial back the limits.
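No such markup language exists yet; as a hypothetical sketch, a site could declare the artist's preference in a page-level `<meta name="fair-use" ...>` tag (an invented name, not a real standard), and a crawler could read it before reusing anything, much as crawlers read robots.txt today:

```python
import html.parser

# Hypothetical values for an imagined <meta name="fair-use" content="..."> tag:
#   "encourage" -> the artist invites copying
#   "attribute" -> copy freely, but credit the artist
#   "exclusive" -> do not reuse without a license
class FairUseMetaParser(html.parser.HTMLParser):
    def __init__(self):
        super().__init__()
        self.policy = "exclusive"  # default to the most protective setting

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "fair-use":
            self.policy = a.get("content", self.policy)

page = '<html><head><meta name="fair-use" content="attribute"></head></html>'
parser = FairUseMetaParser()
parser.feed(page)
print(parser.policy)  # -> attribute
```

Defaulting to the most protective value matters: silence from the artist should never be read as permission.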
Approaches like this would offer more support for writers and photographers, the human creatives who can never match the scale and reach of automated machines. Because fair use is unfair when it pits humans against machines.
We must not forget that as good as some of the aggregated and automated results can be (and some are very good), we still need humans to synthesize knowledge and write new books instead of having bots just digitize the old ones. The web needs to encourage and reward those who create and bring new insight to the internet -- not just those who remix it.
Wired Opinion Editor: Sonal Chokshi @smc90