Can an algorithm detect fair play?

This article was taken from the July 2014 issue of Wired magazine.

Could computers have a sense of justice? "We are increasingly seeing these automated systems which are making decisions for us that are far too vast, far too frequent, far too complex for us to be involved, so we have to get our little computer programs to do it for us," says Jeremy Pitt, part of the Intelligent Systems and Network Research Group at Imperial College London. "The problem is, how do you ensure they distribute resources fairly?"

Pitt didn't set out to answer that question: he just wanted to solve the problem of resource allocation, and began by looking at how humans manage it. He started with the Nobel Prize-winning work of Elinor Ostrom, who described eight "design principles" for managing a common pool of resources. "We set up a testbed, and found that the more principles you had, the better it worked," says Pitt, whose electronic systems behaved much like their human counterparts. To refine the model further, Pitt turned to the work of the philosopher Nicholas Rescher, whose theory of distributive justice ranks the legitimacy of people's claims on resources, and early in 2014 he published a paper proposing a research programme in computational justice.
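The idea can be made concrete. The sketch below is a hypothetical Python illustration, not Pitt's actual system: it scores each agent's claim under a few of Rescher's "canons" (need, effort, equality) using made-up weights, then hands scarce resources to the strongest claims first.

```python
# A minimal sketch inspired by Rescher-style claim ranking; the canons,
# weights and agents here are illustrative assumptions, not Pitt's system.
from dataclasses import dataclass

@dataclass
class Claim:
    agent: str
    need: float      # how badly the agent needs the resource (0-1)
    effort: float    # how much the agent has contributed (0-1)
    equality: float  # how little the agent has received so far (0-1)

# Hypothetical weights: how much each canon counts toward a claim's strength.
WEIGHTS = {"need": 0.5, "effort": 0.3, "equality": 0.2}

def score(claim: Claim) -> float:
    """Combine the canons into a single claim strength."""
    return (WEIGHTS["need"] * claim.need
            + WEIGHTS["effort"] * claim.effort
            + WEIGHTS["equality"] * claim.equality)

def allocate(claims: list[Claim], units: int) -> list[str]:
    """Give `units` indivisible resources to the strongest claims, in order."""
    ranked = sorted(claims, key=score, reverse=True)
    return [c.agent for c in ranked[:units]]

if __name__ == "__main__":
    claims = [
        Claim("alice", need=0.9, effort=0.2, equality=0.4),
        Claim("bob",   need=0.3, effort=0.8, equality=0.9),
        Claim("carol", need=0.6, effort=0.6, equality=0.1),
    ]
    print(allocate(claims, units=2))  # -> ['alice', 'bob']
```

A real system would also need to decide the weights themselves fairly, which is exactly the kind of question Pitt's computational-justice programme is meant to address.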

This isn't arcane; it could have applications in cloud computing, grid computing and sensor networks. "We're saturating our infrastructure with sensors -- how do they negotiate with one another?" Pitt will find out this year, applying his research to a smart power grid in Strathclyde.

According to Pitt, what distinguishes human justice from other rule-based systems, such as those of ants or bees, is that "people make shit up". Maybe computers will start doing the same.
