LinkedIn founder Reid Hoffman and the Omidyar Network, a nonprofit set up by eBay founder Pierre Omidyar, are jointly contributing $20m (£16.4m) to a new fund that aims to keep artificial intelligence technology in check.
Rather than being at the behest of tech giants, the newly formed Ethics and Governance of Artificial Intelligence Fund - headed up by Harvard’s Berkman Klein Center for Internet and Society and the MIT Media Lab - will pool the collective insights of social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers. The aim is to carve out a space for interdisciplinary research, creating “activities, tools and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities” to “ensure that AI develops in a way that is ethical, accountable, and advances the public interest”.
The inevitable robot uprising is impending, and as it becomes clear that AI will be driving our cars, taking over our jobs and running our cities sooner than we think, academics have been stepping up to push for checks and balances that ensure an ethical route is laid down. In September of last year, Facebook, Google-owned DeepMind, Microsoft, IBM and Amazon created the Partnership on AI, designed to share information and resources. It committed to making its research public, as well as carving out best-practice benchmarks for the industry. But these are also the very same companies spearheading and benefitting from the changes AI is bringing forth. And this is problematic.
Just last year, European Commissioner for Trade Cecilia Malmström had her speech on banning the export of items used to implement the death penalty removed from YouTube - owner Google later blamed the incident on one of its algorithms not behaving as intended. A more widespread and common example has materialised in the form of fake news proliferated, and often prioritised, by Facebook’s algorithms.
“One of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?” said Joi Ito, director of the MIT Media Lab.
“Even when we don’t know it, artificial intelligence affects virtually every aspect of our modern lives,” said Alberto Ibargüen, president of Knight Foundation, which has contributed $5m (£4.2m) to the fund. “Technology and commerce will ensure it will impact every society on Earth. Yet, for something so influential, there’s an odd assumption that artificial intelligence agents and machine learning, which enable computers to make decisions like humans and for humans, is a neutral process. It’s not.”
“Even algorithms have parents, and those parents are computer programmers, with their values and assumptions. Those values - who gets to determine what they are and who controls their application - will help define the digital age. Algorithms can reflect standards of beauty, standards of morality, standards of acceptable behaviour. As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more.”
Ibargüen wants the fund to help identify ethical issues, determine who gets to decide them, and do so by drawing on as diverse a pool of perspectives as possible, he explains in a blog post.
In total, the fund has launched with $27m (£22.2m) in donations. The William and Flora Hewlett Foundation is also among the donors.
“We know that even as the fund grows, it will be dwarfed by the billions industry is spending to develop artificial intelligence,” continues Ibargüen. “But we believe the strength of our focus can impact the ethics and governance applied. We mean to use logic and research, and persuade by appeals to self- and public interest, to encourage transparency, embed ethical principles, and engage diverse disciplines and perspectives.”
The fund will collaborate with other efforts in this area and launch an AI fellowship programme.
This article was originally published by WIRED UK