In October 2017, we published a feature on China’s social credit rating system. Written by Rachel Botsman, the academic and author of Who Can You Trust?, the piece was a richly researched examination of the methods the Chinese state has developed to rate its citizens via a national trust score.
The story detailed ways in which the behaviour of citizens is rewarded and “corrected” through social engineering driven by the concept of “sincerity”. According to the official policy document: “It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.”
The Chinese system is yet to be integrated, so government and corporate data aren’t aggregated – for now. But it will come as no surprise to WIRED readers that our digital interactions are the raw material from which some of the world’s largest tech companies predict our intent and extract value.
While the west considers itself to have a very different attitude to trust and privacy from that of China, the corporate aspects of social credit have long existed throughout Europe and North America, a phenomenon recently dubbed “surveillance capitalism” by Shoshana Zuboff, a professor at Harvard Business School.
How have we ended up at this point in history? Firstly, because of advances in machine learning and deep learning that enable machines to optimise their outputs based on experience rather than explicit programming. Secondly, because of the sheer amount of data available to train algorithms, and the increasing proportion of it that is usefully labelled. It is telling that SenseTime, a Chinese company that develops surveillance technology, is currently the most valuable AI startup in the world.
A few years ago, it was popular to quote the maxim, coined by the British mathematician Clive Humby, that data is the new oil. Today, the comparison seems quaint – oil is finite, while data, to all intents and purposes, is limitless. Data will determine the haves and have-nots of the coming age of AI, and the power it confers – including the power to act adversarially – will only grow as leaders in the field emerge.
And then there’s a separate, but associated, question over the true ownership of this data. Lawmakers in Brussels and in the state of California have introduced a degree of redress – the General Data Protection Regulation and the California Consumer Privacy Act – which is likely to be the beginning of further legislation and regulation that attempts to return ownership to individuals, so that people can make their own decisions about how their data is used.
Individuals have very different tolerances for privacy: some people may be willing to trade reduced rent from a landlord in return for surrendering intimate personal data about the way they live; meanwhile, billions of people still hand data to Facebook every day, despite relentless waves of scandal about the way that information is used. In the face of central government inaction, cities are taking steps – Barcelona has an ambitious programme to channel the value of the data its citizens produce back into improved services, rather than allow that value to be extracted by Big Tech.
Visit the city’s website and you can access a “City Data Commons” generated by people and sensors throughout the metropolis. Others are developing distributed technologies that will enable users to control who has access to their data, and on what terms.
But the question remains: how much are we willing to trade in the balancing act between security, privacy and surveillance? It’s one we should be answering ourselves, not leaving to others.
This article was originally published by WIRED UK