Facts matter: but is that statement a fact? Suppose it could be proven or refuted – what then? What’s the best way to tell someone they’ve made an error? How do you nudge the world in the direction of truth?
These are the questions professional factcheckers wrestle with on a daily basis. The practice creates a recognisable type. As a breed, factcheckers are cautious, modest and intensely paranoid about inaccuracy. Which makes the attitude of their newest addition unusual.
“This is like, positive shit we’re doing,” says Dhruv Gulati. “We don’t diverge from our ultimate mission, which is to solve misinformation in the world.”
Gulati, a former City analyst who co-founded UK artificial intelligence startup Weave.ai, is CEO of Factmata, one of a group of technology startups tackling the problem of “alternative facts” online. In November 2016, his small UK-based team was awarded €50,000 by the Google Digital News Initiative to develop a system to detect and correct misleading information online: a tool with the potential to match the speed and size of the internet itself.
“The idea is a scalable real-time system,” Gulati explains, “that uses AI to detect stuff that is potentially misleading.” That includes not only false claims, but also “fake news, misleading news, rumours, hoaxes”: everything, in short, that isn’t entirely true. If a statement deviates in any way from the strict truth, Gulati claims his system will be able to spot it.
Or, at any rate, he claims his system will be able to spot it one day. When that day will be isn’t entirely clear – as, indeed, are many aspects of Factmata’s approach. Gulati is building his startup the Silicon Valley way, with bold talk of “AI” and “democratisation,” and dismissive remarks about the fallibilities of human alternatives. But when it comes to factchecking, accuracy and nuance are all-important. Truth has already been disrupted once. Can it survive being challenged by technology yet again?
Factmata is launching its first product on June 8, the day of the UK general election. This will be an extension for Google’s Chrome browser, designed to correct claims related to economic statistics. When Factmata’s text-reading software detects a statement about immigration or employment, the extension will bring up a link to the official government statistic in a little window next to the text, like a real-time footnote.
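The detection step described here can be imagined as a keyword-plus-number matcher that surfaces an official source next to a flagged sentence. The sketch below is purely illustrative — the topic lists and source URLs are invented, and Factmata's actual classifier is not public:

```python
import re

# Illustrative topic keywords; the real system's classifier is not public.
TOPICS = {
    "employment": {"unemployment", "employment", "jobs", "jobless"},
    "immigration": {"immigration", "immigrants", "migration", "migrants"},
}

# Hypothetical mapping from topic to an official statistics page.
OFFICIAL_SOURCES = {
    "employment": "https://www.ons.gov.uk/employmentandlabourmarket",
    "immigration": "https://www.ons.gov.uk/peoplepopulationandcommunity",
}

# A figure worth checking: digits, optionally followed by a unit.
NUMBER = re.compile(r"\d[\d,.]*\s*(?:%|per cent|million|thousand)?")

def flag_claims(text):
    """Return (sentence, topic, source) for each sentence that mentions
    a tracked topic and contains a number."""
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        for topic, keywords in TOPICS.items():
            if words & keywords and NUMBER.search(sentence):
                flags.append((sentence, topic, OFFICIAL_SOURCES[topic]))
    return flags
```

Running `flag_claims("Unemployment fell to 4.6% last quarter.")` would surface the employment statistics link as the “real-time footnote” the extension attaches to the text.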
Gulati shows me a static example, based on a real-life exchange on Twitter. Someone called Chris Conyers is debating, in the futile way of social media, a man using a Republican elephant as his avatar. “Ha!” spits Conyers, when the man, who goes by the name of Albert Parsons, claims Donald Trump will re-energise the US economy. “We’re in the longest stretch of economic growth in US history. 82 months.”
Factmata has highlighted those sentences in yellow and linked them to a World Bank chart of GDP growth. “The great thing,” says Gulati, “is that this will work on anything. It will work on you and your mate talking about these issues in Facebook comments. We will pull up the chart from the World Bank and our sources of statistics and give you figures.”
I look at it, thinking how annoying it would be to have some automated know-it-all tap me on the shoulder to correct my Twitter exaggerations. I wonder what I’d do, if I were Chris Conyers or Albert Parsons. “Of course,” I say jokingly to Gulati, “the World Bank is fake news.”
“Now you're asking me a trick question,” Gulati replies. He appears unaware of the highly politicised nature of both the World Bank and the term “fake news.” Later, we talk about Donald Trump’s so-called Muslim Ban – he doesn’t know that’s a contested term either. Nor does he know that election day is devoid of news, as UK law restricts coverage to uncontroversial titbits, such as the weather, or politicians’ appearances at polling stations.
If you’re launching then you’ll be too late to make a difference, I tell him.
“Just put, we’ll release it for the election,” he says – meaning, just put that in your article. “Anyway,” he adds, “I don’t care. We’re releasing it on June 8.”
“Why not wait?” I say. “This stuff is important.” He seems not to know what I mean, so I add, dramatically: “We’re talking about life and death here.”
“For our company?” says Gulati. He thinks I mean life and death for Factmata.
“No, for people. There are people for whom this election will be the difference between life and death. So why not wait until you’ve perfected it?”
“Because the election time’s gone. The point is to release this and get people’s feedback. It’s a tool that everyone needs to use and have input into. This needs to be an open tool.”
To function effectively, an automated political factchecker will need to be open: that way it can be tested against generally accepted standards, and be held accountable when it goes wrong. But although Gulati says the code for Factmata will be open source, he doesn’t mean openness in the political sense. He’s talking about getting people onto his platform: he means it needs to be open like a shop. Gulati says Factmata “uses AI,” which gives the impression his system does its factchecking automatically, like a factual version of spellcheck. But, in reality, Factmata will, at least for the foreseeable future, be powered by the crowd. It uses AI to identify facts that need checking, then assigns the work to freelance factcheckers. “Factchecking as a service,” Gulati calls it.
“It’s important to note,” he adds, “that it’s not saying, ‘Oh, I’m getting my facts from Factmata.’ Because we’re not providing a factcheck.” In his vision, Factmata is to facts what Facebook is to content: a platform, which disclaims responsibility for what is placed there. “What facts am I providing here?” he asks. “I’m not providing anything. I’m not checking anything. I’m allowing other people to check things.”
Will these crowdsourced factcheckers be paid? “That’s where our business model comes in,” Gulati replies. “You can imagine this sort of service, detecting misleading content and verifying misleading content and providing a truth score, being useful to all sorts of industries. Platforms like Facebook or Google who need to clean their content. Ad networks that need to contain inventory of articles. Hedge funds who want to verify what they’re trading.”
Won’t this create conflicts of interest, the way it has in journalism or credit rating? Gulati brushes away that concern, saying that if Factmata is going to work, then it can’t be funded by grants. “It needs proper backing and it needs a business model that’s defensible and that people support, and it needs scale.”
To succeed commercially, Factmata needs factcheckers, and that means it needs to grow. To succeed technologically, it needs data – that, too, requires growth. “We need to scale very fast,” says Gulati. He doesn’t add, “by any means necessary”: among startups, that’s taken for granted.
Silicon Valley has a simple attitude to politics: it’s a problem to be solved. When the denizens of Silicon Valley (and I am speaking here of the mindset, rather than the place) encounter political difficulties, they treat them in the manner of an engineer. In the case of truth, that means going looking for it, as if it were a substance, like gold, that could be identified, mined and held up for all to see. Factcheckers are, on the whole, a fairly sanguine bunch, but this is one idea that makes them angry.
“A lot of the ‘We know how to do machine learning’ brigade in this area don’t know what they’re talking about,” says Will Moy, director of independent factchecking charity Full Fact. “Automation is interesting, but it brings up really deep hard questions. We think a lot about them and I don’t think we have glib answers.”
Moy has been in the factchecking game long enough to be intimately acquainted with its difficulties. A political obsessive, who previously worked for the Parliamentary Advisory Council on Transport Safety, he set up Full Fact in 2010 to check claims made by politicians, interest groups and the media. Three referenda and one general election later, Full Fact is the UK’s leading – indeed only – factchecker, cited as an authority by everyone from David Cameron to David Dimbleby.
Moy has some concerns about Factmata. “I just feel a little bit like there’s a brand being spun up on ground which isn’t very clear,” he says. “The work it’s based on is genuinely very elegant. But how do you turn that into something actually useful? How do they bridge the gap between treating statistics as facts about the world, which they’re not, to treating them as bits of information that you have to understand the definition and methodology of in order to be able to use?”
Factmata draws on research by two machine learning academics, Andreas Vlachos and Sebastian Riedel, which Gulati expanded upon during a computer science master’s at University College London. In a 2015 paper, the pair developed a method of identifying and verifying simple statistical claims, such as a country’s population or inflation rate. Previous attempts at automated factchecking started with statements that had already been factchecked by journalists. Vlachos and Riedel, who are now scientific advisors for Factmata, did the opposite: they started with the list of numbers, then used a branch of machine learning called natural language processing to identify where those statistics were being used.
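At its simplest, that inversion — start from the numbers, then search the text — can be sketched like this. The knowledge base, figures and matching rules below are invented for illustration; the actual paper uses learned models rather than hand-written patterns:

```python
import re

# Toy knowledge base of the kind the method starts from
# (figures invented for illustration, not real statistics).
KNOWN_STATS = {
    ("uk", "population"): 65.6e6,
    ("uk", "inflation rate"): 0.027,
}

def verify_claim(sentence, entity, attribute, tolerance=0.05):
    """Find the first figure in the sentence and compare it with the
    stored value for (entity, attribute), allowing a relative tolerance.
    Returns True/False, or None if no figure or no stored value exists."""
    true_value = KNOWN_STATS.get((entity, attribute))
    match = re.search(r"(\d+(?:\.\d+)?)\s*(million|%)?", sentence)
    if true_value is None or match is None:
        return None
    claimed = float(match.group(1))
    if match.group(2) == "million":
        claimed *= 1e6
    elif match.group(2) == "%":
        claimed /= 100
    return abs(claimed - true_value) <= tolerance * true_value
```

So `verify_claim("The UK has around 65 million people.", "uk", "population")` passes the tolerance check, while a claim of 80 million fails it — but, as the next paragraph notes, this only works when a single database value settles the matter.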
But although the method worked with established facts, it found more complex claims harder. What single database can help decide a statement such as “New Labour oversaw a collapse in social mobility”? Or “immigrants help the economy”? At present, neither the data nor the technology is ready. Hence the need for a human element.
“They came up with quite a nice move,” says Moy. “But those aren’t particularly interesting claims. What happens when you start going into the kinds of claims that people actually want us to factcheck is that things tend to get more complicated.”
Moy is not without an interest. Under his direction, Full Fact is developing its own automated factchecker, and is in the process of securing $500,000 from eBay founder Pierre Omidyar’s philanthropic investment fund. But, unlike Factmata’s, Full Fact’s tool is not designed for public use. It is intended instead to help factcheckers and journalists perform their traditional role and scrutinise people in positions of power and influence.
Moy shows me the prototype. It’s set up to live-check each sentence in that most obscure political ritual: Prime Minister’s Questions. The demo uses a video from 14 February – and straightaway it finds a claim, as May tells Corbyn, “there are 1,800 more midwives in the English NHS since 2010.” Immediately, Full Fact’s response pops up: “That’s about right depending on where you measure from.” As a test of the technology, it’s meaningless, a neat hard-coded trick. But from a human point of view, it’s just what I’m looking for: a sophisticated answer, communicated in a sentence.
Full Fact recently started working with Facebook to provide guidelines to help people spot fake news online. But when I ask Moy if he’d like to see a Facebook factchecker, he reels back in horror: “Certainly not!” At best, people would just ignore it. At worst, the automated interruptions could provoke an outraged reaction against the very idea of factchecking.
Because here’s the thing: factchecking isn’t friendly. Nor should it be. Factchecking developed to hold powerful people to account: it’s confrontational, and it’s like that because it has to be. If we want to change people’s minds, we should really take a different approach. Because no-one likes being told they’re wrong.
This is Moy’s real worry about Factmata: that in its efforts to grow it will discredit factcheckers at large. I put this to Gulati. “What is Full Fact doing? They’re an organisation of, what, 10, 20 people? This is a factchecker that has the wisdom and knowledge of everyone on the web.”
The wisdom and knowledge of everyone on the web. If only that sounded more reassuring.
This article was originally published by WIRED UK