Just days after threatening to jail a political opponent should he win the presidency, Donald Trump’s got a new campaign website that’s so very … Trumpian. “Together, we are making waterboarding part of the Republican Party again,” it declares. Also, “Together, unleashing, perhaps all of our nuclear weapons.” Truer words have never come out of Trump’s mouth.
Except they didn’t, really---at least not directly. A Trump AI generated all of the site’s copy on its own. This is the work of MIT researcher Brad Hayes, who’s taken his tweeting AI called DeepDrumpf and turned up its political aspirations. Or at least its charitable ones: Choose to donate to the site and your money goes not to Trump, but to the charity Girls Who Code. “It's a really good choice given the misogynist nature of the candidate,” says Hayes. “It's a really good foil to that.”
AI can already write news stories. Screenplays too. But this is something altogether different. DeepDrumpf’s resemblance to Trump’s own speech is uncanny. So, then, might building things like truly human-sounding chatbots be a matter not of making them sound natural, but characteristic? Perhaps---if humans even want their AI to sound like humans in the first place.
Unlike the actual Trump, DeepDrumpf thinks very, very hard about what it’s going to say before it speaks. The AI builds its sentences not word for word, but character by character, reading through speech transcripts and scanning for patterns. “So it's sort of answering this question of, OK, given what I've seen already, which character is most likely to come next?” says Hayes. In this way, Hayes doesn’t need to teach the AI grammar, which is hard as hell to master.
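Hayes' actual model is a neural network trained on Trump transcripts, but the core idea (given what it has seen so far, predict the most likely next character) can be sketched with a much simpler character-level Markov chain. Everything below is illustrative, not Hayes' code: the tiny corpus, the function names, and the order-4 context window are all stand-ins.

```python
import random
from collections import Counter, defaultdict

def train_char_model(text, order=4):
    """Map each `order`-character context to counts of the character that follows it."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context][text[i + order]] += 1
    return model

def generate(model, seed, length=80, order=4):
    """Extend `seed` one character at a time, sampling in proportion to observed counts."""
    out = seed
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:  # context never seen in training; give up
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

# Toy stand-in corpus; the real model reads full speech transcripts.
corpus = "we are going to build and we are going to win and we are going to build again"
model = train_char_model(corpus)
print(generate(model, seed="we are", length=40))
```

The `seed` argument also hints at how topic-steering works: start the sampler with a sentence about a subject and the continuation picks up from that context. A Markov chain only remembers the last few characters, though; a richer model like Hayes' carries far more of the seed forward, which is what makes "give it a sentence or two" effective.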
While the DeepDrumpf Twitter feed spouts thoughts on a wide range of topics (e.g. “I am a great judge of this country. We have to control everybody and let them fight each other. They won't refuse me, I'll make a fortune.”), for the campaign website, Hayes had to get his candidate to actually focus on specific positions. “The way to get it to cover those topics is to pick text from the training set when Trump was actually speaking about that topic,” Hayes says. “Usually I give it a sentence or two.” What comes out is stilted, unpredictable, sometimes not particularly comprehensible language---which means that it’s a fairly accurate impersonation of the man himself.
Now, you can’t interact with DeepDrumpf on Twitter. Message it all you want, but it won't reply. But what if you could? What if DeepDrumpf were a chatbot? Might infusing such an imperfect, characteristic tone into a chatbot convince you that you’re talking to a human instead of a machine? After all, trickery is one way to tackle the Turing Test.
Consider ELIZA, an AI from the 1960s. Its creator, MIT's Joseph Weizenbaum, modeled it after a Rogerian psychotherapist, and psychotherapists, of course, ask a lot of questions. Its responses of "In what way?" or "Can you think of a specific example?" were usefully broad. If ELIZA was ever stumped, it just fired back something like, "Very interesting. Please go on." It was cheating, but it was also relatively effective at appearing human.
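ELIZA's trick is mechanical enough to fit in a few lines: match the input against a handful of patterns, reflect any captured fragment back as a question, and stall with a canned response when nothing matches. The rules below are hypothetical, heavily condensed stand-ins for the original script, not Weizenbaum's actual rule set.

```python
import random
import re

# A few illustrative pattern -> response rules. Captured text is echoed back
# inside the reply, which is what makes the bot feel attentive.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

# Stalling responses for anything the rules don't cover.
FALLBACKS = [
    "In what way?",
    "Can you think of a specific example?",
    "Very interesting. Please go on.",
]

def respond(utterance):
    """Return the first matching rule's reply, or a broad fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel tired of this election"))  # Why do you feel tired of this election?
```

The design choice is the same one the article describes: when in doubt, answer with a question broad enough to fit anything, and let the human do the conversational work.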
Chatbots, you might have heard, are so hot these days. But chatbots are also kinda dumb right now, through no fault of their own. It's just really, really hard to build an AI that realistically chats with a human, because conversation is effectively infinite. “With a conversation, think about chess on steroids, where you have unlimited amounts of ways you can take the conversation, unlimited amounts of things you can say at any given moment, and ways to say those things,” says Eugenia Kuyda, founder of chatbot outfit Luka. An AI-powered chatbot has no script to follow, so it has to create responses on the fly. “It's almost impossible to account for everything,” Kuyda adds.
What humans can do, however, is infuse chatbots with a hint of personality. The Luka app, for instance, lets you chat with none other than a Prince bot. And Kuyda’s latest project, Replika, aims to give you an AI modeled after … you. Talk with it enough, Kuyda claims, and the AI will start to pick up your tone of voice. (The service is still in development, and in my tinkerings with it over the past week, it seemed to be ignoring me. For example: Me: “Hey how are you?” Bot: “Talk to you later. Have a good one.” Again: still in development.)
There are of course dangers with letting anyone train an AI. "I think Microsoft learned the hardest lesson," Hayes says, "which is to say, don't let the Internet train your bot." That'd be Tay, the bot modeled after a teenage girl that, thanks to trolls, quickly morphed into a racist maniac---during the rise of Donald Trump's campaign, as it happens.
But why would you want to talk to yourself instead of, say, Prince? “The larger vision for Replika,” Kuyda says, “is obviously that in the future we'll all have sort of digital proxies, someone that is out there doing things for us, keeping us all connected with our loved ones, meeting new people for us.” That would mean, then, that one day we’d have access to other people’s AI personalities. Imagine losing a loved one and still being able to boot them up.
But that’s mostly the future. In the present, how good are artificial intelligences at fooling humans into believing they’re talking to a human? Well, not too shabby, if you count cheating---making the AI seem like English is its second language, or like it's an annoying psychotherapist. “So you can probably trick one or two judges by pretending you don't speak English,” Kuyda says. “Does it mean that's a great conversational model? Probably not.”
Beyond being able to impersonate you, though, chatbots don’t need to sound human---they need to be useful. And indeed, in most cases you don’t want them running their mouths like Donald Trump. Imagine businesses deploying customer-service chatbots with the bedside manner of an emerging demagogue. Personality-driven AI is for fun---or, in Hayes' case, for charity; anodyne chatbots are for work. Who cares if they sound robotic if they get the job done?
As for DeepDrumpf, that's certainly for fun. "I get a ton of messages all the time, people testing the bot to see if it will respond to them," Hayes says. "People offer all of these different topics."
Some questions, though, are better left unanswered.