CES is always an exercise in separating the substance from the stunts, and this year the big story was Neon, a buzzy project from Samsung’s Star Labs, which began teasing its “artificial human” ahead of the Las Vegas expo.
While manufacturing your own hype is dangerous, it takes a special kind of courage to make those kinds of promises and then debut your tech in a demo for a live audience. Before a mass of journalists and showgoers, CEO Pranav Mistry stood in front of a lineup of digital avatars to introduce Neon, an assistant designed to look, talk and act like a human. The problem? Having watched the concept video about Neon, which leaked before the show, we were expecting something slightly more profound, but in reality the Neon humanoid chatbots were a damp squib.
During the demo Mistry chatted to one of the female avatars on a screen, asking her questions and, using an app, cycling through various facial expressions. The avatars looked mostly realistic and successfully answered some questions, but the demos were heavily controlled and, even so, quite wonky. Questions could only be asked by a Neon team member who was plugged into a mic, and delivery was at best rigid; at worst, it didn’t work at all.
The whole demo was ‘live’ in the sense that it was not a video; the Neon reps’ questions prompted unique, computer-generated responses, though in this case those responses had been rendered in advance. But these weren’t the indistinguishable-from-human AI that Neon had been teasing with actors’ performances. Neon burned itself with hype, and while it’s possible that it simply left the lab too early, there’s little evidence so far to suggest the spin-off startup can deliver on its vision. Indeed, if the Samsung name were not attached, this might have been written off on day one of demos.
The long-term goal is to make avatars that act and respond like real humans. Unlike voice assistants such as Alexa, Neons aren’t designed to be bottomless founts of knowledge, but will instead be deployed for specific tasks – think concierges, tourism guides or air crew giving safety instructions – as well as acting as digital companions. Want to learn guitar? Perhaps your avatar could teach you.
One thing Neon does have going for it is Mistry himself, CEO of Star Labs (Samsung Technology and Advanced Research Labs) since last October, who heads up the project. He also had a hand in creating the Ballie robot and Samsung’s Bot Chef, both on show at CES 2020, and before that he was VP of research, having developed the first Galaxy Gear. Mistry says the idea for Neon has been kicking around for two years, but Neon only formed six months ago. “This is a preview of the technology, it’s not even beta. It’s not anywhere near to being ready to launch,” he says. In a tweet during CES, Mistry also referred to it as a “four-month young venture”.
The avatars we saw in our demo were based on the bodies and faces of real people. Neon takes that person’s likeness and then layers its own special AI sauce on top in order to generate “millions” of possible animations, gestures and vocal responses.
There are two main components. The first is Core R3, which stands for ‘Reality Realtime Responsive’ – this is the rendering engine for the movements and expressions that Neon gives these avatars. All of its current avatars are based on real people, but Neon only takes a base likeness – and the Core R3 tech does all the rest. “The physical features are taken from the likeness of the person but the behaviours are not necessarily theirs,” said Angie Chian, a senior technical writer on Neon. “We can give them new gestures. So the way the lady laughs is not necessarily the way she does in real life. I don’t actually know how she laughs.”
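To make that split a little more concrete, here is a minimal, purely hypothetical sketch in Python of the idea Chian describes: the appearance is captured from a real person, while the behaviours are generated on top of it. Every class and name below is our own illustration; none of it comes from Neon.

```python
# A minimal conceptual sketch of the split Neon describes: appearance is
# captured from a real person, while behaviour is generated rather than
# copied. All names here are hypothetical, not Neon's API.
from dataclasses import dataclass
import random


@dataclass
class Appearance:
    """Physical features captured from a real person's likeness."""
    face_mesh: str      # placeholder for captured geometry/texture data
    voice_timbre: str   # placeholder for a captured voice profile


class BehaviourModel:
    """Generates gestures and expressions that are not copied from the person."""
    GESTURES = ["nod", "smile", "laugh", "tilt_head", "raise_brow"]

    def sample_gesture(self) -> str:
        # In the real system this would be driven by a learned model; here a
        # random choice stands in for "millions" of possible behaviours.
        return random.choice(self.GESTURES)


@dataclass
class Avatar:
    appearance: Appearance     # taken from the likeness of a real person
    behaviour: BehaviourModel  # layered on top, independent of that person

    def react(self) -> str:
        return f"{self.appearance.face_mesh} performs: {self.behaviour.sample_gesture()}"


# Example: the avatar keeps the person's look but not their mannerisms.
neon_like = Avatar(Appearance("captured_face", "captured_voice"), BehaviourModel())
print(neon_like.react())
```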
The endgame of Neon, however, is to be able to build avatars completely from scratch, no flesh required – something Mistry says the company should eventually be able to do.
Third parties will then be able to plug their own intelligence into Neon in order to provide the “brains”. For example, a hotel could build a digital concierge by subscribing to the program, then taking Neon’s tech and feeding it information specific to its own business. Mistry is even convinced we’ll one day see Neons used as news anchors for breaking stories. However, he insists, “the point of Neon is not about replacing humans”.
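How that plug-in model might work is easy to imagine, even if Neon hasn’t shared technical detail. The sketch below is a guess at the shape of it: a hotel supplies its own “brain”, and the avatar layer simply delivers whatever that brain returns. All of the names and structures here are hypothetical.

```python
# A hypothetical sketch of the "plug in your own brains" idea: a business
# supplies the domain intelligence, and the avatar layer only handles
# delivery. None of these names come from Neon.
from typing import Protocol


class Brain(Protocol):
    def respond(self, question: str) -> str:
        """Return a domain-specific answer to a visitor's question."""
        ...


class HotelConciergeBrain:
    """Third-party intelligence fed with information specific to one business."""

    def __init__(self, facts: dict[str, str]):
        self.facts = facts

    def respond(self, question: str) -> str:
        for keyword, answer in self.facts.items():
            if keyword in question.lower():
                return answer
        return "Let me check with the front desk for you."


def avatar_reply(brain: Brain, question: str) -> str:
    # The avatar layer (rendering, voice, gestures) would wrap whatever the
    # plugged-in brain returns; here it is just a string pass-through.
    return brain.respond(question)


concierge = HotelConciergeBrain({
    "breakfast": "Breakfast is served from 7am to 10am on the ground floor.",
    "checkout": "Checkout is at 11am, but late checkout can be arranged.",
})
print(avatar_reply(concierge, "What time is breakfast?"))
```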
Then there’s the second component that we haven’t seen yet: Spectra, a software engine that will act as a memory for the avatars, so they can learn about you from your interactions and become smarter over time. (Interactions are private, encrypted and linked to one specific avatar, the company says.) This is precisely where Neon needs to make good on the promised future that we didn’t see at CES, but it’s also exactly where Samsung – and indeed almost every company outside Google, Amazon and Apple – has previously stumbled.
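Based purely on that description, a Spectra-style memory might look something like the sketch below: each avatar keeps its own encrypted log of interactions that it can later draw on. The class names and the use of the third-party cryptography package are our assumptions for illustration, not Neon’s implementation.

```python
# A rough sketch of a per-avatar, encrypted interaction memory, based only on
# the article's description (private, encrypted, tied to one avatar).
# Names and the choice of the `cryptography` package are assumptions.
from cryptography.fernet import Fernet


class AvatarMemory:
    """Encrypted interaction log bound to exactly one avatar."""

    def __init__(self, avatar_id: str):
        self.avatar_id = avatar_id
        self._key = Fernet.generate_key()   # per-avatar key, never shared
        self._cipher = Fernet(self._key)
        self._log: list[bytes] = []

    def remember(self, interaction: str) -> None:
        # Store only ciphertext so the raw interaction never sits in plain text.
        self._log.append(self._cipher.encrypt(interaction.encode()))

    def recall(self) -> list[str]:
        # Only this avatar's memory object can decrypt its own history.
        return [self._cipher.decrypt(token).decode() for token in self._log]


memory = AvatarMemory(avatar_id="concierge-01")
memory.remember("User said they are tired and dislike horror films.")
print(memory.recall())
```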
Neon noted that there is no Samsung technology being used here, so this is not, for instance, building on Samsung’s Bixby voice assistant – and Mistry says Samsung doesn’t have input into Neon’s roadmap, at least right now. But it’s difficult not to think of Samsung’s less successful ventures into AI. Bixby will be joined this year on Samsung TVs by Alexa and Google Assistant, both more capable and more popular than Bixby, in what could be viewed as an acknowledgement of its failure. See also the seemingly indefinite delay of Samsung’s first smart speaker.
Then there’s Viv, the AI and assistant technology that Samsung acquired in 2016 and eventually rolled into Bixby 2.0. Viv was a showstopper when it was first shown off ahead of the acquisition – a much more successful demo than Neon’s – but it’s fair to say its potential feels squandered. Samsung has never been afraid to throw money at ideas, and then throw those ideas at a wall to see what sticks – or in the case of the Galaxy Fold, what doesn’t break. But with the company’s profits taking a beating and its smartphone strategy increasingly messy, you can’t help but wonder if the company shouldn’t be trying to fix what it already has, rather than chasing a project like Neon.
“Spectra will give you that experience that you wish you had in your phone,” says Mistry, who is bullish on this front without offering specifics. “Spectra will give avatars that capability of: I know you, I know about your preferences, I know you’re tired, and I know you don’t like this particular movie so I won’t talk about it.”
There are a lot of theoreticals at this stage. Neon plans to do beta testing with select partners by the end of the year, but doesn’t see any sort of full launch for some time yet. Indeed, the CES demo of the Core R3 rendering was processed locally on a powerful PC, although when it launches, this is likely to move to cloud processing.
Neon’s biggest mistake was showing off this technology so early, particularly without Spectra. For the time being these are just elaborate chatbots veering into the uncanny valley; give them a brain and Neon may be something more special, and useful. As Amazon, in particular, works to humanise Alexa’s voice responses, Neon seems to believe that being able to see an interactive avatar can add something meaningful.
Neon is also late to the party, even in the narrow field of digital avatars for customer service: Soul Machines, co-founded by a former Weta Digital exec, was also at CES. The startup, which recently raised $40 million (£30m), has already seen its avatars deployed in customer service by the likes of P&G, Royal Bank of Scotland and Google, with some success. Soul Machines’ system also has a leg up as it uses IBM’s Watson for some speech work.
There’s a chance that Neon never makes it beyond skunkworks, or that it gets sold off to someone else; it’s too early to tell what might happen. But watching these avatars do their best human impression also made us wonder: if this is a success, will people be able to move past the creepiness? “If you look at the gaming industry, for the last 15 years it’s tried to make characters look more real. If you go to the film industry, the same thing happens,” said Mistry. “People want that.”
If the buzz proves anything it’s that perhaps we are more ready for Neon than we might think, now that Alexa and co are already part of our lives. “This might come across as a bit controversial given all the negative sentiment following the unveiling of Neon,” says Jeb Su, principal analyst at Atherton Research, “but we believe it is simply a more realistic-looking evolution of smart digital assistants, similar to what has happened in robotics, video games or sci-fi movies.”
Mistry threw out several examples of where Neon avatars could be employed, and if any of this ends up working close to how it’s advertised, there are numerous applications here. At one point in the demo, an avatar started flexing its linguistic skills by spitting out some phrases in Korean and Chinese, and we wondered what it might be like to learn a new language with an expressive human avatar – or what something like this could do for assistive living.
It’s the nature of the beast that talk around Neon has fallen into either total wonder or complete disappointment. Keen to distance itself from related Samsung tech, Neon appears to be essentially starting from scratch, and its CEO has been honest about the fact that a launch is a long way off. That said, the progress it’s made in six months is considerable.
So don’t believe the haters: Neon’s avatars certainly look the part, and the potential is there. At the same time, don’t believe the hype: the real test is yet to come, and it remains to be seen if Neon can bring its two ambitious technologies – the avatars and the AI – together.
This article was originally published by WIRED UK