The first picture flashes on the screen. “A man is standing next to an elephant,” a robotic voice intones. Another picture appears. “A person sitting at a table with a cake.”
Those descriptions are obvious enough to a person. What makes them remarkable is that no human is supplying them at all. They come from cutting-edge artificial intelligence: a computer that can “see” pictures.
Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, is standing on a lit stage in a dark auditorium, showing off the advanced object-recognition system she and her fellow researchers built. But as impressive as the system is, Li grows more critical as her presentation unfolds. Even when the computer is technically accurate, she says, it could do more. It can describe in simple, literal terms what it “sees” in the pictures, but it can’t tell the stories behind them. The person sitting at the table, for instance, is actually a young boy: Li’s son, Leo. Li explains that he is wearing his favorite T-shirt. It’s Easter, and we non-computers can all see how happy he is.
“I think of Leo constantly and the future world he will live in,” Li tells the audience at TED in a video that’s been viewed more than 1.2 million times. In Li’s ideal future, where machines can see, they won’t just be built for maximum efficiency; they’ll be built for empathetic purposes. Artificial eyes, for instance, could help doctors diagnose and care for patients. Robot cars built with empathy could drive more safely and intelligently. (Imagine if the builders of self-driving cars used algorithms that didn’t account for the safety of pedestrians and passengers.) Robots, Li says, could brave disaster zones to save victims.
Li is one of the world’s foremost experts on computer vision. She was involved in building two seminal databases, Caltech 101 and ImageNet, that are still widely used by AI researchers to teach machines how to categorize different objects. Given her stature in the field, it’s hard to overstate the importance of her humanitarian take on artificial intelligence, especially now that AI is finally entering the mainstream.
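The article never shows what that categorization looks like under the hood, so here is a minimal sketch, assuming the PyTorch/torchvision library and a placeholder image file named photo.jpg (neither appears in the original, and this is not Li’s actual system): a network pretrained on ImageNet’s 1,000 object categories labels a single photo in about a dozen lines of Python.

    # A minimal sketch, not Li's system: classify one photo with a network
    # pretrained on ImageNet's 1,000 object categories (PyTorch/torchvision).
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet50_Weights.IMAGENET1K_V1    # ImageNet-trained weights
    model = models.resnet50(weights=weights).eval()    # inference mode
    preprocess = weights.transforms()                  # standard resize/crop/normalize

    image = Image.open("photo.jpg").convert("RGB")     # placeholder filename
    batch = preprocess(image).unsqueeze(0)             # add a batch dimension

    with torch.no_grad():                              # no gradients needed at inference
        scores = model(batch)

    best = scores.argmax(dim=1).item()                 # most likely of the 1,000 classes
    print(weights.meta["categories"][best])            # e.g. "African elephant"

Producing full sentences like “A man is standing next to an elephant” goes a step further, pairing a vision model like this one with a language model that generates the caption.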
In recent years, Internet giants like Google, Facebook, and Microsoft have doubled down on AI, using brain-like systems to automatically recognize faces in photos, instantly translate speech from one language to another, target ads, and more. And simpler forms of AI are now pervasive: Amazon uses a form of it to recommend products you might like on its retail site.
Yet as AI becomes ever more popular, it is also going through a crisis of sorts. Research from the Bureau of Labor Statistics projects that by 2020, the US economy will have 1 million more computer-science-related jobs than qualified graduates to fill them. At the same time, notable figures like Elon Musk, Stephen Hawking, and Bill Gates have publicly worried that artificial intelligence could evolve to a point where humanity can no longer control it. That doomsday strain of thinking may be a little exaggerated, according to Li. But it underscores how important it is to be mindful about how AI develops, starting now.
In a tech industry, and a research community, that remain largely white and male, the danger is a less-than-humane AI that doesn’t take everyone’s needs and perspectives into account. Even as more people join the conversation around diversity in tech, recent examples show what happens when products aren’t designed to serve the broadest possible population. In 2014, Apple introduced HealthKit, which the company presented as a comprehensive tracking system for human health, but it seemed to have forgotten about humans who have periods, at least until it corrected the oversight with a software update a year later. The Apple incident wasn’t a case of AI going awry because of diversity problems, but this July Google supplied one: the search giant apologized profusely when its new Photos app, which automatically tags pictures using the company’s own artificial intelligence software, identified an African-American couple as “gorillas.”
(“This is 100 percent not OK,” said Google executive Yonatan Zunger after the company was made aware of the error.)
“The diversity crisis is the same crisis we talk about as a society in asking, ‘Is technology soulless?’” Li says, speaking frankly about her disappointment in the AI community being less than welcoming to members of underrepresented minorities. Among 15 full-time faculty members in her department, she’s the only woman.
Elsewhere in the industry, the 44-person Facebook AI research team includes just five women. At Baidu, the 42-person AI team includes three female researchers. In her own lab, Li says, there are few students of color. These numbers aren’t just bad in themselves; they bode ill for the prospects of developing truly humane AI.
“I think the combination of being a professor and becoming a mother got me thinking really deeply about these issues,” says Li, who was born in China and immigrated to the US when she was 16. “You feel so much more responsible for the future generations.” Every other week, Li hosts a Friday afternoon wine-and-cheese session at her office for women in AI. Recently, she also greenlit and helped carry out a one-of-a-kind project: the Stanford Artificial Intelligence Laboratory’s Outreach Summer program (SAILORS), the country’s first AI summer camp for ninth-grade girls.
“This is a field that’s producing technology that is so relevant to every aspect of human lives,” Li says. As such, it’s vital that the people doing the work have the perspective to make such a crucial technology serve every human life. “To bring diversity into a highly innovative and impactful field fundamentally has good value.”