Fei-Fei Li | 2018 MAKERS Conference
Fei-Fei Li, Professor of Computer Science, Stanford University and Chief Scientist of AI/ML, Google Cloud, on the future of artificial intelligence
HOST: Ladies and gentlemen, Fei-Fei Li.
FEI-FEI LI: Good morning. Good morning, Makers. Good morning, everyone. Such an honor to be here. Let me start my talk. A few years ago, as you saw in the video, I started a summer outreach program at Stanford University to encourage high school girls from diverse backgrounds to participate and get involved in artificial intelligence.
I have one vivid memory of delivering the opening lecture. I was in a room full of ninth grade girls, most of whom had never even set foot on a university campus before. It was a complex lesson. I was really geeking out with them. And they were eager to learn. There was excitement in the air, but a little bit of nervousness too.
As we finished this really long technical discussion, I wanted to inspire them more. So I described how what we had learned, this computer vision technology, can help doctors and nurses better track their hand hygiene practices in the hospital, reducing hospital-acquired infections that kill almost 90,000 patients per year in the United States, several times more than car accidents. I'll never forget what I saw at that moment. Across the room, these young faces just lit up. I saw passion, amazement, and even some relief, as this incredibly technical field that they had just heard about suddenly took on a human form.
And this is the story that I want to share with you today, the deeply human side of artificial intelligence. In fact, I hope to convince you that there's nothing artificial about it at all, especially at this very moment. AI is about to transform our world in ways we can barely imagine.
I want to start with the story of a breakthrough moment in science. And it goes back to 1959. Researchers Hubel and Wiesel used electrodes to connect the visual cortex of an anesthetized cat to a loudspeaker, and then projected patterns of light for the cat to see. This allowed them to literally hear the cat's visual perception at work, and showed for the first time that the brain is organized by neurons stacked in a hierarchical fashion, with each layer responding to increasingly complex visual patterns. And this work won them the Nobel Prize a couple of decades later.
But more than 40 years after their work, I had an opportunity as a summer research intern at Berkeley to replicate this experiment in a neuroscience lab. Hearing the neurons responding to patterns of light in the darkness was a mesmerizing experience. No words can describe the sense of magic I felt at that moment, realizing that this rich and beautiful visual world we see all begins with such tiny neurons in our brain that get excited by simple patterns of light.
So I began to wonder, what if one day we could build computers that can see like us? It turned out I wasn't the only one asking this question. Computer vision was already a growing field with thousands of researchers worldwide by the time I started my PhD study in 2000 right here in LA, Pasadena, not very far. Progress was slow but steady. And the amazing technology we now enjoy is possible because thousands of researchers dedicated their careers to establishing the science.
But teaching computers to see is easier said than done. A modern camera easily registers millions of color pixels when taking a picture. But deriving meaning from all that data is an enormous challenge. It's no surprise it took Mother Nature 540 million years to get this solved right. A human can understand a staggering amount of detail about an image in only a split-second glance, and then describe it in language, which is also very unique to humans.
One of my first experiments as a PhD student quantified this. And this then became the Holy Grail of the field of computer vision: to teach computers to see and talk about what they see. Luckily for me, I arrived at a very unique time in history. The internet was exploding. And that gave researchers access to more data than ever before. The sheer variety and depth of images available online made me think about the constant visual stimulation that children experience as they grow up.
So I saw a parallel in that. What if we could use the internet to help our algorithms explore the world in a similar way? So as you saw in the video, around 2006, 2007, I began a project with my students and collaborators called ImageNet, intended to organize enough images from the internet to teach computer algorithms what everything in the world looks like. In the end, it added up to 15 million photos across 22,000 categories of objects. It was the largest AI dataset ever publicly released at that time.
But here is the tricky part. In order to actually teach an algorithm and benchmark its progress, every single image must be sorted and labeled correctly. We needed to sort, clean, and label from a pool of billions and billions of images. In the end, we had to rely on crowdsourcing by hiring over 50,000 online workers across 167 countries to do this. So yes, we did get a little crazy. But that's the fun of science.
The hard work did pay off. By combining ImageNet with a class of algorithms known as convolutional neural networks, more popularly known as deep learning, and modern computing hardware, like GPUs, AI was revolutionized and ushered into the modern era we know today. By 2015, just a few years after ImageNet was released, computers were recognizing objects better than humans in head-to-head contests. Algorithms built on ImageNet have advanced the state of the art in computer vision considerably, with error rates in image recognition steadily decreasing every year.
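[For readers curious about the mechanism behind these networks: the core operation is a small filter slid across an image, producing a strong response wherever the image matches the pattern the filter encodes. The toy sketch below, with made-up values not taken from the talk, applies a hand-written vertical-edge filter to a tiny grayscale "image"; real convolutional networks learn millions of such filters automatically from labeled data like ImageNet.]

```python
# Minimal sketch of the convolution at the heart of a convolutional
# neural network (CNN). All names and values here are illustrative.

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as deep
    learning libraries implement it) of a 2D list by a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Sum of elementwise products over the kernel window.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image with a vertical edge: dark on the left, bright on the right.
image = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]

# A hand-written vertical-edge detector; a trained CNN would learn
# filters like this (and far richer ones) from data.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = conv2d(image, kernel)
print(response)  # → [[0, 3], [0, 3]]: strong response next to the edge
```

Stacking many such filtered layers, each responding to increasingly complex patterns, mirrors the hierarchical organization Hubel and Wiesel observed in the visual cortex.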
And my students and I began to make major progress on image captioning, the very problem I could only have dreamed of during my PhD studies. And the photo descriptions you see now behind me were some of the first sentences ever generated by a machine upon seeing a picture for the first time. But we still have a long way to go.
Today's AI is great at pattern matching in narrow tasks, like object classification, facial recognition, and language translation. But there's so much more to human thoughts and intelligence than simple patterns. AI is now targeting loftier goals, like natural communication and collaboration with richer sense of context and even emotional perception. I call this human-centered AI.
And many of my colleagues are working on projects that exemplify it. Examples include applying machine learning to education, understanding satellite imagery to track poverty more precisely, or developing diving robots to explore the deep ocean where human divers cannot go or where it's too dangerous for them. And along with my students and collaborators at Stanford, we're working with senior care facilities on early studies of AI assistants for nurses and family members.
But just like any technology, AI is a tool in the hands of people. In fact, I believe there are no independent machine values. Machine values will come from human values. Without thoughtful guidance, many of the benefits of AI could cause unintended harm as well. This is a complex challenge. And I don't pretend to have all the answers. But I do know we have an obligation to build technology that benefits everyone, not just a privileged few. And the first step is understanding who is developing it. So how well is humanity represented in the development of AI today?
I'll be blunt here. Diversity is sorely lacking in the world of computing. And that includes AI. The National Science Foundation reported in 2016 that fewer than 30% of computer science majors are women. A similar 2016 study showed that fewer than 15% remain by the time they reach professorship. Similar numbers are found across most of Silicon Valley's tech companies. And the statistics for racial minority groups are even worse.
If this technology is going to change our lives, our society, and perhaps the entire future of humanity, and I actually believe it will, then this lack of representation is an absolute crisis. Outreach programs, like the one I started at Stanford, are a powerful first step. I co-founded it four years ago with my former PhD student, Olga Russakovsky, now an AI professor at Princeton, with the goal to inspire girls and underrepresented minority students, not just to pursue tech jobs, but also to recognize the human impact that AI has on the world.
The result is AI4ALL, a nonprofit organization focused on increasing diversity and inclusion in AI through education programs. We specifically target high school students from all walks of life, especially those from underprivileged communities. AI4ALL was launched in 2017, seed funded by Melinda Gates's Pivotal Ventures and the Jensen and Lori Huang Foundation.
From Stanford, AI4ALL is already partnering with Berkeley, Princeton, Carnegie Mellon University, and Canada's Simon Fraser University to bring our AI education to a diverse group of students. No technology is more reflective of its designers than AI. From the architecture of its algorithms to the applications, it's our responsibility to ensure that everyone can play a role from the beginning.
I've always summed it up like this, we know AI is going to change the world. The real question is, who is going to change AI? I hope many of you in the audience will consider yourself to be part of this answer. We need you. Thank you.