AI think, therefore AI am
What exactly do people mean when they talk about AI in 2018? Where do I start if I want to embrace AI in my business? Get your questions answered in our Think:Act magazine on artificial intelligence.
by Hal Hodson
illustrations by Sarah Illenberger
with contributions from Neelima Mahajan
Facebook's New York offices are stacked at the top of a pretty building at 770 Broadway. Inside, rows of monitors stretch into the distance and cast-iron staircases wind up and down between the floors. Dotted with coffee shops and comfy sofas, the space has the feel of a high-end department store. But instead of shop assistants, it's packed with computer scientists. Welcome to FAIR (Facebook Artificial Intelligence Research) lab, where the social network's most advanced artificial intelligence software is built.
Facebook first demonstrated this power, referred to interchangeably as "machine learning" or "artificial intelligence," in early 2016. The company's team working to bring the internet to people in developing countries had a problem: It didn't know where these unconnected people were. Maps are often out of date, failing to reflect population growth and where internet coverage is needed, and new maps are prohibitively expensive to produce with human labor. So Facebook tried something different. It used AI to draw maps of 20 different countries automatically from satellite imagery. After Facebook's machine learning systems were shown what settlements look like from space, they could proceed on their own, churning through masses of imagery to build maps faster than any human or team of humans ever could.
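Facebook has not published the details of its mapping pipeline, but the general pattern the story describes (label some image tiles, train a classifier, then run it over masses of unlabeled imagery) can be sketched. The toy version below uses synthetic data; the tile size, the model choice and the make_tile helper are all hypothetical stand-ins, not Facebook's method.

```python
# Minimal train-then-scan sketch of automated settlement mapping.
# Everything here is a synthetic stand-in for real satellite data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
TILE = 16  # hypothetical 16x16-pixel satellite tile

def make_tile(settled: bool) -> np.ndarray:
    """Synthetic stand-in for a satellite tile; settlements get a bright patch."""
    tile = rng.normal(0.2, 0.05, (TILE, TILE))
    if settled:
        r, c = rng.integers(0, TILE - 4, size=2)
        tile[r:r + 4, c:c + 4] += 0.6  # a built-up area shows up brighter
    return tile.ravel()

# Step 1: show the system what settlements look like from space.
labels = rng.integers(0, 2, size=500)
tiles = np.stack([make_tile(bool(y)) for y in labels])
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(tiles, labels)

# Step 2: let it proceed on its own, churning through masses of imagery.
unlabeled = np.stack([make_tile(bool(b)) for b in rng.integers(0, 2, size=10000)])
flags = model.predict(unlabeled)
print(f"{flags.sum()} of {len(unlabeled)} tiles flagged as settled")
```

The point of the pattern is leverage: the expensive human work happens once, on the small labeled set, and the trained model then scales to any volume of imagery.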
Such feats are a testament to modern machine learning. But for all its power, Facebook's world-mapping AI has a severe weakness: it does not understand what it has done. If asked to explain itself, it wouldn't even understand the question. Unlike a human, Facebook's superhuman mapping system is completely incapable of even ordering a cup of coffee, let alone speaking French or shooting hoops. This is typical of modern artificial intelligence, which is built around a technique that apes the human visual system: deep learning. Data can be used to train a machine to solve one very narrow task to extremely high levels, but that trained mind is then completely inflexible, unable to turn itself to anything else.
The salves for this inflexibility, it turns out, are a four-minute walk from Facebook's offices, at the NYU Center for Child Language. Gary Marcus, a psychology professor at the university, thinks that the blueprints for more flexible artificial intelligence can be found in minds with which humans are already very familiar: those of children. "The amazing thing is that kids can learn a language in a few years with a modest amount of data," says Marcus. "Kids are doing this in a way that seems almost impossible from the perspective of modern machine learning." Marcus has long advocated for more nuanced alternatives to deep learning, and now other luminaries in AI are starting to see things his way.
Geoff Hinton, the founding father of deep learning, sent a shock through the AI community when he told attendees at a Toronto AI conference in September that he has become suspicious of the deep learning methods on which his career is built. "My view is throw it all away and start again," Axios editor Shannon Vavra reported Hinton as saying. "The future depends on some graduate student who is deeply suspicious of everything I have said." For Marcus, this admission is past due. "What I see over and over is progress over narrow domains of AI. The way it works right now is that people get a large amount of data for a very specific problem and they get a solution for that problem as long as the problem doesn't change."
Deep learning has already delivered astounding results, and the popular press and the tech world are understandably enamored. That's reasonable: intelligence is what has allowed humans to become the dominant species on this planet, and capturing any small slice of it in software is naturally exciting and useful. Marcus' point is that deep learning is just one small way in which machines have the potential to be intelligent. Given the power and potential of intelligent software, the world should be pressing ahead and looking for new kinds, not just busily installing deep learning algorithms. "I would not talk about what comes after deep learning, but about how deep learning needs to be extended to help us build human-level AI," says Yoshua Bengio, one of the founding fathers of deep learning and a computer scientist at Université de Montréal.
Figuring out new kinds of algorithms is hard. It took decades for Hinton and a small band of devotees to take deep learning from fringe science to the heart of the tech industry. Algorithms modeled on other kinds of thinking will take time too. "Kids just observe the world and they put together themselves how language works," says Marcus. The problem is that we know very little about the human brain. Danko Nikolic, an AI expert, says that we don't yet understand how humans learn. "I think we need a breakthrough in neuroscience and psychology to create better AI," he says. "We are a bit like scientists studying physical phenomena before Newton and the scientific theories of the 18th, 19th and 20th century," says Bengio. "We are lacking strong compact and global theories, but we are drowning in a sea of observations."
There are hints, however. Marcus is currently developing a second AI startup, as yet unnamed, to pursue new algorithms. His first, Geometric Intelligence, was acquired by Uber in 2016 to kick-start Uber's AI efforts. "We are still in formation, so we haven't launched yet. I don't want to say too much. But we're definitely interested in this problem of bringing together new technologies to build hybrid architectures that are better equipped to relate knowledge and perception. Instead of just focusing on how brains process information, it may be fruitful to look at how human beings end up being born with the equipment, mental and physical, to make sense of the world. Children seem to be born knowing that objects persist in space and time," Marcus says. To create machines that come equipped with useful abilities off the bat, rather than having to learn them from piles of data, some are turning to another powerful idea from biology: evolution.
Marcus' co-founder at Geometric Intelligence, Kenneth Stanley, imagines that the artificial minds of the future will be bred rather than trained. The approach works by generating many hundreds of deep learning systems, each with random characteristics. They are then set to work at a task, such as learning to walk a robotic body without falling over. At first, almost all of these artificial minds perform very badly. But a couple excel. From these, a whole new set of descendant minds is generated, and the process begins anew. The eventual goal, after many repetitions, is to evolve deep learning systems that are suited to particular tasks from the get-go, just as children have evolved to learn language very early in their development.
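Stanley has not published a reference implementation of this breeding process here, but the loop the paragraph describes (random population, scoring, keeping the best, generating mutated descendants) is the classic evolutionary-algorithm skeleton. The sketch below is a toy version; the fitness function is a made-up stand-in for a real task such as walking a robot body.

```python
# Toy evolutionary loop: breed "minds" rather than train them.
# The fitness function is an illustrative stand-in, not a real task.
import numpy as np

rng = np.random.default_rng(42)
POP, ELITE, GENERATIONS, GENES = 200, 10, 30, 8

def fitness(genome: np.ndarray) -> float:
    # Hypothetical score; in practice this would run a controller
    # parameterized by `genome` and measure, say, distance walked.
    target = np.linspace(-1.0, 1.0, GENES)
    return -np.sum((genome - target) ** 2)

population = rng.normal(0.0, 1.0, (POP, GENES))  # random characteristics
for gen in range(GENERATIONS):
    scores = np.array([fitness(g) for g in population])
    elites = population[np.argsort(scores)[-ELITE:]]    # the couple that excel
    parents = elites[rng.integers(0, ELITE, size=POP)]  # descendant minds...
    population = parents + rng.normal(0.0, 0.1, (POP, GENES))  # ...with mutations
    print(f"generation {gen}: best score {scores.max():.3f}")
```

In a real neuroevolution system the genome would encode the weights or architecture of a deep network and fitness would come from simulation, but the select-and-mutate loop itself looks the same.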
In July, researchers from DeepMind, Google's AI research arm based in London, presented a system called PathNet at a conference on evolutionary AI held in Berlin. PathNet trains normal deep learning systems to play video games, then forces components of those trained systems to compete against each other to solve other tasks. This ability to transfer learning is key to creating artificial intelligence that can switch between tasks as easily as humans do, needing less data to learn new abilities.
Jean-Michel Cambot of Tellmeplus, a French AI firm, has a similar system. He uses older, more straightforward mathematical AI systems to test multiple machine learning systems against each other to see which performs best, an approach he calls "meta active machine learning." He says a large aircraft maker is already using the system to search its production lines for flaws faster than human inspectors could, saving money on dud parts. Nikolic suggests future AI systems will be able to learn and adapt on the fly, based on new data they encounter in the world, just as humans do. "After the training, the networks today stay fixed: You have one and the same network running in production," he explains. "In the future, the network architecture will be changing fluidly several times per second, depending on the inputs that just passed by."
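Cambot's "meta active machine learning" is proprietary and its details aren't public; a rough, generic analogue of testing multiple learning systems against each other is automated model selection, sketched below with scikit-learn cross-validation on synthetic stand-in data (the candidate models and the defect-detection framing are illustrative assumptions, not Tellmeplus' method).

```python
# Generic sketch of pitting candidate learners against each other
# and keeping the best; data and models are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for, say, sensor readings labeled good part / dud part.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Test the systems against each other; keep whichever performs best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"selected model: {best}")
```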
Work has begun on getting deep learning systems to explore the world as children do, poking and pushing at objects and learning from what happens. Research published at the International Conference on Learning Representations in April 2017 showed the first design of a system that used deep learning to make guesses about the mass and number of objects in a simulated environment. Through trial and error, the system learned to estimate an object's mass, or the number of objects in a scene.
A growing swathe of research like this aims to create artificial intelligence built to human needs, AI that understands more of what we might ask it to do. Marcus likens deep learning's current foreignness to a chimpanzee's mind, which evolved for life in the forest rather than life in human society. "You can raise a chimpanzee in a human environment but it will never learn language," says Marcus. "I think that's because the learning mechanisms, the machinery for learning is different in a chimpanzee from others." Marcus expects the current focus on deep learning to wane when it fails to deliver a safe driverless car or capable home robots, and people realize that existing AI techniques are just not enough. "At some point people are going to want something like domestic robots," he says. Current AI doesn't have a hope of providing that. Machines with more childlike minds might.