The Future of AI: A Non-Alarmist Viewpoint
There has been a lot of discussion recently about the dangers posed by building truly intelligent machines. A lot of well-educated and smart people, including Bill Gates and Stephen Hawking, have stated they are fearful of the dangers sentient Artificial Intelligence (AI) poses to humanity. Elon Musk even donated 10 million dollars to a research fund to look into ways of ensuring AI benefits the world. While these are all extremely smart and knowledgeable individuals, I can’t help thinking they’ve watched too many Hollywood movies. From the original Terminator series to the Matrix trilogy, from I, Robot to Alex Garland’s excellent Ex Machina, Hollywood paints a bleak portrait of what advanced AI means for the future of humanity. I know one thing for certain: when we do finally invent true artificial intelligence, it will be the most important invention we have ever made. But will it be our last?

AI in the World

Maybe it’s better to see what the experts in the field have to say. Ray Kurzweil, who has built several very successful AI products, including optical character recognition (OCR) software and a text-to-speech reading machine for the blind, believes we will merge with machines rather than be superseded by them.

Andrew Ng, a luminary in the field of AI who led the Google Brain project and invented numerous machine-learning algorithms and techniques, has a refreshing attitude toward the dangers of AI: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.” What he means is that we are a long way from building true artificial intelligence, just as we are a long way from colonizing Mars, and that it is therefore pointless to worry about it right now. Bear in mind, this is from one of the world’s leading researchers in the field of deep learning, and it’s mainly the recent advances in this field that have led people such as Hawking and Musk to become so concerned.

AI has a history of people overestimating how close we are to building true intelligence. In the early days of the field (the 1950s and ’60s), we solved a number of “hard problems,” such as the development of mathematical theorem-provers and software that could beat humans at backgammon, and we thought we would solve intelligence within a decade or two. Indeed, computer vision, one of the most challenging of AI problems, was once considered a “summer project” at MIT. This over-optimism has led to several so-called “AI Winters,” periods in which research funding dried up because AI systems failed to deliver on their promise.

Those early researchers made the mistake of believing that problems that are hard for humans would also prove hard for machines. Doing complex mathematics, such as calculus, and playing chess are indeed considered challenging for people, yet they were among the first problems solved by machines; hence the initial optimism within the field. We found out later that the problems humans find easy, such as recognizing faces and understanding natural language, are the hardest to automate.

So are the likes of Musk and Gates getting ahead of themselves, as Ng contends? Well, maybe. We have made some huge advances in this area in the past five to 10 years, due mainly to progress in deep learning: building deep, many-layered neural networks that are very loosely based on how our brains learn (a toy sketch of such a network appears below). What’s more, these advances have involved the sort of problems that, until recently, machines performed very poorly on. For instance, Facebook built an algorithm that can identify whether two pictures show the same person almost as well as humans can. New algorithms exist for creating natural-language descriptions of images. And the recent advances aren’t limited to image processing, either: in a recent paper, researchers demonstrated that, on some verbal-reasoning IQ tests, machines could outperform some humans.

Perhaps now we have good reason to be optimistic about what we can achieve in the near future. Even so, most deep-learning researchers, Ng included, believe we are many decades from building true artificial intelligence. Most AI research has focused on building software that excels at specific tasks, because we have no idea how to build true Artificial General Intelligence (AGI): AI that can learn to solve any problem, just as a person can.
So it perhaps makes more sense to focus on the societal challenges that advances in AI will pose in the near future, rather than worrying about what will happen when we eventually solve the problem of building AGI. One of the most pressing is the trade-off between privacy and security. As machines become more intelligent, they can be put to tasks such as reading and understanding emails, or performing facial recognition on CCTV footage and online videos. The latter can be used to solve crimes and prevent terrorism, but it also presents huge challenges for our privacy: imagine a government applying advanced facial-recognition software to decades of archived CCTV footage. It could determine what any of us did 30 years ago.

AI and Your Job

Another challenge, one that Ng is particularly concerned with, is how advances in AI will affect the job market. In the past, when new technologies came along to replace human workers, the change was gradual; it generally happened over the course of a generation or two. That allowed time to adapt and to train the children of displaced workers for different jobs. But once the self-driving car becomes a reality, thousands of taxi drivers, truck drivers and delivery people will be out of a job practically overnight, as economic competition forces companies to switch to self-driving fleets as quickly as possible.

Previously, machines replaced jobs requiring physical labor, pushing people from farming and manufacturing into more knowledge-based professions. But once machines can do the knowledge-based jobs as well, where does that leave us? Ng’s point is that AI will bring many pressing challenges in the short term, which makes agonizing over machines taking over the world a little ridiculous; we should instead be thinking about the more immediate problems posed by the technology.

One issue I have with the “machines take over the world” scenario is that it assumes an AI would want to live in the real world. The potential exists to create worlds in software that are far richer, larger and more interesting than the real one. Advanced AIs may prefer to inhabit virtual reality instead of physical reality (as many humans already do today). This raises the possibility that we can co-exist with AIs without going extinct in the process. They could still, in principle, strip the earth bare to support these virtual worlds, but that does not have to be the case: machines could conceivably be taught empathy and compassion, rather than becoming the selfish, soulless killing machines portrayed by Hollywood.

Indeed, developing truly intelligent software will undoubtedly bring many benefits to mankind, if we survive long enough to see it. Superhuman-level intelligence could potentially cure every known disease, develop clean energy, end world poverty, and so on: any problem amenable to intelligence could be solved, given a smart-enough machine. Most AI researchers are motivated by the thought that the software they are developing can do a lot of good, and it is unfortunate that most people’s opinion of the technology is colored by the visions of the popular press and the movies. It’s true we should be cautious (we really can’t know what the future holds), but I feel there’s also a lot of room for optimism. Maybe we should pay more attention to the AI researchers, rather than the alarmists.