I gave the following speech at a dinner at Middle Temple, in the Queen’s Room, on 27 May 2016, to which I had been invited by the Great British Business Alliance.
Where will artificial intelligence lead us in the coming years? Will we all be served by robots?
Will my vacuum-cleaning robot, Berta, use my credit card to pay for its holidays?
These are questions I hear often lately.
Well, maybe not the last one.
Professor Stephen Hawking warns us that artificial intelligence could end mankind.
“The development of full artificial intelligence could spell the end of the human race,” he told the BBC in December 2014. “[AI] would take off on its own, and re-design itself at an ever increasing rate,” he said. “Humans, who are limited by slow biological evolution, could not compete and would be superseded.”
A gloomy outlook.
The industrial revolution brought new manufacturing processes: a change from hand production to machines, and new chemical manufacturing and iron production processes based on water power and then steam power. The societal change was profound. Almost every aspect of daily life was affected in some way. Physical labour was, in many ways, replaced.
Society, though, had several generations in which to adapt to these changes.
Moving from a production economy to a service economy in the 20th century was a painful process. Unemployment was high at times. The workforce needed was, and is, at times not the workforce available. Some part of the workforce can be retrained, many workers need to change career paths altogether, and many go into long-term unemployment.
But still, the societal change had time to take place over several generations.
We are still adapting: Tata Steel selling its UK plants in England and Wales puts 4,000 jobs and the local economy at risk. The UK Government Digital Strategy aims to redesign the government’s digital services to bring better services to the public and to save about £1.8 billion each year. This will not free civil servants from repetitive tasks to work on harder problems and improve their services. It will free many of them from their jobs.
A strategic decision.
A responsible one?
The societal change from the information society to the knowledge society now brings a new threat. Mental work, knowledge work, and the kind of services that civil servants provide are coming under threat. And there is no place for them to go, is there?
According to Russell and Norvig’s standard textbook, “Artificial Intelligence is the intelligence exhibited by machines. In computer science, an ideal ‘intelligent’ machine is a flexible rational agent that perceives its environment and takes actions that maximise its chance of success at an arbitrary goal.”
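To make that textbook definition a little more concrete, here is a minimal sketch in Python of the perceive-then-act loop it describes. Everything in it (the `ThermostatAgent` class, its goal, the simulated readings) is an illustrative assumption of mine, not code from Russell and Norvig.

```python
# A minimal sketch of the 'rational agent' idea: an agent that perceives its
# environment (a temperature reading) and picks the action it expects to
# bring it closest to its goal (a target temperature).
# All names and numbers here are illustrative assumptions.

class ThermostatAgent:
    """Toy rational agent whose goal is to keep a room near a target temperature."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        # Choose the action that best furthers the goal, given the percept.
        if percept < self.target - 0.5:
            return "heat"
        if percept > self.target + 0.5:
            return "cool"
        return "idle"


if __name__ == "__main__":
    agent = ThermostatAgent(target=21.0)
    for reading in [18.2, 20.8, 23.5]:  # simulated percepts from the environment
        print(f"{reading} degrees -> {agent.act(reading)}")
```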
You could apply the term ‘artificial intelligence’ whenever cutting-edge techniques are used by a machine to competently perform or mimic ‘cognitive’ functions that we intuitively associate with human minds, such as learning and problem solving. If the machine performs tasks associated with a certain mystique, it is perceived as magic. Optical character recognition and route planning are no longer perceived as AI; they are just everyday technologies today. Chess-playing programs have lost their mystique. And Go is on its way to being demystified, now that the AlphaGo system has beaten professional players. Robots such as self-driving cars are probably closest to what we associate with artificial intelligence.
The field of artificial intelligence was founded at a conference at Dartmouth College in 1956. The founders of the field, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, were quite optimistic about the success of AI. To quote Herbert Simon, “machines will be capable, within twenty years, of doing any work a man can do.”
Well … Maybe Herbert Simon did not mean ‘Earth years’?
In any case: How do you measure success in AI? When is a machine capable of doing any work a man — sorry — a human can do?
This question had been addressed before the legendary Dartmouth conference by Alan Turing, the mathematical genius who not only broke the Enigma code during WWII but also formalised the concepts of algorithm and computation with the Turing machine, a model of a general-purpose computer. Alan Turing is widely considered to be the father of artificial intelligence.
In his journal article ‘Computing Machinery and Intelligence’, Alan Turing discussed the question of whether a computer could think. He had no doubt. And he put forward the idea of an ‘imitation game’: if a human and a machine were interrogated in such a way that the interrogator did not know which was the human and which was the machine, and the interrogator could not distinguish between them by questioning, then it would be unreasonable not to call the computer intelligent.
Competitions exist around the world where you chat with a communication partner via text messages and have to find out whether you are talking to a human or a machine. (By the way, we use CAPTCHA checks every day to prove that we are human on the Web, while teaching ‘the machine’ which characters we recognise in a picture of a text snippet.)
In his paper, Turing makes a claim similar to Stephen Hawking’s: “machine intelligence could take off once it reached a ‘critical mass’”.
Are we there yet? Not by a long shot.
Let’s go back 30 years.
I finished school in 1986 and went into the army for 15 months before I started studying computer science in 1987, with a focus on artificial intelligence. The German Research Centre for Artificial Intelligence (DFKI) was founded at about the same time. DFKI is today the biggest AI research centre in the world. The institute had barely been set up before the second ‘AI winter’ hit academia hard, with reduced funding and, generally, a reduced interest in AI. Not surprisingly so: expectations were inflated and badly managed.
Two ‘AI winters’, named in analogy to nuclear winter, are widely recognised. The first period of reduced funding lasted from 1974 to 1980. UK research was hit especially hard. Professor Sir James Lighthill reported to the UK Parliament the utter failure of AI to achieve its ‘grandiose objectives’. His conclusion: nothing being done in AI could not be done in other sciences. As a result, AI research in England was all but completely dismantled.
Similar funding cuts happened throughout the continent and in the United States.
The second AI winter lasted from 1987 to 1993. Again, AI research could not deliver what industry, governments, and the public expected.
So, what was the world like thirty years ago, back in 1986?
The past: 1986
The Macintosh had stepped onto the world stage just two years before, with a graphical user interface and a mouse as input devices. The first ‘personal computer’, as Steve Jobs put it. MacPaint and MacWrite were the first applications bundled with the Macintosh. Microsoft Windows was just one year old, and Windows 2.0, the first usable version, was still a year in the future.
AI at that time?
Object-oriented programming in the form of a language called Smalltalk-80 was a big thing. Smalltalk was developed at Xerox PARC, the Palo Alto Research Centre, the same organisation that developed the graphical user interface paradigm and the mouse, and apparently did not know what to do with them commercially until Steve Jobs came for a visit. Today’s programming languages C++, Java, C#, and Swift, for example, are all based on those same principles, but they are still not as powerful as Smalltalk from an AI perspective. Other important AI languages at that time were LISP, Scheme, and PROLOG.
The problems we addressed at that time were knowledge representation and planning problems.
Let’s look at two illustrating examples:
We describe the state of a world made of blocks and a table, the Blocks World. Let’s say, we have three blocks. Block 1 sits on the table. Block 2 sits on the table, and Block 3 sits on top of Block 2. The computer is then tasked to build a stack where Block 2 is on top of Block 1 and Block 3 sits on Block 2.
Three-year-olds have no problem finding the following sequence of actions: remove Block 3 from Block 2; put Block 2 on Block 1; put Block 3 on Block 2.
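For the curious, here is a minimal sketch in Python of how such a Blocks World state can be written down and how those three actions transform it. The dictionary representation and the `move` helper are my own illustrative assumptions, not the formalism of any particular planner of that era.

```python
# A minimal Blocks World sketch: the state records what each block rests on,
# and each 'move' produces a new state. The three steps below are exactly the
# plan a three-year-old finds immediately.

def move(state, block, destination):
    """Return a new state with `block` now resting on `destination`."""
    new_state = dict(state)
    new_state[block] = destination
    return new_state

start = {"Block1": "table", "Block2": "table", "Block3": "Block2"}
goal  = {"Block1": "table", "Block2": "Block1", "Block3": "Block2"}

state = move(start, "Block3", "table")    # remove Block 3 from Block 2
state = move(state, "Block2", "Block1")   # put Block 2 on Block 1
state = move(state, "Block3", "Block2")   # put Block 3 on Block 2

print(state == goal)  # True -- the goal configuration has been reached
```

A planner of the time, of course, had to search for that sequence itself rather than being handed it.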
Another example, one that gives you an idea of how difficult it was to deal with real-world problems, is the mathematical puzzle Towers of Hanoi.
There are three rods and a number of disks of different sizes that can slide onto any rod. We begin with the disks in a neat stack on one rod: the biggest disk at the bottom, the smallest disk at the top.
The objective is to move the entire stack to another rod, following a few simple rules:
- Only one disk can be moved at a time.
- Each move consists of taking the topmost disk from one of the stacks and placing it on top of another stack, meaning: a disk can only be moved if it is at the top of a stack.
- No disk may be placed on top of a smaller disk.
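In the abstract, the puzzle has an elegant recursive solution; a minimal sketch in Python (with illustrative rod names A, B, and C) solves an n-disk tower in 2^n - 1 moves.

```python
# Recursive Towers of Hanoi sketch: to move n disks from `source` to `target`,
# first move the n-1 smaller disks out of the way, then move the largest disk,
# then move the n-1 smaller disks back on top of it.

def hanoi(n, source="A", target="C", spare="B"):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)            # clear the n-1 disks above
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)            # restack them on the moved disk

hanoi(3)  # prints the 2**3 - 1 = 7 moves for three disks
```

Which only sharpens the question: if the abstract puzzle is this easy, why was the real world so hard?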
What were the main difficulties?
How do you represent the state of the world? How much detail do you need to provide? What are the essential relationships that you need to model?
This representation problem, deciding what has to be modelled explicitly and what can be assumed to stay unchanged when an action is performed, is known as the frame problem.
And, in the case of the Blocks World: once we leave the theoretical description, where only such objects as a table and blocks exist and only such operations as ‘grasp’, ‘un-grasp’, and ‘put’ are defined, questions pop up such as: How big is the table? Where do I put a block on that table with the robotic arm? What happens if a block is dropped and falls off the table?
In 1986, AI was still in its infancy. All development was insular. Specialised hardware was available to speed up the execution of programs written in AI languages such as LISP. But the processing power was just not up to the task.
Representing knowledge was a difficult process as well. Re-use of represented knowledge was a huge problem. The major limiting factor for quicker development and exchange of algorithms and represented knowledge was the lack of standards.
The past: 2001
Fast forward 15 years to 2001.
No Space Odyssey.
No alien artefact.
No HAL 9000.
We have just ‘survived’ the ‘year 2000 problem’ and cured the ‘Millennium bug’.
The year 2000 marked an important first milestone for ubiquitous computing and the Internet of Things.
In 2000, the first phone marketed as a smartphone, the Ericsson R380 Smartphone, became available. A smartphone is a device combining the features of a mobile phone with those of other mobile devices such as personal digital assistants. It has to be said, though, that the first smartphone, Simon, had already been developed by IBM in 1994. Simon had a touch screen (and a pen) and already ran such apps as Email, Fax, Address Book, Calculator, Calendar, Notepad, Sketchpad, and To Do.
In 2000, the first camera phones were sold. I got my first camera, the MCA-25, in 2002. It was a separate accessory that plugged into my Sony Ericsson T300. Resolution: 300,000 pixels, a far cry from today’s resolutions. The phone could store 147 pictures.
A lot of software solutions based on AI research had become successful. Due to the second AI winter, though, these solutions were called ‘smart’ instead; the term ‘artificial intelligence’ was to be avoided at all costs. I worked at a university spin-out company at the turn of the millennium while finishing my Ph.D. Whenever we went to Daimler, Audi, or Siemens, decision-makers did not want to hear a word about AI. AI equalled failure at that time.
“Many observers,” Ray Kurzweil wrote in 2005, “still think that the [last] AI winter was the end of the story and that nothing since has come of the AI field, yet today many thousands of AI applications are deeply embedded in the infrastructure of every industry.”
For example, route planning and GPS navigation started to become widely available. In 2002, TomTom released its first route planning product, TomTom Navigator, on a handheld device.
The present
Since 2007, the iPhone has shaken up the mobile phone market, the PDA market, and the media player market. iOS and Android devices helped extend the compact camera market and the games market. Apple’s App Store and Google Play bring new business opportunities. Daily.
Social software emerged with a vengeance.
Facebook has been around twelve years now. It has amassed 1.65 billion monthly active users [1]. Twitter, founded ten years ago, has 300 million monthly active users [2]. Google Plus, Instagram, Snapchat, etc., etc. All of them provide opportunities for data collection on an unprecedented scale.
Search engines, on the other hand, guide us to lots of information. Knowing our search history, Google helps us dig through the Internet.
Wolfram Alpha, a ‘computational knowledge engine’, uses its own knowledge base to answer specific questions, for example, about mathematics or people and history. It is a question answering system in contrast to Google’s Internet search.
Stephen Wolfram, by the way, is a British scientist.
Siri, Apple’s voice-controlled personal assistant, interprets spoken natural language for fact finding and searching. Among the search engines it uses are Google and Wolfram Alpha. Microsoft calls its own version Cortana. And Google has Voice Search.
Programming frameworks such as Microsoft’s Bot Framework enable the development of competent chat bots in a short time and on a larger scale. You can ask questions in natural language, via speech, Skype chat, or Facebook Messenger.
Mobile devices are more powerful than ever before: mobile phones, tablets, and wearable technology such as the Apple Watch, or activity trackers developed by Fitbit or Jawbone, are delivering services everywhere. You want to know about your sleep patterns and improve your rest? Get the appropriate device and app.
Those powerful devices are partnered with high-speed networks. Always on, always connected is already a reality for many of us, even though bandwidth is still an issue.
Artificial Intelligence today is called Artificial Intelligence again. The Internet of Things, the Web of sensors and connected devices, is already shaping up.
I have already mentioned chess-playing software and AlphaGo.
The future: 2031
Jumping forward fifteen years. We are in 2031.
It is safe to say that computational power will have increased immensely. Moore’s Law, which predicts that the number of transistors on a chip roughly doubles every two years, will see to it.
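As a back-of-the-envelope sketch (taking the two-year doubling at face value, which is an assumption rather than a guarantee), the arithmetic looks like this:

```python
# If transistor counts double every two years, chips in 2031 would be
# roughly 2^((2031 - 2016) / 2) times denser than in 2016.
years = 2031 - 2016
factor = 2 ** (years / 2)
print(f"about {factor:.0f}x")   # about 181x
```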
Wireless and mobile networks will have merged. We will always be on. Our data will all live in the cloud, on servers and not on our devices.
Our interactions on social networks will intertwine with the real world even more. Personalisation and decision support will be part of our business and personal life.
Medical services, for example, will take into account actual data from us. And we will be better-informed patients thanks to health apps. Sensors will deliver information around the clock. Through the Internet of Things, ubiquitous computing will have become reality.
The future: 2046
Jumping forward another 15 years.
The famous Greek aphorism or maxim ‘Gnothi seauton’, know thyself, is, in one interpretation, a warning to pay no attention to the opinion of the multitude. By 2046 the multitude may very well know more about you than you do. This offers an opportunity to learn about ourselves, though. What do the digital traces that we leave behind tell us about ourselves? Our behaviour? Our ethics? Our moral standards?
The personal assistant we all crave might be there as a companion app. We might even all have a team of AIs working with us on our daily problems. Our health app might argue with our career app about a healthy balance of work and rest when discussing the next set of meetings. The health app will be supported by lots of health data: exercise, caloric intake and burn, the variety of our diet. The apps will learn from interacting with us, judging how we take advice and how to approach us differently if we make unwise decisions. I imagine them as versions of family and close friends, just based on facts rather than merely good intentions and the latest story from magazines such as Health and Fitness or Yoga. Apps will interpret and act upon data acquired from the Internet of Things.
We will be able to discuss our decision making based on live facts and hard evidence. We will be able to ask for justifications and for the relevance to the current situation, make predictions, and simulate outcomes. We will be able to easily include relevant experts in the decision making.
Wrapping up
Let’s come back to the initial questions: Where will artificial intelligence lead us in the coming years? Will we all be served by robots?
Elon Musk, the billionaire founder of SpaceX and Tesla, has compared developments in AI to ‘summoning demons’. Just like Stephen Hawking, he sees in those developments a threat and the possible end of the human race, with Moore’s Law as one of the main drivers of the recent progress.
Luciano Floridi, professor in philosophy and ethics of information at the University of Oxford, does not think so. “No AI version of Godzilla is about to enslave us, so we should stop worrying about science fiction and start focusing on the actual challenges that AI poses,” Professor Floridi says. “Moore’s law is a measure of computational power, not intelligence.”
“My vacuum-cleaning robot,” Floridi continues, “will clean the floor quickly and cheaply and increasingly well, but it will never book a holiday for itself with my credit card.” And: “Anxieties about super-intelligent machines are, therefore, scientifically unjustified.”
Artificial intelligence, like any other technology, will not lead us. We need to take the lead and decide what to make of this technology. There are plenty of opportunities and useful applications as I have pointed out.
Developing cognitive abilities within machines will have dramatic consequences, though. Moving from manual labour to machine-powered production set people free to get a better education and to create other types of businesses. But where do we go when, not if, machine intelligence replaces accountants, administrators, and middle management?
In the science-fiction universe of Frank Herbert’s Dune, a war with highly developed machines nearly wiped out mankind. The survivors made a drastic decision: they vowed never to develop machine intelligence again, turning to genetic engineering to ‘evolve’ humans.
It is us [here in the room] who have to make up our minds about the kind of society we want to live in. Maybe one in which Berta, my nosy, gossiping vacuum-cleaning robot, takes a holiday now and then.
References
[1] http://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/
[2] http://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/