What is Artificial Intelligence?

Artificial Intelligence, or AI, is becoming so commonplace in the modern economy that few pause to consider its origins. They lie in the middle of the 20th century, some years before the term itself was coined. In the 1940s Norbert Wiener, a mathematician and philosopher, proposed that intelligence could be modelled by feedback loops: an action stimulates a response, which is then used to determine the next action. Imagine a yachtsman making small corrections to his steering as he comes into sight of port. His initial setting of the tiller may have been rough, but by adjusting it as he gets closer to the quayside, he ensures that his arrival is not.
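
To make the loop concrete, here is a minimal sketch in Python of the yachtsman's corrections, assuming a toy proportional rule (the headings, the gain of 0.5 and the function name are illustrative assumptions, not taken from the report):

    # A minimal sketch of Wiener's feedback loop: observe the error, act on it,
    # observe again. All numbers here are illustrative.

    def steer_to_harbour(heading: float, target: float, gain: float = 0.5, steps: int = 10) -> float:
        """Iteratively correct a heading (in degrees) towards the target bearing."""
        for step in range(1, steps + 1):
            error = target - heading      # response: how far off course are we?
            heading += gain * error       # action: adjust the tiller in proportion to the error
            print(f"step {step}: heading = {heading:.1f} degrees")
        return heading

    if __name__ == "__main__":
        # A rough initial setting of 30 degrees, refined step by step towards 90 degrees.
        steer_to_harbour(heading=30.0, target=90.0)

Each pass shrinks the remaining error by half, which is exactly the gradual approximation described above.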

This idea of a gradual approximation towards the right answer was to find its ideal companion in the development of “thinking machines”, or computers. The theoretical foundations of modern computing date back to the early 19th century, but it was not until the 1950s that working programmable machines became widely available. Their serial logic, exploiting speed and repetition rather than the intuitive leaps that humans use, seemed ideally suited to the iterative process set out by Wiener.

The groundwork for Artificial Intelligence was laid in 1950, when Alan Turing, a pioneering British codebreaker during the second world war, published a landmark paper in which he speculated about the possibility of creating machines that could think. The term itself was coined by John McCarthy, a computer scientist, who in 1956 held a conference at Dartmouth College, New Hampshire, on the question of whether human intelligence could be “so precisely described that a machine can be made to simulate it”. The event established AI as a separate discipline and won McCarthy recognition as its spiritual founder.

Another American, Herbert Simon, brought the concept of AI into the world of business and economics. A polymath, Simon became interested in how decisions are reached in organisations and reasoned that the best way to study human decision-making was to replicate it using machines. His General Problem Solver programme, developed in 1957 with his collaborators J. C. Shaw and Allen Newell, is considered one of the pioneering works of AI programming (Simon was awarded the Nobel Prize in Economics in 1978). Today, AI means different things to different people but, broadly, it describes software that mimics human cognition or perception.

Why is AI growing now? 

1. Data

The algorithms that power AI run on data, and we live in a world that is deluged with it. Since the onset of the digital era, the actions of people and machines all over the world, mediated by smartphones, sensors and other devices, have been routinely recorded and stored in electronic databases. According to Domo’s 2018 Data Never Sleeps bulletin, the Internet receives 3,138,420 GB of data every minute, as individuals send almost 13m text messages, upload 400 hours of video to YouTube, and add 7 new articles to Wikipedia. This huge collection of diverse data provides the raw material for training algorithms.

2. Processing power

To process all this data efficiently, AI programmes require considerable processing power. Moore’s Law broadly states that computer processing power doubles every two years. While the law is starting to come under strain, it has proved strikingly accurate since Gordon Moore made his observation in 1965: today’s mobile phones offer more processing capacity than was available to NASA during the 1969 moon landing. As such, Moore’s Law has largely dealt with AI’s processing-capacity problem.
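
The compounding involved is easy to underestimate. As a back-of-the-envelope illustration (an idealised assumption, not a figure from the report), the Python sketch below computes the growth factor implied by a clean doubling every two years between 1965 and 2019:

    # A minimal sketch of the compounding implied by Moore's Law, assuming an
    # idealised doubling of processing power every two years.

    def moores_law_factor(start_year: int, end_year: int, doubling_period_years: float = 2.0) -> float:
        """Return the multiplicative growth in capacity between two years."""
        return 2.0 ** ((end_year - start_year) / doubling_period_years)

    if __name__ == "__main__":
        factor = moores_law_factor(1965, 2019)
        print(f"Implied growth since 1965: roughly {factor:,.0f}x")  # 2**27, about 134 million-fold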

3. Algorithms 

The rules, or algorithms, that interpret these huge reams of data have become far more sophisticated, with names like the RSA algorithm (used in cryptography), the Secure Hash Algorithm (used in verification), link analysis (used by search engines, among others), and data-compression algorithms (used to make JPEGs and MPEGs, for example). Muhammad ibn Mūsā Al-Khwarizmi, the ninth-century Persian scholar from whose name the word “algorithm” derives, would have been astonished to see how his ideas have flowered.
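
To give a flavour of the link-analysis family, the sketch below ranks a hypothetical three-page web in the spirit of PageRank; the toy links, the damping factor of 0.85 and the function name are illustrative assumptions rather than a description of any production search engine:

    # A minimal sketch of link analysis: a page is important if important pages
    # link to it. The toy web and parameters are illustrative assumptions only.

    def link_rank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
        """Iteratively estimate each page's importance from the links pointing at it."""
        pages = list(links)
        rank = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
            for page, outgoing in links.items():
                share = rank[page] / len(outgoing)   # each page shares its rank among the pages it links to
                for target in outgoing:
                    new_rank[target] += damping * share
            rank = new_rank
        return rank

    if __name__ == "__main__":
        toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
        print(link_rank(toy_web))  # "c" scores highest: both other pages link to it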

What can today’s AI really do? 

To understand the current status and future potential of AI, it is necessary to understand the nature of the human intelligence that AI is trying to replicate. Solving a complex mathematical equation is difficult for most humans: to do so, we must learn a set of rules and then apply them correctly. However, programming a computer to follow rules and solve equations is easy. This is one reason why computers long ago eclipsed humans at “programmatic” games such as chess, which are based on applying rules to different scenarios. On the other hand, many of the tasks that humans find easy, such as identifying whether a picture is of a cat or a dog, or understanding language, are extremely difficult for computers because there are no clear rules to follow.
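
A minimal, hypothetical Python sketch makes the contrast visible: applying an explicit rule (here, solving a linear equation) takes a couple of lines, whereas there is no comparable rule one could write down for deciding whether a picture shows a cat. The function names and example values are invented for illustration:

    # A minimal sketch of the contrast between rule-following tasks and tasks
    # with no clear rules to follow.

    def solve_linear(a: float, b: float) -> float:
        """Solve a*x + b = 0 by applying the explicit rule x = -b / a."""
        if a == 0:
            raise ValueError("no unique solution when a is zero")
        return -b / a

    def looks_like_a_cat(pixels: list[list[int]]) -> bool:
        """No short list of pixel rules defines 'cat'; modern AI instead learns
        a statistical model from many labelled examples."""
        raise NotImplementedError("no explicit rule exists for this task")

    if __name__ == "__main__":
        print(solve_linear(2.0, -6.0))  # 3.0: the rule works every time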

As such, most of today’s emerging AI applications are found in areas where there are rules, or past examples, to follow. One application is automation. AI programmes can automate knowledge-based tasks that were previously undertaken by humans. The easiest tasks to automate are those that are repetitive, such as collating research on a topic or answering questions that have been asked many times before. A second application is personalisation. Just as Netflix personalises its homepage, AI programmes can suggest personalised healthcare treatments, education curriculums, or judicial sentences. AI can also make predictions based on what it has learned, at a scale that humans cannot match. These predictions may include the disease a patient is suffering from, where crime might occur, or the piece of infrastructure that might need to be repaired. 
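
Prediction from past examples can itself be sketched in a few lines. The example below uses a one-nearest-neighbour rule on invented maintenance records (the data and function names are hypothetical, not drawn from the report):

    # A minimal sketch of "prediction from what has been learned": label a new
    # case with the outcome of the most similar past case (one-nearest-neighbour).
    # The records below are invented for illustration.

    Record = tuple[float, float]  # (age in years, hours in service)

    def predict_needs_repair(history: list[tuple[Record, bool]], candidate: Record) -> bool:
        """Return the repair label of the closest past example."""
        def squared_distance(a: Record, b: Record) -> float:
            return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        _, nearest_label = min(history, key=lambda item: squared_distance(item[0], candidate))
        return nearest_label

    if __name__ == "__main__":
        past_cases = [((2.0, 1_000.0), False), ((15.0, 40_000.0), True), ((9.0, 22_000.0), True)]
        print(predict_needs_repair(past_cases, (12.0, 30_000.0)))  # True: resembles past failures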

The imitation game: How far could AI go? 

All of the examples above are forms of “narrow” AI. Unlike broader human intelligence, these algorithms typically focus on one or two specific tasks. Some types of narrow AI already match or exceed human performance at their chosen task, as in language translation, spam filtering and Netflix recommendations. However, these algorithms are unable to do anything else. Unlike human intelligence, which derives from the power of the human brain, these narrow forms of AI are essentially large-scale data-analysis efforts, conducted at a scale that a human brain could never achieve.

However, the ambitions of today’s AI practitioners stretch far beyond these narrow use cases. Practitioners want to develop AI that could compete in Alan Turing’s famous “imitation game”, often referred to as the Turing test. In the game, an individual converses with two entities in separate rooms: one is a human and one is an AI-powered machine. If the individual is unable to identify which is which, the machine wins. Passing the test is taken as a marker of “broad” or “general” AI, meaning AI that can do all or most of the things that the human brain can do, including strategising, social manipulation, hacking, technology development and intelligence amplification. If such general AI is achieved, some believe that it would then continue to self-improve, ushering in an era of “superintelligence”, ie a level of intelligence that is beyond the comprehension of humans.

Source: 

The Economist Intelligence Unit, “Scaling Up: The Potential Economic Impact of Artificial Intelligence in the UAE and Saudi Arabia” (2019).