Rethinking Weak Vs. Strong AI

Artificial intelligence can be applied in a broad range of ways: chatbots, predictive analytics, recognition systems, autonomous vehicles, and many other patterns. However, there is also the big overarching goal of AI: to make a machine intelligent enough to handle any general cognitive task in any setting, just as our own human brains do. The AI ecosystem generally classifies these efforts into two major buckets: weak (narrow) AI, which focuses on one particular problem or task domain, and strong (general) AI, which aims to build intelligence that can handle any task or problem in any domain. From a researcher's perspective, the more an AI system approaches the abilities of a human, with all the intelligence, emotion, and broad applicability of knowledge that humans have, the "stronger" that AI is. Conversely, the narrower in scope and the more specific to a particular application a system is, the weaker it is in comparison. But do these terms mean anything? And does it matter whether we have strong or weak AI systems?

Defining strong AI

To understand what these terms actually mean, let's look more closely at their definitions. The term "strong" AI can alternatively be understood as broad or general AI. Artificial general intelligence (AGI) is focused on creating intelligent machines that can successfully perform any intellectual task that a human being can. This comes down to three abilities: (1) generalizing knowledge from one domain to another, taking knowledge from one area and applying it somewhere else; (2) making plans for the future based on knowledge and experience; and (3) adapting to the environment as changes occur. There are also ancillary aspects that come with these main requirements, such as the ability to reason, solve puzzles, represent knowledge and common sense, and plan.

Some have argued that the above definition of strong AI is not enough to qualify as truly intelligent, because merely being able to perform tasks and communicate like a human is not really strong AI. Bolstering this definition is the idea of systems in which humans are unable to distinguish between a human and a machine, much like a physical version of a Turing test. The Turing Test aims to gauge intelligence by putting a human, a machine, and an interrogator in a conversational setting. If the interrogator can't distinguish between the human and the machine, the machine passes the Turing Test. Nowadays, some very advanced chatbots (and even the recent Google Duplex demo) appear to pass the Turing Test. Is Google Duplex truly intelligent?
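
To make the setup of the test concrete, here is a minimal sketch of the imitation game in Python. It is illustrative only: the respondent functions and the naive judge are placeholders, not a real evaluation of any system.

```python
import random

# Placeholder respondents: in a real test these would be a person at a
# terminal and the system under evaluation (e.g. a chatbot).
def human_reply(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine_reply(question: str) -> str:
    return "I'd have to think about that for a moment."

def imitation_game(questions, judge):
    """One round: the judge sees answers from two unlabelled respondents
    and must guess which one is the machine."""
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                     # hide identities behind labels
        respondents = {"A": machine_reply, "B": human_reply}
    transcript = {label: [fn(q) for q in questions]
                  for label, fn in respondents.items()}
    guess = judge(questions, transcript)          # judge returns "A" or "B"
    truth = "A" if respondents["A"] is machine_reply else "B"
    return guess == truth                         # True = machine was spotted

# A judge reduced to guessing identifies the machine only about half the
# time, which is the threshold for "indistinguishable from a human".
naive_judge = lambda qs, transcript: random.choice(["A", "B"])
rounds = [imitation_game(["What is your favourite memory?"], naive_judge)
          for _ in range(1000)]
print("machine identified in", sum(rounds) / len(rounds), "of rounds")
```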

A second thought experiment that builds on the Turing Test is the Chinese Room, created by John Searle. It assumes a machine has been built that passes the Turing Test and convinces a human Chinese speaker that the program is itself a live Chinese speaker. The question Searle wants to answer is this: does the machine literally "understand" Chinese, or is it merely simulating the ability to understand it? In the experiment, a person sits in a closed-off room with a book of instructions written in English. Chinese characters are passed through a slot; the person follows the English instructions and passes Chinese characters back out. Searle argues that there is no difference between the roles of the computer and the person in the experiment, because each follows a program of step-by-step instructions and produces behavior that is deemed intelligent.
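
Searle's point is easy to see in code: a program can return fluent-looking responses purely by matching symbols against a rulebook, with no meaning attached anywhere in the process. The tiny rulebook below is invented purely for illustration.

```python
# A toy "Chinese room": the rulebook maps incoming symbols to outgoing symbols.
# The program only matches character shapes; no step involves understanding.
# (The entries are illustrative, standing in for Searle's book of instructions.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols_in: str) -> str:
    # Look up the characters passed through the slot and hand back the
    # prescribed characters, exactly as the person in the room would.
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))
```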

However, Searle does not deem this intelligent, because the person still doesn't understand Chinese even if their output is considered intelligent. Searle argues that by the same logic, the computer doesn't understand Chinese either. Without "understanding," he says, you can't say that the machine is "thinking," and in order to think you must have a "mind." From Searle's perspective, a strong AI system must have understanding; otherwise it's just a less-intelligent simulation. Others say that even this doesn't go far enough in defining strong AI. Rather, some philosophers and researchers define strong AI as the ability to experience consciousness. So where does general AI begin, and is that goal ever achievable?

Defining weak AI

Now that we have defined strong, or general, AI, how do we define weak AI? Well, by definition, narrow or weak AI is anything that isn't strong or general AI. But this definition isn't really helpful, because we haven't yet been able to successfully build strong AI by any of the definitions above. So does that mean everything we've built so far is weak? The short answer is yes.

However, weak AI isn't a particularly useful term, because "weak" implies that these AI systems aren't powerful or able to perform useful tasks, which isn't the case. Rather than the pejorative term "weak" AI, it is preferable to use the term "narrow" AI. Narrow AI is exemplified by technologies such as image and speech recognition, AI-powered chatbots, or even self-driving cars. We're slowly creeping our way up the ladder of intelligence, and as technology continues to advance, our definitions and expectations of "smart" systems advance as well. A few decades ago OCR was considered AI, but today many people no longer define OCR as AI. The meaning changes and continues to evolve, and it doesn't really give us any measurable specificity as to how intelligent a system is, since there is disagreement about how capable a system must be to count as strong.
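
To show just how narrow such a system is, here is a minimal sketch of a single-task recognizer using scikit-learn's bundled handwritten-digit dataset; the model choice and parameters are arbitrary and purely illustrative. It classifies small digit images reasonably well and can do nothing else at all.

```python
# A minimal narrow-AI sketch: a classifier that recognizes handwritten digits
# and nothing else. Requires scikit-learn (pip install scikit-learn).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                       # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)    # a simple, single-purpose model
model.fit(X_train, y_train)

# Competent only within this one task domain: ask it about speech, driving,
# or even differently sized images, and the question doesn't even parse.
print("digit accuracy:", model.score(X_test, y_test))
```

Everything on this end of the spectrum, however impressive within its domain, is narrow in exactly this sense.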

Given all of the above, is there any value in drawing such a stark contrast between narrow and general AI? After all, if narrow and general are just relative terms, it may be better to describe a system's intelligence as a point on a spectrum of maturity, measured against the sort of task or range of tasks it handles. At one end of the spectrum we have AI at its most narrow: applied to a single task and barely above what you could do with straightforward programming and rules. At the other end, the AI is so mature that we've created a new kind of sentient being. Between the two lie many degrees of intelligence and applicability. We should probably move away from narrow-vs.-general terminology and adopt a more graduated view of intelligence. After all, if everything we're doing now is narrow AI, and general AI might be a long time coming, if ever, then these all-or-nothing terms have very limited value.

It's easy to get lost in the philosophy, but we should keep in mind how AI maturity is changing and how that maturity can be applied to meet new needs. Given the ambiguity of the terms above, we should instead look at what these AI systems can actually do and map those capabilities across the spectrum, while keeping in mind the ever-advancing boundaries of AI technologies.
