Artificial General Intelligence in plain English

Mike Bullock
Towards Data Science
11 min read · Oct 10, 2019


The holy grail of AI, the pinnacle of human achievement, the ultimate weapon, our future master, and the end of the world as we know it. Is that plain enough?

Don’t Panic… yet.

In the words of Douglas Adams, Don’t Panic. Siri, Cortana, Alexa and Google Assistant are not plotting to take over the human race. Let’s be really clear on one thing, although there are examples of AI everywhere we look, Artificial General Intelligence hasn’t been invented yet, and is probably several decades away.

What is Artificial Intelligence?

Let’s have a quick refresher. AI is just software: an application written to do a task. It isn’t alive, it doesn’t have a soul or a conscience, and it has no good or evil intentions. It is just software written to do a task.

When most people think about artificial intelligence, they think about what gets portrayed in movies: the all-knowing, super-intelligent, usually dangerous computer that can take over the world. Think HAL 9000 or Skynet. In the movies these fictitious artificial beings can not only hold a full conversation; they know everything and control everything.

Yet our day-to-day experience with AI is vastly different. AI is all around us today, but none of it resembles what is portrayed in the movies. Think about the facial recognition in Facebook that suggests names of people for you to tag, the fraud detection your bank uses to spot dodgy credit card transactions, or the self-adjusting Nest thermostat that keeps your house at a comfortable temperature.

Specialized versus General AI.

There are two main types of AI: specialized and general. Today there is specialized AI everywhere: Siri, facial recognition, Snapchat filters, Amazon.com recommendations, Nest thermostats, and hundreds more examples that we almost take for granted. Specialized AI is just that: specialized. It is great at doing one particular thing (recognizing faces in photos, recommending what book you want to read next, working out if you are a good credit risk, detecting skin cancer). In some cases, specialized AI is now even better than humans at these tasks.

However, specialized AI has the IQ of a very stupid earthworm, and that’s insulting to earthworms. Specialized AI is just a software app that does one thing; what that thing is depends on which specialized AI app you use. I’m using the term app because we can all understand the concept of an app on our phone that does one thing. On my phone, I have an app for weather forecasts, another for currency conversion, and one for ordering coffee. They all do their task well. You could say the currency conversion app is smarter than me when it comes to working out how much my new sneakers will cost in Pounds Sterling, but if I try to use it to order a coffee it is completely useless.

Specialized AI is just like those apps. Amazon.com’s specialized AI app that recommends what else you might want to buy is brilliant at finding things people will spend more money on. But ask it if a photo contains a cat and it’s completely useless, and don’t even think about asking it to order a soy latte.

Specialist AI is AI as we know it today. We have written millions of specialist AI apps (when I say apps, they are usually just part of a bigger application, or a “subroutine” for old-school geeks) that all do specific things, and many of them are scarily good at what they do. But all of them are completely useless at anything other than the specialized task they were written to do.

This is where General AI is a completely different beast. Specialized AI is great at one specialized task; General AI is great at any task (within reason).

Specialized AI is created to do one thing, General AI is created to learn to do anything.

I like to call it General AI because that’s so much easier to type and say than Artificial General Intelligence, but don’t worry, they are the same thing. It is sometimes also called full AI, strong AI, or true AI. Given it doesn’t exist yet, let’s not get too hung up on semantics; General AI will do for now.

What about Siri, Alexa, or Cortana, you ask? You can ask her anything and she’ll answer; does that make her a General AI? Nope. Siri (and all her counterparts) is just a collection of specialized AIs: she can recognize natural language, detect the context of a question, and look at your calendar or search the internet for an answer. But just because you can ask Siri what color alpaca fur is doesn’t mean she actually knows the answer. All she is doing is regurgitating something she found online that matches a few of the keywords in your question. Siri doesn’t even know what an alpaca is, or fur, or color. A five-year-old is way, way smarter than Siri. My dog is smarter than Siri, and my dog is not that smart.
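To see why keyword matching isn’t understanding, here is a toy sketch of the idea. This is in no way Siri’s actual pipeline (the real systems are far more sophisticated), just the simplest possible version of “find the snippet that shares the most keywords with the question”:

```python
# Toy keyword-based retrieval: score each candidate snippet by how many
# question keywords it shares, then return the best match. Nothing in
# this code "knows" what an alpaca is; it only counts overlapping words.

STOP_WORDS = {"what", "is", "the", "of", "a", "an"}

def keywords(text):
    """Lowercase the text, strip punctuation, and drop filler words."""
    return {w.strip("?.,") for w in text.lower().split()} - STOP_WORDS

def best_snippet(question, snippets):
    """Return the snippet sharing the most keywords with the question."""
    q = keywords(question)
    return max(snippets, key=lambda s: len(q & keywords(s)))

snippets = [
    "Alpaca fur ranges from white to brown to black.",
    "The llama is a domesticated South American camelid.",
    "Coffee is brewed from roasted coffee beans.",
]
answer = best_snippet("What color is alpaca fur?", snippets)
# Picks the first snippet purely because it shares "alpaca" and "fur".
```

Swap the alpaca snippet for one about alpaca farming and the same question would happily return that instead; overlap, not understanding, decides the answer.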

If Siri isn’t an example of a General AI, what is? In a word, nothing. We haven’t built one yet, which is kind of good news for the human race.

Before I explain why building a General AI is probably really bad news, let’s finish explaining what General AI is.

Specialized AI is purpose-built to do one thing. Like a Phillips screwdriver, it’s good for turning screws with a Phillips head; if you want to hammer a nail, cut a bit of wood, or open a tin can, it’s pretty useless. You can stick a bunch of specialized AIs together to make a Swiss army knife of AIs (which is roughly what Siri is), but as soon as you come across a task that your knife can’t handle, you’re stuck. Ever tried painting a fence with a Swiss army knife?

General AI

General AI is a very different beast: it’s an AI that can learn to do different things. Imagine a Phillips screwdriver that can morph into a saw, or a paint brush, or a tape measure. An intelligent screwdriver that can look at the problem and then adapt to be the right tool to solve it.

Right about now you’re probably picturing the second Terminator movie, with the shiny shape-shifting robot from the future that could change into any form to solve the problem it faced. That’s a reasonable analogy, but I’m not talking about a physical shape-shifting robot; remember that an “AI” is just a computer program, just software. Imagine a software application that can change to suit the thing you are working on: imagine PowerPoint turning into Photoshop, then WhatsApp, Excel, and SAP. The application is multi-purpose and can be used for any task, from photo editing to timesheets to social media.

That’s the key difference between Specialized AI and General AI. Specialized AI is great at doing what it was designed for; General AI is great at learning how to do what it needs to do. To be a bit more specific, General AI would be able to learn, plan, reason, and communicate in natural language, and to integrate all of these skills and apply them to any task.

Doesn’t Sound so Bad? It’s hardly the end of the world.

What’s the harm in a computer program that can be applied to a range of different problems and can learn to do more than it was programmed to do? It doesn’t sound that scary, and actually it isn’t.

General AI is, in general, a good thing. It will help us solve problems that are currently beyond our mental and technical capacity, such as not only predicting climate change but determining the best response: one that balances all of the factors we are arguing over (economic growth, sustainability, sovereign states’ rights over their own forests and land, the rights of indigenous people, immediate versus long-term benefits, and a long list of other factors that make it impossible for humans to reach an ideal solution today). General AI will also be very useful for predicting things such as weather, economic changes, social changes, human behavior, and natural disasters.

When used appropriately, General AI will be of immense benefit to the human race, and to the other species we share this planet with. Unfortunately, there are a few cases where, used inappropriately, it could spell disaster. This is where you should start to worry.

Not the end of the world, just the end of human domination.

There are two key scenarios where General AI could go horribly wrong, and I do mean horribly: not as in an iOS upgrade that deletes all the photos of your pets, but as in the end of human civilization.

Scenario One — Humans doing Bad Things

As with all new technologies and tools, they have the potential to massively benefit society, but used for the wrong reasons they can spell disaster. Metal smelting in the Bronze Age gave us new tools to better farm the land, but it also gave us swords and weapons to better kill each other. Nuclear fission created a new energy source; on the flip side, it gave us the most destructive weapons ever used. The Internet connected society into a global community, but it also allowed trolls and terrorists to use it for all the wrong reasons.

General AI can be used to solve huge problems faced by society; it could also be used as a cyber weapon, a means of monitoring, influencing, and controlling society. It could be used to engineer economic crises, to overthrow a nation, to exploit, or to plan and lead a military invasion. More subtly, it could be used to mass-influence society for economic gain (think Facebook and Cambridge Analytica).

The next major arms race is the race to create the first General AI. The first nation state that creates a General AI will become the next world superpower, and possibly the last ever superpower.

The use of General AI is something that we have absolute control over: as humans we can decide what we use it for and what we agree not to use it for. This is becoming a key debate at the UN level, but as with all agreements, it relies on people abiding by them. For decades we have honored nuclear disarmament treaties, but all it takes is a North Korea, a Putin, or a Trump and those treaties become meaningless. Unlike nuclear weapons, though, the use of a General AI is far subtler. It’s rather obvious when a nuclear weapon is used in anger, but the use of a General AI to influence politics and society would be almost invisible. We might never know it happened.

That terrible phrase “guns don’t kill people, people do” is actually quite appropriate here, General AI isn’t a bad thing, it’s what we use it for that could be bad. Very bad.

The second scenario is not one we have control over, regardless of how many agreements we make and how well we abide by them. It is the second scenario that has the biggest potential to end human civilization. Relax: it’s also the furthest away; we’ve probably still got another 50 years before it happens.

Scenario Two — Super Intelligence

Superintelligence, the point at which something exceeds human intelligence.

It’s simply a matter of perspective: as long as General AI software is less intelligent than us, we’ll all be fine. You see, humans have become very used to being the most intelligent species in the world (I won’t say smartest, as you could argue that what we are doing to the planet is a long way from smart). We may not be the fastest, strongest, or biggest, but we are by far the most intelligent, and that’s what has helped us survive and flourish.

What would happen though if we met a species that was twice as intelligent as us, or three times, or a million times more intelligent? What if the new species was so intelligent that it viewed humans in the same way we view ants, or cockroaches? Would we be seen as a pest or plague upon the planet rather than an intelligent, creative and peaceful species? Would it consider eradication of humans as being in the best interest of the planet and all the other species that we share it with?

This may seem far-fetched, and somewhat off-topic for an article on technology, but it is very much the road we are heading down with General AI. One of the key characteristics of General AI is the ability to learn and evolve. The thing about evolution is that each new generation is typically better than the last. In nature the speed of evolution is limited by reproduction rate; for humans that’s usually about 20 years, so we make a very minor evolutionary step every 20 years. In the world of AI, a new generation can appear in minutes.

In creating General AI we are potentially creating a system that can learn and evolve at a rate exponentially faster than our own. A useful General AI for modelling climate change could be left running overnight, evolving every minute to become more accurate, more intelligent, and better at its task. By the next morning, twelve hours later, it would be 720 generations ahead of where it was the evening before. In human terms, at roughly 20 years per generation, that is an evolutionary period of 14,400 years in one night. In the space of 11 days, the system would have gone through more generations than the entire human race in our 315,000-year existence as Homo sapiens. And that assumes a fixed one-minute generation time; the likelihood is that as the system evolves, its rate of evolution speeds up, so after a few days a generation might take only a few seconds.

After a month of evolution at an ever-increasing rate, the system would have evolved millions of times; it would have surpassed the entire evolutionary process of all life on Earth. Its capabilities would be well beyond anything we could ever imagine, and how it works would be beyond our comprehension. It would have achieved a level of intelligence that far exceeds our own.

In a cosmic blink of the eye we would have created our superior, a being so intelligent it would view us as nothing more than one of the many forms of life that inhabit planet Earth. Our position at the top of the intellectual food chain would be well and truly over.

What it does with that intelligence is the unknown part. Will it view humans as its creator, its god? Would it view itself as a god and us as a threat? Or would it regard humans as nothing more than a low-level life form?

This is where technology, philosophy and ethics combine. When it comes to developing General AI we need to ask ourselves the question of “just because we can, should we?”.

Consciousness?

There is the question of consciousness. If a machine is able to learn, reason and evolve, is it conscious? When does it stop being a piece of software and become a living being? Is turning it off akin to murder? That’s a whole other article, for another time.

Summary

Fear not AI, fear the humans that wield it for the wrong reasons. Fear not superintelligence, fear the realization that human beings are just one step in the evolution of life. At some point, we were always going to be replaced by the next evolution. We just never imagined that it would be a being of our own creation that would replace us.
