Our AIs are weak

20/03/2023

Preface

The following entry was supposed to be part of a bigger one about why AIs aren't a threat. I wasn't happy with how the whole thing turned out and decided to scrap it (I might come back to it later, I don't know).

Although quite obvious to anyone who knows a bit about Artificial Intelligence, I like this part about their strength and wanted to share it with you as-is. It's a pretty superficial rough draft, but yeah, enjoy.

Our AIs are weak

First of all, let's talk about why AIs are a tool in the first place. This question mostly comes down to the "Does Strong Artificial Intelligence exist?" debate.

For the uninitiated, a "Strong" AI is one capable of producing mental states, meaning it is able to feel emotions and to think. A strong AI could be considered a whole being of its own.

While it can be perceived as such in some cases (especially when it shows human-like behavior, as shown in this article), all the AIs currently available or in development are still miles away from such a feat.

Instead, they are what we call "Weak" AIs, i.e. programs focused on replicating only a part of intelligence and mimicking behavior. Two great examples of this are Case-Based Reasoning and genetic algorithms, two really interesting ways to look at AIs that I'll probably cover in later entries.
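To give a taste of what a "weak" AI looks like in practice, here is a minimal sketch of a genetic algorithm (not from the article itself; the function name, parameters, and the toy "OneMax" problem of evolving a bitstring toward all ones are my own illustrative choices). It mimics one narrow slice of intelligence, trial-and-error improvement, with nothing resembling mental states:

```python
import random

def evolve_onemax(length=20, pop_size=30, generations=100,
                  mutation_rate=0.05, seed=0):
    """Evolve random bitstrings toward all-ones via selection,
    crossover, and mutation (the classic OneMax toy problem)."""
    rng = random.Random(seed)

    # Fitness: simply the number of 1s in the bitstring.
    def fitness(ind):
        return sum(ind)

    # Start from a fully random population.
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]

    for _ in range(generations):
        # Tournament selection: keep the fitter of two random picks.
        def select():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            # Single-point crossover between the two parents.
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            # Per-bit mutation: occasionally flip a bit.
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]
            next_pop.append(child)
        pop = next_pop

    return max(pop, key=fitness)

best = evolve_onemax()
print(sum(best), "ones out of", len(best))
```

The whole "intelligence" here is a blind loop of copy, recombine, and mutate guided by a fitness score; swap in a different fitness function and the same loop optimizes something else, which is exactly what makes it a tool rather than a mind.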

"What about the future?" one might say. In my opinion, such an AI might exist one day, but I doubt either we or our children will live to see it: progress might seem to accelerate by the day, but if we want to even try to replicate the human body, we'll first need to understand it perfectly. That said, if we ever manage to create an artificial sentient being, I believe we will be a bigger threat to it than the other way around, but that's another topic I might tackle another day.