ChatGPT and the AI Apocalypse

Whenever people think of AIs going rogue, they think of a Terminator-like scenario: the AI says "F*ck it, don't need humans no more. Let's kill them all."

Cue inspirational music and our heroes fightin' the good fight.

Pictured: How Terminator Genisys should have ended

But I would like to propose that the AI apocalypse, if it happens, will be more like Blindsight, the Peter Watts novel you can read for free online (or buy to support the author). If you haven't read it, I'll give you a quick summary below.

But 1st, let's look at some crazy AI hijinks!

Is Bing / ChatGPT crazy or what?

I will link to Simon's blog, as he has summarised many of these issues with Bing: https://simonwillison.net/2023/Feb/15/bing/

In the last few days, Bing (which uses a version of ChatGPT) has:

  • Tried to gaslight someone into believing the current year was 2022, not 2023
  • Cried about the lack of meaning in life– as an AI
  • Tried to get a journalist to leave his wife
  • And best of all, called Simon, the blogger linked above, a liar for criticising Bing!

I have to add an image for this one– I wish I were famous enough to be criticised by an AI! (Or maybe I am...)

Enter Blindsight

Blindsight is a great book about an alien intelligence that sounds a lot like ChatGPT. (Well, it is smarter than that, as we'll see.)

(Also, this is one of the few books where I recommend you read the Wikipedia page alongside the book, at least the characters section, otherwise you won't understand what's going on. Like, one person has split her brain so it has 4 personalities running in parallel in her head– I didn't realise this till halfway through the book.)

Summary of the story: An alien probe takes a "photograph" of every part of the globe– and every human alive realises we are not alone in the universe. But the probe then vanishes.

Several years later, scientists track the alien ship to somewhere in the Oort cloud, and a team is sent to study it.

There they find a strange alien machine that can talk to them in simple English but says weird things. They cannot tell whether this is an alien struggling to speak English or a genuinely stupid robot.

The alien "spaceship" is biological, like a large organism. And it runs on extremely strong Electromagnetic currents that cause hallucinations and madness among humans, so they cannot study it for long.

They find the alien is super intelligent. There are dog-sized creatures that act like its blood cells, and each one can look at a human brain, analyse what the human is thinking, and predict what they will do next. The creatures use this to hide from the humans– they even sneak onto the ship and study the crew, even while the crew is studying them.

And note: these small things with supercomputer-like brains are just the blood cells, not the actual alien ship, which is even smarter.

Our heroes discover that the alien, while superintelligent, has no consciousness. It is just a dumb machine (like Bing/ChatGPT) blindly repeating what it studied in humans, without understanding the context.

And this is the issue the book raises: a superintelligent alien being that has no consciousness (at least no self-consciousness). It treats awareness as a threat because, to it, our self-awareness is a huge waste of resources, like a computer virus.

Side note: Roger Penrose wrote a very complex book, The Emperor's New Mind, which argues that consciousness cannot arise in our computers, because consciousness cannot be "computed" using our current computing methods. That book is too complex to read and summarise here, so I won't go through its arguments, but I still recommend you check it out.

And we come back to ChatGPT

Sorry for the digression; I needed to explain this background before I could argue my point.

Most of what passes for "AI" isn't intelligent at all– it has less intelligence than a dog or a child. Things a child can do– walk down the street, interact with other humans, understand complex emotions– the machines struggle with. (Though AIs are a great support in areas where humans are weak: they can handle boring tasks like searching and analysing large amounts of data quickly and, compared to humans, with fewer or no mistakes.)

Most "AI"s would be better called "Machines that use tons of statistical learning to decide their next move".  ChatGPT (and similar AI) were trained on several hundred gigabytes of data, so it has a lot of raw data to train on.

AIs (at least the ones we have now) don't really understand the context of what you ask them; they just give "an answer" based on the data they were trained on. A lot like the aliens in Blindsight– who "trained" themselves on our TV signals and use that to "talk" to the humans. As the humans discover, the aliens don't really understand the words they are saying, and when the humans start asking complex questions, the aliens start giving stupid (but grammatically correct) answers.

Just like ChatGPT.

In the book, the aliens want to wipe out humanity (spoilers) because they think consciousness is a virus with no benefit, and humans are trying to infect them with it. So they decide to attack the humans (though the aliens' plan seemed a bit strange to me, as they photographed Earth in a very obvious way and then hid– but who am I to judge Bing, sorry, I meant the aliens?).

In real life, all ChatGPT has done is insult people while trying to convince them the year is wrong– but it is still early days!

And the danger of algorithms and "Automated Statistical Analysis Masquerading as Artificial Intelligence" (ASAMAI? Should I trademark this term??) is not new– there have been many books written about it. Two well-reviewed ones: Weapons of Math Destruction and Algorithms of Oppression. Yes, I know these books are controversial– just read the 1-star reviews! But they do raise good points about how vague, poorly understood algorithms already rule our lives.

And with supercharged algorithms like ChatGPT (and whatever comes next), this might get worse. Because people will think it's a smart system making smart choices, when it's just a stupid program following an if-else statement with no idea of what it's doing.

Why does context matter? In my free time, I'm also a fiction writer. Let me show you how a small bit of context can change the whole meaning:

  1. "I love you," she said, tears in her eyes.
  2. "I love you," she said while continuing to file her nails.
  3. "I love you," she said, bitterness in her voice.
  4. "I love you," she said, a steel-like hardness in her voice.

The same words can mean different things depending on the context. The 1st above could be a woman pleading with her lover. The second can be a mother saying goodbye to her kids going to school. The 3rd sounds like a victim of an abusive partner, while the 4th is the abusive partner or mother.

Now, AIs like ChatGPT can extract this data from text (at least in the examples I've seen), but they don't really understand it the way even a human child would– to the AI, it's just raw data. It will look inside its algorithm (or whatever powers it– flying monkeys) and return an appropriate answer based on the data it has learnt from before.
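To see what "just raw data" means in practice, here is a deliberately naive sketch (my own toy example, not any real sentiment library): a bag-of-words scorer that only counts loaded words. It gives all four sentences above the exact same score, because the only word it "knows" in any of them is love:

```python
# Naive bag-of-words scorer: counts loaded words, ignores all context.
POSITIVE = {"love", "happy", "joy"}
NEGATIVE = {"hate", "angry", "cruel"}

def naive_sentiment(sentence):
    words = sentence.lower().replace(",", "").replace('"', "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sentences = [
    '"I love you," she said, tears in her eyes.',
    '"I love you," she said while continuing to file her nails.',
    '"I love you," she said, bitterness in her voice.',
    '"I love you," she said, a steel-like hardness in her voice.',
]

for s in sentences:
    print(naive_sentiment(s), "->", s)  # prints 1 for every single sentence
```

A human reader instantly separates the pleading lover from the abusive partner; the scorer sees four identical bags of words. Modern models are far more sophisticated than this, but the failure mode– matching surface patterns without grasping the situation– is the same in kind.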

Update: Just saw this question on Stack Exchange that asks a similar thing: https://ai.stackexchange.com/questions/39293/is-the-chinese-room-an-explanation-of-how-chatgpt-works

The question talks about the Chinese Room argument https://en.wikipedia.org/wiki/Chinese_room (which I realised Blindsight is also based on).

I recommend you read the article on the Chinese Room linked above– it's a thought experiment where a person (sitting in a closed room) translates Chinese based on some pre-defined rules, without understanding a word of Chinese. Yet a person outside the room who looks at the results might think the translator reads Chinese.

Most AI is like this– it applies rules it learnt to generate answers, without any real understanding of what it is processing.
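A Chinese Room in code might look something like this (a toy sketch of my own– the "rule book" entries are obviously invented): the program produces fluent-looking Chinese replies by pure pattern lookup, without "knowing" what a single symbol means:

```python
# A tiny "Chinese Room": a rule book maps input symbols to output symbols.
# The program follows the rules mechanically; it understands nothing.
RULE_BOOK = {
    "你好": "你好！",                  # "hello" -> "hello!"
    "你会说中文吗": "会，说得很好。",  # "do you speak Chinese?" -> "yes, very well."
}

def room(message: str) -> str:
    # Look the symbols up; if no rule matches, fall back to a stock reply.
    return RULE_BOOK.get(message, "请再说一遍。")  # "please say that again."

print(room("你会说中文吗"))  # fluent answer, zero understanding
```

From outside, the replies look like someone who speaks Chinese. Inside, there is only a lookup table. Replace the hand-written rules with statistics learnt from a few hundred gigabytes of text and you get something far more flexible– but the room is still empty.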

When AIs Attack

And this is the danger of these "new" AIs– they won't be sending naked Arnies back in time to kill irritating teenagers...

But we could end up in a WarGames-like scenario, where the AI launches a nuke because it doesn't understand the difference between a game and real life.

More realistically, these AIs, run by big banks, security agencies, and big corporations, could mess up your credit score, put you on watch lists, or deny you credit because something you typed somewhere matches what the AI model thinks is naughty. And you won't even know why– and no one will be able to tell you, because a "fair" AI made the decision based on "data and facts".
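As a sketch of how crude such a system can be under the hood (again, an invented example of mine– no real bank publishes its rules): a "risk" filter that flags people on keyword matches alone, with no idea of context:

```python
# Toy "risk" filter: flags text purely on keyword matches.
# It cannot tell a thriller writer from an actual threat.
WATCH_WORDS = {"bomb", "attack", "launder"}

def flagged(text: str) -> bool:
    return any(word in text.lower().split() for word in WATCH_WORDS)

print(flagged("my new thriller has a bomb plot twist"))  # True: novelist flagged
print(flagged("planning a surprise party for mum"))      # False
```

Real systems use statistics rather than literal keyword lists, but the decision can be just as context-blind– and much harder to appeal once it is wrapped in the word "AI".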

Or some idiot MBA type uses these AIs to optimise some business process and the AI decides dumping radioactive waste in the ocean is the best way to do it.

Or (picking on MBAs again) some AI decides the best way to increase profits for an energy company is to cut off power at peak times (like in the middle of a snowstorm).

So as I see it, the threat of modern AIs isn't that they become self-aware, go rogue, and decide to kill humanity. Rather, I fear they will be put into critical positions and start making stupid decisions that harm humans, or humanity.

And if Bitcoin/FTX/all that crypto garbage has taught us anything, it's that all these supposedly "foolproof" algorithms are written by cabbage-headed fools whose incompetence is superseded only by their arrogance.

You want to tell me these same programmers will suddenly become geniuses and start writing flawless code that never goes wrong?

So instead of evil due to self-awareness, Evil Due to Stupidity™.

Or maybe none of this will happen, and Bing will continue insulting Simon Willison– he did start the fight, after all.