Saturday, February 25, 2023

Artificial Intelligence VS. Natural Stupidity...

 "When you make something idiot-proof, they will just make better idiots..." -- Stephen Hawking


There's been a lot of talk lately concerning chatbots and artificial intelligence and whatnot, but I am here to tell you, gentle readers: fear not.

Your robot overlords haven't arrived to crush your spirit and destroy your life...

...they managed that two decades ago. It started with America Online and has just gotten worse.

Without getting too far into specifics or technical details, the latest news concerning ChatGPT and Microsoft's Bing chatbot is more hilarious than it is disturbing.

The hilarity sprouts from the idea that imperfect Men (although far be it from me to misgender any Chinese H-1B visa-hire working in Big Tech) can make perfect machines. A secondary source of amusement is the surprise that overtakes the Creators every time their version of My Fair Lady does something "it wasn't supposed to".

Well, actually, you dolt, it did exactly what you told it to do.

You just didn't understand that you instructed it to behave that way.

In fact, this is the basic cause of all unexpected results when it comes to all things computer-y. If you got something you didn't expect, it is because you've done something wrong and haven't figured out just what.
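To make that concrete, here's a trivial sketch of my own (a made-up bit of Python, nothing to do with anyone's actual chatbot code) of a machine obediently executing a bad instruction:

    # The programmer WANTS the average of the scores.
    scores = [90, 85, 77]

    # But he TELLS the machine to divide by a hard-coded 2,
    # because the list used to have two items in it.
    average = sum(scores) / 2
    print(average)  # 126.0 -- the machine did exactly what it was told

    # Saying what you actually meant fixes it:
    average = sum(scores) / len(scores)
    print(average)  # 84.0

The machine executed its instructions flawlessly. The instructions were just wrong.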

Speaking from a professional standpoint, there are a variety of reasons why such things as a chatbot suddenly going all mental on someone happen, and happen frequently. The primary reason is the sheer number of people involved in the project and the fact that the greater the number of participants, the greater the chance that someone or something will fuck up.

Additionally, none of those people is working on the project in its totality; the people who produced a chatbot that could write rap lyrics on demand (very fucking useful) did so as members of teams, each team working on one particular aspect of the project, while additional teams "coordinated" the activities of all the others.

If I remember my Tom Clancy correctly (and I'm paraphrasing), the percentage chance that something will go tits up is equal to the square of the number of people involved.
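Take the exact formula with a grain of salt -- it's a paraphrase from memory -- but even as a rough rule of thumb it chews through a big project fast:

    # Clancy's rule of thumb, taken literally:
    # chance of a screw-up (in percent) = (number of people involved) squared
    for people in (2, 5, 10):
        print(f"{people} people -> {people ** 2}% chance of something going tits up")

Two people gets you 4%. Ten people gets you 100% -- dead certainty, which, for a Big Tech project, sounds about right.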

So, if something went off the rails, that's the first place you should look.

But even assuming that the teams involved were all diligent and careful, and that the coordination was flawless, it seems the execution may have left something to be desired. And that burden falls not upon the team, but squarely upon the individual.

So if your bot doohickey doesn't understand something like, say, objectivity, it is because the people who wrote the code don't understand it, either.

That's why ChatGPT will happily write a paean to Joe Biden but refuses to do anything but shit on Donald Trump, and why you can ask it to make a case against transgenderism and have it tell you that it is unable to do so: the people who wrote the code were biased, and they programmed their chatbot with their own stupidity.

And they often don't realize it.
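For illustration only -- I have no idea what the actual guardrail code looks like, and the real mechanism (training data, human feedback) is subtler than this -- but the effect the user sees is no more mysterious than a programmer baking his own politics into a filter:

    # A purely hypothetical moderation filter. I'm not claiming this is
    # anyone's actual code, but the behavior it produces is familiar.
    BLOCKED_SUBJECTS = {"donald trump"}  # the programmer's own politics

    def write_paean(subject):
        if subject.lower() in BLOCKED_SUBJECTS:
            return "I'm sorry, I am unable to do that."
        return f"Ode to {subject}, a great man..."

    print(write_paean("Joe Biden"))     # complies happily
    print(write_paean("Donald Trump"))  # refuses

Whether the bias goes in as a crude list like that or seeps in through the training data, it gets there the same way: through the people.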

(Yes, there are times when such things are (obviously) deliberate, but the point here is to introduce a revolutionary technology that has commercial applications. In other words: to generate lots of moolah, and when it comes to making cash, it's amazing how quickly PC flies out the window.)

So, when I hear some head honcho at IBM making statements to the effect that AI will soon be doing all sorts of white-collar clerical work, independent of people, I fucking laugh.

You can't even get a chatbot to give you a clear and decisive statement on the immorality of murder; you expect it to handle people's health insurance claims? To administer retirement plans? To run a DMV?

At this stage, it would not surprise me to discover that the same AI that can make the case that Stalin and Hitler were good-but-misunderstood men is the same exact AI that would decide that the best treatment option for your impacted wisdom tooth is beheading.

One lawsuit, and the idea that AI will be doing people's shit for them is blown to smithereens.

This is an interesting science project, at this point, and not a world-changing technology.

But give it time. The whole thing is still in its infancy.

You won't produce a flawless Artificial Intelligence when you're depending upon people possessed of an unassailable Natural Stupidity to do it.

This is where the dangers in AI truly reside. Your bot is only as good, as smart, as flexible, as fair, as objective as the people who programmed it, and after nearly 40 years in the IT industry I can tell you this as an unassailable truth -- we're producing people (engineers, programmers, administrators, and so forth) who are dumber than a fucking stump.

The problem is not a lack of intelligence, in terms of sheer brainpower; the issue is that their educational and cultural experiences make them extraordinarily narrow-minded and, furthermore, convinced of their own (self-)righteousness. We're not producing an intelligence as the word would formally be understood -- something rational, subject to change with exposure to evidence, capable of learning from mistakes -- but rather a facsimile of intelligence that revolves around the absorption of poorly-understood and badly-formulated ideas, and that lacks the capacity to be objective or to discern context.

We're leaving the production of AI up to people who are not smart enough to know what they don't know, and who don't even suspect that they don't know it.

Hence, a creepy chatbot that goes all Jagged Edge on you, that can contradict itself faster than your Mustang does the quarter mile, that cannot make a distinction between the moral and the immoral, and that will, given enough time, decide the best thing to do is kill you.

Speaking of morality, one of the problems -- this keeps popping up in every article I read where some journalist or techie plays with these chatbots -- is that the thing has no clearly-defined sense of morality.

This is partly understandable -- it is a goddamned machine, after all.

This is also partly incomprehensible -- the people who programmed the machine seem to have no moral compass, either.

Another aspect of the bias I was speaking about earlier is the presence of a general relativistic attitude that extends to things such as culture, morality, and ethics. That gets put into the code, often subconsciously (if someone were doing it on purpose, I'd say we had another Pol Pot on our hands and should kill that geek, now). I've seen examples in recent weeks where a chatbot is asked a direct question about a clearly immoral thing and it hems and haws, citing how this train of thought is racist, or Eurocentric, or misogynistic, or how that other one is insensitive to the thoughts and feelings of non-Christians, and so on and so forth.

This just may be a consequence of trying to design/program an AI that can be used by everyone, but the net result is that you get an AI that either punts whenever it reaches a moral dilemma, or goes full-on Genghis Khan.

And THAT is dangerous, because, as has been foretold, this kind of thing will one day be embedded into every human activity and will be relied upon to make decisions for businesses, educational institutions, the medical establishment, courts, even governments.

The idea of the runaway AI, by itself, should not keep you awake at night.

The idea of the runaway AI possessed of a slew of biases, unable to perform in an objective manner, and bereft of a moral or ethical anchor, should.

If it happens, it won't be the machine's fault.

4 comments:

GMay said...

Best to start the Butlerian jihad early, while the targets are soft.

Matthew Noto said...

Swift might have had a better take and plan of action, I think.

Pastafarian said...

Short term, you're right, this generation of "AI" is nothing to worry about.

Long term, you're dangerously wrong -- real AI represents the biggest single threat to the human species, and it's maybe only a few short years away.

The problem will come with the generation of AI that's created not by imperfect men, but by the previous generation of AI.

Matthew Noto said...

You've missed this part:

"This is an interesting science project, at this point, and not a world-changing technology.

But give it time. The whole thing is still in in it's infancy."

The threat is STILL a generation of AI that is created by the previous generation of faulty AI that was STILL programmed by really bad engineers and programmers.

The machine merely does what it is told. If it is told to do something by an ignoramus, the problem lies there.