Thursday, March 2, 2023

Ethical AI?

 No quote today because I couldn't think of one that was appropriate.



Yesterday this Galactic Dictator was forced to undertake what was, laughingly, termed an "Ethical AI Course".

My employer "mandated" it. I have come to despise that word in the last three years.

The names have been changed to protect the fucktarded.

The idea behind this course, and some of the other nonsense I have been subjected to on this subject over the years, is that any artificial intelligence regime can -- and should -- be scrubbed clean of any errors or potential errors (or sometimes non-errors) that can be attributed to anything that might be considered bias.

A secondary, but surely not-less-important, idea is that the use of artificial intelligence must be undertaken in something of a holistic spirit. The purpose of AI, you see, is to "help humanity", not, as one might imagine, to do the things humans can already do for themselves, only faster, on a grander scale, and without the papercuts. The premise is that any AI regime that was created "to help" should take great care to ensure that it does not harm.

Consider it a Hippocratic Oath for Geeks.

That all sounds well and fine. We should take care that anything as potentially dangerous as artificial intelligence is as free as we can make it of any harmful qualities or potential.

But in the process of taking this course I began to notice a certain...mindset...creeping into the discussion which, from my point of view, renders useless any attempt to make AI useful-but-not-harmful that is undertaken under the auspices of this mindset.

And a catastrophic failure, at that.

It also might help explain the bizarre behavior we've seen recently in some of the more-(in-)famous examples of chatbots that have begun to hit the market.

As succinctly as I can sum it up, this mindset begins with two redefinitions of otherwise easy-to-understand concepts.

The first is the term "ethical".

The second is the term "harm".

The underlying problem here, sez me, is Political Correctness. More specifically, the pernicious "woke" variety.

Ethics used to be defined like this:

A system of moral principles; the rules of conduct recognized with respect to a particular class of human actions.

This appears to no longer be the case, specifically for two reasons:

a. AI, and computer programming in general, does not lend itself very well to being shaped by morality. It is a mathematical function, and numbers, in and of themselves, have no (im-)moral quality, nor does math.

b. When speaking of moral quality with regards to Artificial Intelligence, who gets to define what represents a moral quality?

Numbers and mathematics are OBJECTIVE things, after all, while one can argue that morals are not.

In fact, the answer to b is quite simply "whoever designs the AI".

This becomes problematic when joined to the other redefinition, which is the word "harm". Harm can be defined as any circumstance which may cause pain, discomfort or injustice to another being. If that definition holds true, then the natural follow-up to defining harm becomes deciding what, in practice, actually constitutes it.

And the answer to that can be SUBJECTIVE. In many cases, very subjective.

This is where things begin to get very messy.

So, let us say, for the sake of example, that a bank wishes to use AI to screen potential loan customers for things like creditworthiness before offering or approving a loan. Let's assume that the AI in question is using a variety of data to make its decision, including data from the Census and perhaps the IRS.

In the course of this screening operation, the AI determines, based upon statistical data compiled by the IRS and Census, that people in a certain neighborhood, where the average income indicates poor credit risk, should be denied loans.

In real terms, this is an AI that is determining RISK. This is what bankers are interested in, after all. When they loan money out, they'd like to get it back, and so they must evaluate the risk involved in the loan.

The AI spits out the rejections based upon data which is assumed to be purely empirical and verified; it does this with no personal feelings and no intrinsic biases.
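(For illustration's sake, here is roughly what such a screen looks like under the hood -- a minimal sketch, with invented field names, weights, and cutoffs, not any real bank's model and certainly not anything the course showed me:)

```python
# Toy illustration only: every field name, weight, and cutoff is invented.
def screen_applicant(applicant, neighborhood_stats):
    """Approve or deny based purely on income and neighborhood statistics."""
    # The "empirical" inputs: the applicant's own income and a Census/IRS-style
    # median income for their ZIP code. Race appears nowhere in the data.
    zip_median = neighborhood_stats[applicant["zip_code"]]["median_income"]
    risk_score = 0.6 * (applicant["income"] / 50_000) + 0.4 * (zip_median / 50_000)
    return risk_score >= 1.0  # anyone under the cutoff gets denied


applicants = [
    {"name": "A", "income": 38_000, "zip_code": "10301"},
    {"name": "B", "income": 82_000, "zip_code": "10021"},
]
stats = {
    "10301": {"median_income": 41_000},
    "10021": {"median_income": 95_000},
}

for a in applicants:
    print(a["name"], "approved" if screen_applicant(a, stats) else "denied")
```

Same data in, same denials out, every single time.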

In the Ur Days of Data Processing, this would have been considered all well-and-good, an example of exemplary programming and perfect use of available data to make a sound financial decision.

Nowadays, this is called "racism".

(Actually, you can insert any form of -ism in there that you might imagine).

This can be considered harm.

Now, the AI doesn't (necessarily) know the race(s) of the people it is classifying as poor credit risks; it simply was given data and told to sort it in such-and-such a fashion, which one assumes, because it is empirical, is unbiased.

And this is where "ethical" needs to be redefined.

For even if the AI, programmed with the best of intentions, manages to spit out names like Shaquanda Magillicuddy and Raekwon Washington with anything resembling even a scintilla of regularity, it is somehow, still, racist, and therefore, harmful. Therefore, it is unethical.

Which brings us to the next problem presented by "Ethical AI".

The Reporting.

As an adjunct to "Ethical AI Rules" there seems to be a requirement -- in fact, given today's social mores, the whole thing fairly screams -- for a system of Stasi-like reporters who are constantly on the lookout for anything that causes "harm" based upon a variety of (often-)subjective factors, which are then used to modify the AI to make it "more equitable".

The course not only beats this drum ad nauseam, it actively encourages everyone involved to become a junior member of the Artificial Intelligence Stasi. If you "feel" or "perceive" that an AI is behaving "unethically", however subjectively defined, it is somehow your cosmic duty to report this apostate computer program so that it might be "rectified", i.e. reprogrammed so as to take all the nasty -isms out of it.

Toot sweet, Bucko.

Essentially, this requires cheating. For as long as anything can be deemed "harmful", the AI gets reprogrammed in such a way as to continually distance it from its original purpose, i.e. the empirical evaluation and processing of data applied towards solving a specific problem.

I've spoken about this before, in a limited context, but it is time that we brought out the real problems with AI, ethical or not, that very often cannot be simply avoided, papered over, or ignored.

And that is BIAS, in a variety of forms. I hate to keep repeating this, but this is the crux of the whole problem with AI at the present.

Generally speaking, there are three forms of bias inherent in any AI regime, and they are all unavoidable in the context of creating an "ethical" AI given the parameters listed above.

The first is algorithmic bias.

Basically, this is what we now call it when the numbers tell you something you don't like or would rather not admit. We've already seen deliberate algorithmic bias play out for reelz when, for instance, Facefuck (er, Facebook) or Twitter didn't like your COVID opinions, and so the Code Monkeys at Central went ahead and rejiggered the algorithms so that "disinformation" (defined subjectively, often selectively) resulted in a banning, automatic removal of posts, or downgrading of posts within a feed.

Sometimes, however, it is just a matter of the numbers not lying -- of their revealing something uncomfortable but unmistakably true -- which could bring "harm" to a gentle soul (mental patient) who has an allergy to truth, or who has a particular axe to grind.

Under the modern "Ethical AI" regime, any mathematical result that tells us an unpleasant truth, and which might cause someone's fragile feelz to be bruised, is rebranded as "algorithmic bias" and the narcs at "Ethical AI Headquarters" (would that be the Lubyanka or the Prinz-Albrecht-Strasse?) will demand it be removed or modified...

...creating a whole new bias.
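(If you want to see how little "rejiggering" it actually takes, consider this hypothetical sketch -- obviously not Facebook's or Twitter's actual code, just an illustration of how a single tunable constant decides what the feed shows:)

```python
# Hypothetical feed-ranking sketch. Not any real platform's code; it only
# illustrates how one tunable constant controls what gets seen.
def rank_post(engagement, flagged_as_misinfo, misinfo_penalty):
    """Score a post for the feed; flagged posts get multiplied down by the penalty."""
    score = engagement
    if flagged_as_misinfo:
        score *= misinfo_penalty  # 1.0 = no effect, 0.0 = effectively removed
    return score


post_engagement = 950.0
flagged = True  # somebody, somewhere, labeled this post "disinformation"

# Same post, same data, same math -- only the knob changes.
for penalty in (1.0, 0.2, 0.0):
    print(penalty, rank_post(post_engagement, flagged, penalty))
```

Nothing in the numbers changed; only what the numbers are allowed to say.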

The second, as I've stated before, is Programmer Bias. Individual programmers will favor different methods of achieving the same goal, and so they will program something according to their tastes. Very often, and more often subconsciously than deliberately, their choice of method will...ahem...color the result.

There really is no defense against this. But, again, once a bias -- legitimate or not -- is detected, the effort to ensure "Ethics" will require the thing be reformulated, as many goddamned times as it takes, until every last molecule of detectable programmer preference is bleached out.

This cannot help but affect the final product.
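(A trivial, made-up example of what I mean: two programmers handed the identical spec -- "score the applicant's income risk" -- each making a perfectly defensible choice about missing data:)

```python
# Two programmers, one spec: "score the applicant's income risk."
# Both choices below are defensible; neither is neutral. All numbers invented.
NEIGHBORHOOD_MEDIAN = 41_000  # stand-in for a Census-style figure

def risk_score_a(income):
    """Programmer A: a missing income is treated as zero -- maximally cautious."""
    return (income or 0) / 50_000

def risk_score_b(income):
    """Programmer B: a missing income is filled in with the neighborhood median."""
    return (income if income is not None else NEIGHBORHOOD_MEDIAN) / 50_000

# An applicant who left the income field blank:
print(risk_score_a(None))  # 0.0  -- looks like a terrible risk
print(risk_score_b(None))  # 0.82 -- looks like a middling-but-fundable risk
```

Neither programmer did anything "wrong", and both results end up baked into the final product.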

The final one is Selection Bias.

What data does the AI decide to use and which data does it choose to ignore? Again, this is a matter of humans -- for the AI doesn't make the decision all on its lonesome; it follows instructions laid out by humans.

Once again, if the method of selection does not pass the Politically Correct Smell Test of the Kubernetes (look it up!) Kempeitai, it is verboten, and must be reshaped so that the square peg is sledgehammered into the round hole.
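(In miniature, and again with field names and weights that are pure invention, selection bias looks like this -- the humans upstream decide which columns exist at all:)

```python
# Selection bias in miniature: whoever builds the pipeline decides which
# columns are collected and used. Field names and weights are invented.
applicant = {"income": 38_000, "zip_median_income": 41_000, "rent_paid_on_time_years": 9}
weights = {"income": 0.5, "zip_median_income": 0.25, "rent_paid_on_time_years": 4_000}

def score(applicant, features):
    """Score using only the columns the humans chose to include."""
    return sum(weights[f] * applicant[f] for f in features)

# The dataset as originally selected: no rent-payment history was ever collected.
print(score(applicant, ["income", "zip_median_income"]))                              # 29250.0
print(score(applicant, ["income", "zip_median_income", "rent_paid_on_time_years"]))   # 65250.0
```

Same applicant, same math; the only difference is which data somebody decided mattered.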

The resulting pile of fecal matter then is basically useless. Oh, it may do some things really, really well, especially after it has been reprogrammed 900 times, but then it always turns out there's a need to revise attempt 901 in a more-holistic way, which renders the first 900 instances of success somewhat suspect in the future.

And so it is that you end up with chatbots that can sing the praises of the most-horrible people to ever live, threaten to kill you, fall in love with you and then immediately profess to detest you, curse you out, threaten to report you to authorities for being a MAGAhead, engage in rank hypocrisy, or condescendingly lecture you on why your opinions absolutely suck, are absolutely wrong, and mark you out as a Nazi.

Assuming the thing doesn't get all fucking uppity and just refuse to answer your question, because the Snowflakes have subjected it to an electron microscope, a spaghetti strainer and a colonoscopy.

After a short while, I began to understand, however, what was really happening here.

The Course, such as it was (it did not tell me anything I didn't already know, and was, frankly, full of irrelevant batshit), wasn't intended to teach me the finer points of Ethics, or even Artificial Intelligence, at all.

It was more like a test of my responses to a variety of, shall we say, slanted scenarios. Faits accomplis, in a manner of speaking.

For when it came time to answer questions regarding the course, the computer that provided it was keeping score.

And every time I gave something that seemed to me to be a perfectly logical and correct answer, I ran a very good risk of losing points. Conversely, when I answered a question with the "proper" "woke" choices, I was rewarded with more points.

It did not take me very long to figure this out, and so, because the course was required, I reckoned I would perform an experiment of my own: on every question in which the smell of political, racial or gender-based orthodoxy might be involved, I deliberately answered with what seemed to be the most "woke" responses from those available.

And wouldn't you know it? I achieved a perfect score from that point onwards! A score, I was then told, that would be recorded for posterity.

You know, like when you got the highest score on PacMan at the corner candy store and your initials remained there forever until someone else with a quarter knocked you off Olympus.

So the real point of the "Ethical AI" course wasn't so much to teach me anything about AI, or how to be a responsible, more-socially-conscious geek, but rather to measure the extent to which my brain had been turned to tapioca by political correctness.

It was all about measuring my reliability, defined as "willingness to be a simp".

In other words, this had nothing to do with my actual WORK. It was a means by which the wokescolds in HR could gauge my ability to play well, like a good little mentally-neutered drone, in the Diversity Sandbox.

Two hours of my life that I will never, ever get back.

5 comments:

Anonymous said...

I have to do over a dozen stupid mini courses yearly which basically acknowledge I won’t go against my corporate interests. They’re so stupid that I let them play while I’m at lunch, elsewhere on my laptop.

GMay said...

Surely there's a quote from Asimov that might have graced this piece? I haven't read any of his stuff in about 30 years, so it's a bit misty for me.

But I do want to curse you for making me look up some filthy IT neckbeard term!

Matthew Noto said...

** Facepalm **

Asimov...of course...

Matthew Noto said...

Yes, I made you look it up. But you learned something, didn't you?

GMay said...

Yes, I remember learning something, and also forgetting it. The hard drive between my ears filled up a few years ago. Shoving something new in there resembles Daffy Duck stuffing the genie back in the lamp.

https://youtu.be/DE0miV8YBBw?t=65