The Ethical AI

I find it funny that whenever a clear statement is explained or reinforced with an intensifying adverb, the emphasis is a hint that the opposite is true, or that the statement is not entirely accurate.

He absolutely doesn’t drink alcohol. I will totally buy a ticket for the next Taylor Swift concert. The students will use ChatGPT entirely for research purposes.

ChatGPT and “(Ethically)!” in one sentence, in a paid ad.

The first time I encountered the moral dilemma around the use of AI was in the Robot series by Isaac Asimov. I read it long before I owned a computer and totally bought the idea of a positronic brain. Asimov assumed that robots, if allowed to do whatever they wanted, would simply start killing. He envisioned a set of hard-wired limitations ensuring that AI never hurts humans (the full list of 3+1 laws is here) as the only way for robots to be useful. Asimov also noted in his books that robots would replace human labor and eventually cause stagnation, but that theme was addressed only partially in his series, after centuries of expansion.

Who could’ve imagined that the first thing resembling AI would need a very different set of laws than Asimov’s first three? Present-day AI already comes in multiple forms, each with its own ethical challenges. Prompt-based tools tend to use human content and present it as their own, with no citation or link to the author. They’re awesome for faking homework. Image-generation tools copy artists’ work and make it semi-unique, filling the need for cheap illustrations on spam websites that slip past Googlebot undetected. Chatbots and robocalls automate tasks that were once reserved for humans, causing unemployment. There has to be a fine line between what’s okay and what isn’t.

I’ve been thinking about how I would regulate all this since my last post on the subject last year, and came up with roughly this:

  • Any statement by AI should cite its sources of information and provide links
  • AI should not present slight modifications of human content as its own
  • AI should not use one human’s prompts outside the context of the interaction with that human

But after writing this, I had a lightbulb moment: if anyone has put thought into this, it has to be the EU administration. And indeed, the EU has agreed on a much longer document, in which generative AI is just one type of risk. It contains an Asimov-like masterpiece:

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
EU AI Act

Generative AI is not even considered a high-risk type of tool. The EU considers AI tools high-risk if they classify humans, analyze emotions, collect faces, give advice on health or legal matters, talk to children, and so on. How did Asimov not think of that? The existential dangers of a toy that can explain dangerous activities to children.

Overall, the definition of ethical use of AI is taking some shape, but I wonder how much damage will be done to human content creation and creativity before any of it is adopted.

At least none of these risks is Skynet, and Asimov’s laws are not yet relevant.

Chekhov’s gun

Chekhov’s gun is the principle that a story should contain no irrelevant elements, because they create an expectation that’s never met. Elements like a gun with no connection to the story should be edited out. Assuming a blog post is a story, you cut it down until only the essence remains.

I find this principle fascinating and probably right, whether I agree with it or not. And while I try to follow it when writing, I enjoy seeing well-placed exceptions. One of my favorite book quotes is completely disconnected from the storyline, yet it stuck in my brain and left a footprint there.

“Trout, incidentally, had written a book about a money tree. It had twenty-dollar bills for leaves. Its flowers were government bonds. Its fruit was diamonds. It attracted human beings who killed each other around the roots and made very good fertilizer.”

― Kurt Vonnegut, Slaughterhouse-Five

Why do bad things happen to good people?

I feel uneasy when something bad happens, particularly to people who deserve the opposite. Traffic accidents, for example, are a constant source of unfairness. They are indiscriminate and unforgiving, particularly to the people with the least guilt – someone crossing the street or sitting in the backseat. The news outlets make sure we know about the worst of these accidents, or the second worst on a slow news day.

For most of my life, I believed that an answer to the question exists. For example, a car accident must be preventable – drive slowly or be careful when crossing. Don’t go out in the car on a snowy day. That person who got sick? They must’ve eaten too much sugar, smoked, or something else. But what about the friends who got hit by a young BMW driver while waiting for a bus? Did they do something terrible and get punished by a cosmic entity?

I’ve found a variety of spiritual explanations for the problem: “There’s a greater plan for everyone, everything bad that happens now is for the good that will happen tomorrow”, “God takes the angels early” and so on. For example, you break a leg but it saves you from being drafted into the military and dying there. Also, a whole range of explanations are based on the concept of karma and punishment. A seemingly random terrible thing can be explained by that person’s previous sins. For ages, I was stuck in this kind of thinking, and probably still am.

Thinking Errors

Thinking errors, or cognitive distortions, are beliefs that make a person perceive the world in a distorted way. It took me a while to realize that while the question “Why do bad things happen to good people?” is not a thinking error, almost any answer to it is, and it can be any of a number of different ones.

I randomly read about the Fallacy of Fairness one day and had a slowly progressing lightbulb moment.

The Fallacy of Fairness is the belief that the things that happen should be fair. You work hard, you should get a reward. You do a good deed, it gives you +1 karma. That person is good, so they should be rewarded, and they should not be hit by a car. When an unfair event happens, you feel bad. You try to find reasons for the event, to make it fit within a realm in which only fair things are allowed to happen. The whole train of thought gives no relief, because eventually you reach the fundamental problem: bad things do happen to good people, and this makes no sense if the world is fair. But the world cannot be unfair.

A Chess Reference

In the game of chess, players try to checkmate each other by executing multi-move attacks. Weaker players will execute these attacks no matter what their opponents play. However, an unpredicted response often refutes the attack after the first move, so sticking to the same plan just leads to defeat. The position changes after every move, and you need to constantly rethink it and look for new multi-move combinations.

To translate this example to real life, let’s say a catastrophe happens. The world changes. It’s not fair. It’s easy to:

  • Stick to the past. Say that the change in the world was not fair. We need it to be fair and we refuse to move on. However, in Chess, this leads to losing games – we are moving pieces without reevaluating the position.
  • Explain it, and make it sound fair. The car was stolen because of bad karma. The dinosaurs did horrible things to each other and never traveled to Mars, that’s why they got the asteroid. However, explaining an evil with something like bad karma is what Dr. Albert Ellis defined as Rationalization. It is yet another thinking error.

The correct response, if life were a game of chess, would be to base your next move on the current position, not on a past one. We can’t control most of the bad stuff that happens. It can be random, or it can be for a reason, or it can be for a reason so stupid that we would never believe it even after seeing all 10 slides. We can only choose how we respond to the bad event.

My Current Answer

Bad things happen to good people because it is possible. Play an infinite number of chess games, and any possible position will eventually occur. Unlike chess, life does not offer the relief of treating lost games as preventable mistakes. Life offers the Fallacy of Fairness as a chance to feel extra bad, and Rationalization as a way out of feeling bad by blaming bad karma or hoping for a future cosmic reward.

We still have the opportunity to be kind to one another and to make sure good things also happen.

Poor Man’s Bitcoin

Communism withdrew from Bulgaria in 1989, and when the political police became unemployed, all sorts of weird new things popped up to fill the gap. Grocery stores and supermarkets. New TV channels. More than one kind of ice cream. Fortune tellers. Horoscopes. Insurance rackets. Chain letters too – rewrite this letter 5 times and put it in 5 mailboxes, and you’ll live a long and happy life. Don’t, and you’ll die in pain. Multiple testimonials included.

One kind of chain letter had a price tag and wasn’t supposed to work, but it worked for a while. Let’s call it The Poor Man’s Bitcoin. Here is roughly how it worked; excuse my faint memories for any inaccuracies.

There’s a sheet of paper, cut from a notebook, handwritten by a person we can call “The Seller”. That sheet of paper contains the rules of the chain letter and 6 home addresses or PO boxes. The rules are as follows:

  • You need to send 2 Leva to the last 5 people in the chain, and also 2 Leva to the person who invented it (the number varied)
  • The way you prepare new copies is by rewriting the sheet, adding your name at the bottom of the list of 5 people in the chain, and removing the first one
  • It contains terrible curses that will reach you if you violate the rules, sell more or less than 5 sheets of paper, or don’t mail money to the people in the chain
  • In order to get your money back, you need to temporarily become “The Seller”.

If you follow the rules closely, you should quickly get your money back – just find 5 people to buy your sheet of paper and letters with money would start flowing. Does it sound familiar?

  • It inflated algorithmically
  • Its intrinsic value was zero
  • The only things that kept it living for some time were people’s feelings and beliefs, mostly greed and fear of missing out
  • Some people loved it and were willing to argue that they’d become rich once their thousands of letters came back
  • It worked because people didn’t really understand how it worked, or look past the first 5 or 25 people who would send them money
  • Each transaction benefited the author of the scheme, plus the regular system that guaranteed the money transfer (the post office)
  • Everyone could start their own chain letter
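
The “thousands of letters” weren’t entirely imaginary, at least on paper. Your name stays on the sheet for 5 generations of resales, and each generation of buyers mails 2 Leva to every name on the list. A quick sketch of the theoretical payout (the exact numbers come from my faint memory of the rules, so treat them as assumptions):

```python
# Theoretical payout for one participant of the chain letter.
# After you sell your 5 sheets, your name stays on the list for
# 5 generations of resales; generation g has 5**g buyers, and
# each buyer mails 2 Leva to every name on the sheet.
PRICE = 2        # Leva mailed to each name on the list (assumed)
FANOUT = 5       # sheets each buyer must resell
GENERATIONS = 5  # how long a name stays on the list

letters = sum(FANOUT ** g for g in range(1, GENERATIONS + 1))
print(letters, letters * PRICE)  # 3905 letters, 7810 Leva
```

In practice, of course, the pyramid stalls long before five full generations of buyers materialize.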

Once most of us had been exposed to it, the number of returning letters turned out to be disappointing, and most people realized that the curses guaranteeing distribution don’t work. The scheme vanished, to be replaced years later by lottery tickets: same feelings, but no need to know how to write.
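
The disappointment is plain arithmetic: each level of the pyramid needs five times as many new buyers as the previous one, so the pool of potential participants runs out after a handful of levels. A minimal sketch, assuming a 1990s Bulgarian population of roughly 8.5 million:

```python
# Count how many pyramid levels fit before the cumulative number
# of participants exceeds the population. Level k needs 5**k new
# buyers; level 0 is the original seller.
POPULATION = 8_500_000  # rough 1990s population of Bulgaria (assumption)
FANOUT = 5              # sheets each participant must sell

level, total = 0, 1
while total < POPULATION:
    level += 1
    total += FANOUT ** level

print(level, total)  # 10 levels, 12,207,031 participants
```

Ten levels in, the scheme needs more participants than the entire country, children included, which is why most late buyers never got their money back.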

I find it amusing that so many people consider Bitcoin a form of investment while it’s pretty much like the chain letters from the 90s. At least this is how I see things. I do not understand how it works.

If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.