The Ethical AI

I find it funny that whenever a very clear statement is reinforced or exaggerated with an adverb, the reinforcement itself is a hint that the opposite is true, or that the statement is not entirely accurate.

He absolutely doesn’t drink alcohol. I will totally buy a ticket for the next Taylor Swift concert. The students will use ChatGPT entirely for research purposes.

ChatGPT and “(Ethically)!” in one sentence, in a paid ad.

The first time I encountered the moral dilemma around the use of AI was in the Robots series by Isaac Asimov. I read it long before I owned a computer and totally bought the idea of a positronic brain. Asimov saw that robots, if allowed to do whatever they wanted, would just start killing. He envisioned a set of hard-wired limitations ensuring that robots never hurt humans (the full list of 3+1 laws is here) as the only way for them to be useful. Asimov also noted in his books that robots would replace human labor and eventually cause stagnation, but that was only partially addressed in his series, after centuries of expansion.

Who could’ve imagined that the first appearance of anything resembling AI would need a very different set of laws than Asimov’s first three? Present-day AI already comes in multiple forms, each with its own ethical challenges. Prompt-based tools tend to take human content and present it as their own, with no citation or link to the author. They’re awesome for faking homework. Image-generation tools copy artists’ work and make it semi-unique, filling the need for cheap illustrations on spam websites that slip past Googlebot undetected. Chatbots and robocalls automate tasks that were once reserved for humans, causing unemployment. There has to be a fine line between what’s okay and what isn’t.

I’ve been thinking about how I would regulate all this since my last post on the subject a year ago, and I came up with roughly the following:

  • Any statement by AI should cite its sources of information and provide links
  • AI should not present slight modifications of human content as its own
  • AI should not use one person’s prompts outside the context of the interaction with that person

But after writing this down, I had a lightbulb moment: if anyone has put thought into this, it has to be the EU administration. And yes, the EU has agreed on a much longer document, in which generative AI is just one type of risk, and it contains an Asimov-like masterpiece:

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
— EU AI Act

Generative AI isn’t even considered a high-risk type of tool. The EU considers AI tools high-risk if they classify humans, analyze emotions, collect facial images, provide advice on health or legal matters, talk to children, and so on. How did Asimov not think of that? The existential danger of a toy that can explain dangerous activities to children.

Overall, the definition of ethical AI use is taking some shape, but I wonder how much damage will be done to human content creation and creativity before any of it is adopted.

At least none of these risks is Skynet, and Asimov’s laws are not yet relevant.
