What’s the Endgame with AI?

AI can write code. It can’t do 100% of the work, can’t even do 50%, but it can do a lot and is improving. It doesn’t get tired and doesn’t freak out when facing a new codebase.

It can write blog posts. Maybe not good ones, but posts that can fool some readers. And it can generate them in quantities no human can match.

AI can respond to support requests. Maybe not the greatest support on the planet, but good enough to be used by every Bulgarian telecom operator. It might be bad, but it is fast.

None of that was possible 10 years ago. So if we plot a chart from nearly zero AI a decade ago to AI replacing some humans today, where does the line go 10 years from now? Is AI becoming omniscient?

I recently read a 1962 book in which an AI had the ambition to eliminate human creativity entirely (Gordon Dickson’s Necromancer). Dickson wasn’t far off from what LLMs are doing at the moment. It’s hard to predict how far this will go without imagining some things AI could start doing that it doesn’t do right now.

Here are some AI engineering milestones to watch for.

1. AI, play me 5 new seasons of Wednesday

We should be close to that. Maybe a bit expensive in tokens, but what’s really missing to make it possible? It doesn’t even need a robotic body.

Speaking of bodies, giving it access to a printer or some other tools opens a Pandora’s box of possibilities.

2. AI, make me a sandwich

Why not? Building that may not even require AI. Most of the tech for it is available; perhaps some software and hardware is missing here and there, but we can imagine startups building cooking bots in the near future. Cleaning and refilling the food toner would be very interesting challenges.

3. AI, make me a car

Okay, this one is tougher. Lots of patents would be violated, but AI doesn’t seem to care about that right now, and I’m sure there will be ways to circumvent intellectual property. Would it ever be possible? It should be. The AI might need access to some machinery, but nothing that doesn’t already exist.

4. AI, make me a nuke

An AI capable of building cars would have no trouble producing weapons, particularly by copying and modifying existing weapon systems. I’m sure it will be used for weapons long before it’s used for cars. But what if this capability becomes available to individuals, not just state actors?

And last but not least,

5. AI, print me some cash

The primary reason for not going down this ladder would be that the blueprints are protected, not that an AI couldn’t be used that way if it were trained that way.

Overall, I think the development of AI presents bigger problems than humans becoming redundant, administrative bloat, and UBI. We already observe a decline in all kinds of human activities that are being automated and made mediocre by AI, and humans won’t stop trying to use it elsewhere. I see lots of room for changes that could damage the existing societal order.

Bad Grammar Can Be a Feature

Search engines love to consume lengthy content and rank it higher. ChatGPT can generate tons of additional meaningful text around any idea. As a reader, however, I prefer content written by humans and for humans. I’d rather read meaningful ideas in ugly sentences with simple words and poor grammar than AI-assisted beautiful novellas with a summary and headlines.

In that context, bad grammar, slang, lower-case text, and such can be a form of anti-language that identifies the post as human-written and non-AI-augmented. It can be a feature, not a bug (now I have an excuse to turn off Grammarly lol).
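
As a toy illustration of the anti-language idea, here’s a minimal Python sketch that scores text for such human markers. Everything in it is an assumption made up for this post: the slang list, the choice of signals, and the weighting are not a real detector of anything.

```python
import re

# Hypothetical "human signals": informal markers that polished AI output
# tends to avoid. The list and the scoring below are invented for illustration.
SLANG = {"lol", "tbh", "imo", "gonna", "kinda"}

def human_signal_score(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    # Fraction of words that are slang.
    slang_rate = sum(w.lower() in SLANG for w in words) / len(words)
    # Sentences that start lowercase: a deliberate anti-language marker.
    sentences = [s for s in re.split(r"[.!?]\s+", text.strip()) if s]
    lowercase_starts = sum(1 for s in sentences if s[0].islower())
    return slang_rate + lowercase_starts / max(len(sentences), 1)

print(human_signal_score("tbh i never run grammarly. it kinda ruins the vibe lol"))  # high
print(human_signal_score("In conclusion, these requirements are essential."))        # low
```

Of course, the moment such a heuristic mattered, AI would be trained to fake the slang too, which is exactly why it’s a social signal rather than a technical defense.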

The Ethical AI

I find it funny that whenever a very clear statement is reinforced or exaggerated with an adverb, that adverb is a hint that the opposite is present or that the statement is not entirely true.

He absolutely doesn’t drink alcohol. I will totally buy a ticket for the next Taylor Swift concert. The students will use ChatGPT entirely for research purposes.

ChatGPT and “(Ethically)!” in one sentence, in a paid ad.

The first time I encountered the moral dilemma around the use of AI was in the Robots series by Isaac Asimov. I read it long before I owned a computer and totally bought the idea of a positronic brain. Asimov saw that robots, if allowed to do whatever they wanted, would just start killing. He envisioned a set of forced limitations ensuring that AI never hurts humans (the full list of 3+1 laws is here) as the only way for robots to be useful. Asimov also noted in his books that robots would replace human labor and eventually cause stagnation, but that was only partially addressed in his series, after centuries of expansion.

Who could’ve imagined that the first appearance of anything resembling AI would need a very different set of laws than Asimov’s first three? Present-day AI already comes in multiple forms, each with its own ethical challenges. The prompt-based tools tend to use human content and present it as their own, with no citation or link to the author. They’re awesome for faking homework. The image generation tools copy artists’ work and make it semi-unique, filling the need for cheap illustrations on spam websites that slip past Googlebot undetected. The chatbots and robocalls automate tasks once reserved for humans, causing unemployment. There has to be a fine line between what’s okay and what isn’t.

I’ve been thinking about how I would regulate all this since my last post on the subject last year, and I came up with roughly the following:

  • Any statement by AI should cite the sources of information and provide links (see the sketch after this list)
  • AI should not present slight modifications of human content as its own
  • AI should not use the prompts of one human outside the context of the interaction with that human
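
To make the first rule concrete, here is a minimal sketch of what a citation-carrying response envelope could look like. All the names and the example data are hypothetical; a real system would need provenance tracking far beyond a constructor check.

```python
from dataclasses import dataclass

@dataclass
class Source:
    author: str
    url: str

@dataclass
class AIStatement:
    text: str
    sources: list[Source]

    def __post_init__(self) -> None:
        # Rule 1: no statement leaves the system without at least one citation.
        if not self.sources:
            raise ValueError("AI statements must cite their sources")

# A cited statement passes; an uncited one raises at construction time.
ok = AIStatement(
    text="The Three Laws of Robotics first appeared in Asimov's 'Runaround' (1942).",
    sources=[Source(author="Isaac Asimov", url="https://example.com/runaround")],
)
```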

But after writing this, I had a lightbulb moment: if anyone has put thought into this, it has to be the EU administration. And yes, the EU agreed on a much longer document, in which generative AI is just one type of risk, and it contains an Asimov-like masterpiece:

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
EU AI Act

Generative AI is not even considered a high-risk type of tool. The EU considers AI tools high-risk if they classify humans, analyze emotions, collect faces, provide advice on health or legal matters, talk to children, and so on. How did Asimov not think of that? The existential dangers of a toy that can explain dangerous activities to children.

Overall, the definition of ethical AI use is taking some shape, but I wonder how much damage will be done to human content creation and creativity before any of it is adopted.

At least none of these risks is Skynet, and Asimov’s laws are not yet relevant.