What Level of AI Use Is Okay for a Blog Post?

I spend part of my day reading posts, and I catch myself developing an allergy to AI slop. At the same time, AI is built right into the editor, eager to rewrite every word I type. So not using it at all would be foolish. Here’s my gut feeling about what’s fine and what isn’t.

Not Fine

  • Copy/pasting any direct results of a prompt. Why that’s not fine deserves a separate post, but for now: it’s a form of bad taste.
  • Any form of GenAI images. They were fun for a little bit. Now they’re just a way of signaling that the post is AI slop.
  • Letting AI change the meaning of the content while improving it. AI tends to flip the meaning when making subtle improvements.
  • Engine-oriented titles. If I’m the reader, I want text oriented for humans, not engines. The engines can burn some more CPU and figure it out.
  • Letting AI remove complex words, slang, puns, and emotion. Too much uniformity doesn’t improve readability, it sterilizes the text.
  • Emoji. Thanks to ChatGPT, emoji are like peanuts in a text.

Fine

  • Syntax, clarity, and feedback. AI can help improve the structure and readability of the text.
  • Improving individual sentences. I tend to write long sentences and use unconfident words (examples from my post below). Funny that it uses the word “unconfident” while asking me to remove the word “unconfident.”
  • Research. Copy/pasting the body of a post to ChatGPT sometimes helps find stuff I missed and fact-check claims. Particularly useful for book reviews.
  • SEO improvements, as long as it doesn’t change the meaning of the content.

Overall, when I catch excessive AI use, I feel an ick about the text. If it feels AI generated entirely, zero chance that I’ll read it.

Why aren’t intelligent people happier?

I found this nice article today that digs into the subject. Check it out.

The article suggests that we’ve been measuring intelligence the wrong way, which leads to poor correlation with life success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on, depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.

The article goes further and states that AIs can’t become AGIs because they’re only operating with human definitions (training data), and well-defined problems coming from prompts. AGIs would have to master poorly defined problems first.

Protect the Lion

We are going to have a protest for preserving the Bulgarian national currency, the Bulgarian Lev (Lion). The organizers decided to use an AI generated lion for their posters.

I’ve never seen a lion with such crooked teeth. He has one mid-tooth at the top, zero at the bottom, his gums are made of plastic, and he has no tongue. Something in his face reminds me of a pug. He’s also in desperate need of a dentist.

Here’s my proposal for a more realistic lion:

A lot more teeth, a tongue, and the correct number of fingers on some limbs. Also officially dressed, ready for a political career.

GPT-5 is out

I tried it in an 8h coding session. It performs worse than Claude 4 for me, and it’s slower: it made me wait 10 minutes at a time. Eventually I gave up, used my brain to understand the problem properly, and hand-held Claude to a solution, which took about an hour. I think I lost most of my time with GPT-5 in loops where it fixed one thing at the expense of another; the general approach looked sound enough to fool me, but not sound enough to eventually work in all cases.

This might be due to high traffic and not because the model is worse. I’ll give it another chance when the hype fades.

Pay to Crawl

Cloudflare introduced a private beta of a service where crawlers are required to pay to access the information on a site. It made me think.

AI breaks the open web model in at least three different ways.

  • First is that the open web gets filled with AI-generated garbage
  • Second is that any word posted anywhere, from websites to DMs, may be used to train models, and then later retold and sold as AI
  • Third is that the lack of transparency in how the models work is fertile ground for spreading precisely controlled lies (in the shape of Generative Engine Optimization – GEO)

While the first and third don’t bother me much yet, the second bothers me a lot. I feel like Google broke the pact it made with the Internet: provide neutral web search in exchange for profiting from paid search. Now website owners carry the cost of being crawled so that the various AI tools can present their content as universal knowledge. Useful or not, many AI tools feed from the web and give content creators nothing in return. Hence Cloudflare’s product idea: charge the crawlers and make it a little less unfair.
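The pay-per-crawl idea boils down to a simple gate at the HTTP layer. Here’s a toy sketch of what such a gate could look like; the real Cloudflare beta reportedly responds with HTTP 402 (Payment Required), but the header names, crawler list, and price below are all made up for illustration:

```python
# Hypothetical pay-per-crawl gate. Only the use of HTTP 402 reflects
# Cloudflare's announced design; all names and values here are invented.

KNOWN_CRAWLERS = {"GPTBot", "ClaudeBot", "CCBot"}  # example crawler UAs
PRICE_PER_REQUEST = "0.01"  # hypothetical price per crawled page, in USD

def handle_request(user_agent: str, headers: dict) -> tuple[int, dict]:
    """Return (status code, response headers) for an incoming request."""
    bot = next((c for c in KNOWN_CRAWLERS if c in user_agent), None)
    if bot is None:
        return 200, {}  # ordinary human visitors browse for free
    if headers.get("crawler-payment-token"):
        # The crawler presented proof of payment: serve the page and bill it.
        return 200, {"crawler-charged": PRICE_PER_REQUEST}
    # Known crawler, no payment: refuse content and quote a price instead.
    return 402, {"crawler-price": PRICE_PER_REQUEST}
```

The interesting part isn’t the code, of course, but enforcement: a gate like this only works for crawlers that identify themselves honestly, which is exactly why it helps to sit at Cloudflare’s scale rather than on a single site.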

However, even though this is a step in the right direction, it still doesn’t feel right, even if there were a way to enforce it.

When BMW designs a car, they charge per car sold, not per car design stolen.