Cats, good books, AI, and religious walking in the city of Sofia
I’m not sure if I should be happy, proud, or upset about that, but Google is now summarizing my blog in the AI overviews.

I spend part of my day reading posts, and I catch myself developing an allergy to AI slop. At the same time, AI is built right into the editor and tries to rewrite every word I type. So not using it at all would be foolish. Here’s my gut feeling about what’s fine and what isn’t.


Overall, when I catch excessive AI use, I feel an ick about the text. If it feels entirely AI-generated, there’s zero chance I’ll read it.
I found this nice article today that digs into the subject. Check it out.
The article suggests that we’ve been measuring intelligence the wrong way, which leads to poor correlation with life success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on, depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.
The article goes further and argues that AIs can’t become AGIs because they operate only on human definitions (their training data) and on well-defined problems arriving as prompts. AGIs would have to master poorly defined problems first.
I tried GPT-5 in an 8-hour coding session. It performed worse than Claude 4 for me, and it’s slower: it made me wait ten minutes at a time. Eventually I gave up, used my own brain to understand the problem properly, and hand-held Claude to a solution, which took about an hour. I think I lost most of my time with GPT-5 in loops where it fixed one thing at the expense of another, while the general approach looked sound enough to fool me but not sound enough to eventually work in all cases.
This might be due to high traffic and not because the model is worse. I’ll give it another chance when the hype fades.
Cloudflare is introducing a private beta of a service where AI engines are required to pay to access the information on a site. It made me think.
AI breaks the open web model in at least three different ways.
While the first and third don’t bother me much yet, the second bothers me a lot. I feel like Google broke the pact it made with the Internet: neutral web search in exchange for profiting from paid search. Now website owners bear the cost of having their content crawled so that the different AI tools can present it as universal knowledge. Useful or not, many AI tools feed off the web and give content creators nothing in return. Hence Cloudflare’s product idea: charge the crawlers to make it less unfair.
However, even though this is a step in the right direction, it still doesn’t feel right, even if there were a way to enforce it.
When BMW designs a car, they charge per car sold, not per car design stolen.