Opus

Claude Opus is currently my favorite model. I had a few blissful months of using it: generated some good PRs, got stuck in debug loops less often than with previous models. I ended up extending the spend limit multiple times.

Opus Cocktail Bar, Sofia

After burning through far too many tokens, I had to stop and think. Is my usage really appropriate? Is it worth thinking about how many tokens each prompt consumes? Is it because of the MCPs? Why does it make all these API calls to my dev server? How much does all of that even cost? It’s not clear from the dashboard at all.

While I rethink my life choices, I’ve switched to Codex and GPT 5.2. I feel like, between the four AI editors that I have, I may have enough agent time to last until the end of the billing period.

Being stuck with one option is not ideal. It’s not that I’d have to write code without agents, but my overuse of Opus is giving me a glimpse into a future where these models may start costing as much as people.

Complaining

Daily writing prompt
What do you complain about the most?

My region is known for cultural complaining. Gather people from the Balkans around a table, and it’s like a championship in complaining. I suspect this bad habit has Ottoman roots: you shouldn’t stand out, so as not to attract unwanted attention. Misery as camouflage.

For me – not sure. At one point, I read How to Win Friends and Influence People by Dale Carnegie. Carnegie is strongly against complaining. He thinks nobody enjoys listening to it, and he’s a smart person, so I suspect he’s at least partially right. I’ve been putting significant effort into not doing it, or at least not as much. Then again, some of the most popular subreddits are built around people complaining about their relationships, so maybe at least some people find diving into other people’s problems a good use of their time.

Overall, I think I have a good capacity to complain, and also a strong desire not to do it. Here, I let myself complain about cars, which I believe are bad for everyone, and about AI overviews of websites, which I believe are unfair use of other people’s intellectual property.


Speaking of heritage, here’s a Sofia classic: pickled food and cats

What Level of AI Use Is Okay for a Blog Post?

I spend part of my day reading posts, and I catch myself developing an allergy to AI slop. At the same time, AI is built right into the editor and tries to fix every word I write incorrectly. So not using it would be foolish. Here’s my gut feeling about what’s fine and what isn’t.

Not Fine

  • Copy/pasting any direct results of a prompt. Why that’s not fine deserves a separate post, but for now: it’s a form of bad taste.
  • Any form of GenAI images. They were fun for a little bit. Now they’re just a way to announce that the post is AI slop.
  • Letting AI change the meaning of the content while improving it. AI tends to flip the meaning when making subtle improvements.
  • Engine-oriented titles. If I’m the reader, I want text oriented for humans, not engines. The engines can burn some more CPU and figure it out.
  • Letting AI remove complex words, slang, puns, and emotion. Too much uniformity doesn’t improve readability, it sterilizes the text.
  • Emoji. Thanks to ChatGPT, emoji are like peanuts in a text.

Fine

  • Syntax, clarity, and feedback. AI can help improve the structure and readability of the text.
  • Improving individual sentences. I tend to write long sentences and use unconfident words (examples from my post below). Funny that it uses the word unconfident while asking me to remove the word unconfident.
  • Research. Copy/pasting the body of a post into ChatGPT sometimes helps me find stuff I missed and fact-check it. Particularly useful for book reviews.
  • SEO improvements, as long as it doesn’t change the meaning of the content.

Overall, when I catch excessive AI use, I feel an ick about the text. If it feels entirely AI-generated, there’s zero chance I’ll read it.

Why aren’t intelligent people happier

I found this nice article today that digs into the subject. Check it out.

The article suggests that we’ve been measuring intelligence the wrong way, which leads to poor correlation with life success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on, depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.

The article goes further and states that AIs can’t become AGIs because they only operate on human definitions (training data) and on well-defined problems coming from prompts. An AGI would have to master poorly defined problems first.