Commentary on Brandolini’s Law

Brandolini defined the following law:

The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

It normally applies to arguments and misinformation. Now that generating content with AI is so easy, the orders of magnitude have changed. Both the statement and the refutation can be forged very quickly, and people who believe things on the Internet look more and more foolish.

So, while trying to figure out what to even think about this, I came up with the following commentary on Brandolini’s Law.

The probability that any new online content is AI, BS, or both increases over time.

With the growth of data center power, personal AI orchestrators, GEO, and SEO, this probability will eventually trend towards 1. What would be the point of refuting, or even reading, anything if it’s one of a million pieces of obvious BS? So here’s my prediction:

The authenticity of content will become more important than its quality.
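As a toy model of that first claim (a sketch, not a measurement; A_0, H_0, g, and h are assumed constants, not anything I can cite): if AI-generated content compounds while human-generated content grows at most linearly, the AI share of new content tends to 1.

```latex
% P(t): share of new online content that is AI-generated at time t.
% Assumption: AI output A(t) grows exponentially, human output H(t) linearly.
\[
P(t) = \frac{A(t)}{A(t) + H(t)}, \qquad
A(t) = A_0 e^{g t}, \quad H(t) = H_0 + h t
\;\Longrightarrow\; \lim_{t \to \infty} P(t) = 1
\]
```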

Rusty the cat

A story about Rusty, a 33-year-old cat, took Reddit by storm. Rusty passed away at the age of 33, placing him in the top 10 of the oldest cats in recorded history. The poster submitted proof to the mods of r/cats that Rusty was real. People rushed to update Wikipedia’s list of oldest cats, expressed condolences, sent love and wishes. The post generated 134K upvotes and was probably seen by most of the non-bot Redditors.

Unfortunately, Rusty was AI slop. Rusty:

The post and the bot that submitted it got deleted.

I think my attempt to restrict Reddit to a few essential subreddits like r/cats hasn’t been very successful, and I’m still exposed to something I wouldn’t even call AI slop. It’s more like farming free human-generated text for the purpose of training LLMs. Rusty helps me understand the sudden rush to gather personal IDs and verify humans on social media. All the social networks are vulnerable to slop and risk losing engagement if they don’t bring it under some level of control.

Here’s a real orange cat for you, blissfully unaware of the decay of r/cats.

UPDATE: this is a male cat named Pesho, also known as The Son of a Mother. Likes cuddles and bites for no reason.

Muted Reddit’s auto-subscription after it became intolerable

Reddit somehow detected that I’m interested in AI and Data and auto-subscribed me to several hundred AI and Data subreddits. This turned my feed into an endless slop machine. Every single pre-IPO hype post or doom prophecy would pop up in my feed multiple times per subreddit, 5 times if it was Sam Altman’s. Multiply that by at least 100 AI subreddits and you get the idea. You scroll and scroll without seeing a single cat. Claude this, Sam Altman that, Anthropic this, I vibe coded this genius thing, and then Sam Altman again.

Rather than adding Reddit to the list of sites I ignore, I followed the subreddits where I’ve recently posted comments, and then disabled the following setting:

I highly recommend this approach. Cats, rockets, and cars showed up again. The apocalypse is muted.

AI Singularity

I’m a software engineer. My job mostly boils down to three things:

  1. Understanding a requirement and turning it into tasks
  2. Implementing those tasks or monitoring the implementation by others
  3. Querying data about the project – events, stats, trends and such

About a year ago, I switched entirely to AI-first engineering, or vibe coding. Essentially, I let AI do most of the engineering work while I review its output and provide feedback. By doing that, I felt a gradual increase in my output.

At first, the gains were modest, and it wasn’t uncommon to lose more time with AI than I would have without it. Then the models improved. I experimented with new tools and ways of working. I learned what works for me and what doesn’t.

Today, I feel somewhere between 2x and 5x more productive than I was before. It happened gradually, not overnight.

Which leads to the bigger questions:

How far can this trajectory go? What are the moral and societal implications if it keeps scaling up?

If an individual engineer can increase their output 2x, does global engineering output double? Or do we simply need half as many engineers? And what happens if the multiplier isn’t 2x but 20x? At what point does implementation become irrelevant? Is that threshold 5x productivity? 50x? There must be a number after which coding won’t matter.
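The two extremes bracket the answer. Here is a toy framing (a sketch; m is the per-engineer multiplier, and o_0, N_0, and D are assumed labels for baseline output per engineer, baseline headcount, and total demand):

```latex
% Fixed demand D = N_0 * o_0: required headcount shrinks as 1/m.
% Fixed headcount N_0: total output grows linearly in m.
\[
N(m) = \frac{D}{m\,o_0} = \frac{N_0}{m}
\qquad\text{vs.}\qquad
O(m) = m\,o_0\,N_0 = m\,D
\]
% At m = 20, that means either 1/20th of the engineers or 20x the
% software; reality presumably lands somewhere in between.
```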

Is there a future where organizations only need small groups of engineers who mainly handle:

  • Rare edge cases
  • System architecture
  • Oversight of autonomous systems

If that happens, what becomes of the generation that studied software engineering expecting decades of demand, only to see that demand vanish?

If each engineer becomes a force multiplier, 5x or 50x of what a human of the past would be, then human capability is expanding, not shrinking. So how can the need for humans decrease if every human is more powerful?

Then there’s the ethical layer.

Most coding models are trained, at least in part, on open-source code. Millions of developers contributed to that, often without attribution or compensation. None of them did it to make a couple of demigods the richest men alive. And some of the people whose code made AI coding agents possible may now face a future of unemployment and misery.

If coding productivity keeps accelerating, could we approach something resembling a software singularity? A point where:

  • Anything specifiable is immediately implementable
  • Humans are no longer required for execution
  • Software creation becomes a matter of compute and cost

If that’s theoretically possible, the constraint stops being talent and starts being infrastructure and tokens.

How many data centers would it take to autonomously build the world’s software? How much compute to replace human implementation entirely? And if we ever reached that point, what happens to money or people?

I don’t have answers. But I see society accelerating like a spaceship towards a black hole. I wish there were more conversations about the vision for the future. I haven’t seen anything inspiring from the leaders of the AI transition at OpenAI, Google, Microsoft, or Anthropic. Everyone just hopes there’s no singularity ahead of us, while speeding the ship up in that direction.

Opus

Claude Opus is my current favorite model. I had a few blissful months of using it. It generated some good PRs and got stuck in debug loops less often than previous models did. I ended up extending the spend limit multiple times.

Opus Cocktail Bar, Sofia

After burning through far too many tokens, I had to stop and think. Is my usage really appropriate? Is it worth tracking how many tokens each prompt consumes? Is it because of the MCPs? Why does it make all these API calls to my dev server? How much does all of that even cost? It’s not clear from the dashboard at all.
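While waiting for a clearer dashboard, a back-of-envelope estimate is easy to script. This is a minimal sketch: the per-million-token prices below are hypothetical placeholders, not any provider’s actual rates, and the token counts are whatever your client reports.

```python
# Back-of-envelope cost estimate for an agent session.
# PRICES are hypothetical placeholders (USD per 1M tokens),
# not published rates -- substitute your provider's pricing.
PRICES = {"input": 15.00, "output": 75.00}  # assumed values

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of one agent session from reported token counts."""
    return (input_tokens * PRICES["input"]
            + output_tokens * PRICES["output"]) / 1_000_000

# Example: a debug loop that burned 2M input and 400K output tokens.
print(f"${session_cost(2_000_000, 400_000):.2f}")  # -> $60.00
```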

While I rethink my life choices, I’ve switched to Codex and GPT 5.2. I feel like, between the four AI editors I have, I may have enough agent time to last until the end of the billing period.

Being stuck with one option is not ideal. It’s not that I’d have to write code without agents, but my overuse of Opus gives me a glimpse into a future where these models may start costing as much as people.