AI Singularity

I’m a software engineer. My job mostly boils down to three things:

  1. Understanding a requirement and turning it into tasks
  2. Implementing those tasks or monitoring the implementation by others
  3. Querying data about the project – events, stats, trends and such

About a year ago, I switched entirely to AI-first engineering, or vibe coding. Essentially, I let AI do most of the engineering work while I mostly review its output and provide feedback. By doing that, I felt a gradual increase in my output.

At first, the gains were modest, and it wasn’t uncommon to lose more time with AI than I would have spent doing the work myself. Then the models improved. I experimented with new tools and ways of working. I learned what works for me and what doesn’t.

Today, I feel somewhere between 2x and 5x more productive than I was before. It happened gradually, not overnight.

Which leads to the bigger questions:

How far can this trajectory go? What are the moral and societal implications if it keeps scaling up?

If an individual engineer can increase their output 2x, does global engineering output double? Or do we simply need half as many engineers? And what happens if the multiplier isn’t 2x but 20x? At what point does implementation become irrelevant? Is that threshold 5x productivity? 50x? There must be a number after which coding won’t matter.

Is there a future where organizations only need small groups of engineers who mainly handle:

  • Rare edge cases
  • System architecture
  • Oversight of autonomous systems

If that happens, what becomes of the generation that studied software engineering expecting decades of demand, only to find that demand gone?

If each engineer becomes a force multiplier, doing 5x or 50x what a human of the past could do, then human capability is expanding, not shrinking. So how can the need for humans decrease if every human is more powerful?

Then there’s the ethical layer.

Most coding models are trained, at least in part, on open-source code. Millions of developers contributed to that, often without attribution or compensation. Zero of them did it to make a couple of demigods the richest men alive. And some of the people whose code made AI coding agents possible may now face a future of unemployment and misery.

If coding productivity keeps accelerating, could we approach something resembling a software singularity? A point where:

  • Anything specifiable is immediately implementable
  • Humans are no longer required for execution
  • Software creation becomes a matter of compute and cost

If that’s theoretically possible, the constraint stops being talent and starts being infrastructure and tokens.

How many data centers would it take to autonomously build the world’s software? How much compute to replace human implementation entirely? And if we ever reached that point, what happens to money or people?

I don’t have answers. But I see society accelerating like a spaceship towards a black hole. I wish there were more conversations going on about the vision for the future. I’ve not seen anything inspiring from the leaders of the AI transition at OpenAI, Google, Microsoft, or Anthropic. Everyone just hopes there’s no singularity ahead of us, while speeding the ship up in that very direction.

Opus

Claude Opus is my current favorite model. I had a few blissful months of using it. It generated some good PRs and got stuck in debug loops less often than previous models. I ended up extending the spend limit multiple times.

Opus Cocktail Bar, Sofia

After burning through far too many tokens, I had to stop and think. Is my usage really appropriate? Is it worth thinking about how many tokens each prompt consumes? Is it because of the MCPs? Why does it make all these API calls to my dev server? How much does all of that even cost? It’s not clear from the dashboard at all.
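
The napkin math, at least, is simple enough to do yourself. Here’s a rough sketch in Python with placeholder per-million-token prices and a made-up agentic session (every number here is an assumption for illustration, not anyone’s published pricing):

    # Rough per-prompt cost estimate. Prices and token counts are
    # placeholders, not actual published rates -- substitute your own.
    INPUT_PRICE_PER_MTOK = 15.00   # assumed $ per 1M input tokens
    OUTPUT_PRICE_PER_MTOK = 75.00  # assumed $ per 1M output tokens

    def prompt_cost(input_tokens: int, output_tokens: int) -> float:
        """Dollar cost of a single prompt at the assumed rates."""
        return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
             + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

    # Hypothetical agentic turn: a fat context (code, MCP tool output,
    # responses from the dev server) and a moderate reply, many times over.
    per_turn = prompt_cost(input_tokens=80_000, output_tokens=2_000)
    print(f"per turn: ${per_turn:.2f}, 50-turn session: ${per_turn * 50:.2f}")

Even with invented numbers, the shape is obvious: a long agent session with a big context multiplies a small per-turn cost into real money.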

While I rethink my life choices, I’ve switched to Codex and GPT 5.2. I feel like, between the four AI editors that I have, I may have enough agent time available to last until the end of the billing period.

Being stuck with one option is not ideal. It’s not like I have to write code without agents, but my overuse of Opus is giving me a glimpse into a future where these models may start costing as much as people.

What Level of AI Use Is Okay for a Blog Post?

I spend part of my day reading posts, and I catch myself developing an allergy to AI slop. At the same time, AI is built right into the editor and jumps in on every word I write incorrectly. So not using it would be foolish. Here’s my gut feeling about what’s fine and what isn’t.

Not Fine

  • Copy/pasting any direct result of a prompt. Why that’s not fine deserves a separate post, but for now: it’s a form of bad taste.
  • Any form of GenAI images. They were fun for a little bit. Now they’re just a way to signal that the post is AI slop.
  • Letting AI change the meaning of the content while improving it. AI tends to flip the meaning when making subtle improvements.
  • Engine-oriented titles. If I’m the reader, I want text oriented for humans, not engines. The engines can burn some more CPU and figure it out.
  • Letting AI remove complex words, slang, puns, and emotion. Too much uniformity doesn’t improve readability, it sterilizes the text.
  • Emoji. Thanks to ChatGPT, emoji are like peanuts in a text.

Fine

  • Syntax, clarity, and feedback. AI can help improve the structure and readability of the text.
  • Improving individual sentences. I tend to write long sentences and use unconfident words (examples from my post below). Funny that it uses the word unconfident but asks me to remove the word unconfident.
  • Research. Copy/pasting the body of a post into ChatGPT sometimes helps find stuff I missed and fact-check it. Particularly useful for book reviews.
  • SEO improvements, as long as it doesn’t change the meaning of the content.

Overall, when I catch excessive AI use, I feel an ick about the text. If it feels entirely AI-generated, there’s zero chance I’ll read it.

Why Aren’t Intelligent People Happier?

I found this nice article today that digs into the subject. Check it out.

The article suggests that we’ve been measuring intelligence the wrong way, which leads to poor correlation with life success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on, depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.

The article goes further and states that AIs can’t become AGIs because they only operate on human definitions (training data) and well-defined problems coming from prompts. AGIs would have to master poorly defined problems first.