AI Singularity

I’m a software engineer. My job mostly boils down to three things:

  1. Understanding a requirement and turning it into tasks
  2. Implementing those tasks or monitoring the implementation by others
  3. Querying data about the project – events, stats, trends and such

About a year ago, I switched entirely to AI-first engineering, or vibe coding. Essentially, I let AI do most of the engineering work, and I mostly review its output and provide feedback. By doing that, I felt a gradual increase in my output.

At first, the gains were modest, and it wasn’t uncommon to lose more time with AI than I would’ve spent without it. Then the models improved. I experimented with new tools and new ways of working. I learned what works for me and what doesn’t.

Today, I feel somewhere between 2x and 5x more productive than I was before. It happened gradually, not overnight.

Which leads to the bigger questions:

How far can this trajectory go? What are the moral and societal implications if it keeps scaling up?

If an individual engineer can increase their output 2x, does global engineering output double? Or do we simply need half as many engineers? And what happens if the multiplier isn’t 2x but 20x? At what point does implementation become irrelevant? Is that threshold 5x productivity? 50x? There must be a number after which coding won’t matter.

Is there a future where organizations only need small groups of engineers who mainly handle:

  • Rare edge cases
  • System architecture
  • Oversight of autonomous systems

If that happens, what becomes of the generation that studied software engineering expecting decades of demand, only to find that demand gone?

If each engineer becomes a force multiplier, producing 5x or 50x what an engineer of the past could, then human capability is expanding, not shrinking. So how can the need for humans decrease if every human is more powerful?

Then there’s the ethical layer.

Most coding models are trained, at least in part, on open-source code. Millions of developers contributed to that, often without attribution or compensation. None of them did it to make a couple of demigods the richest men alive. And some of the people whose code made these AI coding agents possible may now face a future of unemployment and misery.

If coding productivity keeps accelerating, could we approach something resembling a software singularity? A point where:

  • Anything specifiable is immediately implementable
  • Humans are no longer required for execution
  • Software creation becomes a matter of compute and cost

If that’s theoretically possible, the constraint stops being talent and starts being infrastructure and tokens.

How many data centers would it take to autonomously build the world’s software? How much compute to replace human implementation entirely? And if we ever reached that point, what would happen to money, or to people?

I don’t have answers. But I see society accelerating like a spaceship towards a black hole. I wish there were more conversations about a vision for the future. I haven’t seen anything inspiring from the leaders of the AI transition at OpenAI, Google, Microsoft, or Anthropic. Everyone just hopes there’s no singularity ahead of us, while speeding the ship in that direction.
