However, my sense of worry got severed somehow, and I absolutely don’t want it back. It’s like losing your sense of smell in a foul-smelling toilet. Imagine living in the Bog of Eternal Stench: it smells awful, but you can get used to it. My sense of worry is gone at the moment. I don’t miss it, and I don’t want it back. If I worry about anything, it’s worry coming back and getting through my defenses.
I’m aware of several risks: economic, political, health-related, and age-related. But I don’t currently worry about them, because worrying won’t change anything. A new development could make me worry again, but it has to be genuinely new. I refuse to invent imaginary future scenarios that affect me in a way that matters.
This is Bruce Wang from Netflix with his inspiring talk about Technical Debt. I enjoyed the talk and wanted to write about it, but as it turns out, I’m opinionated on the subject and would rather share my own thoughts than his. Bruce says Technical Debt is the delta between the current code and the ideal code. I think Technical Debt is what happens when a coding team takes a shortcut and ships a solution that mostly works but needs future changes. The work that’s still needed, and may never be done, is the debt.
Examples of technical debt from my experience:
Copy/paste, quick and incomplete fixes
Slowly building obscure God classes, or long chains of ifs in long functions. Each change is tiny, but the final result is a mess
Quickly building large and complex pattern-based solutions to simple problems
Using old and unmaintained libraries
Insufficient test coverage, or tests that are guaranteed to break on legitimate changes
Agreeing to build any kind of second system, then treating the old system as the technical debt until either the old or the new system is removed
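The “long chains of ifs” item above can be sketched with a small hypothetical example (the function and names are mine, not from any real codebase): a discount calculator grows one tiny branch at a time until it’s a hazard, and a lookup table is one cheap way to pay the debt down.

```python
# Hypothetical example: debt accumulates one tiny "elif" at a time.
def discount_v1(customer_type: str) -> float:
    if customer_type == "regular":
        return 0.0
    elif customer_type == "silver":
        return 0.05
    elif customer_type == "gold":
        return 0.10
    elif customer_type == "employee":  # added in a quick fix, years ago
        return 0.30
    # ...dozens more branches accumulate over the years...
    return 0.0

# Paying the debt down: the same logic expressed as data plus one lookup.
DISCOUNTS = {
    "regular": 0.0,
    "silver": 0.05,
    "gold": 0.10,
    "employee": 0.30,
}

def discount_v2(customer_type: str) -> float:
    # Unknown customer types fall back to no discount, like the if-chain did.
    return DISCOUNTS.get(customer_type, 0.0)
```

Each individual `elif` was a reasonable five-minute change; the debt is the refactoring that nobody scheduled.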
Of these, the second system presented the biggest and most time-consuming challenges. All the other problems can be improved in small iterations, but once you have two competing systems, you’ll keep having two until the very end, whatever that end turns out to be.
This code is bad; it would be better to rewrite it from scratch
— An engineer getting in trouble
To my understanding, this is the worst type of technical debt: one that’s hard to repay because the best possible outcome of the work is that nothing visibly changes. Some issues with rewriting:
Old code that’s in heavy use tends to cover many cases. It’s easy to miss some of them and introduce regressions
The new code tends to focus on the areas the old code doesn’t support, so it quickly drifts away from compatibility
While the new system is in progress, the old code keeps accumulating changes the new system doesn’t support
The amount of work is usually much larger than the wildest estimates
Migrating from one system to another is hard, and may even be cost-prohibitive
It never gets easier to complete, and it keeps draining life from engineers who work on this or that piece but never reach the final result (one system)
Before I continue, note that sometimes rewriting is the only option. Some 20 years ago, I saw a complete ASP/MS SQL website built on horizontally expanding tables: every new record required a new column. I was contacted as a freelancer to fix it because the owner had run out of columns. The whole thing felt broken beyond repair. It was, however, not a high-traffic or high-responsibility service.
In many cases, though, a rewrite is initiated when other solutions exist.
Here’s what I’d do if a rewrite of heavily used code is suggested. First, I’d come up with a vision of what the final result needs to be, then challenge that vision with honest questions like “Do I need this?”. Most of the time the answer is no: you aren’t gonna need it (YAGNI). With the vision in mind, I’d work toward the most simplified version of the future code through small iterations that each go to production, so that there are never two systems and no migration between them, for example by using the Strangler Fig pattern. I’ve done this, and it works well enough that nobody notices. If a second system is started and never completed, it soon becomes everyone’s problem.
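A minimal sketch of the Strangler Fig idea, with hypothetical names and data: a thin facade routes each call to the new implementation once that case is ready, and everything else still hits the old code. Both paths ship to production inside one system, so there is never a parallel second system or a big-bang migration.

```python
# Strangler Fig sketch (hypothetical example): one facade, one codebase.

def legacy_price(item: str) -> int:
    # Old, battle-tested code covering many cases.
    return {"book": 10, "pen": 2, "mug": 7}.get(item, 0)

def new_price(item: str) -> int:
    # Rewritten path; correct only for the cases it has taken over so far.
    return {"book": 10, "pen": 2}[item]

# Cases already "strangled" by the new implementation.
MIGRATED = {"book", "pen"}

def price(item: str) -> int:
    """Facade: route to the new code when it's ready, otherwise to legacy."""
    if item in MIGRATED:
        return new_price(item)
    return legacy_price(item)
```

Each release moves a few more cases into `MIGRATED`; once the set covers everything, `legacy_price` can simply be deleted, and callers never notice.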
I’ve made plenty of choices where I wanted two things incompatible with each other and had to pick one. I don’t feel like any of that was a sacrifice; I treat it as a choice.
For example, I returned to the university in my late 20s, well outside the ordinary university age, because I realized I had too many gaps in my coding and engineering skills. In the following years, I combined full-time office work (with flexible hours, thanks to my former boss) with a relatively demanding education. It was common to leave home before 7am for a morning lecture, drive to work, work for 6-7 hours, and then go back to the university for an evening session. Although physically exhausting, it felt great, and I had a purpose. It was worth the effort.
When possible, I try not to look at the loss from a bad choice with too much emotion. The reason is primarily work: when deploying a new change, there’s always a risk that I overlooked something and caused major harm. You need to be willing to do these things despite the possible negative outcome; otherwise, you’ll never move. Many, if not all, of the beautiful things in life are hidden behind a wall of risk: choosing a career path, falling in love, having kids, investing. So many things can go wrong at every step, but that’s no reason to stay home and take no steps at all. Once things go wrong: revert (if possible), identify what’s affected, write a plan, execute, learn, and move on. This is an oversimplification borrowed from the software world, but I believe in it.
There’s something very deep in our response to it. Spicy autocomplete. Righteous anger, followed by italic uncertainty. Dehumanizing the AI so that we’re ready to eradicate it. Meanwhile, AI is already replacing human content creators: journalists, bloggers, illustrators, troll farms, SEO experts, photographers, data labelers, and so on.
Generative AI is not necessarily terrible. ChatGPT could be made to link to the source of each of its statements, becoming more like a search engine. Websites could flag human-created content with a badge of honor. Chatbots could be used where human support was previously impossible, increasing the need for more specialized human support and sales.
Society has managed to navigate harmful technological advances in the past. Open Source happened. I don’t yet see how we’ll push back against some of the negative uses of AI, like deepfakes, but we’ll have to figure it out.
Search engines love lengthy content and rank it higher. ChatGPT can generate tons of additional plausible text around an idea. As a reader, however, I prefer content written by humans, for humans. I’d rather read meaningful ideas in ugly sentences with simple words and poor grammar than AI-assisted beautiful novellas with a summary and headlines.
In that context, bad grammar, slang, lowercase text, and the like can be a form of anti-language that marks a post as human-written and non-AI-augmented. It can be a feature, not a bug (now I have an excuse to turn off Grammarly lol).