Partial rewrites

Every complex software system tends to have sub-systems that are in the process of being rewritten but stuck in limbo. I call this the Hydra, though it’s a term that doesn’t exist outside of my head.


The subject of second systems deserves a full essay, but while I sit on it and finish the books I have in mind, let’s enjoy this gem from Artur’s blog:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

Gall’s law

Can money make you happier?

Lack of money can make you unhappy, but does more money above a certain threshold bring happiness? No.

I’ll try to prove that with pseudo-math and book references.

First, can we have a functional relationship between money and happiness? Is there a function Happiness = f(money) where the curve is logarithmic? It’s clear that whatever the relationship is, it is not a clean curve. Plenty of rich people are on social media; they look busy, and some of them are miserable rather than happy. At the same time, there has to be a monk out there with zero money and powerful spirituality, close to Nirvana.
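To put a toy version of that pseudo-math on the table: even under the most generous assumption – a clean logarithmic curve, which I just argued doesn’t exist – the same extra money buys less and less happiness as income grows. This little sketch is my own illustration, not from any study:

```python
import math

# Toy log-utility model of the money–happiness link (illustration only;
# the real relationship, as argued above, isn't even a clean curve).
def log_happiness(income: float) -> float:
    return math.log(income)

# Happiness gained from the same extra $10K at different income levels
for income in (20_000, 100_000, 500_000):
    gain = log_happiness(income + 10_000) - log_happiness(income)
    print(f"at ${income:,}: +{gain:.3f}")
```

The gain shrinks at every step – the diminishing-returns intuition behind the idea of a threshold.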

Obvious correlations exist elsewhere.

Age

There’s a clear relationship between Age and Happiness. Kids are as happy as a human can be, and so are the elderly (source). Who’s the unhappiest? Working-age adults, and the older they are, the unhappier. The chart for wealth? Almost the opposite (source).

Although correlation doesn’t mean causation, you can say that if Happiness = f(a, b, ...n), one of the parameters is age.

The high of shiny new things

Money can get you items and experiences. More money can get you more items and more pleasant experiences. Purchases make us happy by releasing endorphins and dopamine, and also perhaps by improving our lifestyle a little bit. However:

  • The high we can possibly get from a purchase is limited by our brain chemistry
  • The high from shopping is a function of anticipation and uncertainty. A kid who saved a year for an iPad would get a much bigger high than a billionaire purchasing a new Ferrari. A sandwich tastes better when you are hungry.
  • It’s easy to improve the lifestyle if you’re deprived but hard to do if your needs are met

We can experiment with that – write in a journal how happy different purchases made you feel. An expensive purchase, like a new car, can feel no better than a pair of shoes that fit well. A new iPhone can be equal to a cold Coke Zero on a hot summer day, or worse. This topic is a can of worms that has inspired marketers and philosophers to write books over the last three centuries. The satisfaction is measurable, limited, and based on things like goals, needs, risk, and anticipation, not only the price paid.

The curse of the lottery winners

A sharp increase in income puts people in situations they are not prepared for and pulls them outside of their social circle. A good and very scientific book on the subject is The Winner by David Baldacci. It details the catastrophic impact of quick money on people, stating that the vast majority of lottery winners go bankrupt within 2-3 years of their win. The same can be seen with sports stars. Many of them fail once they enter a life of expensive cars, parties, and financially motivated partners in all areas of life – from intimate to business. Another nice fiction book on the subject is Sooley by John Grisham.

So, watching lottery winners and sports stars shows us that quick, exponential growth in wealth comes with changes in social circles and personal life that negatively impact happiness. What about slow growth, which gives people time to adjust and find new social circles? That should work! However, the threshold may never be reached – and the question assumes an imaginary threshold above which no added amount of money makes a person happier.

Before and after ambition

It’s human to compete and strive for more; however, that’s not universally true for all ages.

Would a 1-year-old be happier if they had lots of money, a leather stroller, and servants? They’d likely be happy if they were healthy, dry, fed, loved, and well-rested. What about a 100-year-old person? Would they be happier if they were the world’s first trillionaire? They’d probably be happiest if they were healthy, well cared for, and loved, with good memories of a meaningful life.

Ambition likely is part of the equation for happiness, and so is having a purpose in life.

Recent studies

A great article on the subject was published today, sharing insights from a research group that includes Daniel Kahneman. It mentions two amounts of money that cap happiness – $75K, calculated in 2010, and $500K, from 2023. Whatever the cap is, the researchers estimated not one but two. I would take the amounts with a grain of salt, but their overall conclusions matched my expectations.

The search for happiness is related to money up to a point. Other factors have a stronger impact – relationships, health, love, faith, purpose in life, ambition, and so on. Happiness may not be a good life goal at all. I wrote an essay about the Sense of Purpose 6 years ago, and I still believe it.

Impenetrable Language

Obscure and impenetrable language conveys a sense of expertise

Dan Ariely

Have you been in a situation where the person you’re talking to speaks and you don’t get anything? I have, in a variety of cases:

  • Worked for 3 months on the same team with a statistician
  • Spent time with Systems and DevOps folks speaking their language
  • Discussed some renovation projects with a variety of contractors
  • I have a 5-year-old

What these situations have in common is that the person in front of me may speak a lot and with excitement, while I only catch a fraction of what’s said, and most certainly not the meaning. Many of these scenarios are harmless and funny, but some aren’t. Sometimes, the language can be used maliciously. For example, some of the contractors I spoke with convinced me to do unnecessary repairs or justified expenses without me understanding what they meant. Subpar and expensive work followed.

How does that work?

Each discipline forms terminology and professional jargon that’s used between experts to understand each other and to transfer knowledge. Knowing that language creates a sense of belonging to an exclusive group of people. We are used to accepting that people know things better than us in almost all areas, so it is expected that professionals in a field use the professional language. The caveat is that they’re expected to do that with each other. When these words are used outside of the professional context, or during a negotiation with a person with different expertise, it could indicate a few things:

  • Lack of awareness that the other person doesn’t understand everything
  • An attempt to coerce approval based on expertise, also known as an “Appeal to authority” – which is a logical fallacy

For example, in the previous sentence, I could use “Argumentum ad verecundiam” instead of “Appeal to authority” and would use obscure language myself, trying to sound like an expert, while I’m really not.

My expectations about expert language

We can’t assume bad intent when obscure language is used in a conversation with us. We can, however, be aware that many experts are able to switch their language depending on the audience. I have two relatives who are university professors in different technical areas. I’ve had plenty of conversations with them about their work without hearing a single challenging or jargon word.

It is also possible to introduce necessary expert words by explaining them. For example, the wordpress.com pricing advertises “Global edge caching”.

Some consumers of that information would understand it, but most would have no idea what it is. So there’s a tooltip introducing the terminology, making it fair.

The construction worker should also be able to explain, in a language that I understand, why tiles can or can’t be put on a specific foundation; so should the project manager explain Scrum; and the systems engineer who wants to use AWS for their project should be able to explain why without a single BS word.

The 5-year-old is forgiven.

How to identify red flags

My strategy is to ask for an explanation when I hear something that I don’t understand. When I ask too many questions and still don’t understand, a red flag appears. It doesn’t necessarily mean the other person is ill-intentioned. It might mean I didn’t do my homework with a glossary. In that case, we may need to bring somebody else into the conversation who is fluent in the language. Otherwise, we have a communication failure, and a communication failure can lead to bigger problems.

TL;DR – “Oh shit, I don’t understand a word they’re saying, they must be an expert” needs to become “Oh shit, I don’t understand a word they’re saying. Do I ask them to slow down and explain, or shall I call Saul?”

The Ethical AI

I find it funny that every time a very clear statement is reinforced or exaggerated with an adverb, that reinforcement is a hint that the opposite is true or the statement is not entirely accurate.

He absolutely doesn’t drink alcohol. I will totally buy a ticket for the next Taylor Swift concert. The students will use ChatGPT entirely for research purposes.

ChatGPT and “(Ethically)!” in one sentence, in a paid ad.

The first time I encountered the moral dilemma around the use of AI was in the Robots series by Isaac Asimov. I read it long before I owned a computer and totally bought the idea of a positronic brain. Asimov saw that robots, if allowed to do whatever they wanted, would just start killing. He envisioned a set of forced limitations ensuring that AI never hurts humans (the full list of 3+1 laws is here) as the only way for robots to be useful. Asimov also noted in his books that robots would replace human labor and eventually cause stagnation, but that was only partially addressed in his series, after centuries of expansion.

Who could’ve imagined that the first appearance of anything resembling AI would need a very different set of laws than Asimov’s first three? Present-day AI already appears in multiple forms, each of which has its own ethical challenges. The prompt-based tools tend to use human content and present it as their own, with no citation or link to the author. They’re awesome for faking homework. The image generation tools copy artists’ work and make it semi-unique, filling the need for cheap illustrations on spam websites that slip past Googlebot undetected. The chatbots and robocalls automate tasks that were once reserved for humans, causing unemployment. There has to be a fine line between what’s okay and what isn’t.

I’ve been thinking about how I would regulate that thing since my last post on the subject last year, and I came up with roughly this:

  • Any statement by AI should cite the sources of information and provide links
  • AI should not present slight modifications of human content as its own
  • AI should not use the prompts of one human outside of the context of the interaction with that human
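The first two of those rules could even be checked mechanically. Here’s a minimal sketch of what such a compliance check might look like – all class and field names are made up for illustration, and measuring similarity to a source is a hard problem I’m hand-waving into a single number:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an AI answer must carry its sources (rule 1) and
# must not be a near-copy of any single human source (rule 2).
@dataclass
class AIAnswer:
    text: str
    sources: list = field(default_factory=list)   # links to cited material
    similarity_to_source: float = 0.0             # 0..1, overlap with closest source

    def complies(self, max_similarity: float = 0.8) -> bool:
        has_citations = len(self.sources) > 0
        not_a_near_copy = self.similarity_to_source < max_similarity
        return has_citations and not_a_near_copy

answer = AIAnswer(
    text="Gall's law says complex systems evolve from simple ones.",
    sources=["https://en.wikipedia.org/wiki/John_Gall_(author)"],
    similarity_to_source=0.3,
)
print(answer.complies())  # a cited, sufficiently original answer passes: True
```

An uncited answer, or one that is 95% identical to a single source, would fail the same check.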

But after writing this, I had a lightbulb moment. If anyone has put thought into this, it has to be the EU administration. And yes, the EU agreed on a much longer document, where generative AI is just one type of risk, and it contains an Asimov-like masterpiece:

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training
EU AI Act

Generative AI is not even considered a high-risk type of tool. The EU considers AI tools high-risk if they classify humans, analyze emotions, collect faces, provide advice on health or legal matters, talk to children, and so on. How did Asimov not think about that? The existential dangers of a toy that can explain dangerous activities to children.

Overall, the definition of ethical use of AI is taking some shape, but I wonder how much damage will be done to human content creation and creativity before any of it is adopted.

At least none of these risks is Skynet, and Asimov’s laws are not yet relevant.