Book Clutter

I’m starting to think the ideal number of books to own is almost zero. I feel like one should only keep books that are signed, gifts, not yet read, or, for some reason, frequently re-read. Everything else serves little to no purpose and only forms clutter. Individual books are pretty big units of kipple that reduce the living space of bookworms like me.

I keep an overflow shelf where books wait to be donated, but I seem to be slow at getting rid of old books and very quick at buying new ones.

Charity 1 by Wolfgang Hohlbein

There’s a wonderful feature on the local reading site Chitanka that offers you random books. I would click 10, 15, 20 times and find something new I’d never seen before. A few days ago, it offered me the apocalyptic sci-fi novel Charity.

Charity is an astronaut and a military captain during an active alien invasion. An alien ecosystem, brought here by a spaceship, enters Earth through some kind of portals and obliterates everything. The invading force is so powerful that humans don’t stand a chance. However, there would be no book if there were no hope, right?

It’s not yet clear what Charity’s superpower is or how she’s going to push back against the invading force. She seems to be good at surviving, and also very lucky.

I’d say almost 5/5: not a wow book, but definitely one that makes you want to read the sequel. It’s also short, which is an advantage.

Why aren’t intelligent people happier?

I found this nice article today that digs into the subject. Check it out.

The article suggests that we’ve been measuring intelligence the wrong way, which is why it correlates so poorly with life-success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.

The article goes further and argues that AIs can’t become AGIs because they only operate on human definitions (their training data) and on well-defined problems posed in prompts. AGIs would have to master poorly defined problems first.