I found this nice article today that digs into the subject. Check it out.
The article suggests that we’ve been measuring intelligence the wrong way, which leads to poor correlation with life success metrics. Most of our intelligence metrics (like IQ) focus on how well someone can solve clearly defined problems. Real life rarely works that way. Living well, building relationships, raising children, and so on, depend more on the ability to navigate poorly defined problems. As a result, you can have a chess champion who is also a miserable human.
The article goes further, arguing that AIs can't become AGIs because they operate only on human definitions (training data) and the well-defined problems supplied by prompts. An AGI would have to master poorly defined problems first.
Interesting on both fronts. I can't understand why so many people don't realize that AI is driven by humans and its responses aren't absolute fact; they're based on whatever information it decides to grab. I love to tinker with Copilot, but I always look at the sources it used and remember that if the answer seems off, it's up to me to verify the response. I laughed so hard when Kim Kardashian blamed ChatGPT for her failing law exams. 🙂
I’m curious why one would presuppose that intelligent people are happier? Happiness is an emotion. It’s like wondering why a musician isn’t better at playing a sport.