I found this nice article today that digs into the subject. Check it out.
The article suggests that we’ve been measuring intelligence the wrong way, which is why metrics like IQ correlate poorly with life outcomes. Most of our intelligence tests focus on how well someone can solve clearly defined problems, but real life rarely works that way. Living well, building relationships, raising children, and so on depend more on the ability to navigate poorly defined problems. That’s how you can end up with a chess champion who is also a miserable human.
The article goes further and argues that today’s AIs can’t become AGIs because they operate only on human definitions (training data) and on well-defined problems handed to them through prompts. An AGI would have to master poorly defined problems first.