Richard Hanania on AGI risk
To me, the biggest problem with the doomerist position is that it assumes that there aren’t seriously diminishing returns to intelligence. There are other potential problems too, but this one is the first that stands out to me…
Another way you can have diminishing returns is if a problem is so hard that more intelligence doesn’t get you much. Let’s say that the question is “how can the US bring democracy to China?” It seems to me that there is some benefit to more intelligence, that someone with an IQ of 120 is going to say something more sensible than someone with an IQ of 80. But I’m not sure a 160 IQ gets you more than a 120 IQ. Same if you’re trying to predict what world GDP will be in 2250. The problem is too hard.
One can imagine problems that are so difficult that intelligence is completely irrelevant at any level. Let’s say your goal is “make Xi Jinping resign as the leader of China, move to America, and make it his dream to play cornerback for the Kansas City Chiefs.” The probability of this happening is literally zero, and no amount of intelligence, at least on the scales we’re used to, is going to change that.
I tend to think for most problems in the universe, there are massive diminishing returns to intelligence, either because they are too easy or too hard.
Recommended, and largely I agree. This is of course a Hayekian point as well. Here is the full discussion. From a “talent” perspective, I would add the following. The very top performers, such as LeBron, often are not tops at any single aspect of the game. LeBron has not been the best shooter, rebounder, passer, or whatever (well, he is almost the top passer); rather it is about how he integrates all of his abilities into a coherent whole. I think of AGI (which I don’t think will happen, either) as comparable to a basketball player who is the best in the league at free throws or rebounds.
The post Richard Hanania on AGI risk appeared first on Marginal REVOLUTION.