Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in ability on such tasks, and no other factor explains much variance. If one main factor explains most variation, and no others do, then variation in this area is basically one-dimensional plus local noise. So to estimate performance on any one focus task, you’d usually want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.
Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic of A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance, making this world higher-dimensional. So this claim about A and B might be true, but your prior is against it.
Now, let me quickly disavow any interest here in evaluating the particular questions that Hanson raises here (though they are intrinsically interesting). What I want to call attention to is the assumption underlying the whole post, and most of Hanson’s posts on this subject, which is that intelligence is a matter of task performance.
I’ve commented before — often; so often — that humans have a strong tendency to understand ourselves, and especially our minds, in terms of our most recent dominant technologies. The human mind is a kind of book — no, a kind of clock — no, actually, it’s a steam engine — fundamentally, it’s a computer. Each of these machines is designed to perform tasks (and in the case of the computer, a very wide range of tasks), so if we understand our cognitive capacities in those mirrors, then of course we will think that intelligence = task performance.
But consider this alternative point of view, from Venkatesh Rao:
This also leads to an obsession with the goals an AI might pursue through its thinking. Functional thinkers seem to unconsciously conceive of AIs in their own image, via projection: as means-ends reasoners that think in order to achieve something, not because they enjoy it. They might be conceived as vastly more capable, and harboring goals that are inscrutable to humans (“maximizing paperclips collected” has traditionally been the placeholder inscrutable goal attributed to superintelligences), but they are fundamentally imagined as means-ends functional superintelligences that use their god-like brains as a means to achieve god-like ends. We do not ask whether AIs might think because they enjoy thinking. Or whether they might be capable of experiencing “interestingness” as a positive feedback loop variable driving open-ended, energy “wasting” pleasure-thinking.
This would be a remarkably interesting project incidentally, trying to develop an interestingness-powered AI that thinks because it likes to, in a spirit of playfulness, not because it thinks curiosity-driven exploration will gain it more paperclips. To my knowledge, Juergen Schmidhuber is the only prominent researcher thinking along these lines to some extent. The only place I’ve seen this distinction made clearly at all is Hannah Arendt’s book, The Human Condition (she made a distinction between “thought” as brain activity qua brain activity, and “cognition” as brains engaged in means-end reasoning, and argued that the latter necessarily leads to nihilism, which, if you think about it, can be defined as thought annihilating itself). Mihaly Csikszentmihalyi’s work on “flow”, from where I am borrowing the term “autotelic,” touches on the role of such thinking in creative work, but oddly enough fails to explore the deep distinction between functional and autotelic intelligence.
What might it mean to consider thinking not as task performance but as autotelic delight?