All posts by ayjay

in other words

How to Think, pp. 106-07:

Take, for example, one of the most common and least appealing defensive strategies I know: what I call “in-other-wordsing.”

We see it every day. Someone points at an argument — a blog post, say, or an op-ed column — and someone else replies, “In other words, you’re saying . . .” And inevitably the argument, when put in other words, is revealed to be vacuous or wicked. […]

This kind of thing is closely related to the building of a straw man. The straw man is an evidently stupid argument that no one actually holds: refuting the ridiculous straw-man argument is easier than refuting the argument that someone actually made, so up in flames goes the figure of straw. And straw-manning is a version of in-other-wordsing. But it’s also possible to in-other-words someone’s argument not to make it seem that she holds simplistic views but rather to indicate that she holds views belonging to your adversary, to your outgroup.

In-other-wordsing is a bad, bad habit, but anyone who wants to resist it can do so. (Again, as we have had cause to remember throughout this exploration, many people don’t want to avoid it, they want to use it to win political or social or religious battles. And again: this book is not for such people.)


Steven Poole:

To call your opponent a victim of groupthink, then, is to ascribe their views solely to their upbringing, area of residence or social associations, and to deny that they are capable of coming to reasoned conclusions on their own. It should hardly need pointing out that consensus on robust scientific theories such as evolution is not groupthink, or that thinking in groups – the ancient philosophical academies, or the 18th-century Republic of Letters, or the modern global academic network – is what has enabled most of the advances of human civilisation. But the modern user of “groupthink” ignores such truths, the better to paint his opponents as intellectual zombies. As used today, the word is therefore a classic example of Unspeak: a rhetorical intervention designed to shut down argument before it starts.

What is really going on when someone complains of “groupthink” is a kind of bovine attempt at self-glamorisation. You follow the herd and parrot groupthink, whereas I am a superior maverick able to think for myself and unmask the nonsense that everyone else believes. This implicit claim, however, is quite severely undermined by the cliche of using the term “groupthink” itself. After all, given that it’s so lamentably common, to accuse others of groupthink is about the most groupthinky thing you can do.

See the passages in How to Think about what C. S. Lewis calls “The Inner Ring.”

reviews, interviews, etc.

For those who might have missed them, here are some posts about my book, including reviews and some interviews with me. I’ll add to this post as events warrant.

a brief response to responses

I’ve received a good many emails — emails! — about my recent piece in the Wall Street Journal, and they are coming in three varieties:

1) People thanking me for the essay;

2) People telling me that my essay is a pathetic exercise in false equivalency, because it’s obvious that academics are far more hostile to evangelicals than vice versa;

3) People telling me that my essay is a pathetic exercise in false equivalency, because it’s obvious that evangelicals are far more hostile to academics than vice versa.

I have some thoughts. First of all, I can’t imagine any possible way to assess which of the latter two groups (if either) is correct. But second, why does it matter? What is it that changes — in our current situation or what we need to do in response to it — depending on the preponderance of fault? It seems to me that the task remains the same: to seek out the people with whom we can meaningfully converse and debate, and ignore those who seek to rend the social fabric.

And finally, we might do well to remember Les Murray’s little poem “Politics and Art”:

Brutal policy,
like inferior art, knows
whose fault it all is.

Can politicians think?

When you’re in Washington D.C. to talk about thinking, as I just was, the conversation inevitably turns to the political implications of the topic. I have admitted that the chief impetus of this book was the ever-increasing hostility and (often malicious) misunderstanding of one another that became one of the chief themes of the 2016 Presidential election here in the U.S. and of the debate over the Brexit referendum in the U.K. And when I have meditated on these unpleasant social developments and their impact on thinking, I have usually focused on the behavior and the cognitive peculiarities of the voters. But what about our politicians themselves?

This post by Ilya Somin raises some interesting questions in that regard. Its primary question is whether the government can be trusted to “intervene to protect people against their cognitive biases, by various forms of paternalistic policies. In the best-case scenario, government regulators can ‘nudge’ us into correcting our cognitive errors, thereby enhancing our welfare without significantly curtailing freedom.” But, Somin asks, doesn’t that “best-case scenario” depend on our elected officials, the makers of those policies, being themselves shrewd and fair-minded thinkers?

Which is a problem. Somin:

Politicians arguably have stronger incentives to learn about politics than voters do. Their decisions on policy issues often do make a difference. But because the voters themselves are often ignorant and biased, they tend to tolerate – and even reward – policy ignorance among those they elect. Politicians have strong incentives to work on campaign skills, but relatively little incentive to become knowledgeable about policy. It is not surprising that most do far better on the former than the latter.

And the news gets worse:

Politicians aren’t just biased in their evaluation of political issues. Many of them are ignorant, as well. For example, famed political journalist Robert Kaiser found that most members of Congress know little about policy and “both know and care more about politics than about substance.” When Republican senators tried to push the Graham-Cassidy health care reform bill through Congress last month, few had much understanding of what was in the bill. One GOP lobbyist noted that “no one cares what the bill actually does.”

(See helpful links in the original.) All this raises for me an interesting question: Who will nudge the nudgers? That is, let’s set aside for a moment the question of whether we should have paternalistic, nudging sorts of laws to correct our biases and give us incentives to become more knowledgeable about the questions we must, day by day, decide. What can be done to correct the biases of our elected officials, and to give them incentives to become both more fair and more knowledgeable?

In How to Think I argue that there is no point in telling people to think for themselves, because that is neither possible nor desirable. We always think with and in relation to others — and often trim our thinking to meet the approval of others — so the real questions that face us involve the construction and maintenance of thought-environments. Some of those environments are cognitively healthy — they encourage serious reflection — and those tend to be ones in which we can become genuine members of a group; others strive to rule us through the stick of threatened exclusion and the carrot of promised insider status, and those profoundly discourage thinking. (You can read about the former in C. S. Lewis’s essay “Membership” and the latter in “The Inner Ring”; and the two kinds of belonging, true and false, are treated side-by-side in his novel That Hideous Strength.)

Often the perversion of belonging called the Inner Ring disguises its true nature, but in political parties it is often explicit: leaders of those parties can be cheerfully open about the rewards of loyalty and the costs of disobedience. In such a world we cannot be surprised that our legislators often haven’t read, and know little about, the bills they vote for or against: they’re simply doing what they’re told as a condition of continued acceptance, and in hopes of being allowed to move one more rung up, or one more circle Inward.

None of us can do anything to change the incentive structures of political parties; but might it be possible for some people to nudge legislators towards participation in communities that do encourage genuine reflection, and offer alternatives to the Inner Ringery of party membership? Are there people who can model for our elected officials a better way to live? (Unlike most lobbyists, who value thinking only insofar as it leads to obedience to their interests rather than to the party’s.) Maybe all I’m calling for is a renewal of the old Fabian Society policy of influencing the influencers … but that’s not the worst idea in the world. The question is, who — who among those who care about thinking — is well-placed to make that kind of difference?

sometimes people are wrong


When you write a book, passages end up on the cutting-room floor, which can cause regret later on. Right now I’m regretting having cut some reflections on Kathryn Schulz’s book Being Wrong, which offers us some excellent advice: Embrace being wrong, because you are. You are wrong about many things right now, some of them important. Get used to it.

But there’s another key element to this theme of wrongness: other people are wrong too. My point here is that they’re often wrong rather than wicked, in a state of error rather than a state of malice. For instance, many of my fellow conservative Christians think the Obama-era contraceptive mandate was motivated by a pathological hatred of Christianity, and they place a lot of emphasis on this interpretation. But it seems to me more likely that the people in the Obama administration who created the mandate knew little and cared less about Christianity, and were focused instead on bringing what they thought was a great good (free birth control) to women.

Similarly, I hear often from my liberal friends that Republicans actively want poor people to die from lack of health care, or at best are serenely indifferent to suffering; but isn’t it possible, indeed even likely, that they just have different ideas about what form the best feasible health care plan will look like?

Of course, questions of motive can rarely be definitively settled: we can’t crack open people’s minds and peer inside, and that goes for our own minds also. (Do you really think you possess accurate knowledge of your own motives? Do you truly believe that your heart, like that of Galahad, is pure?) Maybe Obama bureaucrats rubbed their hands in glee when contemplating the discomfiture of the Little Sisters of the Poor, and the smoke-filled rooms of the GOP echo with laughter at the thought of poor people slowly dying. Maybe. But maybe not.

So perhaps it would be better for all concerned if we suspended, or at least restrained, speculations about motives and focused on what’s right and what’s wrong. “The Obama administration was wrong to enforce the contraceptive mandate because religious liberty matters, and here’s why.” “For all the flaws of Obamacare, the proposed GOP alternatives are far worse, and here’s why.”

Yes: it would be better for all concerned if we were content to say that our political opponents are merely wrong. But that’s unlikely to happen, at least widely, because once you say someone is wrong you commit yourself to explaining why he’s wrong — to the world of argument and evidence — and that makes work for you. Plus, you forego the immense pleasures of moral superiority and righteous indignation. So speculation about our enemies’ motives will continue to be a major feature of our political life, which will have the same practical consequences as Old Man Yells at Cloud.

Nevertheless, I insist: sometimes people are simply wrong.

thinking as delight

Here is an excerpt from a fairly typical post by Robin Hanson:

Imagine that there is a certain class of “core” mental tasks, where a single “IQ” factor explains most variance in such task ability, and no other factors explained much variance. If one main factor explains most variation, and no other factors do, then variation in this area is basically one dimensional plus local noise. So to estimate performance on any one focus task, usually you’d want to average over abilities on many core tasks to estimate that one dimension of IQ, and then use IQ to estimate ability on that focus task.

Now imagine that you are trying to evaluate someone on a core task A, and you are told that ability on core task B is very diagnostic. That is, even if a person is bad on many other random tasks, if they are good at B you can be pretty sure that they will be good at A. And even if they are good at many other tasks, if they are bad at B, they will be bad at A. In this case, you would know that this claim about B being very diagnostic on A makes the pair A and B unusual among core task pairs. If there were a big clump of tasks strongly diagnostic about each other, that would show up as another factor explaining a noticeable fraction of the total variance. Making this world higher dimensional. So this claim about A and B might be true, but your prior is against it.
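For readers who find the statistics abstract: Hanson’s one-factor setup can be sketched in a few lines of simulation. This is my own illustration, not code from his post; the variable names (`g`, `loadings`, `scores`) and the particular loadings are assumptions chosen only to make the structure visible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 5000, 20

# One-factor model: each core-task score = loading * g + independent local noise.
g = rng.standard_normal(n_people)
loadings = rng.uniform(0.5, 0.8, n_tasks)
noise = rng.standard_normal((n_people, n_tasks))
scores = g[:, None] * loadings + noise

# Averaging over many core tasks estimates the single factor far better
# than any one task alone does.
composite = scores.mean(axis=1)
r_single = np.corrcoef(g, scores[:, 0])[0, 1]
r_composite = np.corrcoef(g, composite)[0, 1]
print(f"one task vs g:     r = {r_single:.2f}")
print(f"20-task mean vs g: r = {r_composite:.2f}")

# Any single pair of tasks correlates only modestly, which is why a claim
# that task B is *very* diagnostic of task A cuts against the prior.
r_pair = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
print(f"task A vs task B:  r = {r_pair:.2f}")
```

In a world like this, the composite tracks the factor closely while any one task is a noisy proxy, so an unusually tight A–B link would indeed show up as a second factor.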

Now, let me quickly disavow any interest in evaluating the particular questions that Hanson raises here (though they are intrinsically interesting). What I want to call attention to is the assumption underlying the whole post, and most of Hanson’s posts on this subject, which is that intelligence is a matter of task performance.

I’ve commented before — often; so often — that humans have a strong tendency to understand ourselves, and especially our minds, in terms of our most recent dominant technologies. The human mind is a kind of book — no, a kind of clock — no, actually, it’s a steam engine — fundamentally, it’s a computer. Each of these machines is designed to perform tasks (and in the case of the computer, a very wide range of tasks), so if we understand our cognitive capacities in those mirrors, then of course we will think that intelligence = task performance.

But consider this alternative point of view, from Venkatesh Rao:

This also leads to an obsession with the goals an AI might pursue through its thinking. Functional thinkers seem to unconsciously conceive of AIs in their own image, via projection: as means-ends reasoners that think in order to achieve something, not because they enjoy it. They might be conceived as vastly more capable, and harboring goals that are inscrutable to humans (“maximizing paperclips collected” has traditionally been the placeholder inscrutable goal attributed to superintelligences), but they are fundamentally imagined as means-ends functional superintelligences, that use their god-like brains as a means to achieve god-like ends. We do not ask whether AIs might think because they enjoy thinking. Or whether they might be capable of experiencing “interestingness” as a positive feedback loop variable driving open-ended, energy “wasting” pleasure-thinking.

This would be a remarkably interesting project incidentally, trying to develop an interestingness powered AI that thinks because it likes to, in a spirit of playfulness, not because it thinks curiosity-driven exploration will gain it more paperclips. To my knowledge, Juergen Schmidhuber is the only prominent researcher thinking along these lines to some extent. The only place I’ve seen this distinction made clearly at all is Hannah Arendt’s book, The Human Condition (she made a distinction between “thought” as brain activity qua brain activity, and “cognition” as brains engaged in means-end reasoning, and argued that the latter necessarily leads to nihilism, which, if you think about it, can be defined as thought annihilating itself). Mihaly Csikszentmihalyi’s work on “flow”, from where I am borrowing the term “autotelic,” touches on the role of such thinking in creative work, but oddly enough fails to explore the deep distinction between functional and autotelic intelligence.

What might it mean to consider thinking not as task performance but as autotelic delight?

Daniel Kahneman is wrong

Early in How to Think I quote this passage from Daniel Kahneman’s Thinking, Fast and Slow:

Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely: “This number will be an anchor…,” “The decision could change if the problem is reframed…” And I have made much more progress in recognizing the errors of others than my own.

This seems like quite a distressing confession of failure from the person who has done more than anyone else to teach us about thinking! But Kahneman’s point here is that what he calls System 1 — the element of our thinking that “operates automatically and quickly, with little or no effort and no sense of voluntary control” — is “not readily educable.” It just does what it does, and we have little power to change it. But we do have power to recognize its work in us, and if we do, then we, like Kahneman himself, will be able to achieve a realistic assessment of our cognitive shortcomings. We can discern that Kahneman has internalized the results of his research pretty well when we notice how openly he acknowledges being wrong quite a bit.

When a group of scholars wrote in a blog post that Kahneman and his longtime research partner Amos Tversky had made some serious errors in their work on priming, Kahneman actually showed up in the comments to agree that their critique is sound:

What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.
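The “underpowered studies” point Kahneman concedes is easy to see concretely. Here is a minimal simulation sketch of my own (the helper `power` and all its parameters are assumptions for illustration): with a real but modest effect, small samples detect it only a small fraction of the time, which is exactly the trap the “law of small numbers” paper warned against.

```python
import numpy as np

rng = np.random.default_rng(1)

def power(n, effect=0.4, sims=20000, crit=2.0):
    """Approximate power of a two-sample comparison with n subjects per
    group, a true effect of `effect` standard deviations, and a rough
    two-sided 5% critical value on the t statistic."""
    a = rng.standard_normal((sims, n)) + effect  # treatment group
    b = rng.standard_normal((sims, n))           # control group
    se = np.sqrt(a.var(axis=1, ddof=1) / n + b.var(axis=1, ddof=1) / n)
    t = (a.mean(axis=1) - b.mean(axis=1)) / se
    return float(np.mean(np.abs(t) > crit))

# A real but modest effect is usually missed with small samples.
print(f"n = 15 per group:  power = {power(15):.2f}")
print(f"n = 100 per group: power = {power(100):.2f}")
```

With fifteen subjects per group, most such experiments fail to reach significance even though the effect is real, and the ones that do succeed will tend to overestimate it.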

Perhaps it’s because Kahneman understands cognitive biases so well that he’s not surprised when he’s guilty of them. But there may be other forces at work too. For instance, it could be that Kahneman’s position in his field is so secure that he can’t damage it by admitting the occasional error. Yet that doesn’t seem to happen very often; rather, some of the most successful scholars are among the touchiest about criticism. I think we might have to fall back on the old notion of character: however that character was formed, Kahneman seems to have become a person who isn’t always “talking for victory,” as Samuel Johnson put it, and who doesn’t see himself as a glorious exception to a rule that covers the behavior of others. That’s rare, that’s commendable, that’s to be emulated.

book launch event in Washington!

I’m very pleased to announce that on October 17, the American Enterprise Institute, the Ethics and Public Policy Center, and National Affairs journal will co-sponsor a conversation about How to Think. I’ll give a presentation about the book and there will be responses from Pete Wehner of the EPPC and Jonathan Rauch of the Brookings Institution.

The event will begin at 10am in AEI’s Auditorium at 1789 Massachusetts Avenue NW, Washington DC 20036. Please come if you can!

My heartfelt thanks to Yuval Levin, editor of National Affairs, for making this happen.

praise for the book

I disagree passionately with Alan Jacobs about a number of very important things, but this indispensable book shows me how to take him by the hand while we argue, rather than the throat. In troublingly stupid times, it offers a toolbox for the restoration of nuance, self-knowledge and cognitive generosity.

— Francis Spufford, author of Golden Hill and Unapologetic

Just when it feels like we’ve all lost our minds, here comes Alan Jacobs’s How to Think, a book infused with the thoughtfulness, generosity, and humor of a lifelong teacher. Do what I did: Sign off social media, find a cozy spot to read, and get your mind back again. A mindful book for our mindless times.

—Austin Kleon, New York Times bestselling author of Steal Like an Artist

As much as this book is a manual, it’s also a self-portrait of a particular mind, whose style and skills are ballast against the cognitive turbulence of our time. Reading How to Think feels like riding in a small but sturdy boat, Alan Jacobs your pilot through turbulent waters — and if you’re eager to get where he’s taking you, you’re also grateful for the chance to simply watch him do his thing.

— Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore and the forthcoming Sourdough

We tend to regard thinking as an exclusively individual experience that operates at the intersection of neural activity and personal consciousness. But we miss the ways our thinking is shaped by the social environment we live in. In this slim and beautifully written volume, Alan Jacobs provides a courageous, erudite and deeply humane corrective.

— James Davison Hunter, author of Culture Wars and To Change the World

For those who share Jacobs’s values—thinking self-critically about one’s own beliefs and being willing to empathize with those with whom one disagrees—this guide on how to navigate an intellectual landscape dominated by snap judgments and polarization will be a delight. Jacobs initially focuses on C.S. Lewis’s concept of the “Inner Ring,” which describes how the urge to belong to groups can promote conformity, but then branches out across the philosophical spectrum, tying in the ideas of Thomas Aquinas, Søren Kierkegaard, John Stuart Mill, and David Foster Wallace, among others. Interspersing the intellectual nuggets are colorful anecdotes, including on basketball great Wilt Chamberlain’s sex life, the Westboro Baptist Church and its abandonment by member Megan Phelps-Roper, and the landmark social-psychology book When Prophecy Fails, about the groupthink of a 1950s UFO cult. Witty, engaging, and ultimately hopeful, Jacobs’s guide is sorely needed in a society where partisanship too often trumps the pursuit of knowledge.

Publishers Weekly