How rational is your politics, and how rational could or should politics be, in general? What is, and what ought to be, the role of reason and of science in policy-making or in campaigning? To answer such questions in a reasonable or scientific way, it would first be necessary to define such terms as “rationality,” “reason,” and “science.” That’s a nice Socratic-style challenge, anyway, and I’m not confident that people mean anything very clear or specific by them on most occasions. And, whatever they mean, the things themselves—conceived as faculties in people’s heads or as a series of procedures or guidelines for how to gain knowledge—have little to do with why anyone has the politics they do. People who think their own politics are rational and those of their opponents irrational (that is, more or less everybody) are engaged in a self-congratulatory self-delusion.
A traditional account of the faculty of rationality might be that it encompasses the canons of deductive and inductive reasoning and perhaps the scientific method (which it is then incumbent on the rationalizers to characterize in a general way). That is, rationality is an array of techniques, variously related, for getting true conclusions from true premises, or probable conclusions from probable premises, or data from experiments, or well-tested hypotheses from mere guesses: the rational procedures are the truth-preserving or truth-conducive procedures.
Then again, the alleged science of economics deploys what seems to be an entirely distinct conception of rationality, oriented to actions and agents rather than to generating true theories. Here, a rational person is one who pursues their own interests (conceived by economists, of course, as economic interests) by means that are most likely, or very likely, or fairly likely, or more likely than not, to be helpful in achieving those interests. In other words, a rational person is defined (admittedly this is comparatively clear) as one who knows how to get his, or who has effective techniques for securing resources, or, in short, who makes a whole bunch of money.
These two, or several, or many, senses of “rationality” may go back to Aristotle, who defined humans as “rational animals,” which raises doubts about whether he had ever met any of us. Aristotle defined “practical rationality” in terms of a certain style of deliberation, known as the “practical syllogism”: “I want thing X; action A will help me get X; so I’ll do A.” Of course, that leaves it entirely open what X is: it could itself be an irrational or evil goal.
Aristotle thought that we all had the same goal—happiness—and that the same means (study and friendship, for example) could help us each achieve it. But he did not give any rational reasons to prefer happiness to various other possible ultimate goals (union with God, for example, or a life of self-sacrifice), nor could he. Our goal, he thought, was built into our nature. Maybe so, but that does not in itself make it any more rational than any other goal. Also, it doesn’t make it clear what happiness (or, as contemporary versions have it, well-being) is, or why we should prefer it to other candidates for ultimacy; it just insists that happiness—itself an awfully vague concept, or a variable that just means “everything we want all at once”—is in fact our goal. But Aristotle at least connects what we might call “cognitive” and “deliberative” rationality, or perhaps logic, experimental science, and economic modeling, into something like the same conceptual structure, which is as much as anyone has done since, really.
As to the scientific method, which is supposed to be something clear enough for a teacher to scribble briefly on a blackboard: a general characterization is going to have to encompass the techniques, for example, of astronomy (instrument-aided observation), psychology (questionnaires), experimental chemistry (hypothesis and reproducible test), medicine (double-blind placebo studies), anthropology (immersion and empathy), and of course economics (statistics), among many other procedures. Good luck boiling it all down, or figuring out exactly which technique to use on a political or moral question, and how.
So, for example, let’s stipulate that science (whatever it may be, exactly) has delivered to us the truth that the planet is getting hotter because of human carbon emissions. It might also give reasons to think that certain procedures will be effective to ameliorate the problem. That’s when the practical syllogism or the economic model of rationality kicks in: if I want to ameliorate climate change I should act to reduce my emissions and to see whether I can convince you to do likewise. But I have many goals that I’m trying to achieve simultaneously, including goals that economists assert to be rational, such as maximizing my income, or paying as little as possible for the things I need. The sheer fact that I’m deliberating about how to reach some goal rationally isn’t going to help me decide which of these goals to pursue when they conflict. It’s not going to help me fix my ultimate goals, or order my goals in a list of priorities. In order to do that, I’m going to have to figure out what I really want, what I think is most important. On that matter, the practical syllogism, like particle physics, is silent.
It is sometimes said of working-class Republicans that they vote against their own interests, probably because their rationality has been distorted by manipulative politicians and media strategists. Sometimes this is conceived in sheer economic terms: people appear to oppose policies (for example, much more aggressive and pervasive welfare programs or a much more progressive tax structure) that would directly put money in their pockets. But whatever rationality of this sort may amount to, it cannot show that I ought to think of my interests exclusively in economic terms. Perhaps these allegedly irrational people are working for other interests, for example a picture of themselves as self-reliant or independent that they conceive as central to their self-respect: something they want for their children. If you think that having things like that as ultimate or important ends is obviously irrational, or that there is a rational procedure for selecting from among a group of important aims the one that is most salient or exclusively in play in a given case, I’m going to need you to prove that rationally. Of course, I’ll need to know what rationality is first, so I can assess the proof.
In general, fixing our ultimate values—in politics or anywhere else—is not an activity that lends itself to rational deliberation. It rests, rather, on visceral commitment. If I think that justice is more important than tradition, or world peace than national borders, for example, I am going to have to screw up my emotions one way or another and make the choice. And to persuade you to do likewise, I am going to have to express passion, not present a series of practical syllogisms or scientific papers. No one’s politics is based on deliberative rationality. And no one’s politics is based on science, of course.
This is one thing that David Hume meant when he made his famous declaration that “Reason is, and ought only to be, the slave of the passions.” Another thing he meant was that while passion, emotion, or desire can motivate people to action, sheer reason cannot. Though people sometimes say that science demands that we act now, it demands no such thing. It might tell us that if we don’t act now, various things will happen. It can’t show us why we don’t want them to happen, or why we should try not to let them happen, if we don’t really care as much about being screwed in the long run as we do about what’s for dinner tonight. Reason might tell us that if we want dinner tonight we should go to the grocery store and crank up the grill; it can’t tell us how much to care, or what to care about. Perhaps reason is a group or a family of strategies for generating beliefs, but, if so, it looks like they are only tangentially related to each other. At any rate, when you’ve told me that I should select my political beliefs rationally, I still don’t know exactly what you mean, or how I possibly could.
Political scientists—who are an interesting kind of scientist—tell us that, statistically speaking, our political positions tend to follow our demographics. The sort of “predictive analytics” that drove Cambridge Analytica’s interventions in the 2016 campaign on behalf of candidates like Trump indicates the same thing. It seems that if I know your race, your region, your age, your gender, your education level, or what movies you watched last month, I can predict your political positions with a fair amount of accuracy. This would be a bizarre circumstance if people were coming to their political positions through rational procedures. The oft-remarked “tribalism” of American politics, which applies just as well to college professors as to truck drivers, gives the lie to the alleged fact that some of these people (the people you agree with, no doubt) are basing their politics on reason while other people (the people you oppose) are not. People, by and large, believe to belong. But how much we ought to value belonging: on that, science offers no help.
Perhaps science, whatever it may be, can provide some information that would be useful to us, given that we have certain purposes. It cannot give us purpose, however. If “rationality” meant something, our politics would turn out to be no more rational than we are, overall. What did you expect?
Crispin Sartwell is Associate Professor of Philosophy at Dickinson College in Carlisle, PA. His most recent book is Entanglements: A System of Philosophy (SUNY 2017). You can follow him on Twitter @crispinsartwell.