
It Doesn’t Make It Alright

The SCOTUS decision on affirmative action has ended a hypocritical and incoherent policy.

Demonstrators against Harvard University's admission process at Copley Square in Boston, Massachusetts, U.S., on Sunday, Oct. 14, 2018. Photo by Adam Glanzman via Getty Images

In 1994, while I was a law student at the University of Houston, I received a call from a lawyer in Austin. He wondered whether I would be interested in joining a class-action lawsuit against the University of Texas (UT) School of Law. “Why would I want to do that?” I asked. Because, he replied, my application to the law school had been rejected primarily on the basis of my race. Had I been black, based on my test scores and grades, I would have been more or less automatically admitted to UT Law School. However, since I am white, I was placed in a “discretionary zone,” and UT decided to exercise its discretion to reject me.

My grades and scores were, however, good enough for the University of Houston, which is a perfectly respectable law school, albeit not one of the nationwide top 10 like the University of Texas. I had, the lawyer insisted, been discriminated against on account of my skin colour. If I wanted to do so, I could join other rejected white applicants in the class-action lawsuit he was preparing, and potentially receive a settlement if the lawsuit were successful. It was a contingency-fee case, so it would cost me nothing. I could even remain anonymous while suing Texas’s top law school.

I declined. I was well into my law degree at the University of Houston and I already had enough on my plate. Although I would much rather have studied in the delightful city of Austin (the unofficial motto of which is “Keep Austin Weird”) than staid, swampy Houston, I couldn’t complain about the education I was receiving. The classes were small, the professors were accessible, and Houston, as a world-class centre of medicine and energy production, had a thriving legal market.

Besides which, I wasn’t especially opposed to affirmative action. I had grown up in the suburbs of Houston, which at that time were largely white. I had attended outstanding public schools and enjoyed all the privileges American life has to offer the son of a vice-president of Exxon Chemicals: ski trips, European vacations, summer enrichment programs at Duke University, my own car, and even membership in Pine Forest Country Club. White privilege, thy name was Andrew Hammel. If my place at UT Law school had gone to a black student who grew up in Houston’s hardscrabble Third Ward, who was I to complain?

I was studying the history of American law, which is often blighted by racial prejudice. The framers of the Constitution of the United States were far too delicate (or disingenuous) to even use the word “slave,” but America’s “peculiar institution” hovered in the background of many of its provisions, including the notorious “three-fifths clause,” which provided that unfree persons would be counted as “three-fifths” of a human being for purposes of apportioning representation in Congress. American law is replete with disgraces such as the 1830 case of State v. Mann, which overturned the conviction of a slave driver for shooting a female slave named Lydia as she tried to escape a whipping. Slave masters, the court ruled, could not be held accountable for torturing enslaved persons, as “the power of the master must be absolute, to render the submission of the slave perfect.”

In 1857, the Supreme Court of the United States handed down one of the most infamous rulings in legal history, Dred Scott v. Sandford, which denied Americans of African descent any rights under the US Constitution. Writing for the Court, Chief Justice Roger Taney declared that the Constitution had been intended to erect a “perpetual and impassable barrier between the white race and the one they had reduced to slavery.” In one of the most chilling passages in American jurisprudence, Taney went on to describe how the Constitution’s framers regarded persons of African descent as “beings of an inferior order ... and so far inferior, that they had no rights which the white man was bound to respect; and that the negro might justly and lawfully be reduced to slavery for his benefit.” Dred Scott was a key factor in unleashing the cataclysm of the American Civil War, in whose aftermath slavery was formally abolished. Yet slavery gave way to other forms of lawful discrimination and exclusion, which persisted for a century.

In Texas in 1994, official racial segregation was still within living memory, and racial discrimination was still in evidence. Preferences for black college applicants struck me then as a reasonable way to atone for these sins. However, the challenge to racial preferences was crowned with success in 1996, when the United States Court of Appeals for the Fifth Circuit struck down UT’s racial-preference scheme in Hopwood v. Texas. It was the first major court ruling definitively holding that racial preferences violated the Constitution of the United States. As such, it paved the way for the United States Supreme Court’s similar landmark ruling in Students for Fair Admissions v. Harvard, handed down on June 29th, 2023.

The dawn of “affirmative action”

On June 4th, 1965, President Lyndon B. Johnson gave a speech at Howard University, the most renowned historically black college in the United States. LBJ was then engineering the passage of a raft of civil-rights and voting-rights bills which would transform American society. Among these were measures that authorized quotas and preferences for members of minority groups, especially black Americans and Native Americans. Collectively, these policies were given the name “affirmative action,” a masterpiece of sunny American branding. LBJ defended these policies with a memorable analogy: “You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, ‘you are free to compete with all the others,’ and still justly believe that you have been completely fair.” Soon, institutions across the United States, including universities, government agencies, and some private firms, had developed their own policies to boost minority candidates.

Opponents, however, wondered how affirmative action could be reconciled with the Fourteenth Amendment to the United States Constitution, which guaranteed all Americans “equal protection” under the law, regardless of race. Under American law, any policy which draws distinctions among citizens based on race must be presumed unconstitutional unless there is a “compelling” justification for it and the policy is “narrowly tailored” to achieving that end. Satisfying this test is nearly impossible; the rare successful case involves such life-or-death crises as preventing race riots within prisons. Helping black Americans to advance in a society still racked with discrimination was a noble aim, but was it a “compelling” need? Debate raged in law journals and editorials.

The Supreme Court first addressed these issues in 1978 in a case involving a white applicant named Allan Bakke, who had been denied admission to the medical school at the University of California, Davis, which followed a quota system reserving places for minority applicants. The Bakke case fractured the Supreme Court, with conservative justices declaring the practice blatantly unconstitutional and liberal justices declaring it not just constitutional but necessary to overcome the lingering effects of centuries of discrimination.

Legal scholars eventually decided that the opinion of centrist Justice Lewis Powell should be regarded as the rule set down by the case. Powell agreed that strict quotas were unacceptable—they were hardly a “narrowly tailored” response—but he argued that a policy more precisely targeted at remedying proven past discrimination could pass muster. He also endorsed the idea that achieving racial diversity was a permissible goal. Powell quoted a court submission by Harvard University: “A farm boy from Idaho can bring something to Harvard College that a Bostonian cannot offer. Similarly, a black student can usually bring something that a white person cannot offer.”

Powell’s compromise vision in Bakke set the stage for how American courts would handle this delicate issue going forward. Quotas or rigid statistical cut-offs were clearly unacceptable, but race could be considered as one of many factors, so long as it was not the decisive factor, and so long as it was given only “positive” effect—that is, a hiring or college admissions policy could favour disadvantaged groups, but it could not exclude any other group. If an institution could prove that a certain demographic group—such as black Americans or Native Americans—had been the target of proven discrimination in a specific field in the past, it could label its affirmative-action policy “remedial,” which would further insulate it from court challenge.

In its later affirmative-action decisions, the Court largely hewed to this compromise course, although conservative justices maintained that “affirmative action” was merely discrimination by another name. In a blistering dissent from Grutter v. Bollinger, the 2003 case in which the Supreme Court upheld the University of Michigan Law School’s argument that it needed to admit a “critical mass” of minority students, Justice Antonin Scalia wrote: “The University of Michigan Law School’s mystical ‘critical mass’ justification for its discrimination by race challenges even the most gullible mind. The admissions statistics show it to be a sham to cover a scheme of racially proportionate admissions.”

Yet these objections were not shared in mainstream academic, government, military, and business circles, as shown by the vast library of pro-affirmative-action “friend of the court” briefs filed in Grutter by members of Congress, the Governor of Michigan, the Association of American Medical Colleges, the American Council on Education, the Law School Admission Council, the American Sociological Association, the General Motors Corporation, “3M and other Leading Businesses,” BP America, Exxon Mobil Corporation, the American Bar Association, “13,922 Current Law Students at Accredited Law Schools,” and last but not least, Lt. Gen. Julius W. Becton, Jr. Affirmative action had become such a shibboleth at American universities that, as a liberal professor argued in a recent New York Times opinion piece: “[A]ffirmative action—though necessary—has inadvertently helped create a warped and race-obsessed American university culture. Before students ever set foot on a rolling green, they are encouraged to see racial identity as the most salient aspect of their personhood, inextricable from their value and merit.”

This consensus—or if you prefer, our warped and race-obsessed culture—was just demolished by the 6-3 decision in Students for Fair Admissions. Three Justices appointed by President Donald Trump to the Supreme Court—Amy Coney Barrett, Neil Gorsuch, and Brett Kavanaugh—joined the Court’s existing conservatives to outlaw the affirmative-action policies of Harvard College and the University of North Carolina. Chief Justice John Roberts bluntly pronounced: “Eliminating racial discrimination means eliminating all of it.” The decision provoked predictable rejoicing from conservatives and anguish from American progressives, but in the middle, the reaction was curiously muted. That, I believe, is because the case for affirmative action—built on unrealistic assumptions, contradictions, and ideological hypocrisy—had rotted from within.

How the case for affirmative action crumbled

(i) Affirmative action is no longer a black-and-white issue

During the 1970s and 1980s, the case for affirmative action rested on a paradigm which, while oversimplified, contained a kernel of truth: white Americans had oppressed black Americans for centuries. It was now time for white Americans to acknowledge these historical sins and step aside so that black Americans could finally enjoy the opportunities they had been denied for so long. If this shift entailed injustice toward individual white Americans, it was a small price to pay to remedy historical trauma on the scale of slavery and Jim Crow. In any event, white Americans still enjoyed privileged status. They were the majority and were on average more prosperous and influential than minorities. They would find it much easier to rebound from a lost college slot or job opportunity than those struggling to escape blighted neighbourhoods and crumbling schools.

Reality, however, has become too complex to be crammed into this monochromatic schema. Critical to this shift was the startling success of Asian immigrants in adapting to American society. (The bureaucratic designation “Asian” of course encompasses a wide variety of distinct cultures, but that’s a debate for another day.) Another of LBJ’s mid-1960s reforms had been to apply the principle of non-discrimination to immigration. It had long been assumed that American immigration policy should favour certain “desirable” kinds of immigrants and discourage others. One example was the bluntly titled Chinese Exclusion Act of 1882. The Immigration and Nationality Act of 1965 forbade discrimination based on national origin in immigration. The result was a steady influx of foreigners, including from Asian countries. The end of the Vietnam War brought another wave of immigration, as “boat people” landed on US shores, seeking refuge in the country which had transformed their homeland into a war zone and then abandoned it to a Communist dictatorship.

Asian-Americans soon adapted to American society with astounding success. Overall, their children tended to learn English rapidly, study incessantly, and excel at the kinds of standardized tests Americans once regarded as touchstones of fairness. Soon, Asian-Americans were outperforming all other ethnic groups, including white Americans, by every imaginable measure—they had more stable families, higher grades, higher incomes, higher college graduation rates, and lower rates of crime, homelessness, substance abuse, and mental illness. After the voters of the State of California officially banned racial preferences in college admissions by approving Proposition 209 in 1996, the number of Asian students in California’s world-class university system increased by 37 percent, and on some campuses by as much as 156 percent.

This fact was crucial to the Supreme Court’s reasoning in Students for Fair Admissions. One of the key pieces of evidence that Harvard was using racial preferences was the suspicious stability in the number of Asian students it admitted. Although Harvard is inundated with ever-growing numbers of applications from Asian students with perfect grades, perfect test scores, and stellar extra-curricular activities, the percentage of Asian students enrolled hovered between 17 and 20 percent from 2009 to 2018. The share of black students also remained stable, principally because black students scoring in the 40th percentile of grades and test scores were admitted, while Asian students in the top 5 percent were denied.

The emergence of Asian students as the primary losers changed the terms of the debate. A June 2023 Pew survey found that Asian-Americans oppose the policy by a margin of 52 to 37 percent. Considering that many Asian university applicants are second-generation immigrants whose parents arrived penniless and without any English and worked gruelling shifts in shops, factories, or restaurants to finance their children’s educations, it’s not hard to understand why they would want to ensure their children get into the best schools possible. Although liberal Asian-American activist groups deplored the Students for Fair Admissions decision as a death blow to “diversity,” most Asian-Americans likely breathed a discreet sigh of relief.

(ii) Affirmative action was never popular

Although beloved of college administrators and corporate marketing departments, affirmative action has never enjoyed broad support among Americans. The same 2023 Pew study showed that Americans disapprove of it by a margin of 50 to 33 percent. The only subgroup in favour is black Americans, and even among them, 29 percent oppose the policy. These numbers have been consistent for decades, as shown by the fact that California voters banned affirmative action back in 1996. Another factor in the unpopularity of affirmative action was the tendency of its partisans to attack critics of the policy as racist. Defenders of affirmative action—often people with considerable power—embraced an incoherent pair of demands: you must be in favour of affirmative action in general, but you must never suggest that any particular person has benefited from it.

The Supreme Court had long embraced the comforting notion that discrimination aimed at helping a particular group didn’t really harm anyone. The majority in Students for Fair Admissions rejected this illogical conceit: “College admissions are zero-sum. A benefit provided to some applicants but not to others necessarily advantages the former group at the expense of the latter.” Most Americans favour diversity, but not at the price of fairness. The Supreme Court is not immune to public sentiment, and it surely noticed that a policy it had long approved had never earned the support of a majority of the people it serves.

(iii) Affirmative action was always supposed to be temporary

In the 2003 Grutter decision, Justice Sandra Day O’Connor famously pronounced that affirmative action, though still necessary, would likely be unnecessary in 25 years. By then, she expected, historical disadvantages would have been remedied, and minority groups would be ready to compete on a level playing field. Her prediction has not been borne out. A significant gap between the academic performance of black Americans and that of other ethnic groups persists to this day. The gap cannot be explained purely by higher poverty rates, since black students from upper-middle-class households score more poorly than Asian or white students from the very lowest income groups. This is known as the “achievement gap,” and it is no exaggeration to say that it is the overriding obsession of the American educational establishment. Untold billions of dollars have been spent studying this gap and devising programs to remedy it.

What explains the gap? Progressive academics have devised a theory called “stereotype threat,” which posits that black students perform more poorly because they have “internalized” racial stereotypes which erode their performance. However, the studies purportedly proving its existence have come under increasing criticism. In their 1994 monograph, The Bell Curve, Charles Murray and Richard Herrnstein pointed to racial differences in mean IQ scores. Although the existence of these gaps is uncontroversial among psychometricians, their causes remain hotly disputed, and the hereditarian hypothesis favoured by Murray remains taboo.

The middle ground between these poles endorses a complex mix of external and internal factors, including stress caused by violent and chaotic neighbourhoods and precarious living situations, environmental contamination (most importantly high lead levels in poor neighbourhoods), underfunded schools, poor nutrition, racial discrimination, cultural attitudes which undermine academic performance, enduring high levels of racial segregation (American schools are now more segregated than they were in the 1960s), and the fact that three-quarters of black children are born to single mothers, the highest proportion of any ethnic group.

Whatever the explanation, as long as the achievement gap persists, black and Hispanic Americans will be under-represented in universities.

(iv) Affirmative action helps the wrong people

In theory, affirmative action was intended to lift promising minority students out of poverty. Yet it achieved that goal only haphazardly. Many beneficiaries of affirmative action were products of America’s black middle class, not its poorest cohort. Another dirty secret of affirmative action was its reliance on foreign students. Harvard could admit the daughter of a Nigerian business tycoon or government official and add her to its tally of black students. But this wasn’t all she brought to the table. She could also afford to pay every penny of Harvard’s exorbitant tuition, and once she graduated, she would return to her home country and burnish Harvard’s worldwide reputation. Harvard’s genius at marketing its brand rivals that of Apple or Nike.

This is the heart of the leftwing critique of affirmative action made by Walter Benn Michaels in his prescient 2006 book The Trouble with Diversity: How We Learned to Love Identity and Ignore Inequality. Michaels, an English professor at the University of Illinois at Chicago, noticed that, over the years, his classes were getting increasingly racially diverse while becoming ever more economically uniform. His classes were a veritable United Nations, but all the students, regardless of race, creed, or colour, could afford stylish laptops and an endless supply of coffee from the on-campus Starbucks.

The heyday of American affirmative action coincided with a massive increase in income inequality in the United States. This, Michaels argues, was no coincidence. He analysed the change in a 2006 essay for the American Prospect:

People have begun to notice also that the intensity of interest in the race of students in our universities has coincided with more or less complete indifference to their wealth. We’re getting to the point where there are more black people than poor people in elite universities (even though there are still precious few black people). … We love race—we love identity—because we don’t love class.

Diversity without discrimination?

American affirmative action in the form of overt or covert racial preferences is now dead. Yet Americans overwhelmingly consider racial diversity a worthy goal. Across the country, lawyers and administrators and hiring teams are working on schemes to achieve diversity without running afoul of increasingly conservative courts. Students for Fair Admissions itself identified one path to this goal. While the Court struck down policies based on a student’s racial category—“race for race’s sake”—it explicitly approved consideration of race tied to individual life histories:

A benefit to a student who overcame racial discrimination, for example, must be tied to that student’s courage and determination. Or a benefit to a student whose heritage or culture motivated him or her to assume a leadership role or attain a particular goal must be tied to that student’s unique ability to contribute to the university. In other words, the student must be treated based on his or her experiences as an individual—not on the basis of race.

Although this sounds like a promising avenue, those familiar with American university admissions will greet it with a cynical chuckle. The thriving industry of consultants who help students get into college will soon learn to create “personalized” essays designed to press all the right buttons.

Another response to the new environment is simply to do away with standardized testing altogether. This movement is already well underway as part of “diversity, equity, and inclusion” efforts at American universities. Yet this approach carries considerable risks. Standardized tests may be despised by education “reformers,” but they were adopted for a very good reason—they are generally accurate predictors of how a student will do in college. Students should learn they aren’t college material before they even apply, instead of after two or three years of frustrating failure (not to mention mounting student debt). Of course, this last problem could be remedied by abolishing grades, another policy gaining ground in the USA. But if you graduate from a college which lets you in regardless of your ability and lets you graduate regardless of your performance, how much is your degree really worth? And how is your sheltered college experience going to prepare you for the real world?

Fortunately, there is a better answer, and it emerged from what might seem like an unlikely place: Texas. After the 1996 Hopwood decision outlawed affirmative action in university admissions, Texas lawmakers, activists, and college administrators brainstormed a plan to preserve some level of diversity in Texas universities. The “Top Ten Percent” plan guaranteed students graduating in the top 10 percent of their high-school class a spot in a Texas public university (albeit not necessarily the one of their choice). This plan ingeniously leveraged Texas’s highly segregated school system: a school that is 80 percent Hispanic will inevitably produce plenty of Hispanic students among its top 10 percent.

The Top Ten Percent plan was watered down after 2003, when the Supreme Court’s Grutter decision effectively overruled Hopwood and allowed colleges to consider race once again. But while it lasted, it was entirely race-neutral and ensured that admitted students were likely to be college material. The plan was controversial—college admissions have always been an ideological minefield—and it did not produce the diversity gains its backers had hoped for. But it struck most Texans as fair and ensured that minority students continued to enjoy meaningful access to higher education. It is thus hardly surprising that, in the wake of Students for Fair Admissions, lawmakers across America are looking at “Top Ten Percent” plans as one part of a broader approach to securing diverse student bodies without resorting to distasteful—and now unconstitutional—racial bean-counting.
