The Coddling of the American Mind
By Greg Lukianoff and Jonathan Haidt
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health.
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
Some recent campus actions border on the surreal. In April, at Brandeis University, the Asian American student association sought to raise awareness of microaggressions against Asians through an installation on the steps of an academic hall. The installation gave examples of microaggressions such as “Aren’t you supposed to be good at math?” and “I’m colorblind! I don’t see race.” But a backlash arose among other Asian American students, who felt that the display itself was a microaggression. The association removed the installation, and its president wrote an e-mail to the entire student body apologizing to anyone who was “triggered or hurt by the content of the microaggressions.”
This new climate is slowly being institutionalized, and is affecting what can be said in the classroom, even as a basis for discussion. During the 2014–15 school year, for instance, the deans and department chairs at the 10 University of California system schools were presented by administrators at faculty leader-training sessions with examples of microaggressions. The list of offensive statements included: “America is the land of opportunity” and “I believe the most qualified person should get the job.”
The press has typically described these developments as a resurgence of political correctness. That’s partly right, although there are important differences between what’s happening now and what happened in the 1980s and ’90s. That movement sought to restrict speech (specifically hate speech aimed at marginalized groups), but it also challenged the literary, philosophical, and historical canon, seeking to widen it by including more-diverse perspectives. The current movement is largely about emotional well-being. More than the last, it presumes an extraordinary fragility of the collegiate psyche, and therefore elevates the goal of protecting students from psychological harm. The ultimate aim, it seems, is to turn campuses into “safe spaces” where young adults are shielded from words and ideas that make some uncomfortable. And more than the last, this movement seeks to punish anyone who interferes with that aim, even accidentally. You might call this impulse vindictive protectiveness. It is creating a culture in which everyone must think twice before speaking up, lest they face charges of insensitivity, aggression, or worse.
We have been studying this development for a while now, with rising alarm. (Greg Lukianoff is a constitutional lawyer and the president and CEO of the Foundation for Individual Rights in Education, which defends free speech and academic freedom on campus, and has advocated for students and faculty involved in many of the incidents this article describes; Jonathan Haidt is a social psychologist who studies the American culture wars. The stories of how we each came to this subject can be read here.) The dangers that these trends pose to scholarship and to the quality of American universities are significant; we could write a whole essay detailing them. But in this essay we focus on a different question: What are the effects of this new protectiveness on the students themselves? Does it benefit the people it is supposed to help? What exactly are students learning when they spend four years or more in a community that polices unintentional slights, places warning labels on works of classic literature, and in many other ways conveys the sense that words can be forms of violence that require strict control by campus authorities, who are expected to act as both protectors and prosecutors?
There’s a saying common in education circles: Don’t teach students what to think; teach them how to think. The idea goes back at least as far as Socrates. Today, what we call the Socratic method is a way of teaching that fosters critical thinking, in part by encouraging students to question their own unexamined beliefs, as well as the received wisdom of those around them. Such questioning sometimes leads to discomfort, and even to anger, on the way to understanding.
How Did We Get Here?
It’s difficult to know exactly why vindictive protectiveness has burst forth so powerfully in the past few years. The phenomenon may be related to recent changes in the interpretation of federal antidiscrimination statutes (about which more later). But the answer probably involves generational shifts as well. Childhood itself has changed greatly during the past generation. Many Baby Boomers and Gen Xers can remember riding their bicycles around their hometowns, unchaperoned by adults, by the time they were 8 or 9 years old. In the hours after school, kids were expected to occupy themselves, getting into minor scrapes and learning from their experiences. But “free range” childhood became less common in the 1980s. The surge in crime from the ’60s through the early ’90s made Baby Boomer parents more protective than their own parents had been. Stories of abducted children appeared more frequently in the news, and in 1984, images of them began showing up on milk cartons. In response, many parents pulled in the reins and worked harder to keep their children safe.
These same children grew up in a culture that was (and still is) becoming more politically polarized. Republicans and Democrats have never particularly liked each other, but survey data going back to the 1970s show that on average, their mutual dislike used to be surprisingly mild. Negative feelings have grown steadily stronger, however, particularly since the early 2000s. Political scientists call this process “affective partisan polarization,” and it is a very serious problem for any democracy. As each side increasingly demonizes the other, compromise becomes more difficult. A recent study shows that implicit or unconscious biases are now at least as strong across political parties as they are across races.
So it’s not hard to imagine why students arriving on campus today might be more desirous of protection and more hostile toward ideological opponents than in generations past. This hostility, and the self-righteousness fueled by strong partisan emotions, can be expected to add force to any moral crusade. A principle of moral psychology is that “morality binds and blinds.” Part of what we do when we make moral judgments is express allegiance to a team. But that can interfere with our ability to think critically. Acknowledging that the other side’s viewpoint has any merit is risky—your teammates may see you as a traitor.
Students now arriving on campus are also the first true “social-media natives,” and they may differ from members of previous generations in how they go about sharing their moral judgments and supporting one another in moral campaigns and conflicts. We find much to like about these trends; young people today are engaged with one another, with news stories, and with prosocial endeavors to a greater degree than when the dominant technology was television. But social media has also fundamentally shifted the balance of power in relationships between students and faculty; the latter increasingly fear what students might do to their reputations and careers by stirring up online mobs against them.
We do not mean to imply simple causation, but rates of mental illness in young adults have been rising, both on campus and off, in recent decades. Some portion of the increase is surely due to better diagnosis and greater willingness to seek help, but most experts seem to agree that some portion of the trend is real. Nearly all of the campus mental-health directors surveyed in 2013 by the American College Counseling Association reported that the number of students with severe psychological problems was rising at their schools. The rate of emotional distress reported by students themselves is also high, and rising. In a 2014 survey by the American College Health Association, 54 percent of college students surveyed said that they had “felt overwhelming anxiety” in the past 12 months, up from 49 percent in the same survey just five years earlier. Students seem to be reporting more emotional crises; many seem fragile, and this has surely changed the way university faculty and administrators interact with them. The question is whether some of those changes might be doing more harm than good.
The Thinking Cure
For millennia, philosophers have understood that we don’t see life as it is; we see a version distorted by our hopes, fears, and other attachments. The Buddha said, “Our life is the creation of our mind.” Marcus Aurelius said, “Life itself is but what you deem it.” The quest for wisdom in many traditions begins with this insight. Early Buddhists and the Stoics, for example, developed practices for reducing attachments, thinking more clearly, and finding release from the emotional torments of normal mental life.
This insight is also the basis of cognitive behavioral therapy, whose goal is to minimize distorted thinking and see the world more accurately. You start by learning the names of the dozen or so most common cognitive distortions (such as overgeneralizing, discounting positives, and emotional reasoning; see the list at the bottom of this article). Each time you notice yourself falling prey to one of them, you name it, describe the facts of the situation, consider alternative interpretations, and then choose an interpretation of events more in line with those facts. Your emotions follow your new interpretation. In time, this process becomes automatic. When people improve their mental hygiene in this way—when they free themselves from the repetitive irrational thoughts that had previously filled so much of their consciousness—they become less depressed, anxious, and angry.
Let’s look at recent trends in higher education in light of the distortions that cognitive behavioral therapy identifies. We will draw the names and descriptions of these distortions from David D. Burns’s popular book Feeling Good, as well as from the second edition of Treatment Plans and Interventions for Depression and Anxiety Disorders, by Robert L. Leahy, Stephen J. F. Holland, and Lata K. McGinn.
Higher Education’s Embrace of “Emotional Reasoning”
Burns defines emotional reasoning as assuming “that your negative emotions necessarily reflect the way things really are: ‘I feel it, therefore it must be true.’ ” Leahy, Holland, and McGinn define it as letting “your feelings guide your interpretation of reality.” But, of course, subjective feelings are not always trustworthy guides; unrestrained, they can cause people to lash out at others who have done nothing wrong. Therapy often involves talking yourself down from the idea that each of your emotional responses represents something true or important.
There have always been some people who believe they have a right not to be offended. Yet throughout American history—from the Victorian era to the free-speech activism of the 1960s and ’70s—radicals have pushed boundaries and mocked prevailing sensibilities. Sometime in the 1980s, however, college campuses began to focus on preventing offensive speech, especially speech that might be hurtful to women or minority groups. The sentiment underpinning this goal was laudable, but it quickly produced some absurd results.
Among the most famous early examples was the so-called water-buffalo incident at the University of Pennsylvania. In 1993, the university charged an Israeli-born student with racial harassment after he yelled “Shut up, you water buffalo!” to a crowd of black sorority women that was making noise at night outside his dorm-room window. Many scholars and pundits at the time could not see how the term water buffalo (a rough translation of a Hebrew insult for a thoughtless or rowdy person) was a racial slur against African Americans, and as a result, the case became international news.
These examples may seem extreme, but the reasoning behind them has become more commonplace on campus in recent years. Last year, at the University of St. Thomas, in Minnesota, an event called Hump Day, which would have allowed people to pet a camel, was abruptly canceled. Students had created a Facebook group where they protested the event for animal cruelty, for being a waste of money, and for being insensitive to people from the Middle East. The inspiration for the camel had almost certainly come from a popular TV commercial in which a camel saunters around an office on a Wednesday, celebrating “hump day”; it was devoid of any reference to Middle Eastern peoples. Nevertheless, the group organizing the event announced on its Facebook page that the event would be canceled because the “program [was] dividing people and would make for an uncomfortable and possibly unsafe environment.”
Since 2013, new pressure from the federal government has reinforced this trend. Federal antidiscrimination statutes regulate on-campus harassment and unequal treatment based on sex, race, religion, and national origin. Until recently, the Department of Education’s Office for Civil Rights acknowledged that speech must be “objectively offensive” before it could be deemed actionable as sexual harassment—it would have to pass the “reasonable person” test. To be prohibited, the office wrote in 2003, allegedly harassing speech would have to go “beyond the mere expression of views, words, symbols or thoughts that some person finds offensive.”
But in 2013, the Departments of Justice and Education greatly broadened the definition of sexual harassment to include verbal conduct that is simply “unwelcome.” Out of fear of federal investigations, universities are now applying that standard—defining unwelcome speech as harassment—not just to sex, but to race, religion, and veteran status as well. Everyone is supposed to rely upon his or her own subjective feelings to decide whether a comment by a professor or a fellow student is unwelcome, and therefore grounds for a harassment claim. Emotional reasoning is now accepted as evidence.
Fortune-Telling and Trigger Warnings
Burns defines fortune-telling as “anticipat[ing] that things will turn out badly” and feeling “convinced that your prediction is an already-established fact.” Leahy, Holland, and McGinn define it as “predict[ing] the future negatively” or seeing potential danger in an everyday situation. The recent spread of demands for trigger warnings on reading assignments with provocative content is an example of fortune-telling.
The idea that words (or smells or any sensory input) can trigger searing memories of past trauma—and intense fear that it may be repeated—has been around at least since World War I, when psychiatrists began treating soldiers for what is now called post-traumatic stress disorder. But explicit trigger warnings are believed to have originated much more recently, on message boards in the early days of the Internet. Trigger warnings became particularly prevalent in self-help and feminist forums, where they allowed readers who had suffered from traumatic events like sexual assault to avoid graphic content that might trigger flashbacks or panic attacks. Search-engine trends indicate that the phrase broke into mainstream use online around 2011, spiked in 2014, and reached an all-time high in 2015. The use of trigger warnings on campus appears to have followed a similar trajectory; seemingly overnight, students at universities across the country have begun demanding that their professors issue warnings before covering material that might evoke a negative emotional response.
It’s hard to imagine how novels illustrating classism and privilege could provoke or reactivate the kind of terror that is typically implicated in PTSD. Rather, trigger warnings are sometimes demanded for a long list of ideas and attitudes that some students find politically offensive, in the name of preventing other students from being harmed. This is an example of what psychologists call “motivated reasoning”—we spontaneously generate arguments for conclusions we want to support. Once you find something hateful, it is easy to argue that exposure to the hateful thing could traumatize some other people. You believe that you know how others will react, and that their reaction could be devastating. Preventing that devastation becomes a moral obligation for the whole community. Books for which students have called publicly for trigger warnings within the past couple of years include Virginia Woolf’s Mrs. Dalloway (at Rutgers, for “suicidal inclinations”) and Ovid’s Metamorphoses (at Columbia, for sexual assault).
However, there is a deeper problem with trigger warnings. According to the most-basic tenets of psychology, the very idea of helping people with anxiety disorders avoid the things they fear is misguided. A person who is trapped in an elevator during a power outage may panic and think she is going to die. That frightening experience can change neural connections in her amygdala, leading to an elevator phobia. If you want this woman to retain her fear for life, you should help her avoid elevators.
But if you want to help her return to normalcy, you should take your cues from Ivan Pavlov and guide her through a process known as exposure therapy. You might start by asking the woman to merely look at an elevator from a distance—standing in a building lobby, perhaps—until her apprehension begins to subside. If nothing bad happens while she’s standing in the lobby—if the fear is not “reinforced”—then she will begin to learn a new association: elevators are not dangerous. (This reduction in fear during exposure is called habituation.) Then, on subsequent days, you might ask her to get closer, and on later days to push the call button, and eventually to step in and go up one floor. This is how the amygdala can get rewired again to associate a previously feared situation with safety or normalcy.
The expansive use of trigger warnings may also foster unhealthy mental habits in the vastly larger group of students who do not suffer from PTSD or other anxiety disorders. People acquire their fears not just from their own past experiences, but from social learning as well. If everyone around you acts as though something is dangerous—elevators, certain neighborhoods, novels depicting racism—then you are at risk of acquiring that fear too. The psychiatrist Sarah Roff pointed this out last year in an online article for The Chronicle of Higher Education. “One of my biggest concerns about trigger warnings,” Roff wrote, “is that they will apply not just to those who have experienced trauma, but to all students, creating an atmosphere in which they are encouraged to believe that there is something dangerous or damaging about discussing difficult aspects of our history.”
In an article published last year by Inside Higher Ed, seven humanities professors wrote that the trigger-warning movement was “already having a chilling effect on [their] teaching and pedagogy.” They reported their colleagues’ receiving “phone calls from deans and other administrators investigating student complaints that they have included ‘triggering’ material in their courses, with or without warnings.” A trigger warning, they wrote, “serves as a guarantee that students will not experience unexpected discomfort and implies that if they do, a contract has been broken.” When students come to expect trigger warnings for any material that makes them uncomfortable, the easiest way for faculty to stay out of trouble is to avoid material that might upset the most sensitive student in the class.
Magnification, Labeling, and Microaggressions
Burns defines magnification as “exaggerat[ing] the importance of things,” and Leahy, Holland, and McGinn define labeling as “assign[ing] global negative traits to yourself and others.” The recent collegiate trend of uncovering allegedly racist, sexist, classist, or otherwise discriminatory microaggressions doesn’t incidentally teach students to focus on small or accidental slights. Its purpose is to get students to focus on them and then relabel the people who have made such remarks as aggressors.
Even joking about microaggressions can be seen as an aggression, warranting punishment. Last fall, Omar Mahmood, a student at the University of Michigan, wrote a satirical column for a conservative student publication, The Michigan Review, poking fun at what he saw as a campus tendency to perceive microaggressions in just about anything. Mahmood was also employed at the campus newspaper, The Michigan Daily. The Daily’s editors said that the way Mahmood had “satirically mocked the experiences of fellow Daily contributors and minority communities on campus … created a conflict of interest.” The Daily terminated Mahmood after he described the incident to two Web sites, The College Fix and The Daily Caller. A group of women later vandalized Mahmood’s doorway with eggs, hot dogs, gum, and notes with messages such as “Everyone hates you, you violent prick.” When speech comes to be seen as a form of violence, vindictive protectiveness can justify a hostile, and perhaps even violent, response.
Surely people make subtle or thinly veiled racist or sexist remarks on college campuses, and it is right for students to raise questions and initiate discussions about such cases. But the increased focus on microaggressions coupled with the endorsement of emotional reasoning is a formula for a constant state of outrage, even toward well-meaning speakers trying to engage in genuine discussion.
What are we doing to our students if we encourage them to develop extra-thin skin in the years just before they leave the cocoon of adult protection and enter the workforce? Would they not be better prepared to flourish if we taught them to question their own emotional reactions, and to give people the benefit of the doubt?
Teaching Students to Catastrophize and Have Zero Tolerance
Burns defines catastrophizing as a kind of magnification that turns “commonplace negative events into nightmarish monsters.” Leahy, Holland, and McGinn define it as believing “that what has happened or will happen” is “so awful and unbearable that you won’t be able to stand it.” Requests for trigger warnings involve catastrophizing, but this way of thinking colors other areas of campus thought as well.
Consider the eight-year legal saga at Valdosta State University, in Georgia, where a student was expelled for protesting the construction of a parking garage by posting an allegedly “threatening” collage on Facebook. The collage described the proposed structure as a “memorial” parking garage—a joke referring to a claim by the university president that the garage would be part of his legacy. The president interpreted the collage as a threat against his life.
In another case, at the University of Central Florida, an instructor named Hyung-il Jung was reported by a student for a joking remark he had made during a class review session about the difficulty of the material. After the student reported Jung’s comment, a group of nearly 20 others e-mailed the UCF administration explaining that the comment had clearly been made in jest. Nevertheless, UCF suspended Jung from all university duties and demanded that he obtain written certification from a mental-health professional that he was “not a threat to [himself] or to the university community” before he would be allowed to return to campus.
All of these actions teach a common lesson: smart people do, in fact, overreact to innocuous speech, make mountains out of molehills, and seek punishment for anyone whose words make anyone else feel uncomfortable.
Mental Filtering and Disinvitation Season
As Burns defines it, mental filtering is “pick[ing] out a negative detail in any situation and dwell[ing] on it exclusively, thus perceiving that the whole situation is negative.” Leahy, Holland, and McGinn refer to this as “negative filtering,” which they define as “focus[ing] almost exclusively on the negatives and seldom notic[ing] the positives.” When applied to campus life, mental filtering allows for simpleminded demonization.
Consider two of the most prominent disinvitation targets of 2014: former U.S. Secretary of State Condoleezza Rice and the International Monetary Fund’s managing director, Christine Lagarde. Rice was the first black female secretary of state; Lagarde was the first woman to become finance minister of a G8 country and the first female head of the IMF. Both speakers could have been seen as highly successful role models for female students, and Rice for minority students as well. But after student and faculty protests, both withdrew: Rice from Rutgers’ commencement ceremony, and Lagarde from Smith College’s. The critics, in effect, discounted any possibility of something positive coming from those speeches.
Members of an academic community should of course be free to raise questions about Rice’s role in the Iraq War or to look skeptically at the IMF’s policies. But should dislike of part of a person’s record disqualify her altogether from sharing her perspectives?
What Can We Do Now?
Attempts to shield students from words, ideas, and people that might cause them emotional discomfort are bad for the students. They are bad for the workplace, which will be mired in unending litigation if student expectations of safety are carried forward. And they are bad for American democracy, which is already paralyzed by worsening partisanship. When the ideas, values, and speech of the other side are seen not just as wrong but as willfully aggressive toward innocent victims, it is hard to imagine the kind of mutual respect, negotiation, and compromise that are needed to make politics a positive-sum game.
Rather than trying to protect students from words and ideas that they will inevitably encounter, colleges should do all they can to equip students to thrive in a world full of words and ideas that they cannot control. One of the great truths taught by Buddhism (and Stoicism, Hinduism, and many other traditions) is that you can never achieve happiness by making the world conform to your desires. But you can master your desires and habits of thought. This, of course, is the goal of cognitive behavioral therapy. With this in mind, here are some steps that might help reverse the tide of bad thinking on campus.
Universities themselves should try to raise consciousness about the need to balance freedom of speech with the need to make all students feel welcome. Talking openly about such conflicting but important values is just the sort of challenging exercise that any diverse but tolerant community must learn to do. Restrictive speech codes should be abandoned.
Universities should also officially and strongly discourage trigger warnings. They should endorse the American Association of University Professors’ report on these warnings, which notes, “The presumption that students need to be protected rather than challenged in a classroom is at once infantilizing and anti-intellectual.” Professors should be free to use trigger warnings if they choose to do so, but by explicitly discouraging the practice, universities would help fortify the faculty against student requests for such warnings.
Finally, universities should rethink the skills and values they most want to impart to their incoming students. At present, many freshman-orientation programs try to raise student sensitivity to a nearly impossible level. Teaching students to avoid giving unintentional offense is a worthy goal, especially when the students come from many different cultural backgrounds. But students should also be taught how to live in a world full of potential offenses. Why not teach incoming students how to practice cognitive behavioral therapy? Given high and rising rates of mental illness, this simple step would be among the most humane and supportive things a university could do. The cost and time commitment could be kept low: a few group training sessions could be supplemented by Web sites or apps. But the outcome could pay dividends in many ways. For example, a shared vocabulary about reasoning, common distortions, and the appropriate use of evidence to draw conclusions would facilitate critical thinking and real debate. It would also tone down the perpetual state of outrage that seems to engulf some colleges these days, allowing students’ minds to open more widely to new ideas and new people. A greater commitment to formal, public debate on campus—and to the assembly of a more politically diverse faculty—would further serve that goal.
Thomas Jefferson, upon founding the University of Virginia, said:
This institution will be based on the illimitable freedom of the human mind. For here we are not afraid to follow truth wherever it may lead, nor to tolerate any error so long as reason is left free to combat it.
We believe that this is still—and will always be—the best attitude for American universities. Faculty, administrators, students, and the federal government all have a role to play in restoring universities to their historic mission.
Common Cognitive Distortions
A partial list from Robert L. Leahy, Stephen J. F. Holland, and Lata K. McGinn’s Treatment Plans and Interventions for Depression and Anxiety Disorders (2012).
1. Mind reading. You assume that you know what people think without having sufficient evidence of their thoughts. “He thinks I’m a loser.”
2. Fortune-telling. You predict the future negatively: things will get worse, or there is danger ahead. “I’ll fail that exam,” or “I won’t get the job.”
3. Catastrophizing. You believe that what has happened or will happen will be so awful and unbearable that you won’t be able to stand it. “It would be terrible if I failed.”
4. Labeling. You assign global negative traits to yourself and others. “I’m undesirable,” or “He’s a rotten person.”
5. Discounting positives. You claim that the positive things you or others do are trivial. “That’s what wives are supposed to do—so it doesn’t count when she’s nice to me,” or “Those successes were easy, so they don’t matter.”
6. Negative filtering. You focus almost exclusively on the negatives and seldom notice the positives. “Look at all of the people who don’t like me.”
7. Overgeneralizing. You perceive a global pattern of negatives on the basis of a single incident. “This generally happens to me. I seem to fail at a lot of things.”
8. Dichotomous thinking. You view events or people in all-or-nothing terms. “I get rejected by everyone,” or “It was a complete waste of time.”
9. Blaming. You focus on the other person as the source of your negative feelings, and you refuse to take responsibility for changing yourself. “She’s to blame for the way I feel now,” or “My parents caused all my problems.”
10. What if? You keep asking a series of questions about “what if” something happens, and you fail to be satisfied with any of the answers. “Yeah, but what if I get anxious?,” or “What if I can’t catch my breath?”
11. Emotional reasoning. You let your feelings guide your interpretation of reality. “I feel depressed; therefore, my marriage is not working out.”
12. Inability to disconfirm. You reject any evidence or arguments that might contradict your negative thoughts. For example, when you have the thought I’m unlovable, you reject as irrelevant any evidence that people like you. Consequently, your thought cannot be refuted. “That’s not the real issue. There are deeper problems. There are other factors.”