Recently, while reading Ron Chernow’s excellent biography of Ulysses S. Grant, I happened upon a watershed moment. It didn’t concern the campaigns of the Civil War or Grant’s political maneuvers. Instead, it was a pivot on the question of addiction. Regarding Grant’s drinking, Chernow writes: “Perhaps the most explosively persistent myth about Grant is that he was a ‘drunkard,’ with all that implies about self-indulgence and moral laxity. Modern science has shown that alcoholism is a ‘chronic disease,’ not a ‘personal failing’ as it has been viewed by many… Grant’s drinking has been scrutinized in purely moralistic terms.” (Grant, xxiii)
If you check the citation, “modern science” turns out to be the Journal of the American Medical Association. Chernow is quoting a “viewpoint” column on addiction, by Vivek H. Murthy, that ran in 2016. What makes this so interesting is that Murthy’s classification of alcoholism as a “chronic disease” is nothing new. Some Americans have thought of it that way for a very long time—most famously, the members of Alcoholics Anonymous, who define alcoholism as an incurable, lifelong affliction. All Murthy has done is bring their previously “ideological” position into the scientific mainstream, rebranding it as an objective and testable claim. This is not a scientific discovery; it’s an institutional shift.
The potential consequences of such reclassification, however, are enormous. Our country tends to treat alcoholism as precisely what Murthy says it is not: a moral error. Except in extreme cases of overconsumption or withdrawal, treatments for alcoholism are mostly private, undertaken voluntarily and paid for out-of-pocket. Laws against drunk driving and public drunkenness don’t distinguish between alcoholics and other drinkers. Employees dealing with addiction who seek time off or other assistance are at the mercy of their particular workplace. Three years after Murthy’s article first appeared, none of that has changed. Yet our understanding of our 18th President, if Chernow has his way, will be completely rewritten. It’ll be a story of success under adverse conditions—a triumph over illness—rather than a tale of moral leadership marred by shameful, all-too-human episodes.
It’s fair to say that the country is ripe for a shift in our understanding of alcoholism and the approach to treatment. We already have a tendency, nationwide, to describe opioid addiction as an “epidemic” and “untreated” condition, rather than as something criminal or vicious. Naloxone, the drug that can reverse an opioid overdose, is now deployed like epi-pens or defibrillators, albeit on a smaller scale. The medical origins of some opioid addictions, which may begin when a patient first receives prescription painkillers, have also helped make nonjudgmental medical terminology seem more applicable to opioid addicts. Perhaps this same medical language will soon predominate in discussions of alcohol abuse.
Contradictions in our descriptions of addiction are merely one part of a larger dissensus about the nature and treatment of mental illness. Since the 1950s, the Diagnostic and Statistical Manual of Mental Disorders (or DSM) has set the standard for diagnostic practice in psychiatry. Its definitions of illness include disorders such as Attention Deficit Disorder (ADD). For many people who have it, ADD produces no visibly abnormal behavior or distressing mental states at all. It merely explains a history of poor performance: in school, at work, and in other personal endeavors. This poor performance is thought to be the result of poor impulse control; the diagnostic history goes back as far as 1902, when the British physician George Frederick Still first described “defect[s] of moral control” in children. So, in two respects, ADD set massively important precedents. First, the primary criteria for its diagnosis are circumstantial. How can we judge whether a patient is consistently performing “below her potential” at work? How do we distinguish between “just” getting bad grades and manifesting a disorder? The answers to these questions shouldn’t be left up to the patients themselves, but that is frequently what happens.
Second, ADD absolves patients of any moral responsibility for impulsive behaviors. Still’s naive, judgmental language is illuminating, even today: he was trying to describe instances where children were constitutionally unable to follow certain moral guidelines. By the end of the 1950s, the decade when psychiatry consolidated its prestige by standardizing its practices—or, at least, appearing to standardize them—in documents like the DSM, the question of impulse control had even been written into the criminal justice system. In 1954, the case of Durham v. United States established the “product test” for insanity, designed to identify cases where mental illness was the efficient cause of a crime. (In other words, the crime would not have taken place if the defendant had not been ill.) Six years later, Great Britain adopted the related principle of an “irresistible impulse” that triggers a criminal act. Today, in the United States, the “irresistible impulse” lives on in the American Law Institute’s Model Penal Code. It states that an insanity plea is justified when the defendant lacks the “substantial capacity” either to understand the nature of his crime or to prevent himself from carrying it out.
Thus it’s no small matter whether or not Ulysses S. Grant was “innocent” of the failure to maintain control over his drinking. If he was living with a chronic disease that produced an irresistible impulse to drink, then by our current legal standards, he shouldn’t have been held responsible for any lapses in his behavior or job performance due to alcoholism. This radical conclusion has already been tested, and has already held up, for individuals with a medically substantiated psychiatric disability. When the Americans with Disabilities Act (ADA) became law, in 1990, it mandated reasonable accommodations for disabled individuals, and established a single, unified standard for both physical and psychiatric disabilities.
Between 1993 and 1999, the percentage of EEOC claims that were specifically for psychiatric disabilities doubled. Some of this was probably due to the Commission’s own efforts to inform the public about the rights of people with psychiatric disabilities. It was also an effect of new prescription medications for psychiatric illness. When Prozac was introduced to the American public, in 1987, it was the first generally safe, widely recommendable antidepressant in history. It was followed by Zoloft in 1991, and Paxil in 1992. Giving doctors a choice among prescriptions was ultimately a boon for every manufacturer. The new drugs came with a complete modern package of diagnosis and treatment for depressed people, and every new pill added another voice to the chorus. In 1996, Shire Pharmaceuticals introduced the patented amphetamine Adderall.
Shire, cashing in on the rising number of attention disorder diagnoses, also added fuel to the fire. Disorder and pill became synonymous. Like depression, ADHD had suddenly become so easy to treat that there was seemingly no good reason not to diagnose it. No patient was going to be shocked now by a single prescription that promised to solve any number of ongoing life issues. On the contrary—patients now expected to receive it, and they did.
A backlash was inevitable. Psychiatry had already weathered the “anti-psychiatric” movements of the late 1960s and 1970s; because antidepressant treatments were an affordable form of outpatient care, they didn’t spark a public outcry. Instead, the focal point for new controversies turned out to be our schools. They are, after all, America’s largest ongoing experiment in how to treat people fairly. In 1973, Section 504 of Congress’s Rehabilitation Act provided for the special education of kids with learning disabilities. It was couched in cautious language, and until 1990 had a limited impact. That’s when it became the inspiration for the ADA, which was far more ambitious and comprehensive. The ADA was also the blueprint for the new Rehabilitation Act, which was amended and reauthorized in 1992. (It was amended again, and empowered still more, in 1998.) Learning disabilities finally had the same legal status as disabilities of other kinds. Reasonable accommodation, which had begun as a movement in the workplace, was now a mandated priority for every school’s administrators.
Like ADHD, learning disabilities are largely assessed through a combination of performance tests and subjective interviews. In other words, if a student’s performance and her self-reporting about school both fit a certain pattern, that can be sufficient grounds for a medical diagnosis. A wide range of mental illnesses began to double as learning disabilities, including ADHD, depression, and obsessive-compulsive disorder. Diagnoses of mental illness in children, already on the rise, spiked dramatically as the line between disability and illness blurred. There was now a good reason for parents to want their children diagnosed with something. It gave them something proactive to do when their child got poor marks at school, especially when scolding and punishing had no effect. Anybody who did poorly enough in school was now a promising candidate for a psychiatric diagnosis, whether or not she had any other symptoms.
Academic accommodations are a boon for at-risk students. They may include extended time on tests and homework, preferential seating, individual tutoring, pre-made lecture notes, changes in the format of homework and exams, and shorter reading assignments. Furthermore, they work. Disabled students earn higher grades once they start getting accommodations.
This may seem unrelated, but it’s not: in the mid-1970s, a reliable test for anabolic steroids was invented. (As would happen later, in the case of antidepressants, new technology was the catalyst for social change.) The International Olympic Committee took immediate action and banned athletes from using steroids. The age of banning “performance enhancing” drugs, already getting underway in fits and starts, had officially begun. Its consequences for the world of professional athletics are too numerous, and (still) too new, to be fully comprehended. The consequences weren’t limited to that specialized arena, however. Across the globe, the concept of “fairness” began to include restricting access to certain kinds of technology in competitive situations.
Consider how these simultaneous, contradictory movements are going to collide. It’s already happening with Adderall and other stimulants. The drugs are often viewed, for understandable reasons, as “performance enhancers” that give a competitive edge to everybody who uses them, disabled or not. This has inspired countless articles declaring that there’s a worrisome drug culture in colleges and high schools. Since the drugs are not particularly dangerous, the argument hinges on fairness: is it fair to the rest of the class if somebody shows up to the final exam on Adderall? There’s a slippery slope argument to be made, on behalf of students who feel they have to use stimulants because everybody else is doing it. It goes without saying that if you try hard enough, and interview enough college students, you can find evidence of pretty much anything. Shady diagnoses, bandwagoning, medical complications. The alarmists writing these articles live for such “firsthand” sources. A couple of good anecdotes are all they need.
So far, the impact of these reports on the medical profession has been limited, but not nonexistent. There were widespread shortages of stimulants after the DEA’s caps on production failed to keep up with rising demand. Many doctors refuse to prescribe such drugs, or impose a low, arbitrary dosage cap. Claims that ADHD is a fictitious disorder, or one that is massively over-diagnosed, have sprung back to life after being dormant for decades. There’s still general agreement that some people do need prescription stimulants, and that it’s not unfair for specific individuals to use them, but it may not last.
Within the field of education studies, a number of true believers set out to prove that only disabled students benefit from curricular accommodations. This work became particularly urgent once other papers, from more disillusioned pens, began to criticize the correlation between wealth, privilege, and reasonable accommodations. For example, the Times reported that “a 2000 audit of California test takers showed a disproportionate number of white, affluent students receiving accommodations.” That’s patently unfair, unless “normal” students don’t get any benefit from such accommodations. Unfortunately… they do. Their grades go up, although not as much. (That’s to be expected, since their starting average is closer to 100 percent; this is known as “the ceiling effect.”) After you adjust for a student’s initial performance, you find that accommodations are performance enhancers across the board.
(Note: for supporting data, see Benjamin J. Lovett’s 2010 article in the Review of Educational Research, entitled “Extended Time Testing For Students With Disabilities: Answers To Five Fundamental Questions.”)
While the media’s taking a sledgehammer to ADHD, the business of accommodations is booming. There was a very successful effort, by pharmaceutical companies, to expand the indicated uses for antidepressants, tranquilizers, and other psychoactive medication. This began to see results as early as the 1980s, but really took off in the 2000s, when the DSM’s definitions of anxiety disorders were substantially revised. “Generalized Anxiety Disorder,” for instance, became primarily defined as chronic worrying about one’s “life circumstances,” without other definitive symptoms (such as panic attacks). The intensity and visibility of the anxiety was now less important than the nature of the patient’s concerns. Anyone who worried to the point of distraction about his or her life was eligible for treatment—and anything that can be treated is, at this point, also a de facto disability that’s eligible for accommodations. Under our current model, accommodations are one size fits all. A child can receive extended time on all his exams, for the rest of his life, simply because he has persistent mild anxiety and the medical records to prove it.
The bottom line: a bizarre divide has sprung up between highly similar people with contrasting paper trails. From an early age, both are challenged. They do poorly in school. They integrate poorly into social groups. They’re impulsive, reckless, alienated types, who tend to brood and worry. They may suffer from delusions; they may read only half as fast as their peers. They’re likely to take drugs, drink too much, get into financial trouble, and change jobs and living spaces often. They often live for long periods at home, getting care from family members, well into adulthood.
One of them has a medical record of disabling mental illness; the other doesn’t. Teachers view them differently, and their experience in school is completely different. Their families have different expectations; when they make mistakes, those mistakes are classified differently too, by the principal, judge or college counselor involved. Everybody knows about extenuating circumstances—but if you do something badly, or break the law, and you don’t have any extenuating circumstances in your favor, then God help you. You’ll end up with the kind of record that bars you, as you come of age, from all sorts of essential opportunities.
As a teacher, I’ve seen this play out. I’ve seen two students miss the same deadline: one gets called lazy, while the other gets sympathy, for bureaucratic reasons that have nothing at all to do with the facts. I am glad, after 150 years, that we can finally stop condemning Ulysses S. Grant for being an alcoholic. At the time, however, he received no such dispensation. Grant was an infamous drunk, but we needed his military genius, so he got a pass. Instead of a discharge, Grant had men assigned to his care and protection, and they cleaned up after his benders. Of course, there’s precious little greatness in treating a person kindly when you know they’re indispensable. What matters is how you treat people before you know what they’ll accomplish. The options we’ve created to reasonably accommodate learning disabilities and mental illness are really a playbook for the customizations and compassion every American deserves. That may seem impractical, but even if it is, does that change matters any? The best course is a compassionate one, whether or not we understand what sometimes makes a man, much less a child, worse than he wishes to be.