Psychiatry's Inconvenient Truth: We're Not Saving Lives

 
 

In June 2018, the Centers for Disease Control and Prevention (CDC) released the results of a landmark study, in which all the suicides that occurred in the United States from 1999 to 2016 were recorded and examined. The results of this study should have been the biggest psychiatric news story since the advent of Prozac, but little notice was taken by either the national media or the public at large. This greatly suited the interests of psychiatric providers, and the industry that revolves around them—because the study vividly exposed the deficiencies of the medication-oriented model that dominates psychiatric treatment today. But to fully grasp the significance of this study requires some understanding of psychiatry’s struggles, and its recent history.

The Age of Prozac

Today’s biological era of psychiatry blossomed in 1987, when Prozac—the first modern antidepressant—was introduced by Eli Lilly. In clinical practice, medications that cause a lot of unwanted side effects are commonly referred to as “dirty” drugs. Prior to Prozac, all antidepressant medications were unequivocally “dirty” drugs, with horrible side effects—not the least of which was lethal cardiotoxicity. In the few years that I practiced prior to Prozac’s release, I came to recognize what I bitterly dubbed “the tricyclic cycle,” alluding to the most popular class of antidepressants at the time. Patients would be hospitalized for depression with suicidal thoughts or behavior, where they were stabilized on tricyclic antidepressants. After discharge they would take the medication for a while, but then stop it because of its side effects—most commonly insufferable dry mouth, but it could just as well be dizziness, constipation, sedation…like I said, they were really dirty drugs. Weeks to months later, they would get depressed again—then overdose on the unused bottle of medication, and get admitted to the intensive care unit for cardiotoxic symptoms—where I would see the patient in consultation and admit them to the psychiatric unit—where they would again be placed on a tricyclic antidepressant. Every time I prescribed these medications for my outpatients, I felt like I was handing them a loaded gun.

Prozac was a tremendous improvement over these medications, if only because it was refreshingly nonlethal. An overdose usually led to no more than a case of the jitters, and almost certainly not an ICU admission. Its sexual side effects were annoying, but they were the least of the problems associated with earlier antidepressants. Prozac became hugely popular—not really because of superior efficacy, but because of its safety and tolerability. A psychiatrist friend of mine who was prescribing it before I did told me, “Paul, it’s the first antidepressant that I would take!”

Depression, of course, is as common as dirt—but because of their risks and side effects, earlier antidepressants were generally prescribed to only the most severe cases. Prozac, however, blew the lid off the target population for consideration of medication treatment. With limited risks relative to its potential benefits, a trial of Prozac was deemed appropriate for many patients with garden-variety depression of a sort that never would have been medicated before. Given that placebo response rates in clinical trials of antidepressants range from 35 to 40 percent, a very good percentage of patients reported subjective improvement on Prozac. People who were never regarded as clinically dysfunctional sometimes reported an improved level of function on the medication—a phenomenon described as “cosmetic pharmacology” by Dr. Peter Kramer in his book “Listening to Prozac”—which, by the way, spent four months on the New York Times best-seller list. Prozac became a cultural phenomenon, the wonder drug of the ’90s—replacing Sigmund Freud as the face of psychiatry.

Since then, numerous antidepressant medications in the mold of Prozac have been released, paving the way for the expanded use of other medications—such as mood stabilizers, stimulants, even new-generation antipsychotics—in patients who would never have been medicated in the past. An entire generation or two has grown up identifying psychiatry as a medication-oriented specialty, rather than the analytic image it had in years past. Although other medications are more frequently prescribed nowadays, we are nonetheless still living in the Age of Prozac—the drug that made psychiatric medication cool.

In 2013, an estimated 40 million Americans—16.7% of the adult population—filled one or more prescriptions for psychiatric medications: 12% of adults were on antidepressants, 8.3% on anxiolytic or sedative medications, and 1.6% on antipsychotic agents. Fifteen million Americans have now been taking antidepressant medications continuously for at least five years, a figure that has almost doubled since 2010 and more than tripled since 2000. Nearly 25 million adults have been on antidepressants for at least two years, a 60 percent increase since 2010. With such a vast increase of people in psychiatric treatment, it would be logical to assume that we would see improved psychiatric health, wouldn’t it?

The Inconvenient Truth

In June 2018, the most significant psychiatric news story since the advent of Prozac came to light, but was barely noticed at all…because, you know, Trump. That was the release of a landmark study by the Centers for Disease Control and Prevention—the federal agency charged with monitoring the health of our nation—examining all the suicides that occurred in the United States from 1999 to 2016. Their most significant finding was that over this 17-year span, suicide rates in the United States rose by 30%, from 10.4 per 100,000 people in the year 2000 to 13.5 per 100,000 in 2016. The rate increased by about 1% a year from 2000 to 2006, and then by about 2% a year from 2007 to 2016.
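For readers who want to check the headline figure against the two rates quoted above, the arithmetic is a simple back-of-the-envelope calculation:

\[
\frac{13.5 - 10.4}{10.4} \approx 0.298 \approx 30\%
\]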

Men, who have historically been more prone to suicide, accounted for 76.8% of all those suicides. Over this time period, the suicide rate among men increased by 21%, while the suicide rate among women increased by nearly 50%. There was a shocking 70% increase in suicide for girls ages 10 to 19, especially those ages 10 to 14. Almost twice as many children were hospitalized for suicidal thought or behavior in 2015 as in 2008. Suicide has become the second leading cause of death among those ages 10 to 34, and the fourth leading cause of death for those ages 35 to 54.

Of course, there are many psychosocial factors that may have contributed to this alarming rise in suicide. A declining economy, a diminishing social safety net, and rising income inequality are likely contributors. Changing attitudes and social mores, like the diminishing influence of religion, may have made suicide a more socially acceptable option than it used to be. The rise of social media may also be a factor, particularly among young females.

This increase in suicide rates was much more dramatic in rural areas—which reinforces the likelihood that psychosocial factors are contributing significantly to this epidemic. The population in rural areas has become older, since many young people move away to live in urban areas. The economic downturn of the Great Recession hit these areas harder, with many personal bankruptcies and closures of rural businesses. These factors combine to create a less vibrant economic and social culture, with loss of social cohesion and increased isolation. In short, rural areas have become an increasingly depressive environment. Yet access to mental health care is extremely limited in rural areas, and the lack of anonymity there discourages locals from pursuing treatment. There has also been an alarming increase in substance abuse in rural areas, especially opioids. The greater availability of guns in rural areas likewise increases the potential lethality of suicidal behavior.

Suicide rates nonetheless increased in every state, urban ones included, with the singular exception of Nevada, where the rate actually decreased by 1%. That may be due in part to the fact that Nevada is a rural state that has been experiencing a great deal of urban growth.

More than half (54%) of those suicides occurred in people with no history of identified mental illness. Those without a known illness were more likely to be male, and to use a firearm. Dr. Joshua Gordon, the Director of the National Institute of Mental Health, maintains that, “When you do a psychological autopsy, and go and look carefully at medical records, and talk to family members of the victims, 90% will have evidence of a mental health condition.” It’s frankly hard for me to assess the credibility of this assertion—since “mental health condition” is a pretty vague term, and I’m not sure what “evidence” is more confirmatory than a completed suicide.

The lead researcher for the CDC study, Dr. Deborah Stone, suggests that suicide transcends psychopathology, contending that it is “not just a mental health concern. There are many different circumstances and factors that contribute to suicide. This study points to the need for a comprehensive approach to prevention.”

Psychiatry Strikes Back

In contrast to the passionate concern of public health professionals, the response of American psychiatric leadership to this report was a tiny collective shrug—obviously intended to attract as little attention as possible to this alarming study. Dr. Saul Levin, CEO and Medical Director of the American Psychiatric Association, proclaimed that “the data reinforce the need to fund and enforce laws ensuring access to mental health services,”—whatever the hell that means. The current President of the APA, Dr. Altha Stewart, issued a bland public service announcement: “People should know that suicide is preventable. Anyone contemplating suicide should know that help is available, and that there is no shame in seeking healthcare.” It is unclear how many people contemplating suicide actually heard this message, since it was posted on the APA’s website. Neither of these officials gave the least indication that this might be a failure on the part of psychiatry.

In an interview for Psychiatric News, “suicide expert” Dr. Maria Oquendo, a past President of the APA, rightly called for measures to secure handguns to reduce their availability for those at risk. She also called for providers to be “vigilant” in assessing suicide risk, and “proactive” in preventing recurrent psychiatric episodes in known patients.

In all my 39 years of training and practicing in psychiatry, in numerous work settings, I’ve never encountered any psychiatric staff who were NOT both vigilant and proactive in addressing suicide risk. That’s because in the practice of psychiatry, suicide is the archenemy. Medicolegally speaking, we are expected to keep our psychiatric patients from killing themselves on our watch. The entire treatment apparatus is designed to identify risk for suicide, and to prevent its occurrence.

Naturally, it’s an imperfect line of defense, because it ultimately depends on the honesty, intent, and resources of the patient. But every completed suicide is worthy of clinical review, to determine whether or not it could have been averted. In psychiatry, it’s our best opportunity to save lives like other doctors do. And like any other specialty, we should always be improving our efforts in doing so—which means taking a good hard look at our failures, acknowledging them, and changing our practices if necessary. Dr. Oquendo seems to point the finger at unnamed individuals for not being careful enough—rather than acknowledging the likelihood that a disaster of this national scale, with so much of the public already under our care, could be a failure of our profession at large.

The article goes on to note that “suicide expert” Dr. Oquendo is engaged in research using PET scans and MRIs to “map brain abnormalities in mood disorders and suicidal behavior”, to “examine the underlying biology of suicidal behavior.” Upon reading this, I felt my head explode.

We are in the midst of an epidemic of suicidal behavior that exhibits prominent socioeconomic, demographic, and geographic trends. The existence of these obvious influences—hell, the existence of an epidemic itself—contradicts the notion that there is a significant anatomical component to suicidal behavior. My own clinical experience tells me that there are many unique paths that patients take to arrive at suicidality—too many to be accounted for with such a simplistic model. I will also go way out on a limb here, and propose that a 30-minute interview by a well-trained clinician would be infinitely more effective in screening patients for suicide potential, more available to those in affected communities, and much less costly than screening patients with MRIs and/or PET scans.

Dr. Oquendo’s research pursuits seem to me emblematic of biological psychiatry’s clueless departure from clinical realities. I see it as fiddling with neuroimaging while Rome burns. People don’t typically become suicidal because something happens to their brain—it’s usually because something has happened to their life—fear, despair, anger, loss, or trauma. And psychiatry’s reigning biological model of treatment habitually glosses over such issues.

The Journey of Thomas Insel

The only major psychiatric figure I found who even hinted at the possibility that this was a failure of our profession was the one responsible for the second most significant psychiatric news story since the advent of Prozac—a story which likewise was never brought to the attention of the general public.

In 2002, Dr. Thomas Insel took the position of Director of the National Institute of Mental Health, better known as the NIMH. He had already established his reputation within the circles of neuroscience research by demonstrating the efficacy of clomipramine in treating obsessive-compulsive disorder—one of biological psychiatry’s more convincing successes, in my opinion—and by animal studies revealing oxytocin’s role in emotional bonding. As NIMH Director, Dr. Insel’s main claim to fame became his oversight of the infamous STAR*D study over the years that followed. “STAR*D” stood for Sequenced Treatment Alternatives to Relieve Depression, and it was Dr. Insel’s attempt to establish “precision medicine for psychiatry”—that is, an evidence-based model to evaluate the relative efficacy of various antidepressants, and to establish a common treatment protocol. This study followed over 4,000 patients at 41 clinical sites over the course of 7 years, at a cost of $35 million. It was unique not only in its scale, but also because it engaged “real world” patients who weren’t screened out for substance abuse issues, medical illness, or other contaminating influences. It was also used to collect genetic data, in the hope of identifying biomarkers to predict antidepressant response and tolerance. Its findings were frustratingly inconclusive, and yet entirely consistent with the experience of most psychiatric providers in the field—many patients dropped out of treatment, many took their medications inconsistently, and no single medication or combination of medications was found to be measurably better than any other regimen. The most convincing finding of the study was the revelation that any patient who fails to improve on one antidepressant is very likely to fail on another antidepressant as well—which was already common knowledge to clinicians.

Since then, Dr. Insel has been vilified by much of the psychiatric community for this undertaking. Instead of establishing a clinically verified protocol for antidepressant therapy, it demonstrated on a large scale just how clinically suspect our treatment model really is. A whole bunch of research money was spent, only to prove that nothing we’re doing does much good.  It’s widely regarded as biological psychiatry’s biggest blown opportunity to demonstrate its effectiveness in treating depression. But fortunately for the corporate overlords of my profession, it went unnoticed by the general public.

Since leaving NIMH in 2015, Dr. Insel has become involved in the development of a cellphone app to assess psychiatric risk—by using data from a patient’s electronic medical record, combined with monitoring of their personal electronic activities. This certainly seems far removed from the biologically-oriented ambitions of his past. In a 2017 interview for Wired magazine, he reflects:

“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs….I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness. I hold myself accountable for that.”

He’s certainly assuming a lot of personal responsibility for the failure of what was in fact an earnest effort to improve treatment—especially when you consider the entire industry he’s implicating in his description. But even more intriguing is his response in the New York Times, when asked for his opinion on the CDC Suicide Study:

“This is the question that I’ve been wrestling with: Are we somehow causing increased morbidity and mortality with our interventions? I don’t think so. I think the increase in demand for the services is so huge that the expansion of treatment thus far is simply insufficient to make a dent in what is a huge social change. In contrast to homicide and traffic safety and other public health issues, there’s no one accountable, no one whose job it is to prevent these deaths—no one who gets fired if these numbers go from 45,000 to 50,000. It’s shameful. We would never tolerate that in other areas of public health and medicine.”

Since leaving the NIMH, Dr. Insel has steadfastly avoided responding to his critics. But his enigmatic comments here suggest that he might be gently trolling our profession in the wake of these results—dropping hints that psychiatry might be to blame, and then walking it back to cover his tracks.

Bringing Psyche Back to Psychiatry

What is the highest concern of any medical specialty? Well, all medical doctors swear to some version of the Hippocratic Oath, the spirit of which is popularly summed up as “first, do no harm.” More people have been treated than ever before—and yet more people are dying. This ugly truth suggests that we may be unwittingly doing harm—but our profession appears to have no desire to explore that possibility. It’s an established fact that antidepressant medications can paradoxically increase the risk of suicidal thinking and behavior in some patients, particularly adolescents and young adults, for whom they carry a black-box warning. I also worry about the potential negative impact of labeling a patient with a psychiatric diagnosis, or glibly attributing depression to a mythical “chemical imbalance”—particularly in impressionable young patients.

Even if we’re not causing harm, it should be incumbent upon psychiatry to do what all other medical specialties make every effort to do—to make damn sure nobody dies from our diseases. And if a proliferation of medications is not doing that job, then we should be looking hard for other techniques to do so.

As I see it, this CDC study is calling our profession to task. We work in a specialty that treats the most complicated organ system in the human body, the brain-mind, and it’s our job as physicians to reduce morbidity and mortality. We’ve chosen to neglect the complexity of that task—settling for a simplistic treatment model that sells pharmaceutical products, promises a quick fix for complicated problems, and makes us psychiatrists feel more like “real doctors”—but saves fewer lives. This is a public health emergency in our own territory, and preventing suicide is most certainly our business. If we’re not going to assume the responsibility for rigorously combating suicide, then we have no claim to leadership in the field of mental health.

Dr. Daniel Carlat of Tufts University has been advocating reform of contemporary psychiatry for over a decade—and is best known for publishing the Carlat Psychiatry Report, a medication newsletter combating commercial bias in drug research. He’s courted controversy within the psychiatric community by testifying in favor of licensing psychologists (with appropriate training) to prescribe psychiatric medications. This is strongly opposed by psychiatrists, despite the fact that physician assistants and nurse practitioners have been prescribing psychiatric medications for many years with only paramedical training, rather than a full medical degree, and without any significant opposition. Dr. Carlat feels patients would best be served by a single provider who is versed in both medication and psychological treatments—a sort of “one-stop shopping” model of care, more convenient and available than what exists now. Psychiatry’s vociferous opposition to psychologists having this privilege seems transparently motivated by a justifiable fear—that because of their greater expertise in psychotherapy, psychologists might just do our job better than we do.

I support Dr. Carlat in this cause, and the evolution of psychiatry into a more inclusive and available model of care—a sort of primary psychiatric care provider, offering screening and treatment that is less expensive, more comprehensive, and more available than it is today. Not because it would be good for my profession, but because it would be better for people in need. In response to the CDC study, Dr. Christine Moutier, Medical Director of the American Foundation for Suicide Prevention, notes that, “We need to be teaching people how to manage breakups, job stresses. What are we doing as a nation to help people to manage these things? Because anybody can experience those stresses.” Yes, even people with perfectly normal brains.

We Need a New Treatment Model

In 1808, a German physician named Johann Christian Reil coined the term “psychiatry,” the Greek roots of which literally translate to “medical treatment of the soul.” It’s an ironically romantic term for what our profession has become today. It’s also a paradox of sorts—the application of secular technologies to heal something that most of us see as ethereal, or even sacred.

This paradox has bedeviled psychiatry throughout its history—two competing schools of thought battling to define our discipline, and our role as healers. One is biologically oriented, focused on the anatomy and physiology of the brain, and wedded to more conventionally medical interventions. The other has a cognitive orientation, focused on understanding and treating that abstract entity known to us as the mind. Over the past 40 years psychiatry has been increasingly dominated by the biological school—initially triggered by significant technological breakthroughs in our understanding of brain physiology, but subsequently hijacked by a corporatist alliance of the insurance industry, hospital industry, and Big Pharma—and then legitimized by the purchased collusion of academic psychiatry. All this has led us to our current resting place—complete neglect of the “soul” that gave psychiatry its name.

These pendular shifts in orientation haven’t occurred because either school of thought has been proven to be more clinically valid. They’ve happened when one school becomes more marketable than the other.  They are a natural consequence of the essential duality of the brain-mind, and our extremely limited understanding of its physiology—creating an academic environment that lends itself to a “gold rush” mentality. The public greatly overestimates the amount of hard knowledge we have, because we’re always overselling ourselves and our chosen tools. Biological psychiatry is now a brand, just like Freud used to be a brand—but now we have a wider array of products to sell.

Neither school of thought has been that successful at conquering mental illness. If we’re ever going to successfully treat psychiatric disorders, we’re going to have to acknowledge our need for a complete understanding of its organ system—which for each side means a better understanding of what those other guys know. I think this ingrained competition for academic and market dominance has blinded both sides to an obvious truth: Psychiatric disease by its very nature is eclectic disease—and its most effective treatment invariably calls for a truly eclectic treatment model.

Our Own Addiction

Psychiatric disorders are in fact a greater mystery than we can truly grasp. This study makes painfully clear how impotent our current biological model is in addressing the problem of suicide, and it implores us to consider a more sensitive and psychological approach in dealing with this critical issue. 

Everyone who’s worked in the treatment of alcoholics is familiar with the term “rock bottom”—that point in an alcoholic’s life when they’ve done enough destruction to their own life, and inflicted enough pain on themselves and others that their denial is finally overcome, and they finally recognize that their only path forward is to give up demon rum. In the wake of this CDC study, I think this would be a good time for psychiatry to take a hard look at what it’s been doing for the past few decades—what good we’ve done for our patients, and what good we haven’t done–and call it rock bottom. The alternative is blindly charging onward in this same direction, and seeing just how high the suicide rate can go. We’ve been drunk on the biological model for too long. It’s not working out well at all—and it’s high time we took the cure.
