Multiple factors contribute to the overprescribing of antidepressants (especially in mild and subthreshold depression), including the adoption of managed care plans, consumerism, the unsubstantiated public (and professional) belief that depression is caused by a chemical imbalance that can be fixed with a pill, physicians’ time constraints, restricted access to or unavailability of non-pharmacological treatments, depression awareness campaigns, disease marketing and aggressive promotion of pharmaceuticals, an unevidenced (false) conviction that antidepressants work in mild and subthreshold depression, overestimation of antidepressants’ benefits and underestimation of harms due to selective reporting of favourable trial results and systematic methodological biases in antidepressant trials, and physicians’ overreliance on pharmacological treatments coupled with a reluctance to accept non-pharmacological treatments as effective and safe alternatives [9, 11, 13, 14, 25, 26, 57, 134, 188, 368, 369, 623, 792, 793]. These factors were discussed in detail in the chapters “The transformation of depression” and “Flaws in antidepressant research”. In this chapter I will address a pernicious and pervasive problem that is closely related to, and often a driver of, the factors detailed above: conflicts of interest in medicine.

In his fierce but legitimate critique of evidence-based medicine, leading medical researcher Dr. John Ioannidis, professor at Stanford University, stressed:

“As EBM [evidence-based medicine] became more influential, it was also hijacked to serve agendas different from what it originally aimed for. Influential randomized trials are largely done by and for the benefit of the industry. Meta-analyses and guidelines have become a factory, mostly also serving vested interests. National and federal research funds are funneled almost exclusively to research with little relevance to health outcomes. We have supported the growth of principal investigators who excel primarily as managers absorbing more money. Diagnosis and prognosis research and efforts to individualize treatment have fueled recurrent spurious promises. Risk factor epidemiology has excelled in salami-sliced data-dredged articles with gift authorship and has become adept to dictating policy from spurious evidence. Under market pressure, clinical medicine has been transformed to finance-based medicine. In many places, medicine and health care are wasting societal resources and becoming a threat to human well-being”. [377]

To be clear, this is not just the opinion of a dissenting academic; these are established facts, consistently supported by compelling scientific evidence [57, 149, 592, 674, 767, 772, 794–799]. “Moral arguments for transparency aside, there is little debate that relevant financial or other professional and intellectual interests can, and have, distorted medical research, education, guidelines, and practice”, recently wrote the editor in chief of the British Medical Journal and the director of the Centre for Evidence-Based Medicine at the University of Oxford [800]. In the following, I will therefore outline the various conflicts of interest that have corrupted medical research, education, and practice, affecting drug regulators, academic departments, researchers, and practitioners alike.

Let’s start with a short definition of conflicts of interest in medicine. “Conflict of interest arises when an activity is accompanied by a divergence between personal or institutional benefit when compared to the responsibilities to patients and to society; it arises in the context of research, purchasing, leadership, and investments. Conflict of interest is of concern because it compromises the trust of the patient and of society in the individual physician or the medical center” [801]. Dr. Howard Brody offers the following criteria for a conflict of interest: “1. The physician has a duty to advocate for the interests of the patient (or public). 2. The physician is also subject to other interests—her or his own, or those of a third party. 3. The physician becomes a party to certain social arrangements. 4. Those arrangements, as viewed by a reasonable onlooker, would tempt a person of normal human psychology to neglect the patient’s/public’s interests in favor of the physician’s (or third party’s)” [802]. Although financial conflicts of interest, that is, physicians’ financial relationships with the biomedical industry, are presumably the most pervasive and best-researched form of conflict of interest in medicine, others should not be ignored. Non-financial conflicts of interest include, among others, personal or institutional expectations and demands and the desire for prestige and career progression [803].

In biomedical, social, and psychological research, it is well established that, to advance their careers, junior scientists are pushed (incentivised) to produce spectacular and novel research findings, as much and as fast as possible, because promotion and tenure in academia are awarded largely on the basis of the number of high-impact publications and the sum of grant money acquired [42, 373, 804]. These incentives impose potent conflicts of interest, as they force researchers to value quantity over quality and may compel them to dredge and misrepresent their data and to selectively report favourable research findings. As a result, most research findings do not replicate and are presumably false or massively exaggerated [46, 48, 50, 493, 496, 510, 805].

In psychotherapy, most researchers strongly identify with a particular school of therapy, for example psychodynamic therapy or cognitive-behavioural therapy. As a consequence, researchers devoted to one particular type of therapy have systematically overestimated the efficacy of their own therapy and downplayed the efficacy of rival therapies, a bias known as the allegiance effect [806, 807]. In medicine, this motivation to advance the interests of a specific professional society or association is better known as the defence of (or abidance by) guild interests [808, 809]. A critical stance towards treatments and/or disease concepts established within a medical society or association (i.e., the guild) therefore creates a professional conflict of interest, as individuals may risk their reputation and position among colleagues. As aptly stated by Dr. Krumholz,

“Unfortunately, our profession does not often reward those who question dogma. In fact, there are many episodes throughout the history of medicine and science in which truth was resisted and dogmatic beliefs, however poorly supported by evidence, were imposed by those in a position to do so … We are trained to defer to authority, not to question it … Those who ask difficult questions or challenge conventional wisdom are often isolated. They may find few opportunities to speak and their writings may not be welcome. Compliance with normative behavior may be forced by fear of recrimination. In some cases, junior faculty may fear that support from mentors will be withdrawn or promotions denied”. [810]

Iatrogenic harm, that is, harm caused to patients by prescribed medical treatments, is one area of medicine that is strongly affected by professional conflicts of interest. As a case in point, let us have a quick look at a recent study by Bennett and colleagues [811]. Among other things, they examined the repercussions (sanctions) clinicians experienced when they published reports of very serious adverse reactions to blockbuster drugs (and one medical device). Of 18 clinicians who alerted professionals and the public to very serious adverse reactions, 11 (61%) experienced personal or professional negative consequences. One professor of medicine lost an academic medical position, one clinician was sued by a pharmaceutical company, five clinicians reported receiving personal threats from executives of pharmaceutical manufacturers, and eight clinicians reported that their integrity and reputation were publicly disparaged.

In what follows I will examine in more detail how defensively (and ignorantly) the healthcare sector habitually responds when professionals warn of harms from medical interventions. I will also detail the denial and minimisation of harm that patients typically face when they report damage from medical interventions (i.e. iatrogenic harm), which I understand as a consequence of professional conflicts of interest in medicine.

Denial and Minimisation of Harm

Professional (or guild) interests have resulted not only in the marginalisation and discrediting of professionals who warned of serious adverse reactions to established treatments [810–812] but also in a pernicious culture of dismissing patient reports of iatrogenic harm. Very recently, the Cumberlege review exposed pervasive and alarming flaws in how the UK healthcare system responds to patients’ reports of harm from drugs and medical devices [813]. According to the author, “We have found that the healthcare system—in which I include the NHS [National Health Service], private providers, the regulators and professional bodies, pharmaceutical and device manufacturers, and policymakers—is disjointed, siloed, unresponsive and defensive. It does not adequately recognise that patients are its raison d’etre. It has failed to listen to their concerns and when, belatedly, it has decided to act it has too often moved glacially” [813]. The review provides clear evidence that the medical profession too often shows an alarming disregard for its fundamental ethical principle—first do no harm. According to Helen Haskell, a patient safety advocate commenting on the Cumberlege review,

“Perhaps most striking was the testimony from hundreds of patients reporting lack of informed consent for their initial treatment, followed by years of dismissal by clinicians and regulators who did not want to associate life altering symptoms or injured children with their medical interventions … The review panel found that healthcare providers’ dismissive attitude toward patients was underpinned by a reluctance in all parts of the system to collect evidence on potential harms, by a lack of coordination that would allow clinicians and agencies to interpret and act on that information, and by a culture of denial that failed to acknowledge harm and error, impeding learning and safety”. [814]

The Cumberlege review is no exception. A survey conducted by ProPublica of over 1000 US patients who experienced iatrogenic harm yielded similar results [815]. Only 9% of harmed patients who completed the questionnaire said that the hospital (or other treatment facility) voluntarily acknowledged the harm; in 10% of cases the facility acknowledged the harm only under pressure, and nearly all other patients said that the facility ignored them (44%) or denied responsibility for the harm (31%). The situation was no better at the level of individual healthcare providers: only 13% voluntarily acknowledged the harm, 9% acknowledged it under pressure, and almost all other providers ignored the complaint (40%) or denied responsibility (35%). The authors thus concluded, “Many patients described feeling victimized a second time by the way they were treated after experiencing harm. After placing trust in caregivers, they were surprised to encounter stonewalling, denial and blame” [815].

Are these accusations warranted? In my view, and based on my personal experience (persistent problems after urogenital surgery as a child), yes, they are. But don’t just take my word for it. Instead, let us scrutinise the scientific evidence and see what the literature tells us about the denial and minimisation of iatrogenic harm. For instance, in a survey of patients with self-reported adverse drug reactions to statins, physicians were more likely to deny than to affirm a connection between the reported adverse events and the statin. According to the study authors, “Rejection of a possible connection was reported to occur even for symptoms with strong literature support for a drug connection, and even in patients for whom the symptom met presumptive literature-based criteria for probable or definite drug-adverse effect causality” [816]. This denial of adverse drug effects has far-reaching consequences because it results in significant underreporting and thus belated formal recognition of (and reaction to) drug-related harms. For instance, in a survey of clinicians investigating 65 suspected adverse drug reactions, the authors stressed that not one event was ever reported to an external drug safety (pharmacovigilance) agency [817]. Consistent with this, a systematic review found that, on average, only 6% of all adverse drug reactions were formally reported, yielding an underreporting rate of 94% [818]. The authors thus concluded, “This systematic review provides evidence of significant and widespread under-reporting of ADRs [adverse drug reactions] to spontaneous reporting systems including serious or severe ADRs” [818].

Perhaps you may argue that adverse drug reactions are very rare and thus a minor issue for public health. But that’s wrong [376, 678]. Adverse drug reactions account for about 5–7% of all hospital admissions, most of which are deemed avoidable [723, 819]. According to a meta-analysis, 7% of all hospitalised patients experienced an adverse drug reaction, and the rate of fatal adverse drug reactions was 0.3%, that is, 3 in every 1000 patients, “making these reactions between the fourth and sixth leading cause of death” [820]. That is, although prescription drugs undeniably can be extremely helpful and lifesaving, quite often they can also be very harmful and, sadly, kill many people unnecessarily. The massive underreporting of severe harm from drugs is therefore a serious public health issue, since pharmacovigilance (drug safety evaluation) systems depend on full reporting of suspected adverse drug reactions. Because suspected adverse drug reactions are rarely reported, drug regulators all too frequently fail to detect and respond to drug safety issues in a timely manner [171, 821].
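To make the public health consequences of such underreporting concrete, the short sketch below works through the arithmetic. Only the roughly 6% reporting rate is taken from the systematic review cited above; the incidence of one serious reaction per 1,000 treated patients and the figure of 500,000 exposed patients are hypothetical values chosen purely for illustration.

```python
# Minimal sketch of why a 94% underreporting rate cripples pharmacovigilance.
# The ~6% reporting rate is taken from the systematic review cited above;
# the incidence and exposure figures are hypothetical, for illustration only.

reporting_rate = 0.06        # ~6% of adverse drug reactions are formally reported
incidence = 1 / 1000         # hypothetical: 1 serious ADR per 1,000 treated patients
exposed_patients = 500_000   # hypothetical number of treated patients

true_events = exposed_patients * incidence        # 500 actual serious ADRs
expected_reports = true_events * reporting_rate   # ~30 reports reach the regulator

print(f"True serious ADRs:          {true_events:.0f}")
print(f"Reports reaching regulator: {expected_reports:.0f}")
```

Under these illustrative assumptions, a regulator searching for a safety signal would see only a few dozen scattered reports instead of the several hundred events that actually occurred, which helps explain the delayed detection described above.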

It has also been shown that pharmaceutical companies have deliberately ignored, misrepresented, and underreported suspected adverse drug reactions [376, 458, 822]. Drug regulators rely heavily on pharmaceutical companies to report suspected adverse drug reactions promptly, objectively, and transparently. If companies fail to do so, dangerous (harmful) treatments may remain on the market for too long, causing tremendous damage to hundreds of thousands of patients [678, 811]. As Dr. Abraham noted as early as 2002, “It is demonstrated that a pharmaceutical firm’s commitment to search effectively for evidence against the safety of its own product in order to confirm doctors’ warnings can have severe limitations” [823]. He was tragically proven right in various high-profile cases such as the Vioxx scandal, in which the manufacturer Merck deliberately obscured a clear harm signal for its blockbuster drug rofecoxib (Vioxx) and withheld important safety data from the FDA [797, 824]. Rofecoxib was belatedly withdrawn from the market by Merck in late 2004 for causing major adverse cardiovascular events (e.g., stroke, myocardial infarction) and increasing all-cause mortality, but internal documents released through litigation revealed that the company had suspected such a safety issue since the 1990s and had known for certain about the serious cardiovascular harm since mid-2001, that is, long before it officially acknowledged the problem [797, 824]. Many thousands of lives could have been saved had the company not systematically engaged in “deflection, silence, denial, suppression, and lying to physicians and the public” [678].

But inadequate post-marketing surveillance is just one among many issues in drug safety regulation. As summarized by Dr. Furberg and colleagues in an article published in the top-tier journal Archives of Internal Medicine,

“The current Food and Drug Administration (FDA) system of regulating drug safety has serious limitations and is in need of changes. The major problems include the following: the design of initial preapproval studies lets uncommon, serious adverse events go undetected; massive underreporting of adverse events to the FDA postmarketing surveillance system reduces the ability to quantify risk accurately; manufacturers do not fulfill the majority of their postmarketing safety study commitments; the FDA lacks authority to pursue sponsors who violate regulations and ignore postmarketing safety study commitments; the public increasingly perceives the FDA as having become too close to the regulated pharmaceutical industry; the FDA’s safety oversight structure is suboptimal; and the FDA’s expertise and resources in drug safety and public health are limited”. [821]

This failure to adequately assess the safety of drugs is well evidenced by the fact that, despite clear harm signals detected during the review of new drug applications, drug regulators have approved several drugs with questionable safety profiles. As a consequence, a number of these drugs later had to be withdrawn from the market because they had caused too much serious harm [148, 825]. According to a comprehensive analysis by Lasser and colleagues, 8% of all drugs approved by the FDA between 1975 and 1999 acquired one or more serious safety warnings (referred to as black box warnings) and 3% were withdrawn from the market. The probability of acquiring a serious safety warning or being withdrawn from the market within 25 years of approval was a staggering 20% [826]. These alarming figures were consistently replicated in a more recent study focusing on drugs approved by the Canadian drug regulator between 1995 and 2010 [827].

It is also worth noting that antidepressants are not exempt from belatedly detected serious safety issues requiring their withdrawal from the market. Examples include nomifensine (introduced in 1976, withdrawn in 1986 due to haematological effects), zimeldine (introduced in 1982, withdrawn in 1983 due to peripheral neuropathy; never approved in the US), and nefazodone (introduced in 1993, withdrawn in 2003 due to hepatotoxicity) [148, 825]. In the remainder of this chapter I will focus in detail on professional conflicts of interest in psychiatry in relation to antidepressants and psychiatric drug treatment in general.

Psychiatry Comes to the Defense of Antidepressants

“It is painful to discover how many lives have been harmed and harmed badly when psychiatry is done badly. Psychiatric diagnosis at its worst leads to psychiatric treatment at its worst, and together the combination is a recipe for disaster. The casualties are a living and much-needed rebuke to the field and provide the inspiration and passion for the sizable antipsychiatry movement. Psychiatry must learn from its bad outcomes and take very seriously the often well-deserved attacks of its critics”, wrote Dr. Allen Frances, professor of psychiatry and chair of the DSM-IV task force, in his book Saving Normal [414]. He wrote these lines because he knew, from first-hand experience, that psychiatry usually does a poor job of responding adequately to criticism of careless psychiatric diagnosis and indiscriminate drug treatment. Dr. Peter Gotzsche went one step further and maintained that the minimisation of drug-related harms in psychiatry amounts to “organised denial” [147]. But why is that?

Drugs are the mainstay of contemporary psychiatry in both research and practice [392, 394, 399, 828]. Psychiatric drugs are the first-line treatment for almost all mental disorders; they spurred the biological revolution, form the foundation of biomedical models, helped to consolidate psychiatry as a medical specialty, and secured the profession generous financial support from the pharmaceutical industry. In short, “Drugs, of course, were the centerpiece of the new [psychiatric] era” [394]. Non-pharmacological interventions typically play a subsidiary role, both in research and practice, and are often considered second-line or adjunct treatments despite proven efficacy and safety as first-line therapies. In view of the fundamental importance of medication in psychiatry, challenging the overreliance on drugs, as well as their efficacy and safety, understandably threatens psychiatry’s professional (guild) interests [29, 30, 829]. While psychiatrists hardly respond to unfavourable evaluations of psychosocial interventions, they tend to turn out in force when drugs are the target of criticism. That is, whenever researchers or the media question the safety and/or efficacy of popular psychiatric drugs like antidepressants and antipsychotics, before long various eminent psychiatrists step in to defend the drugs, quite often harshly, patronisingly, and with condescending authority [29, 830, 831].

For instance, when in 2017 the UN Special Rapporteur on the right to health, Dr. Dainius Puras, himself a trained psychiatrist, criticised the excessive frontline use of psychiatric drugs and the overreliance on biomedical models in mental health services, two psychiatrists published a fierce reply titled “Responding to the UN Special Rapporteur’s anti-psychiatry bias” in the Australian and New Zealand Journal of Psychiatry. In the article, they alleged that “These arguments align with those of the global anti-psychiatry movement” [832]. Unfortunately, this article is no exception, and this pejorative label and similar ones have also been thrown at me several times. On social media, I have been fiercely attacked by various psychiatrists. I have been discredited and insulted, and called “anti-psychiatrist”, “anti-vaxxer”, “pill-shamer”, “ideologically biased partisan”, “flat-earth-believer”, and so on. Perhaps these psychiatrists are not aware that I have also written critical articles about the evidence base in clinical psychology and psychotherapy [55, 833]. Yet I have never been labelled “anti-psychology” or “anti-psychotherapy”.

The anti-psychiatry argument is very common in debates about the effectiveness of antidepressants and other psychopharmaceuticals (for example, see [22]), and of course it is a strawman that merely serves to stifle a much-needed discussion about the overdiagnosis and overprescribing of psychiatric drugs [834]. I am not aware that any academic who has written critically about psychiatric drug use, including, among others, Drs Moncrieff, Kirsch, Gotzsche, Munkholm, Glenmullen, Jakobsen, Plöderl, Davies, Read, Healy, Bschor, Fava, Cosgrove, Zito, Ioannidis, Warren, Summerfield, Jureidini, Timimi, Kinderman, and, of course, myself, identifies as anti-psychiatry. And even if some did, it would by no means invalidate their scientific arguments. Another malicious tactic, very popular during the 1990s but still prevalent today, used to delegitimise the arguments of critical authors is to associate dissenting views on psychiatric drugs with the Scientology sect [9, 399]. But this is such a ridiculous accusation that I do not want to comment on it further. Let’s instead focus on other (unscientific) accusations.

Another frequent argument is that authors with a critical stance towards antidepressants and the current drug-centred treatment paradigm for depression are mostly psychologists or doctors without specialised knowledge in psychiatry (for example, see [23]). This is again a strawman and has no bearing on the current debate. And it is a terribly flawed argument at that. First, and most importantly, even if most critics were psychologists and doctors without specialised knowledge in psychiatry, this would not invalidate their scientific arguments. Second, many (perhaps most) academics who have written critically about antidepressants (and other psychiatric drugs) are in fact psychiatrists (e.g. Drs Healy, Moncrieff, Steingard, Horowitz, Breggin, Timimi, Munkholm, Frances, Glenmullen, Bschor, Fava, Summerfield, Jureidini). Third, GPs (i.e., doctors without specialised knowledge in psychiatry) treat a much larger proportion of people with psychological problems than psychiatrists do and, as a group, they also prescribe many more antidepressants (and psychiatric drugs in general) than psychiatrists [575, 639, 725, 835]. Fourth, clinical psychologists are extensively trained in psychopathology and psychiatric nosology, and they often work in inpatient or outpatient psychiatric services, treating patients with mild to very severe mental health problems. Insinuating that they lack specialised knowledge in psychiatry (simply because they have no prescribing rights) is wrong, arrogant, and patronising, and possibly just another attempt to preserve the power imbalance in the mental health field.

When Dr. Irving Kirsch, professor of psychology at Harvard University, published his three seminal meta-analyses in 1998 [742], 2002 [836] and 2008 [198], demonstrating that antidepressants were only marginally better than placebo, there was a furious outcry from many eminent psychiatrists [11, 605]. An unprecedented media frenzy followed, often sensationalist rather than scientifically balanced and critical. Kirsch became something of an academic celebrity. In Kirsch’s own words, “Somehow, I had been transformed, from a mild-mannered university professor into a media superhero—or super villain, depending on whom you asked” [605]. For the media he was often a superhero, but for most psychiatric professionals he clearly was a super villain. Dr. Kirsch was fiercely attacked by various psychiatric organisations, their spokespersons, and many influential academic psychiatrists (foremost among them key opinion leaders, the product champions working for the pharmaceutical industry). In heated (and sometimes hateful) articles, various critics argued that his findings were biased, that he had applied flawed statistical methods, and that he had intentionally misinterpreted the data (see, for example [21, 837, 838]).

Perhaps the most furious response came in 2012 from the APA in the person of its then president-elect, Dr. Jeffrey Lieberman. “Dr. Kirsch is mistaken and confused, and he’s ideologically biased in his thinking. He is conducting an analysis and interpreting the data to support his ideologically biased perspective. What he is concluding is inaccurate, and what he is communicating is misleading to people and potentially harmful to those who really suffer from depression and would be expected to benefit from antidepressant medication. To say that antidepressants are no better than placebo is just plain wrong”, complained Dr. Lieberman in an interview with Medscape [830].

I received similar “feedback” on my research from the heads of the Swiss psychiatric association, but more on this below. For now, let’s stay with Kirsch, as his work on antidepressants was very influential. Still, I want to clarify a few things: I do not contend that Kirsch’s statistical analysis and data interpretation were free of inadequacies [839]. Personally, I also do not agree with him that antidepressants are merely active placebos (that is, placebos with physical side effects), for they certainly have psychotropic effects [276, 278]. However, it is debatable whether these psychotropic (“mind-altering”) effects clearly help to improve depression in most users [18, 274, 840]. Moreover, it is important to stress that Kirsch’s main finding, namely the small average treatment effect of about 2 points on the Hamilton Depression Rating Scale, corresponding to an effect size of about 0.3, was independently confirmed by many other research groups, including the FDA, and thus is certainly correct [10, 13, 17, 57, 175, 196, 739].
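For readers who wonder how a roughly 2-point drug–placebo difference on the Hamilton scale translates into an effect size of about 0.3: the standardised mean difference is simply the raw score difference divided by the pooled standard deviation of the outcome. The short sketch below assumes a pooled standard deviation of about 7 to 8 Hamilton points, a value typical of such trials but used here purely as an illustrative assumption, not a figure taken from Kirsch’s analyses.

```python
# Minimal sketch: converting a raw Hamilton (HDRS) score difference into a
# standardised mean difference (SMD = raw difference / pooled standard deviation).
# The pooled SD values are illustrative assumptions, not figures from the
# cited meta-analyses.

raw_difference = 2.0  # drug-placebo difference in HDRS points, as cited in the text

for pooled_sd in (7.0, 8.0):
    smd = raw_difference / pooled_sd
    print(f"pooled SD = {pooled_sd:.0f} HDRS points -> SMD = {smd:.2f}")
# prints SMD values of roughly 0.25-0.29, i.e. in the region of the ~0.3 cited above
```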

Criticism of statistical analyses and data interpretation is a crucial part of the scientific process, but I strongly object to accusations that Kirsch was driven by malicious motives. It is the offensive (and discrediting) way in which he was criticised that is absolutely inappropriate in scientific discourse. Unfortunately, ad-hominem attacks and personal insults are no rarity in psychiatry (and medicine in general). Another case in point is the furious attack on Dr. Peter Gotzsche, a high-profile medical researcher and co-founder of the Cochrane Collaboration, known for his critical stance on psychiatric drugs [147]. The provocative article was written by Dr. David Nutt (professor of neuropsychopharmacology at Imperial College London and former president of the European College of Neuropsychopharmacology; he has extensive financial ties to multiple pharmaceutical companies) together with various other leaders of British psychiatry. These co-authors included Dr. Guy Goodwin (professor of psychiatry at the University of Oxford; he also has extensive financial ties to multiple pharmaceutical companies), Dr. Dinesh Bhugra (professor of psychiatry at King’s College London), Dr. Seena Fazel (professor of psychiatry at the University of Oxford), and Dr. Stephen Lawrie (professor of psychiatry at the University of Edinburgh). Their article was published in the prestigious journal Lancet Psychiatry, and already in its title the authors insinuated that Gotzsche is ideologically biased, posing the rhetorical question “Attacks on antidepressants: signs of deep-seated stigma?” [22]. In the main text, the authors mockingly asked, “why would Professor Gøtzsche apparently suspend his training in evidence analysis for popular polemic?”. They concluded their critique by claiming that “extreme assertions such as those made by Prof Gøtzsche are insulting to the discipline of psychiatry and at some level express and reinforce stigma against mental illnesses and the people who have them. The medical profession must challenge these poorly thought-out negative claims by one of its own very vigorously” [22]. This is the kind of backlash (repercussion) academics receive when they write critically about psychiatric drugs. And now I’ll recount my own story.

My Personal Experience

The president of the Swiss psychiatric association is Dr. Erich Seifritz, professor of psychiatry at the University of Zurich and director of the Psychiatric University Hospital of Zurich (where I did my PhD and habilitation). He was (and presumably still is) dismayed by my research on antidepressants. He complained about me to Dr. Rössler, my former doctoral supervisor and co-author of many of my research papers (including a few papers on antidepressants). Dr. Rössler also happens to be the former director of the Psychiatric University Hospital of Zurich. In any case, Dr. Seifritz was concerned about two prospective observational studies I conducted with Dr. Rössler showing a prospective association between antidepressant use and worse long-term mental health outcomes, even when carefully controlling for treatment selection (for example depression severity, global functioning, comorbid anxiety disorders, etc.) [220, 841]. Dr. Seifritz also published a commentary on one of these studies, claiming that its methodology was terribly flawed and that, “Therefore, the paper is certainly misleading and, furthermore, potentially harmful” [842]. This strong accusation warrants some comments.

The methods we applied were not “terribly flawed”. In fact, we used state-of-the-art methodology for observational studies and rigorous statistical adjustments, controlling for many more potential confounders than many previous and subsequent studies did (see, for example [484, 843, 844]). That said, I do not contend that my studies prove a cause–effect relationship. It is an observed association, and we were very clear about that in the papers [220, 841]. I am also fine with being criticised. Debate and scrutiny are integral parts of the scientific process. I have written several comments on papers that I believe had serious methodological flaws and/or drew conclusions not supported by the data (see, for example [19, 845–847]). One study published in JAMA Psychiatry was even retracted by the authors after we had pointed out that their statistical model was inadequate and had thus produced false-positive results [848, 849].

In the case of Dr. Seifritz, however, I have the impression that he was primarily protecting guild interests. At regular intervals, in the media and in the scientific literature, he has defended the dominant drug-centred treatment paradigm in psychiatry. He is also a passionate promoter of antidepressants and earns a considerable personal income as a speaker and adviser for various pharmaceutical companies [850]. Between 2015 and 2019 alone, Dr. Seifritz received general (direct) payments from multiple pharmaceutical companies, including, among others, Janssen, Lundbeck, Servier, Eli Lilly, and Pfizer (all of them manufacturers of antidepressants), totalling 159,313 Swiss Francs (about 148,620 Euros) [851]. A detailed list of the industry payments to Dr. Seifritz can be accessed freely online at https://www.pharmagelder.ch/recipient/2590-Erich-Seifritz.html. And yet, in his critique of my antidepressant study, Dr. Seifritz asserted that his paper “was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest” [842]. Really? Since when are extensive financial relationships with antidepressant manufacturers not a “potential conflict of interest”, especially in an article concerned with the long-term outcome of antidepressant use? In other words, his conflict of interest declaration was factually wrong and thus a clear violation of publication ethics, for the International Committee of Medical Journal Editors (ICMJE) considers non-disclosure of conflicts of interest to be research misconduct [852]. But our disagreements did not stop there.

In late 2019, a leading Swiss newspaper, the Tagesanzeiger, printed a long interview with me [853]. In this interview I talked about the overdefinition and overdetection of depression (which is scientifically well established [117, 450, 481]). I also mentioned the modest efficacy of antidepressants and the high risk of adverse effects like sleep problems and sexual dysfunction (which is scientifically well established [10, 13, 140, 336, 854]). I talked about systematic method biases and selective reporting in antidepressant trials (which is scientifically well established [13, 57, 707]). The journalist also mentioned our meta-analysis on the suicide risk in antidepressant trials [332, 855], so I confirmed that there is mounting evidence that antidepressants may cause suicidal behaviour [328, 329, 715] and that antidepressants can even trigger suicidality in mentally healthy users [9, 325]. I further talked about physical dependence and withdrawal reactions from antidepressants (for which there is strong scientific evidence, even though mainstream psychiatry prefers the euphemistic term “discontinuation syndrome” and mistakenly claims that physical dependence, a prerequisite for withdrawal reactions, does not exist [344, 345, 347, 358]). I also mentioned that financial conflicts of interest are pervasive in psychiatry and general medicine and that they systematically bias the scientific evidence in favour of drugs (which is scientifically well established [592, 772, 794, 856]). And, finally, I talked about the chemical imbalance theory of depression, which has never been proven and is widely considered disconfirmed (as most experts in psychopharmacology agree [26, 595, 624]).

A few days later I attended the annual meeting of the German psychiatric association in Berlin to present a new research paper about flaws and inconsistencies in depression treatment guidelines [259]. This conference is also attended by many psychiatrists and other mental health professionals from Switzerland. There I learned from a colleague from the Psychiatric University Hospital of Zurich that Dr. Seifritz was slightly annoyed with me (to put it politely) because of my interview in the Tagesanzeiger. Back in Zurich, before long, the Tagesanzeiger sent me an as yet unpublished reply, or rather a complaint about my interview, signed by the heads of the Swiss psychiatric association, including Dr. Seifritz. The letter was titled “We oppose false claims that unsettle ill people” (my own translation) and basically stated, often in a condescending tone and with several strawman arguments, that everything I said in the interview was utterly wrong, misinformed, and misleading. The newspaper asked me to respond to these serious accusations, which questioned my scientific expertise and integrity, and so I wrote a comprehensive rebuttal in which I meticulously demonstrated that everything I had said in the interview was supported by robust scientific evidence, as referenced above.

To illustrate how absurd some of the complainants’ arguments were, here I present three examples. First, Seifritz and colleagues claimed that there is a complete lack of evidence that GPs overdiagnose depression, when in fact even GPs admit that depression is overdetected [452]. Moreover, the largest meta-analysis on this issue, published in the leading medical journal Lancet, clearly confirmed that GPs make far more false-positive depression diagnoses (misidentifying non-depressed cases as depressed) than false-negative depression diagnoses (missing depressed cases) [450]. So GPs do overdiagnose depression; this is an established scientific finding. Second, Seifritz and colleagues claimed that there is not one industry-sponsored academic chair in Switzerland, when in fact an independent investigation conducted in 2016 revealed over 300 contracts between the industry and several Swiss universities, most of which involved sponsored academic chairs [857]. For example, Interpharma, a Swiss pharmaceutical industry association, sponsored the academic chair of health economics held by Professor Stefan Felder at the University of Basel. But that’s not all. Interpharma was also allowed to have a say in the nomination of the chair and rewarded Professor Felder with a signing bonus of 300,000 Swiss Francs (about 280,000 Euros). So once again, Seifritz and colleagues made an evidently false claim. Third, Seifritz and colleagues denied that antidepressants cause physical dependence. Obviously, they were ignorant of the fact that physical dependence arises because the body (including the brain) undergoes adaptations to the presence of a psychotropic drug [343], for example serotonin receptor downregulation following SSRI use, as demonstrated in a placebo-controlled neuroimaging study [342]. That is, withdrawal syndromes after drug discontinuation can only occur when the body has physiologically adapted to drug exposure, which is the very definition of physical dependence [344, 858]. Their other arguments were mostly strawmen that have no bearing on my points (e.g. “GPs also prescribe antidepressants for indications like anxiety disorders, sleep problems, eating disorders, pain, and pre-menstrual complaints”), which is why I will not go into further detail.

I sent my rebuttal of their reply (or rather complaint) to the newspaper and asked the editor to publish it alongside Seifritz and colleagues’ letter so that readers could decide for themselves whose claims were misinformed and unevidenced. Unfortunately, for some reason unknown (at least to me), Seifritz and colleagues withdrew their reply, so neither letter was ever published. Instead, a few weeks later the newspaper published an interview with Dr. Seifritz, titled “Antidepressants work”, in which he once more falsely claimed that antidepressants are effective in mild depression and that they protect against suicidality [859]. As supporting evidence for the former claim, Seifritz cited a meta-analysis that looked at the efficacy of antidepressants in people with moderate-to-severe depression [258]. Apart from the fact that inferring efficacy in patients with mild depression from results in patients with moderate or severe depression is poor scientific reasoning, it is also worth pointing out that the reported treatment effect in patients with moderate to severe depression in said study was so small that it is of questionable practical relevance to the average patient [20]. As detailed in this book, the best scientific evidence available unequivocally shows that the efficacy of antidepressants has not been established in mild, minor, and subthreshold depression [152, 256, 261–264]. For this reason, most treatment guidelines, including those of the Swiss psychiatric association co-authored by Seifritz [860], do not recommend antidepressants as first-line treatment in this patient population [181, 232, 233]. And regarding suicidality-protective effects, there is clearly a lack of conclusive evidence to support such a claim, but mounting evidence to the contrary, in particular in adolescents and young adults [292, 321, 323, 324, 329, 330, 333, 715, 861–864].

Seifritz was also asked about common side effects of antidepressants such as sexual dysfunction and sleep problems. He claimed that antidepressants have few side effects and that adverse events are often caused by the underlying depression, not by the drug. The journalist rightly objected that this explanation is ruled out in a placebo-controlled trial, where side effects are established based on the difference between the placebo group and the antidepressant group, so the influence of the underlying depression is precluded (since, of course, people in the placebo group also have depression). Seifritz then responded that these between-group differences are small [859]. To put that bold claim into perspective: in antidepressant trials where sexual dysfunction was systematically assessed, the rate of treatment-emergent sexual dysfunction in placebo groups is about 12%, as compared to 70% to 80% in the drug groups of many popular SSRIs and SNRIs. This produces an absolute risk difference of 58–68% and a roughly 6-fold increased risk causally related to the pharmacological action of antidepressants [336]. With all due respect, this is not a small effect; it is a very large effect, and claiming otherwise is disingenuous. In fact, it is the strongest effect the SSRI and SNRI antidepressants have. And this effect is much larger than the therapeutic benefit antidepressants may provide, which, according to response rates, is about 40% in placebo groups as compared to 50% in antidepressant groups, thus producing an absolute risk difference of 10% and a rate ratio of merely 1.25 [140]. Compared to the absolute risk difference (58–68%) and the rate ratio (about 6) for treatment-emergent sexual dysfunction, this is a trivial effect. Finally, with respect to efficacy, Dr. Seifritz reluctantly admitted that the average treatment effect in clinical trials is small, but he confidently asserted that in real-world routine practice the treatment effect would be much larger because clinicians flexibly adjust the dose when patients do not respond adequately to the drug [859]. This is another false claim, for several meta-analyses of randomised controlled trials have consistently shown that adjusting the dose (mostly dose increases) does not provide any benefit compared with a fixed low or medium dose [268, 269, 272].
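For readers who want to retrace the arithmetic in the preceding paragraph, the sketch below recomputes the absolute risk differences and risk ratios from the event rates cited above (about 12% versus 70–80% for treatment-emergent sexual dysfunction, and about 40% versus 50% for therapeutic response). It is a minimal illustration of the two effect measures, not an analysis of trial data.

```python
# Minimal sketch: absolute risk difference (ARD) and risk ratio (RR) computed
# from the event rates cited in the text; no trial data are analysed here.

def ard_and_rr(rate_drug: float, rate_placebo: float) -> tuple[float, float]:
    """Return (absolute risk difference, risk ratio) for two event rates."""
    return rate_drug - rate_placebo, rate_drug / rate_placebo

# Treatment-emergent sexual dysfunction: ~12% on placebo vs. 70-80% on SSRIs/SNRIs
for rate_drug in (0.70, 0.80):
    ard, rr = ard_and_rr(rate_drug, 0.12)
    print(f"sexual dysfunction ({rate_drug:.0%} vs. 12%): ARD = {ard:.0%}, RR = {rr:.1f}")
# -> ARD of 58% and 68%, RR of roughly 5.8 to 6.7 (i.e. about 6)

# Therapeutic response: ~40% on placebo vs. ~50% on antidepressants
ard, rr = ard_and_rr(0.50, 0.40)
print(f"response (50% vs. 40%): ARD = {ard:.0%}, RR = {rr:.2f}")
# -> ARD of 10%, RR of 1.25
```

Seen side by side, the adverse-effect figures clearly dwarf the benefit figures, which is the point made above.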

By now you will certainly have an impression of how researchers who question the benefit–harm ratio of antidepressants are treated by academic leaders and how these eminent professors try to correct the record in the scientific literature and the media. I can assure you that these discrediting attacks keep some researchers from addressing critical research topics and asking inconvenient questions. Several psychiatrists have told me in private that they doubt whether the benefits of antidepressants outweigh their harms, especially in people with non-severe depression, but that they do not dare to discuss this with their colleagues. Renowned professors like Peter Gotzsche, David Healy, and Irving Kirsch will obviously not surrender to senior psychiatric academics despite the serious charges levelled against them (for details, see [9, 11, 147]), but many academics, especially junior researchers, may be intimidated when confronted with such hostile responses and thus prefer to remain silent so as not to jeopardise their professional careers [810]. Thus, deliberately or not, such furious attacks silence dissenting voices and result in scientific censorship. I will now go into more detail on how the scientific discourse has evolved in two controversial areas of antidepressant safety, namely physical dependence and withdrawal, and treatment-emergent suicidality.

Physical Dependence and Withdrawal Reactions from Antidepressants

Shortly after the introduction of the tricyclic antidepressants into clinical practice around 1960, case reports alerted practitioners and researchers that severe withdrawal symptoms could occur after discontinuation of the drugs [774, 775]. It was also proposed that withdrawal syndromes were due to neurophysiological adaptations following prolonged drug exposure [346]. Unfortunately, this serious issue has remained largely ignored and poorly understood to this day, but there can be little doubt that antidepressant withdrawal symptoms are caused by neurophysiological adaptations, including downregulation and desensitisation of monoamine receptors [865], a pathomechanism also subsumed under the model of oppositional tolerance [223, 351].

Experts in addiction medicine have long recognised that neurophysiological adaptation to a substance is the defining feature of physical dependence and that withdrawal syndromes (including rebound disorders) resulting from physical dependence can occur with virtually any substance that acts on the central nervous system [344, 858]. According to a consensus statement from the American Academy of Pain Medicine, the American Pain Society, and the American Society of Addiction Medicine, “Physical dependence is a state of adaptation that is manifested by a drug class specific withdrawal syndrome that can be produced by abrupt cessation, rapid dose reduction, decreasing blood level of the drug, and/or administration of an antagonist” [866]. The National Institute on Drug Abuse likewise states, “Dependence means that when a person stops using a drug, their body goes through ‘withdrawal’: a group of physical and mental symptoms that can range from mild (if the drug is caffeine) to life-threatening (such as alcohol or opioids, including heroin and prescription pain relievers). Many people who take a prescription medicine every day over a long period of time can become dependent; when they go off the drug, they need to do it gradually, to avoid withdrawal discomfort. But people who are dependent on a drug or medicine aren’t necessarily addicted” [867].

Addiction, by contrast, “is characterized by behaviors that include one or more of the following: impaired control over drug use, compulsive use, continued use despite harm, and craving” [866]. Therefore, “For drugs not associated with abuse potential, an individual may still develop dependence; but again, this would not be classified as an addiction” [344]. And according to the National Institute on Drug Abuse, “a person can be dependent on a drug, or have a high tolerance to it, without being addicted to it” (emphasis in original) [867]. So the distinction between physical dependence and addiction is conceptually important, but in everyday language the two terms are often used interchangeably. Complicating matters further, some patients may say that they feel addicted to a drug when they are basically referring to physical dependence and withdrawal. According to the Cambridge Dictionary, addiction means “an inability to stop doing or using something, especially something harmful”, which is also compatible with the notion of physical dependence. It is thus understandable that some patients describe dependence and the occurrence of severe withdrawal syndromes as addiction.

But don’t blame the patients for their use of language. Even physicians, including psychiatrists, often get it wrong. The APA and its appointed experts are presumably among the worst offenders. The confusion about dependence, withdrawal, and addiction is nicely illustrated by the various revisions of the APA’s diagnostic manual of mental disorders. According to the DSM-III, the occurrence of a withdrawal reaction, that is, a drug-specific syndrome following cessation or dose reduction, was sufficient to diagnose (physical) dependence. That definition of dependence was fundamentally changed with the introduction of the DSM-III-R in 1987 [858]. In the new diagnostic manual, a withdrawal reaction was no longer sufficient to diagnose dependence; behavioural symptoms were now also required (e.g. much time spent obtaining the drug, uncontrolled use, continued use despite problems) [868]. Most importantly, with the introduction of the DSM-III-R, addictive behaviours (i.e. uncontrolled, compulsive drug use) were subsumed under the inappropriate term “dependence”, and they remain so to this day. As detailed by Dr. O’Brien, “The word ‘dependence’ was already in use for many years prior to DSM-III-R to describe the adaptations that occur when medications that act on the central nervous system are ingested with rebound if the medication is discontinued abruptly. If the word also stands for compulsive, uncontrolled, drug-seeking behavior, there is inevitable confusion and patients exhibiting normal tolerance and withdrawal without any evidence of abuse or aberrant behavior are associated with those who meet DSM-III-R ‘dependence’ criteria” [858]. This was indeed a very bad (and consequential) decision by the APA’s diagnostic working group.

According to the currently applicable diagnostic criteria for dependence, that is, those of DSM-5 and ICD-10, even severe and persistent withdrawal syndromes do not qualify as dependence. Instead, starting with DSM-IV, a new diagnostic group of drug-specific withdrawal syndromes was introduced. Thus, the diagnostic manuals have in fact conflated aspects of addiction and dependence, although these are clearly distinct concepts, while at the same time separating withdrawal from dependence, even though withdrawal is a characteristic feature (or consequence) of physical dependence [858, 866, 868]. Interestingly, the DSM-5 does not include a diagnosis of drug addiction. The only mention of the term addiction is in the label of the nosological category “substance-related and addictive disorders”. Thus, instead of providing conceptual clarity and consistency, the current psychiatric diagnostic manuals have created an ongoing confusion about dependence, withdrawal, and addiction [132, 868, 869]. The diagnostic manuals are therefore largely responsible for the widespread denial of dependence and withdrawal reactions from antidepressants repeatedly demonstrated by many leading psychiatrists and health organisations (see examples below).

In sum, antidepressants don’t cause addiction, but they do cause physical dependence, that is, neurophysiological adaptations to drug exposure, or, in medical jargon, the body’s compensatory reaction to a drug’s pharmacodynamic effects. It has been shown that even short-term antidepressant use can lead to neurophysiological adaptations [342, 870], so antidepressants evidently do cause dependence and withdrawal reactions [344, 865, 871]. The failure of the diagnostic manuals to differentiate between dependence (characterised by neurophysiological adaptations and withdrawal) and addiction (characterised by craving and compulsive, uncontrolled drug use) was misused by various mental health professionals and medical organisations to maintain the false belief that antidepressants are not dependence-forming. As you may remember, a main objective of the Defeat Depression campaign was to educate the public that antidepressants are not drugs of dependence [425]. In a public statement, the RCP and RCGP stated, “It is worrying that people may fail to take the medicine in the mistaken belief that it can cause dependence” [132]. Well, the public was right, because that’s exactly what the drugs do!

What is really worrying is that the British psychiatric association confused addiction with dependence and held the mistaken belief that antidepressants cannot cause dependence, that is, neurophysiological adaptations. Do they at least recognise this misconception now? No, unfortunately not. Relying on the incoherent diagnostic criteria (which confound addiction and dependence), the RCP still erroneously maintained in a recent position statement that antidepressants cannot cause dependence [872]. It is thus past time that psychiatry revised its diagnostic criteria in line with the conceptual distinction between dependence and addiction long established in addiction medicine [344, 866, 867]. As urged by Dr. O’Brien, addiction expert at the University of Pennsylvania, “Educators with responsibility for teaching about addiction to medical students and general physicians have to explain that there is a normal physiological response called ‘physical dependence’, and there is ‘addiction’, which is drug-seeking behavior called ‘dependence’ in the DSM” [858]. I would add that they should not only educate general physicians about this confusion, but also psychiatrists (the ones who basically created it).

Various critics, including myself, have warned that it took health organisations more than 20 years to acknowledge that benzodiazepines can cause dependence and that they are now showing the same pattern of persistent denial with respect to antidepressants [9, 132, 147, 871, 873, 874]. We also expressed concern that psychiatric associations and drug regulators may severely underestimate the true burden of withdrawal syndromes (which result from physical dependence) due to their overreliance on the incoherent diagnostic criteria. For instance, Charles Medawar warned about these failures in a Nature article back in 1994 [875]. The response to his letter by Dr. Hugh Freeman, an eminent psychiatrist and former editor of the British Journal of Psychiatry, confirmed that the profession was completely dismissive of withdrawal syndromes following cessation of antidepressant treatment, confusing physical dependence with addiction and treatment need:

“During the past 35 years, there has in fact been no evidence that any antidepressants—whatever their structure—cause ‘addiction’ or ‘dependence’. Medawar says there is ‘profound confusion’ over the meaning of these terms and, if so, he has certainly added to it. Diabetics are dependent on insulin and people with high blood pressure are dependent on hypotensives, in the sense they will become ill again if they stop taking the drugs. Many sufferers from depression are in the same position, but this is totally different from the experience of people who take heroin or cocaine as euphoriants”. [876]

Freeman’s view was endorsed by various official medical bodies over time. For instance, in 2004 the Committee on Safety of Medicines of the British drug regulator MHRA asserted that “There is no clear evidence that the SSRIs and related antidepressants have a significant dependence liability or show development of a dependence syndrome according to internationally accepted criteria (either DSM-IV or ICD-10)” [874]. Again, these authorities merely relied on the fuzzy (and misleading) diagnostic criteria of dependence that confound physical dependence and addiction. They, too, failed to acknowledge the definition established by experts in addiction medicine, according to which a withdrawal syndrome is a consequence of physical dependence, which in turn is due to drug-specific neurophysiological adaptations.

To prevent prescribers and consumers from linking antidepressants with physical dependence, in 1997 Eli Lilly sponsored an expert meeting at which the term “antidepressant discontinuation syndrome” was established; it would soon replace the more appropriate term “withdrawal syndrome” [358]. In an accompanying summary report of this expert meeting, also sponsored by Eli Lilly, the experts claimed, despite a lack of reliable scientific evidence, that “discontinuation syndromes” were extremely rare and that, if they did occur, they were commonly mild, short-lived, and self-limiting [877]. Such claims were confidently reiterated by most leading psychiatrists as if they were established scientific facts and were given the seal of authority by being reproduced in official practice guidelines [231, 233]. However, there was never strong scientific evidence in support of these claims, and we now know that they are misleading and false [345, 350, 351, 357, 358, 878]. Yet medical organisations were very slow to react or did not change their position at all. Although NICE and the RCP now at least acknowledge that withdrawal syndromes can be severe and long-lasting, the APA still falsely maintains that withdrawal syndromes are rare, typically mild, and short-lived [357, 879].

The conviction that antidepressants cannot cause dependence is so deeply entrenched in current medical thinking that various psychiatrists and GPs do not believe their own patients when they mention problems resulting from physical dependence [873, 880, 881]. According to a large patient survey conducted by leading Danish psychiatrists and published in 2005, 57% of antidepressant users with affective disorders agreed that “When you have taken antidepressants over a long period of time it is difficult to stop taking them” and 56% agreed that “Your body can become addicted to antidepressants”. The authors, however, were not willing to accept these experiences; instead, they claimed that antidepressant users are misinformed and hold mistaken beliefs. In an all-too-common patronising tone they concluded, “Although all these subjects had been treated in hospital settings they still had major ignorance and negative attitudes, suggesting a need for intensified psychoeducational activities” [882]. No, it is not the patients, it is the psychiatrists who need to be educated about dependence and withdrawal! As Adele Framer, founder of the peer-support website SurvivingAntidepressants.org, states, “Prescriber failure to monitor, recognize, and timely address withdrawal symptoms is the motivation for almost all the site membership. In their attempts to go off the drugs, almost all have been told they have relapsed, even the many who suffered brain zaps—a hallmark of withdrawal syndrome—and especially those who have had mysterious symptoms for years, consistent with psychotropic PWS [protracted withdrawal syndrome]” [881].

In 2018, science journalists Carey and Gebeloff [883] wrote in an article for the New York Times that many antidepressant users need to continue drug treatment because the withdrawal symptoms that develop upon dose reduction or cessation are unbearable. “Many, perhaps most, people stop the medications without significant trouble. But the rise in longtime use is also the result of an unanticipated and growing problem: Many who try to quit say they cannot because of withdrawal symptoms they were never warned about”, wrote the authors. They further explained: “In a recent survey of 250 long-term users of psychiatric drugs—most commonly antidepressants—about half who wound down their prescriptions rated the withdrawal as severe. Nearly half who tried to quit could not do so because of these symptoms. In another study of 180 longtime antidepressant users, withdrawal symptoms were reported by more than 130. Almost half said they felt addicted to antidepressants” [883]. Their evaluation is supported by various other studies, both observational and experimental, which consistently show that many long-term users are physically dependent on (or feel “addicted to”) antidepressants and experience severe withdrawal syndromes when trying to come off the drugs [278, 279, 354, 733–735, 882, 884, 885].

Still, various psychiatrists fiercely objected to the New York Times article and wrote angry letters to the newspaper. One particularly dismissive commentary was written by Dr. Roy Perlis, editor of the American Journal of Psychiatry, and published in that very same prestigious journal (which is owned by the APA). Disdainfully, Dr. Perlis went on the counterattack: “A recent front-page New York Times article reframed a mental health success story into a conspiracy theory”. There followed a long list of unsubstantiated claims and strawman arguments before he concluded, “The increasing number of people receiving standard depression treatments in the United States represents the success of a substantial public health effort. Anything that stands in the way of people seeking treatment requires that we speak up and try to address both the cognitive and affective biases that may prevent effective treatment” [886].

Perhaps, if Dr. Perlis listened carefully to user complaints about not being able to stop antidepressants due to dependence and withdrawal, instead of lecturing about stigmatisation of drug treatment and cognitive biases, he would understand that there are legitimate concerns about long-term prescriptions in the absence of robust scientific evidence demonstrating that benefits clearly outweigh harms. And what is this obscure “mental health success story” Dr. Perlis is so convinced of? Where is the success of widespread long-term antidepressant use? He certainly knows that in the US the prevalence of depression and anxiety, as well as the suicide rate, has been rising steadily for about 20 years despite ever more people using antidepressants for ever-longer periods [887–890]. I’m terribly sorry, but from a public health perspective, this is anything but a success story [568, 891].

As detailed above, the minimisation and denial of physical dependence and severe withdrawal syndromes upon discontinuation or dose reduction are pervasive and systematic. Nutt and colleagues, in a Lancet Psychiatry article, went even one step further and insinuated orchestrated malingering and fabrication of withdrawal syndromes:

“Indeed, the new antidepressants, especially the selective serotonin reuptake inhibitors, are some of the safest drugs ever made. In our experience, the vast majority of patients who choose to stay on them do so because they improve their mood and wellbeing rather than because they cannot cope with withdrawal symptoms when they stop. Many of the extreme examples of adverse effects given by the opponents of antidepressants are both rare and sometimes sufficiently bizarre as to warrant the description of an unexplained medical symptom. To attribute extremely unusual or severe experiences to drugs that appear largely innocuous in doubleblind clinical trials is to prefer anecdote to evidence. The incentive of litigation might also distort the presentation of some of the claims”. [22]

Obviously, Nutt and colleagues wilfully ignored the various double-blind clinical trials demonstrating that withdrawal syndromes are real and frequent, in particular with paroxetine and venlafaxine, and that they can be severe to the point of causing new affective disorders, serious functioning deficits, and/or emergent suicidality [356, 734, 735, 884, 892, 893] (for systematic reviews, see [345, 347, 348]).

Thus, in short, there can be no doubt that in clinical practice antidepressant withdrawal is still frequently dismissed, misdiagnosed (e.g. as relapse, a new mental disorder, functional neurological disorder, or medically unexplained symptoms), and mistreated/mismanaged (e.g. dose escalation, adding other high-dosed psychotropic drugs, fast tapers) [357, 871, 878, 894]. This resonates with the experiences of many patients documented in online peer-support groups [881]. According to a recent user survey about antidepressant withdrawal, in 12% of cases the doctor denied that the symptoms were related to withdrawal, in 15% the doctors were helpful but inaccurate, in 42% they were unhelpful and inaccurate, and in just 1% they were helpful (29% did not respond to this question) [880]. It is embarrassing for the medical profession that even psychiatrists who personally experienced severe antidepressant withdrawal had to turn to internet sites like SurvivingAntidepressants.org for guidance when they realised that their education and training were of little help and grossly inadequate [895, 896]. Consequently, arguably the most proficient expert on antidepressant withdrawal is a lay person, the above-mentioned Adele Framer, who was personally affected by severe antidepressant withdrawal and later founded SurvivingAntidepressants.org, having learned the hard way that psychiatrists and general medical practitioners have very little or false knowledge about antidepressant withdrawal [881]. The denial, misdiagnosis, and mistreatment of antidepressant withdrawal by physicians is presumably also the main driver of the steadily growing membership of antidepressant withdrawal peer-support groups on the internet [897].

As a case in point, I want to conclude this section with an email from Michelle that I received in November 2020. It is one among many similar messages from antidepressant users that I frequently receive (many other personal experiences can be found in online peer-support groups such as SurvivingAntidepressants.org). Michelle gave me her consent to reproduce the email verbatim and to quote her first name:

Dear Dr. Hengartner,

Thank you for your article “Antidepressant Withdrawal: The Tide is Finally Turning”. I’m sure you get many emails from people like me but I wanted to express my gratitude for your research and publication. Four months ago, under the supervision of my psychiatrist, I discontinued an SSRI which I had been on for 15 years, prescribed to me as a child for childhood anxiety. I had been feeling pretty good for over a year and wanted to find out what my ‘baseline’ state was without the drug. In the four months that followed, I experienced depression, anxiety, aggression, and a near-constant and overwhelming feeling of horror I had never felt before. I had not been expecting any symptoms from the drug discontinuation as I assumed these types of ‘safe’ drugs did not have withdrawal symptoms, and I was definitely not informed by my psychiatrist. After four months that I can honestly describe as the worst of my life, the symptoms remained unbearable and showed no signs of improving and, finally following the urging of both my psychiatrist and therapist, I began taking the SSRI again. I felt relief from my symptoms almost immediately, and within a week back on the SSRI I felt back to ‘normal’.

My psychiatrist and therapist have both deemed the last four months a ‘relapse’ and tried to add a new medication as well as to up the dose on my current medication. I know what I experienced and it was not a relapse—I had never experienced depression or anxiety to that level in my life before. I have also never felt aggression or horror the way I experienced in the last four months. I had also never experienced a depression lasting as long as four months before. Thanks to papers from people like you, I am able to find validation that what I experienced was in fact, withdrawal. I now face the prospect of attempting to more slowly taper off the medication without the support of a psychiatrist. I am also terrified that my brain might be permanently damaged from 15 years on the medication. There is a serious problem with the way SSRIs are prescribed and with psychiatrists and drug companies refusing to listen to the experiences of the people they are supposed to be helping. Thank you for your work against this.

Best,

Michelle

Treatment-Emergent Suicidality With Antidepressants

In children and adolescents, there is strong evidence from syntheses of randomised placebo-controlled trials that SSRIs and other new-generation antidepressants increase the risk of suicidal ideation and behaviour [58, 292, 293, 322, 323, 780, 898]. However, as detailed in the chapter “Flaws in antidepressant research”, the pharmaceutical industry tried to obscure the harm signal in clinical trials through selective reporting and misrepresentation of suicidal events [322, 323, 715, 770]. Likewise, various influential academics disputed the increased risk of suicidal events in clinical trials based on the results of flawed and methodologically weak ecological studies [899]. Some even claimed that regulatory warnings had led to an increase in youth suicides (for thorough discussions debunking these erroneous assertions, see [120, 864, 900, 901]).

The risk of treatment-emergent suicidality with antidepressants is less clear in adults. While some meta-analyses of clinical trials found no increased risk of suicidal events, others found increased rates of suicidal behaviour and even suicides [324, 327, 328, 330, 332, 333, 779, 902]. According to the FDA analysis, antidepressants may reduce suicidal ideation and behaviour in older adults [324, 903]. However, and most importantly, not one synthesis of clinical trial data has ever found a reduced rate of suicide attempts and suicides with antidepressants relative to placebo in the broader adult patient population. Still, various psychiatrists erroneously claim that antidepressants protect against suicide, commonly based on a few methodologically weak ecological studies and selectively quoted observational studies [899, 904]. However, neither ecological studies nor a recent systematic review of observational studies provide consistent (and conclusive) evidence that antidepressants protect against suicide in adults [863, 899, 905, 906]. According to our recent meta-analysis of observational studies, exposure to new-generation antidepressants (i.e. SSRIs, SNRIs and atypical antidepressants) was even associated with an increased suicide risk in patients with depression as well as in patients with any treatment indication [906].

American and European drug regulators officially acknowledged in the mid-2000s that new-generation antidepressants increase the risk of suicidality in children, adolescents, and young adults [903]. However, according to Dr. Healy [9, 715], drug regulators failed to adequately investigate (and recognise) this pernicious safety issue, especially in adults, even though a harm signal was reported by various researchers in the early 1990s [326, 776, 907]. Healy is not the only one to criticise the drug regulators for their hesitance to recognise a putative causal association between antidepressant use and increased risk of suicidality. When the FDA drug safety evaluator Dr. Andrew Mosholder reported to the FDA leadership that, according to his analysis of placebo-controlled clinical trials submitted to the agency, antidepressants increase the risk of suicidality in youth, his superiors criticised his findings as “premature and based on unreliable data” and “barred him from reporting his conclusion to an FDA advisory committee” [908]. Among those FDA leaders questioning Mosholder’s evaluation was Dr. Thomas Laughren, then director of the Division of Psychiatry Products. He presented Mosholder’s analysis but “stressed the unreliability of the data instead of the possible risk from the drugs” [908].

Most importantly, “For more than a decade Dr. Laughren endorsed industry’s denials of an increased suicide risk for consumers of SSRI antidepressants. He dismissed safety concerns raised by FDA medical reviewers, including a reviewer who reported a seven-fold greater incidence of suicidality in children prescribed sertraline (Zoloft®). Dr. Laughren stated in a memo dated October 25, 1996: ‘I don’t consider these data to represent a signal of risk for suicidality for either adults or children’” [909]. Another eight years had to pass before the FDA formally acknowledged an increased risk of suicidality with antidepressants in children and adolescents, and ten years in total before it expanded its safety warning to young adults.

Researchers, safety advocates, and journalists who dared to suggest that new-generation antidepressants may increase the risk of suicide not only in children but also in adults were frequently reprimanded or sanctioned by academic departments, psychiatric organisations, and influential psychiatrists [9, 147, 812, 910]. For instance, in late 2002, Dr. Healy lost his future appointment as director of the University of Toronto’s Mood and Anxiety Disorder Clinic and a professorship at the university’s department of psychiatry after he had delivered a lecture in which he also raised the question of whether SSRIs may increase the risk of suicide among certain patients. In response to this lecture, within a week he received an email from the University of Toronto unilaterally rescinding its employment offer [812].

Two more examples. When in February 2013 a German television programme reported on the suicide risk with new-generation antidepressants, the German psychiatric association immediately responded with a public statement and a press release asserting that “antidepressants help to prevent suicides” [861]. In support of this claim they cited only one analysis of clinical trials that did not even examine suicide attempts or suicides, while deliberately ignoring the various studies specifically assessing suicide attempts and suicides that found no protective effect or even an increased risk with antidepressants relative to placebo. Finally, Nutt and colleagues, in their fierce attack against Dr. Gotzsche, also dismissed the possibility that antidepressants might increase the suicide risk and, like the chairs of the German psychiatric association, suggested that antidepressants protect against suicide:

“Suicide kills about 6000 people every year in the UK. Most of these people are depressed and more than 70% are not taking an antidepressant at the time of death. Blanket condemnation of antidepressants by lobby groups and colleagues risks increasing that proportion. In countries where antidepressants are used properly, suicide rates have fallen substantially”. [22]

I was repeatedly confronted with similar arguments on social media and during peer review of my studies on the risk of treatment-emergent suicidal events in antidepressant trials [332, 333, 855]. In fact, this is the preferred line of reasoning of many psychiatrists and can be found in various other prominent articles (see, for instance, [899, 904]). The argumentation put forward by Nutt and colleagues is thus worthy of closer inspection.

First, suicide indeed kills many people, many of whom (though by far not all) were depressed and not taking an antidepressant. However, this does not answer the question of whether antidepressants protect against suicide. Alternatively, one could just as well argue that about 30% were taking antidepressants and the drugs did not prevent their suicide. So how can Nutt and colleagues imply that the 70% not on antidepressants would have benefitted from the drugs? Of course, they cannot (or should not) draw such a conclusion from these data, which is why this is a very poor and misleading argument. It is akin to claiming that smoking does not cause lung cancer because only about 10–15% of current smokers will develop lung cancer, while 85–90% will not [911].

Second, the suggestion that regulatory warnings about the risk of treatment-emergent suicidality with antidepressants would paradoxically increase the suicide rate lacks robust scientific evidence and is thus largely unsubstantiated [912–915]. Authors arguing that the regulatory warnings about treatment-emergent suicidality in youth had resulted in increased suicide rates in this population selectively cited and/or misrepresented studies that were terribly flawed and ignored more thorough analyses that clearly disconfirmed these findings [120, 864, 901].

Third, and related to the point above, Nutt and colleagues cite one international ecological study that found that increased antidepressant prescribing was correlated with lower suicide rates. Such studies examine associations at the group level (here, countries), not at the level of individual persons, and thus are prone to serious biases and cannot demonstrate cause–effect relationships [864, 916]. Moreover, the evidence from ecological studies is highly inconsistent and inconclusive [905, 917, 918]. Two systematic reviews concluded that the evidence from ecological studies provides little or no support for the view that increased antidepressant prescribing has led to a reduction in suicide rates [863, 919]. It is also worth noting that one of the largest international ecological studies, published prior to Nutt and colleagues’ article, found that increased antidepressant prescribing was associated with higher suicide rates [920], the complete opposite of the finding in the study Nutt and colleagues selectively preferred to cite. Thus, not only did Nutt and colleagues overemphasise the finding from a single ecological study, they also failed to acknowledge that the evidence from ecological studies is wholly inconsistent and of very limited validity.

Fourth, and perhaps most importantly, Nutt and colleagues completely ignore the findings from studies with the highest certainty of evidence, that is, meta-analyses of clinical trials. None of these studies found that antidepressants protect against suicide, and in various analyses the suicide rate was numerically (and in some also statistically significantly) higher in the antidepressant group relative to the placebo group [324, 329–333, 779, 902]. Their complete disregard for clinical trial data is all the more striking because, a few lines below, they alleged, quite incorrectly, that withdrawal reactions and other serious adverse events have not been demonstrated in double-blind clinical trials, and that attributing such harms to the drugs would therefore mean preferring “anecdote to evidence” [22]. By their own argumentation, it follows that Nutt and colleagues prefer anecdote to evidence when it comes to the alleged suicide-protective effects of antidepressants.

But authors were not only reprimanded for contending that antidepressants may increase the suicide risk, some were also fiercely criticised for correctly pointing out that antidepressants barely protect against suicide. For instance, in 2019, US psychiatrist Dr. Amy Barnhorst published an article titled “The empty promise of suicide prevention—Many of the problems that lead people to kill themselves cannot be fixed with a little extra serotonin” in the New York Times. In this opinion paper, Barnhorst maintained that in most cases antidepressants won’t protect against suicide, for suicide is often the tragic consequence of an impulsive reaction to desperation caused by socio-environmental adversity. She thus concluded “We need to address the root causes of our nation’s suicide problem—poverty, homelessness and the accompanying exposure to trauma, crime and drugs” [921]. Although this article was not inherently critical of antidepressants, it provoked an angry response by Dr. Jeffrey Lieberman, a dinosaur of US psychiatry, chair of the psychiatry department at Columbia University and former president of the APA.

The day after the article was published, Dr. Lieberman wrote on Twitter “Amy Barnhorst doesn’t read scientific literature or skipped training. This article is wrong. Suicide is largely preventable, if proper measures taken and prescription drug provided. New York Times please vet authors better”. At the end of his tweet he then tagged the APA (@APAPsychiatric). Dr. Barnhorst responded ironically with “I skipped training”. Several commentators were appalled by Dr. Lieberman’s condescending and hostile tweet. For instance, an anonymous psychiatrist (@FightOn49er) wrote “This is not how we speak to colleagues we disagree with. You are a department chair at Columbia! Do better, Dr. Lieberman!” and Dr. Leah DeSole, a clinical psychologist, stated “Thank you for this comment. I’m glad Jeff cares deeply about this topic! However, let’s remember to prize civility, professionally and personally, in our tweets”. Dr. Lieberman then immediately responded to her with “All for civility except in the case of misinformation that puts lives at risk, especially when purveyed by a professional who wears the patina of credibility” (see the whole Twitter conversation here: https://twitter.com/FightOn49er/status/1122183796806148098).

It is incomprehensible that Dr. Lieberman makes such bold and defamatory claims, given that the US has just experienced its highest suicide rate since World War II (14.2 suicides per 100,000 people in 2018 [888]) and despite the fact that ever more people, currently about 70% of people with serious psychological distress, receive mental health treatment (mostly drug treatment) [477]. Likewise, among US veterans diagnosed with a mental disorder and receiving mental health treatment (again, by and large drug treatment), the suicide rate is alarmingly high (about 68 per 100,000 people) and has remained largely constant over time [922]. Thus, if suicide were preventable in the way Dr. Lieberman pretends (or wishes) it to be, why do mental health services in the US fail so terribly at preventing it? Certainly not because they prescribe too few antidepressants and other psychiatric drugs.

Considering the disturbing surge in suicides that runs parallel to growing societal problems in the US (e.g. poverty, inequality, drug abuse), should Dr. Lieberman not at least be open to suggestions that the current biomedical approach to suicide prevention is inadequate, or at least insufficient? Would we not expect someone like Dr. Lieberman to reflect critically on the terrible impression his defensive and defamatory tweets make on the public and other mental health professionals? It is not as if Dr. Lieberman were just some brash medical student; he is a leading professor of psychiatry and former president of the most powerful psychiatric association in the world, the APA. With all due respect, if this really is the best answer academic psychiatry has to offer, then the profession’s guiding biomedical paradigm truly is in crisis [390, 392].

Now, this was certainly not the first time Dr. Lieberman revealed a complete disregard for constructive debate and respectful conversation. I have already detailed above how he denigrated Dr. Kirsch as “mistaken and confused” and “ideologically biased in his thinking” [830]. In his habit of insulting people who do not share his drug-centred views, he is by no means an outlier. In my view such desperate ad hominem attacks are the norm rather than the exception, and, as detailed above, they come disproportionately often from leading academics. Unfortunately, instead of engaging in a constructive debate based on empirical evidence and scientific arguments, when it comes to defending the alleged benefits and the mass prescription of antidepressants (and other psychiatric drugs), many psychiatrists resort to derision, delegitimisation, defamation, misrepresentation, strawman arguments, and unevidenced claims. No wonder these “debates” yield nothing but anger and anguish, and never new insights, critical reflection, or scientific progress. In short, this behaviour is utterly unscientific. Such responses are thus best conceived of as defences of guild interests and claims to leadership and power [30, 810, 923]. Readers familiar with the philosopher of science Dr. Thomas Kuhn and his famous book The Structure of Scientific Revolutions will probably also see such hostile responses as desperate defences of an incommensurable scientific paradigm [924].

Corporate Bias

Medicine has made incredible progress over the course of the twentieth century, especially in the first half. Medical breakthroughs, for example, antibiotics, insulin, vaccines, chemotherapy, surgical innovations, immunosuppressants, and antiretrovirals had a huge impact on the prevention and treatment of various life-threatening diseases. In step with these major advancements, the healthcare sector grew massively. In the late twentieth century, healthcare services and biomedical research became a highly competitive and lucrative multi-billion-dollar market. Innovative surgical techniques were introduced, sophisticated imaging procedures were developed, many new drugs were marketed, managed care plans were developed, and patients became healthcare consumers. Although the prevention, detection, diagnosis and treatment of various diseases had advanced considerably, many experts expressed concern over problematic developments in modern biomedicine that increasingly put commercial and professional interests over public health, patient safety, academic freedom, and research integrity [29, 59, 171, 375–377, 379, 381, 382, 650, 659, 767, 812].

The pharmaceutical industry is arguably the main perpetrator, and its list of transgressions (i.e. healthcare fraud and scientific misconduct) is long and shocking [376, 428, 925]. As reported by Public Citizen, in the US alone pharmaceutical companies paid a total of $38.6 billion in penalties for 412 settlements reached with the federal and state governments in the 27 years from 1991 through 2017 [926]. Unlawful promotion of drugs accounted for the largest share of financial penalties (US$11.3 billion, 29% of all financial penalties). GlaxoSmithKline and Pfizer were the worst offenders; these two companies paid more in financial penalties than any other company ($7.9 billion and $4.7 billion, respectively). Although these sums are impressive, “Financial penalties continued to pale in comparison to company profits, with the $38.6 billion in penalties from 1991 through 2017 amounting to only 5% of the $711 billion in net profits made by the 11 largest global drug companies during just 10 of those 27 years (2003–2012)” [926].

According to the most recent analysis, adjusted for inflation, 22 of 26 major pharmaceutical companies (85%) incurred financial penalties in the US for illegal activities in the period 2003 to 2016. The combined value of financial penalties during this 14-year period totaled a staggering $33 billion. Eleven of the 26 companies accounted for 88% of the total penalties; the worst offenders were GlaxoSmithKline ($9.8 billion), Pfizer ($2.9 billion), Johnson & Johnson ($2.7 billion), Abbott Laboratories ($2.6 billion), Merck ($2.1 billion) and Eli Lilly ($1.8 billion). But even for GlaxoSmithKline, the shocking $9.8 billion in total penalties amounted to a mere 1.6% of its total revenues. For the other companies listed above, this proportion was less than 0.8% [925]. Thus, although some companies paid tremendous penalties for illegal activities (mostly pricing violations, off-label marketing, and kickbacks), these penalties are trivial in comparison to the companies’ huge revenues and may simply be accepted as routine business expenses. As stated by Jureidini and McHenry, “Expensive litigation, for the industry, is just part of the price of doing business” [29]. It has thus been suggested that courts should punish not only the companies but also their corporate executive officers [927]. If the directors of pharmaceutical companies faced jail for illegal corporate activities, perhaps then, and only then, would the companies change their way of doing business.

There is no doubt that pharmaceuticals are an incredibly lucrative business [388], and the pharmaceutical industry has found many ways—both legal and illegal—to increase its profits. The very meaning of corporate bias is that, through its extensive financial power, the pharmaceutical industry can exert substantial control over the healthcare sector by influencing health policy, drug regulation, medical associations, consumer organisations, academic departments, and individual prescribers. Pharmaceutical products can be very helpful and lifesaving, but in many indications they are largely ineffective, and various drugs have caused more harm than good [171, 376, 383]. In any case, there is compelling scientific evidence that the benefits of drugs have been systematically exaggerated while harms were downplayed or ignored. Many drugs are massively overused and inadequately prescribed, largely due to aggressive pharmaceutical marketing and promotion (to both doctors and the public) as well as the industry’s influence over continuing medical education, academic medical departments and the research landscape [376, 381, 428, 459, 928]. However, do not think that the medical profession was merely fooled and betrayed by the pharmaceutical industry, as insinuated by various authors, for example by Dr. Ben Goldacre in his book “Bad pharma: how drug companies mislead doctors and harm patients” [428]. It is not a one-way street, with industry as the bad guy and doctors as naïve but well-meaning dupes. Medicine was quite often complicit in this widespread deception/exploitation of patients and the public because it regularly and eagerly partnered with the industry [459]. As succinctly articulated by Dr. Matheson, “Is medicine the manipulated victim of the pharmaceutical corporations, or their colleague in corruption? The answer, of course, is both. Sometimes medicine is pharma’s unwitting dupe, sometimes its eager bedfellow” [659].

The pervasive and detrimental effect of corporate bias is perhaps best illustrated by the opioid epidemic in the United States, which, as of 2019, had accounted for about 770,000 deaths over the preceding 20 years [929]. According to the Centers for Disease Control and Prevention (CDC), the opioid crisis is the “worst drug overdose epidemic in history” [930]. Notably, this epidemic was not caused by a pathogen; it is a man-made plague for which the pharmaceutical industry and its allies are largely responsible [929]. The main causes of the opioid epidemic are aggressive pharmaceutical marketing, misleading/deceptive industry-sponsored medical education programs, bribes and kickbacks offered to doctors for prescribing opioids, the downplaying/denial of the drugs’ potential for addiction, the promotion of pain as a fifth vital sign, and the creation of pain advocacy groups to advance the industry’s corporate agenda [929–931]. According to Dr. Jonathan Marks, a professor of bioethics, humanities, and law,

“There is overwhelming evidence that the opioid crisis—which has cost hundreds of thousands of lives and trillions of dollars (and counting)—has been created or exacerbated by webs of influence woven by several pharmaceutical companies. These webs involve health professionals, patient advocacy groups, medical professional societies, research universities, teaching hospitals, public health agencies, policymakers, and legislators. Opioid companies built these webs as part of corporate strategies of influence that were designed to expand the opioid market from cancer patients to larger groups of patients with acute or chronic pain, to increase dosage as well as opioid use, to downplay the risks of addiction and abuse, and to characterize physicians’ concerns about the addiction and abuse risks as ‘opiophobia’”. [928]

A recent legal settlement proves that the marketing strategies of the pharmaceutical industry were utterly unethical and illegal. In October 2020, the US Justice Department announced that Purdue Pharma, maker of the highly addictive opioid oxycodone (OxyContin), had agreed to plead guilty to criminal charges related to its marketing of oxycodone. The company faces penalties of roughly $8.3 billion [932].

The opioid crisis tragically illustrates how pharmaceutical corporate bias can damage patient safety and public health. Is the situation different in psychiatry? I contend it’s not, and many experts in evidence-based medicine, public health, and bioethics agree [28–30, 147, 414, 767, 768, 771, 933]. For instance, Dr. Barry Blackwell, a psychopharmacologist and member of the International Network for the History of Neuropsychopharmacology, gloomily wrote in 2017:

“Industry has taken over and corrupted clinical trials, bribed academics to be complicit, infiltrated medical education and its curricula, seduced professional and consumer organizations, lobbied politicians to relax regulations and partially funded the FDA, influencing its decisions, meanwhile vastly inflating the populations at alleged risk for mental disorders and the willingness of physicians to medicate them, a process aided and abetted by the DSM diagnostic system coupled with misleading advertising direct to the public and dubious marketing strategies for gullible doctors”. [934]

Dr. Scull, a historian of medicine, also wrote a damning summary on the issue of corporate bias in psychiatry:

“And so to scandal. He who pays the piper calls the tune, and to a quite extraordinary extent, drug money has come to dominate psychiatry. It underwrites psychiatric journals and psychiatric conferences (where the omnipresence of pharmaceutical loot startles the naive outsider). It makes psychiatric careers, and many of those whose careers it fosters become shills for their paymasters, zealously promoting lucrative off-label uses for drugs whose initial approval for prescription was awarded on quite other grounds. It ensures that when scandals surface universities will mainly turn a blind eye to the transgressions of those members of their staff who engage in these unethical practices. And it controls psychiatric knowledge in multiple ways. Its ghostwriters produce peer-reviewed ‘science’ that surfaces in even the most prestigious journals, with the most eminent names in the field collaborating in the deception. Researchers sign confidentiality agreements, and inconvenient data never see the light of day. The very categories within which we think about cognitive and emotional troubles are manipulated and transformed to match the requirements of the psychiatric marketplace. Side effects, even profound, permanent, perhaps fatal side effects, are ignored or minimised. Fines may be levied when somnolent regulators are finally prompted into action, or damages paid where aggressive class action lawyers force hitherto suppressed findings into the public arena, but the profits already booked far exceed these costs of doing business”. [394]

Although provocatively articulated, Dr. Scull’s assessment is accurate and empirically well supported. Numerous articles and books confirm that psychiatric research and practice are strongly biased towards the commercial interests of the pharmaceutical industry [27–29, 67, 399, 414, 768, 856, 935–937]. More specifically, the marketing departments of the pharmaceutical companies are the powerhouse in psychiatry. As outlined by Dr. Healy,

“the [pharmaceutical] marketing department starts once a compound has been discovered. Marketing decides whether a new drug will be an antidepressant rather than an anxiolytic or a treatment for premature ejaculation. Marketing determines which journals with which lead authors clinical trials will appear in. Marketing recruits academics, including geneticists, neuroimaging specialists and social psychiatrists, to consultancy and speaker panels, and makes friends for the company. The marketing department supports educational events by putting on symposia, sponsoring speakers and bringing psychiatrists to international meetings. The work of the marketing departments is to create ‘evidence’ and establish consensus”. [938]

Now, a few things warrant clarification before I move on to the next sections. First and foremost, it is important to stress that a financial conflict of interest does not imply that a physician (be it an academic or a practitioner) is necessarily biased in his or her judgement. And, of course, it by no means indicates that someone is corrupt or bought by industry. Biases resulting from conflicts of interest need not be conscious and explicit. Often, perhaps predominantly, they are unconscious and implicit. However, there can be no doubt that, overall, financial conflicts of interest lead to more industry-favourable assessments, biased benefit–harm evaluations, and medical overuse, that is, overdiagnosis and overprescribing [14, 28, 29, 379, 380, 458, 459, 800, 939–941].

Just as the manager of a football team interprets an ambiguous situation in favour of his or her own team more often than an independent observer does (e.g. whether a challenge was a foul or not, whether the ball was out or not), so do pharmaceutical company employees and physicians working for the industry (e.g. as speakers and/or consultants) interpret ambiguous data in favour of the industry more often than independent experts without industry ties. And just as in sports (e.g. a footballer diving to win a penalty kick for his team), when success (or profit) depends on a decisive action, there are also, not so uncommonly, clear instances of dishonesty, deception, and fraud in the pharmaceutical marketplace [29, 771, 822, 824, 933]. But let us hear from an insider. Dr. Matheson worked in pharmaceutical marketing between 1994 and 2010. He rightly acknowledges that pharmaceutical companies have contributed to many major breakthroughs in medicine (the recent development of vaccines for the new coronavirus disease being just one example). “On the one hand, it remains my belief that pharmaceutical research and development efforts are capable of great good”, wrote Matheson [659]. But there is also another, dark and troubling side of drug company influence. “On the other hand, pharmaceutical marketing is anathema to science, corrupting to medicine, wasteful to economies, and harmful to patients, and I must acknowledge the moral difficulty that for many years I sold my intellect in its service. Pharma itself, of course, has never truly acknowledged its underbelly of secrets, half-truths, corruption, power, and death, and it flaunts the language of ethics like a silk cummerbund over a paunch. If it is a lie to dissemble, distort, or omit, then pharma must be considered a liar whose subtle falsehoods stock the annals of medicine” [659].

Most stakeholders thus agree that financial conflicts of interest can and do have a detrimental impact on healthcare, but some physicians are reluctant to accept this or try to minimise the problem. In fact, a few opponents have even suggested that conflict of interest policies and regulation may harm medicine (for a critique of these notions, see [942]). In view of the compelling scientific evidence demonstrating a most likely causal association between financial conflicts of interest and positions, assessments, and prescribing patterns that are systematically biased in favour of the industry, such concerns are empirically unfounded and misleading [592, 593, 772, 794–796, 799, 943–948].

As aptly summarised by Drs Steinbrook, Kassirer, and Angell, all three being former editors of the leading New England Journal of Medicine,

“Judges are expected to recuse themselves from hearing a case in which there are concerns that they could benefit financially from the outcome. Journalists are expected not to write stories on topics in which they have a financial conflict of interest. The problem, obviously, is that their objectivity might be compromised, either consciously or unconsciously, and there would be no easy way to know whether it had been. Yet Rosenbaum and Drazen [opponents of conflict of interest policy and regulation] seem to think it is insulting to physicians and medical researchers to suggest that their judgment can be affected in the same way. Doctors might wish it were otherwise, but none of us is immune to human nature”. [942]

I will now detail how the pharmaceutical industry exerts influence over psychiatry (and medicine in general) at all levels, starting with drug regulators, then turning to academic departments, researchers, medical journals, and concluding with medical organisations and prescribers/practitioners.

Drug Regulators

“The regulatory state and the pharmaceutical industry work largely in partnership and behind a cloak of secrecy”, wrote Dr. Abraham in 2008 [949]. His view is strongly endorsed by many others. For instance, an investigation of some 1600 FDA inspection and enforcement documents conducted by Science concluded that the agency’s oversight of clinical research was “lax, slow, and secretive” [950]. In an investigation published earlier the same year in JAMA, Dal-Re and colleagues reported on FDA inspections that revealed clear research misconduct in two influential industry-sponsored pre-marketing (phase III) clinical trials (ARISTOTLE and RECORD4), including alterations of patient records, data falsification, failure to fully report adverse events, and noncompliance with protocol procedures [951]. The FDA subsequently excluded results from one trial (RECORD4) from its benefit–harm evaluation, yet it still granted license approval for the investigational drug, and the flawed trial results were published by the sponsor. Despite the clear evidence of fraud, the trial publication was cited over 1100 times and was included in meta-analyses and clinical practice guidelines. The results from the second fraudulent trial (ARISTOTLE) were not excluded from the FDA assessment, and the agency granted license approval for the investigational drug. The trial publication was cited more than 6900 times and was included in many meta-analyses and clinical practice guidelines. The FDA never communicated its detection of research misconduct in these two influential trials to doctors or the public. The authors thus concluded, “FDA trial inspection reports have been largely hidden from public view, but access to information on the integrity and quality of clinical trials that underpin a product’s assessment is critical, particularly when irregularities or misconduct are identified. Public availability of these reports is required to meet current standards for clinical trial transparency and uphold the integrity of the scientific evidence base” [951].

By now you have certainly realised that we can neither uncritically rely on the scientific literature nor assume that approved drugs are both effective and safe. Most concerning, however, is that drug regulators appear increasingly to protect the commercial interests of the industry rather than public health and patient safety, which is unequivocally a manifestation of corporate bias [825]. This process has also been termed regulatory capture, “a variable and dynamic effect of corporate bias that describes a shift in policy by government agencies away from regulation in the interests of patients and public health to prioritization of the private interests of the regulatees instead” [952]. That is, drug regulators frequently act in ways that benefit the industry rather than patients and the public. As stated by Dr. Vinay Prasad in an interview with Science, the “FDA is a regulatory agency charged with protecting the public’s best interests. But at times it behaves like an attorney working on behalf of the [pharmaceutical] companies” [950]. Such accusations are by no means new, and there is quite compelling evidence that they are well founded.

For instance, in 2004, Dr. David Graham, then associate director of the FDA’s Office of Drug Safety, testified to US Congress that he was urged by his superiors “to not warn the public about dangers of drugs like Vioxx” [716]. Recognising his responsibility as a drug safety analyst, he warned the public nonetheless, but was then “marginalized by FDA management and not asked to participate in the evaluation of any drug safety issues”. The following year he stated that “FDA is inherently biased in favor of the pharmaceutical industry. It views industry as its client, whose interest it must represent and advance. It views its primary mission as approving as many drugs as it can, regardless of whether the drugs are safe or needed” [716]. He is not an isolated case. Dr Curt Furberg, a member of the FDA’s drug safety advisory committee and a prominent authority on drug safety, was barred from participating in FDA hearings on the safety of COX-2 inhibitors after he made remarks to the media that valdecoxib (Bextra) may cause heart attacks and strokes just like rofecoxib (Vioxx), a drug from the same class that had recently been withdrawn from the market by its manufacturer Merck for this specific safety reason [953].

Finally, Dr Ronald Kavanagh is a former FDA reviewer of psychiatric drugs who was fired from his position in 2008 for whistleblowing [716]. In an interview he said about his former employer: “While I was at FDA, drug reviewers were clearly told not to question drug companies and that our job was to approve drugs … If we asked questions that could delay or prevent a drug’s approval—which of course was our job as drug reviewer—management would reprimand us, reassign us, hold secret meetings about us, and worse … Sometimes we were literally instructed to read a 100–150 page summary and to accept drug company claims without examining the actual data, which on multiple occasions I found directly contradicted the summary document” [29]. These three examples indeed suggest that there is systematic and pervasive corporate bias (regulatory capture) at the FDA. But let us look a bit deeper at these issues.

A survey among FDA scientists found “pervasive and dangerous political influence” at the FDA [954]. In particular, 40% of the 997 respondents said they feared retaliation for voicing safety concerns in public, and 18% indicated that they had been asked to inappropriately exclude or alter technical information or conclusions in an FDA scientific document for non-scientific reasons. Only 47% believed that the FDA routinely provides complete and accurate information to the public, and 81% agreed that the public would be better served if the independence and authority of the FDA’s post-market safety systems were strengthened. Sixty per cent of respondents agreed that commercial interests had resulted in inappropriate acts (or attempts) to reverse, withdraw, or modify FDA determinations or actions. Finally, 20% said they “have been asked explicitly by FDA decision-makers to provide incomplete, inaccurate, or misleading information to the public, industry, media, and elected officials” [954].

How strong are the ties between drug regulators and the pharmaceutical industry? This is the question I will now address. Let us first have a look at how the major drug regulatory agencies are funded. Both the FDA (US) and the EMA (Europe) obtain more than half of their budget through industry fees. The share of the EMA’s budget funded by industry fees rose from 20% in 1995 to 75% in 2010, while the corresponding share of the FDA’s budget reached 50% by 2002 and over 60% by 2010 [825]. In 2017, altogether 79% of the FDA budget was paid for by the biomedical industry through required user fees [955]. The British Medicines and Healthcare products Regulatory Agency (MHRA) is entirely (100%) funded by industry. The industry pays these fees to drug regulators in return for accelerated drug regulatory review times [956]. For instance, the FDA faces a 6-month deadline for priority drug reviews and a 10-month deadline for most other drugs. If the agency does not adhere to these deadlines, the pharmaceutical companies will not pay the fees. So, does this financial pressure affect the quality of the reviews and regulatory decisions? Yes, it likely does. According to a comprehensive analysis, drugs approved just before the deadline had a higher rate of post-approval safety problems, including market withdrawals, serious safety warnings, and safety alerts. According to the authors, the study “suggests that the deadlines may impede quality by impairing late-stage deliberation and agency risk communication” [957].

But what about the directors of drug regulatory agencies? Are they personally tied to the industry? Yes, many are [956]. For instance, Dr. Scott Gottlieb was chief executive (commissioner) of the FDA from 2017 to 2019. He was known for having extensive financial relationships with the industry over his professional career [958]. Before he became the highest FDA official, he had served on the boards of various pharmaceutical companies. He was also a fervent advocate of accelerated and permissive drug approvals. “What we can’t have is an FDA that’s ruled by statistics over medicine,” he once said. “Americans deserve a less cautious FDA, and an FDA that actively embraces advances in science” [958]. This is quite a strange statement for a drug regulator, for how can drug development and evaluation advance without sound statistical methods? In other words, Gottlieb preferred fast (permissive) drug approvals over stringent (cautious) benefit–harm evaluations. Understandably, the industry loved him for such a pro-business position, and he was swiftly rewarded: soon after he left the FDA in 2019, he joined Pfizer’s board of directors [959].

You think Dr. Gottlieb is an outlier? He clearly is not. It is very common for leading officials and senior scientists to leave the FDA to work for the pharmaceutical industry, a pattern that has been described as the “FDA’s revolving door” [960]. According to Drs Hayes and Prasad, “This employment pattern may raise concern that, although regulators intend to act always in the best interest of the public, the frequent opportunity for subsequent employment with the industry may serve to dissuade them from being too oppositional or critical” [961]. Dr. Gottlieb certainly had strong ties to the industry, but there are even more extreme examples [956]. For instance, before Dr. Ian Hudson became director of the British Medicines Control Agency in 2001 and later chief executive of the MHRA, he was worldwide safety director of SmithKline Beecham (now GlaxoSmithKline), one of the largest pharmaceutical companies worldwide. Among his many tasks as safety director for SmithKline Beecham, he was responsible for defending the safety of paroxetine in court, claiming that the use of paroxetine could not be causally related to any suicidal or homicidal event [9]. However, note that in the mid-2000s drug regulators concluded that paroxetine can cause suicidality in children and adolescents [786]. Likewise, an independent evaluation of paroxetine trials demonstrated a probability of 98–99% (i.e. close to certainty) that paroxetine use in adults is associated with an increased risk of suicide attempts relative to placebo [328]. Would you trust a former drug company director like Dr. Hudson to defend public health and patient safety against the industry’s commercial interests? Let me ask differently. Would a former director of an oil company be the right person to lead an environmental protection agency? But back to the FDA…

Dr. Thomas Laughren was team leader of the FDA’s Psychiatric Drug Product Division from 1983 to 2005 and thereafter director of this division until his retirement in late 2012. Throughout his career at the FDA he maintained close collaborations with the pharmaceutical industry, and some of his industry collaborations were highly controversial [716, 909]. For instance, Dr. Laughren participated in various industry-sponsored consensus panels and conferences promoting polypharmacy and expanded use of psychiatric drugs for unapproved indications (i.e. off-label use). He also advocated broadening diagnostic criteria in approved indications and authored several articles on these controversial topics with some of the industry’s highest-paid key opinion leaders and even with pharmaceutical company directors such as Eli Lilly’s chief medical officer Dr. Leigh Thompson [909]. Having such a loyal ally among the FDA leadership certainly is a great asset for the pharmaceutical industry, but is this in the public’s best interest? Unfortunately, corporate bias (or regulatory capture) is not limited to directors.

Although the FDA makes the final decision whether to approve or reject a new drug application, or whether an approved drug should be withdrawn from the market or receive a safety warning, the agency relies strongly on the benefit–harm assessments provided by its Drug Advisory Committees. These committees comprise external experts, mostly leading academic physicians, who quite often have financial relationships with the manufacturers of the drugs under consideration. Do these conflicts of interest influence the experts’ judgements? Yes, they probably do [961, 962]. For instance, in a Drug Advisory Committee meeting in 2004 discussing the safety of COX-2 inhibitors (including rofecoxib), it was shown that 10 of the 32 voting panel members had financial ties to manufacturers of COX-2 inhibitors, including receipt of speaking or consulting fees or research support. Had the 10 members with financial conflicts of interest not been allowed to vote, a majority of the panel would have voted to withdraw two of the COX-2 inhibitors from the market. With their votes included, however, a majority of the panel was in favour of keeping these drugs on the market [963]. Since then the FDA has slightly tightened its conflict of interest policy, so pharmaceutical companies increasingly adopt “after-the-fact compensation”: rewarding influential physicians who voted in favour of the company’s drug with speaking and consulting honoraria or research support after the regulatory agency has come to a decision [964]. I will now leave the drug regulators and turn to academic research and publishing.

Academic Medical Departments, Researchers, and Medical Journals

Fabbri and colleagues conducted a review on the influence of industry sponsorship on the research agenda. Based on the scientific evidence, they concluded “Corporate interests can drive research agendas away from questions that are the most relevant for public health. Strategies to counteract corporate influence on the research agenda are needed, including heightened disclosure of funding sources and conflicts of interest in published articles to allow an assessment of commercial biases. We also recommend policy actions beyond disclosure such as increasing funding for independent research and strict guidelines to regulate the interaction of research institutes with commercial entities” [651]. Testoni and colleagues also conducted an analysis of the health and biomedical sciences and found that bioindustry, in collaboration with a few elite universities, largely sets the research agenda. They concluded, “Overall, the main focus of the prevailing HBMS [health and biomedical sciences] agenda appears to be set on therapeutic and specifically pharmacological intervention involving the use of novel drugs or innovative molecular biology techniques. At the same time, prevention and assessment of socio-environmental factors influencing disease onset are almost absent … A more balanced research agenda, together with epistemological approaches that consider socio-environmental factors associated with disease spreading, could contribute to being better prepared to prevent and treat more diverse pathologies and to improve overall health outcomes” [965]. Likewise, in psychiatry it is well established that the pharmaceutical industry exerts control over the research landscape by supporting research projects centred on its commercially favoured topics (e.g. treatment efficacy instead of drug safety, see [78, 772]). Moreover, industry support has also created a marked power imbalance between different research fields, resulting in a strong bias towards research on psychopharmacology and the neurosciences at the expense of environmental, social, and psychological research [390, 392].

Through funding entire fields of research, the biomedical industry has the power to influence health policy, healthcare provision, and clinical decision making. Many academic medical departments and biomedical research institutes are sponsored, entirely or in part, by the pharmaceutical industry [29, 30, 812]. For instance, the Lundbeck Foundation, owner of Lundbeck Pharmaceuticals, sponsors the professorships of six leading Danish neuroscientists, three at Aarhus University and three at the University of Copenhagen [966]. The Swiss pharmaceutical association Interpharma sponsors a professorship in health economics at the University of Basel [857]. Eli Lilly financially supports the Centre for Addiction and Mental Health, a prestigious psychiatric university hospital in Toronto [812]. The Sackler family, owner of Purdue Pharma, established the Sackler Graduate School of Biomedical Sciences at Tufts University and funded the Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences at Yale University [29, 928]. And so the list goes on… Now, let’s have a closer look at the academic medical departments, the alleged purveyors of research integrity and academic freedom.

Anderson and colleagues examined the academic affiliations of board members of the largest pharmaceutical companies [967]. They found that 94% of the US pharmaceutical companies had at least one board member who concurrently held a leadership position at a US academic medical center. The leadership positions included university presidents, deans, hospital or health system executive officers, and clinical department chairs or center directors. In a subsequent analysis, the authors found that pharmaceutical company directors were affiliated with 19 of the top 20 National Institutes of Health-funded medical schools and all 17 top-ranked US hospitals [968]. Among the 279 academically affiliated pharma directors, 121 were professors, 85 were trustees, and 73 were leaders (e.g. university chief executive officers, university presidents and vice presidents, and deans or presidents of medical schools). Finally, Campbell and colleagues showed that among the department chairs of US medical schools, 60% of respondents had some form of personal relationship with industry, including serving as a consultant (27%), a member of a scientific advisory board (27%), a paid speaker (14%), an officer (7%), a founder (9%), or a member of the board of directors (11%) [969].

Based on these findings it is difficult to conceive of how leading academic medical departments can fully adhere to the principles of both research integrity and academic freedom. In my view these principles are necessarily compromised when academic leaders are bound by contract to increase the profits of a pharmaceutical company and to act in the company shareholders’ best interest. Let me ask a few pertinent questions. Do you think that a pharmaceutical company would tolerate it if one of its directors, in his/her role as chair of an academic medical department, decided to focus on the long-term harms associated with the prescription drugs the company markets? Do you think the company would appreciate it if he/she devoted himself/herself to research on psychosocial interventions as cost-efficient alternatives to the costly drugs his/her company produces? Or do you think the company would be pleased if he/she wanted to specialise in the topics of illegal marketing, unethical drug promotion, and research misconduct, transgressions his/her company has been charged with? Assuming that this academic leader adheres to scientific integrity, that is, objectivity, honesty, and transparency, do you think that his/her research output would be in accord with the vested interests of the pharmaceutical company on whose board of directors he/she serves? If you can’t answer all these questions in good faith with “Yes”, then you understand why contemporary academic medicine is compromised by pervasive corporate bias [29, 59, 380, 812].

The majority of biomedical research, especially clinical trials, is sponsored by the private for-profit industry [656, 970, 971]. In addition, most principal investigators in drug trials have financial ties to the pharmaceutical industry [592]. Ebrahim and colleagues showed that 79% of meta-analyses of antidepressants have financial conflicts of interest, either because they were sponsored by the industry or because the authors were industry employees or had financial ties to the industry [772]. According to a systematic review of randomised placebo-controlled antidepressant trials for depression published between 1980 and 2011, 97% of trials were sponsored by the pharmaceutical industry [194]. The comprehensive meta-analysis of active-controlled (head-to-head) and placebo-controlled antidepressant trials for depression conducted by Cipriani and colleagues found that 78% were funded by the pharmaceutical industry [141]. However, the latter figure is likely an underestimate, for the study sponsor was not declared in all trials [13]. Moreover, even if a drug trial is demonstrably not industry-sponsored, at least one study author commonly has financial relationships with the pharmaceutical industry. Unfortunately, these conflicts of interest are quite often not fully disclosed in journal articles [29, 972, 973]. Thus, there are very few, if any, reports of antidepressant trials entirely free of financial conflicts of interest, even when the trials were governmentally funded. To illustrate how pervasive industry relationships are even in governmentally funded trials, I will present two of the arguably most important non-industry-sponsored antidepressant trials below.

STAR*D was the largest and, at a cost of $35 million, the most expensive antidepressant trial ever conducted. It was sponsored by the NIMH, lending it credibility as a governmentally funded, independent trial unaffected by industry’s commercial interests. However, this is far from the truth. Even though STAR*D was not sponsored by the pharmaceutical industry, financial conflicts of interest were pervasive, for 9 of the 12 authors listed on the main publication had extensive ties to the pharmaceutical industry, including speaker, advisory board, and consultancy engagements, receipt of research grants, and even equity holdings [974]. Shown below is the conflict of interest statement. You will easily notice that, in close competition with the conflict of interest statement in the APA depression practice guideline pasted farther below, this is the longest paragraph of the entire book:

“Dr. Rush has served as an advisor, consultant, or speaker for or received research support from Advanced Neuromodulation Systems, Inc.; Best Practice Project Management, Inc.; Bristol-Myers Squibb Company; Cyberonics, Inc.; Eli Lilly & Company; Forest Pharmaceuticals, Inc.; Gerson Lehman Group; GlaxoSmithKline; Healthcare Technology Systems, Inc.; Jazz Pharmaceuticals; Merck & Co., Inc.; the National Institute of Mental Health; Neuronetics; Ono Pharmaceutical; Organon USA Inc.; Personality Disorder Research Corp.; Pfizer Inc.; the Robert Wood Johnson Foundation; the Stanley Medical Research Institute; the Urban Institute; and Wyeth-Ayerst Laboratories Inc. He has equity holdings in Pfizer Inc and receives royalty/patent income from Guilford Publications and Healthcare Technology Systems, Inc. Dr. Trivedi has served as an advisor, consultant, or speaker for or received research support from Abbott Laboratories, Inc.; Akzo (Organon Pharmaceuticals Inc.); Bayer; Bristol-Myers Squibb Company; Cephalon, Inc.; Corcept Therapeutics, Inc.; Cyberonics, Inc.; Eli Lilly & Company; Forest Pharmaceuticals; GlaxoSmithKline; Janssen Pharmaceutica; Johnson & Johnson PRD; Meade Johnson; the National Institute of Mental Health; the National Alliance for Research in Schizophrenia and Depression; Novartis; Parke-Davis Pharmaceuticals, Inc.; Pfizer Inc; Pharmacia & Upjohn; Predix Pharmaceuticals; Sepracor; Solvay Pharmaceuticals, Inc.; and Wyeth-Ayerst Laboratories. Dr. Wisniewski has received research support from the National Institute of Mental Health and served as an advisor/consultant for Cyberonics, Inc. Dr. Nierenberg has served as an advisor, consultant, or speaker for or received research support from Bristol-Myers Squibb Company; Cederroth; Cyberonics, Inc.; Eli Lilly & Company; Forest Pharmaceuticals Inc.; Genaissance; GlaxoSmithKline; Innapharma; Janssen Pharmaceutica; Lichtwer Pharma; the National Institute of Mental Health; the National Alliance for Research in Schizophrenia and Depression; Neuronetics; Organon, Inc.; Pfizer Inc; Sepracor; Shire; Stanley Foundation; and Wyeth-Ayerst Laboratories. Dr. Stewart has served as an advisor, consultant, or speaker for or received research support from Eli Lilly & Company; GlaxoSmithKline; Organon USA Inc.; Shire; and Somerset. Dr. Warden has received research support from the National Institute of Mental Health and has equity holdings in Bristol-Myers Squibb Company and Pfizer, Inc. Dr. Thase has served as an advisor, consultant, or speaker for AstraZeneca; Bristol-Myers Squibb Company; Cephalon, Inc.; Cyberonics, Inc.; Eli Lilly & Company; Forest Laboratories, Inc.; GlaxoSmithKline; Janssen Pharmaceutica; Eli Lilly & Company; Novartis; Organon, Inc.; Pfizer Pharmaceutical; Sanofi Aventis; Sepracor, Inc.; Shire US Inc.; and Wyeth Pharmaceuticals. Dr. Lavori has served as an advisor, consultant, or speaker for or received research support from Bristol-Myers Squibb Company; Celera Diagnostics Inc; Cyberonics, Inc.; the Department of Veterans Affairs; Forest Pharmaceuticals, Inc.; GlaxoSmithKline; Leaf Cabrezer Hyman and Bernstein; the National Institutes of Health; and Neuronetics, Inc. Dr. 
McGrath has served as an advisor, consultant, or speaker for or received research support from Eli Lilly & Company; GlaxoSmithKline; Lipha Pharmaceuticals; the National Institute of Mental Health; the National Institute on Alcohol Abuse and Alcoholism; New York State Department of Mental Hygiene; Organon, Inc.; Research Foundation for Mental Hygiene (New York State); and Somerset Pharmaceuticals. Dr. Rosenbaum has served as an advisor, consultant, or speaker for or received research support from Astra-Zeneca; Boehringer-Ingelheim; Bristol-Myers Squibb Company; Cephalon; Compellis; Cyberonics; EPIX; Forest; GlaxoSmithKline; Janssen; Lilly; MedAvante; Neuronetics; Novartis; Orexigen; Organon; Pfizer, Inc; Roche Diagnostics; Sanofi; Schwartz; Somaxon; Somerset; Sepracor; Shire; Supernus; and Wyeth. He has equity holdings in Compellis, Medavante, and Somaxon. Dr. Sackeim has served as an advisor, consultant, or speaker for or received research support from Cyberonics, Inc.; Eli Lilly & Company; Magstim Ltd.; MECTA Corporation; Neurocrine Biosciences Inc.; Neuronetics Inc.; NeuroPace Inc.; and Pfizer Inc. Dr. Kupfer has served as an advisor, consultant, or speaker for or received research support from Amersham; the Commonwealth of Pennsylvania; Corcept Corporated; Eli Lilly & Company; F. Hoffmann-La Roche Ltd.; Forest Pharmaceuticals; Lundbeck; the National Institute of Mental Health; Novartis; Pfizer, Inc; Servier Amerique; and Solvay/Wyeth. He has equity holdings in Body Media and Med Avante and receives royalty income from Oxford University Press. Dr. Fava has served as an advisor, consultant, or speaker for or received research support from Abbott Laboratories; Alkermes; Aspect Medical Systems; Astra-Zeneca; Bayer AG; Biovail Pharmaceuticals, Inc.; BrainCells, Inc.; Bristol-Myers Squibb Company; Cephalon; Compellis; Cypress Pharmaceuticals; Dov Pharmaceuticals; Eli Lilly & Company; EPIX Pharmaceuticals; Fabre-Kramer Pharmaceuticals, Inc.; Forest Pharmaceuticals Inc.; GlaxoSmithKline; Grunenthal GmBH; J & J Pharmaceuticals; Janssen Pharmaceutica; Jazz Pharmaceuticals; Knoll Pharmaceutical Company; Lichtwer Pharma GmbH; Lorex Pharmaceuticals; Lundbeck; MedAvante, Inc.; Novartis; Nutrition 21; Organon Inc.; PamLab, LLC; Pfizer, Inc; PharmaStar; Pharmavite; Roche; Sanofi/Synthelabo; Sepracor; Solvay Pharmaceuticals, Inc.; Somerset Pharmaceuticals; and Wyeth-Ayerst Laboratories. He has equity holdings in Compellis and MedAvante. Dr. Niederehe, Dr. Lebowitz, and Mr. Luther report no competing interests”. [974]

The second example is the TADS trial, already discussed in the chapter “Flaws in antidepressant research”, which was also sponsored by the NIMH. This placebo-controlled trial assessed the efficacy of fluoxetine and cognitive-behavioural therapy alone and in combination in adolescents with depression [782]. The financial disclosures in the main article are shown below, and you will notice that many authors had financial relationships with Eli Lilly, the manufacturer of fluoxetine, including speaker, consultancy, and research payments:

“Dr March has served on the speaker’s bureau for Pfizer and Lilly and has received research support from Lilly, Pfizer, and Wyeth. Dr Findling has received research support from Bristol-Myers Squibb, Forest, GlaxoSmithKline, Lilly, Nature’s Herbs, Organon, Pfizer, Solvay, Somerset, and Wyeth; been a consultant for Bristol-Myers Squibb, Forest, GlaxoSmithKline, Lilly, Pfizer, Somerset, and Wyeth; and served on the speaker’s bureau for Bristol-Myers Squibb, GlaxoSmithKline, Lilly, and Wyeth. Dr Waslick has received research support from Lilly. Dr Walkup has received research support and honoraria from Lilly. Dr Kastelic has received honoraria from Pfizer. Dr Kratochvil has received reseach support from Lilly, Forest, and GlaxoSmithKline; been a consultant for Lilly; and served on the speaker’s bureau for Lilly. Dr Harrington has received research support from Lilly, Pfizer, and Astra-Zeneca. Dr Leventhal has been a consultant, received research support, and served on the speaker’s bureau for Lilly. Dr Emslie has received research support from Lilly, Organon, and RepliGen; been a consultant for Lilly, GlaxoSmithKline, Forest Laboratories, Pfizer, and Wyeth-Ayerst; and served on the speaker’s bureau for McNeil”. [782]

Now of course you may argue that I cherry-picked a few extreme examples that confirm my argument. But that’s not the case. Extensive financial ties to industry are the rule rather than the exception among leading psychiatric academics. We simply didn’t know (or recognise) this for too long. But when reporting of financial conflicts of interest became mandatory in most medical journals in the early 2000s, it was revealed how pervasive financial conflicts of interest are among academic physicians. In 2000, Dr. Angell, then editor-in-chief of the New England Journal of Medicine, had a very hard time finding a psychiatric academic without financial relationships to the pharmaceutical industry to write an editorial on an antidepressant trial published in the journal. “But as we spoke with research psychiatrists about writing an editorial on the treatment of depression, we found very few who did not have financial ties to drug companies that make antidepressants”, she noted in consternation in an article titled “Is academic medicine for sale?” [59].

An analysis of conflict of interest disclosure forms filed by 273 speakers at the APA’s annual meeting in 2008 collectively told of 888 consulting contracts and 483 contracts to serve on speakers’ bureaus between academic psychiatrists and pharmaceutical companies [30]. Although disclosures of conflicts of interest at scientific conferences and in scientific publications give a first impression of how pervasive the financial relationships between academic medicine and industry are, they don’t show the whole picture. The monetary amount of industry payments is not stated in such declarations, and, as detailed above, many academics do not (fully) disclose their ties to industry [972, 973]. The true extent (and amount) of physicians’ financial ties to industry became fully known only when the US Physician Payments Sunshine Act, introduced by Senator Charles Grassley in 2007, was enacted in 2010. The Sunshine Act led to public databases that list all industry payments to US physicians (see the Open Payments Database and Dollars for Docs). Although certainly a major breakthrough in a long quest for more transparency in medicine, the new legislation was strongly opposed by various stakeholders [975]. Probably for good, self-serving reasons: the Sunshine Act revealed not only how extensive the financial relationships of many leading academics are, but also how pervasive concealment and non-disclosure of industry payments is among academics, both in general medicine and in psychiatry. Let’s look at some disturbing findings.

Norris and colleagues assessed all US physicians who, according to the public industry-payments database Dollars for Docs, received more than US$100,000 from industry in the period 2009 to 2010. In total, 373 US physicians had received more than $100,000 from the industry in that period, of which 117 (31%) were psychiatrists (a disproportionately large share). Of these 373 physicians receiving large industry payments, 147 had published at least one scientific article between January 2009 and March 2011, with an average of 8 publications per physician. Only 77% of these physicians provided a conflict of interest disclosure in their articles; 23% did not. Even worse, among publications with a disclosure, 41% falsely reported that the physician had no financial conflicts of interest to disclose, and in 28% of publications a conflict other than the payments listed in the Dollars for Docs database was disclosed. Thus, in merely 31% of all publications with a conflict of interest statement were the industry relationships correctly disclosed [976].

Things have not changed since 2009/2010. In a more recent study, Tau and colleagues [973] compared the financial relationships listed in the Open Payments Database with those disclosed by US-based academic physicians who were lead authors of clinical drug trials published between 2016 and 2018 in three leading medical journals. Altogether 85% of lead authors had received general (i.e. personal) payments from the industry (excluding research support), and the median annual sum received was $62,472. Only 5% of authors disclosed all financial relationships reported in the Open Payments Database, 60% disclosed only part of the reported payments, and 20% disclosed none of the received payments. Moreover, in 8% of industry-sponsored trials, the lead authors had not disclosed personal payments from the study sponsor, which is a grave violation of publication ethics and a form of scientific misconduct [852]. The study authors thus concluded: “These findings could raise concerns about the authors’ equipoise toward the trial results and influence the public perception of the credibility of reported data” [973].

The Sunshine Act also unveiled various instances of concealment and serious underreporting of industry payments among leading US psychiatry professors. For instance, in the early 2000s, Dr. Melissa DelBello was professor of psychiatry at the University of Cincinnati and lead author of an influential trial of quetiapine (an atypical antipsychotic) in adolescents with bipolar disorder sponsored by AstraZeneca (the manufacturer of quetiapine). When asked by a journalist how much industry funding she received, she responded, “Trust me, I don’t make much” [933]. However, to her employer (the University of Cincinnati) she had disclosed $100,000 from AstraZeneca. Perhaps she indeed considered this substantial sum small in comparison to what her colleagues typically receive, which would not be reassuring. But it gets even worse, for AstraZeneca reported paying her $238,000, that is, more than double the amount Dr. DelBello had declared to her employer.

Or take Dr. Joseph Biederman, who, in the early 2000s, was an exceptionally prominent professor of psychiatry at Harvard University. He played a leading role in establishing both the diagnosis of paediatric bipolar disorder and aggressive antipsychotic treatment in children with this diagnosis. He had received research support from 15 different pharmaceutical companies and served as speaker and adviser to 7 of them, including Eli Lilly and Janssen Pharmaceuticals (a subsidiary of Johnson & Johnson), which produce the blockbuster antipsychotics olanzapine (Zyprexa) and risperidone (Risperdal). He was also director of the Johnson & Johnson Center for Pediatric Psychopathology Research at Massachusetts General Hospital and lead author of various industry-sponsored drug trials in children and adolescents [977]. When asked by a journalist how much he received from industry, he refused to say. But a Congressional investigation then revealed that he had failed to disclose to his employer (Harvard University) large payments he had received from various pharmaceutical companies [933]. From 2000 to 2007, Dr. Biederman earned at least $1.6 million in consulting fees from various drug makers but reported only $200,000 of this income to Harvard University. Surely such large-scale deception is shocking and casts a dark shadow on his character. But how about this: in a deposition between Dr. Biederman and lawyers for the states, he was asked what rank he held at Harvard. “Full professor”, he answered. “What’s after that?” asked a lawyer. “God”, Dr. Biederman responded [977]. I leave this uncommented.

And then there is the infamous case of Dr. Charles Nemeroff, who in the early 2000s was professor and chair of the psychiatry department at Emory University. He had extensive financial ties to multiple pharmaceutical companies and was one of the world’s most influential psychiatrists (see also chapter “The transformation of depression”). The magazine The Economics of Neuroscience had Dr. Nemeroff on the cover of its September 2000 issue, designating him the “Boss of Bosses” and asking in the headline “Is the Brash and Controversial Charles Nemeroff the Most Powerful Man in Psychiatry?” [9]. Anyway, from 2000 to 2007, Dr. Nemeroff earned more than $2.8 million in consulting fees from various drug makers and failed to report at least $1.2 million of that income to his university. According to a Congressional investigation, he also violated federal research rules. For instance, Dr. Nemeroff signed a letter dated July 15, 2004, promising Emory University administrators that he would earn less than $10,000 a year from GlaxoSmithKline, in order to comply with federal rules governing his role as principal investigator on NIH research projects investigating GlaxoSmithKline’s antidepressants. But in that year alone, he had actually earned $170,000 in income from GlaxoSmithKline [978]. From 2004 to 2008, while receiving NIH grants to study GlaxoSmithKline’s antidepressants, Dr. Nemeroff accepted and failed to report at least $500,000 in fees and expenses from GlaxoSmithKline [933].

If you think that this systematic concealment and nondisclosure of industry payments had negative consequences for these key players, then you might be surprised to hear that they are still prominent professors and chairs of psychiatry departments. Drs. DelBello and Biederman remained in their positions at the University of Cincinnati and Harvard University, respectively, as if nothing had happened. Dr. Nemeroff was prohibited by Emory University from applying for NIH grants for two years, and in 2009 he resigned from Emory University. However, just one month later, aided by the then-director of the NIMH, Dr. Insel (a good friend of Dr. Nemeroff), and with Dr. Insel’s guarantee that he could freely apply for NIH grants, Dr. Nemeroff became chair of psychiatry at the University of Miami [933]. So, all’s well that ends well for Dr. Nemeroff.

But leaving the deception of universities and federal research funders aside, do industry payments to academics influence their work? That is, do industry payments to medical academics bias the scientific evidence in favour of the industry? Yes, they most certainly do. As detailed by Antonuccio and colleagues, “Company-sponsored experts, whether they are researchers or educators, are by definition company employees. They will be retained only if they offer consistently favorable treatment to the company’s products” [24]. This view has been endorsed by various other experts (see, for instance, [29, 405, 406]). Most importantly, this is not just some controversial assumption; it is a conclusion strongly supported by the scientific literature. There is compelling evidence demonstrating that authors with financial conflicts of interest report more industry-favourable findings and conclusions than authors without ties to industry [592, 794, 799, 944, 946–948, 963]. Put differently, authors with financial conflicts of interest systematically overstate treatment benefits and minimise harms. This should come as no surprise, as you certainly don’t bite the hand that feeds you. And what holds true for individual academics probably also holds true for the publishers of medical journals and journal editors [28, 29, 979]. So let’s have a look.

Medical journals, and the organisations that publish (or own) them, make a substantial proportion of their revenues from drug advertisements [979, 980]. For instance, in 1997, the Massachusetts Medical Society, owner of the leading New England Journal of Medicine, made 21.3% of its annual total revenue from drug advertisements in its main journal. The American Medical Association made 10.4% of total revenue from drug advertisements in its top-tier Journal of the American Medical Association (JAMA), and the American College of Physicians earned 12.9% of its annual total revenue from drug advertisements in its top-tier journal Annals of Internal Medicine [981]. The pharmaceutical companies thus can exert pressure on medical journals (and have already done so), especially when journals publish articles critical of the effectiveness of blockbuster drugs, knowing that the owners of these journals depend to some extent on their advertisements and on other profitable avenues such as industry-sponsored journal supplements [29, 979, 980].

But there is another important factor that makes some medical journals financially dependent on the pharmaceutical industry. Drug companies commonly order large quantities of expensive reprints of articles when a trial yields favourable results that the company can efficiently promote to physicians to increase prescribing of its drug. According to Dr. Smith, former editor of the British Medical Journal (BMJ), Merck bought 900,000 reprints of an article about the effectiveness of rofecoxib (Vioxx) published in the New England Journal of Medicine, at a cost estimated to be between $700,000 and $836,000, to promote the drug [982]. A comprehensive analysis of six top-tier medical journals by Lundh and colleagues showed that industry-sponsored trials were more frequently cited and thus contributed significantly to the journals’ high impact factors. In addition, income from the sales of article reprints contributed 3% and 41% of the total income of the BMJ and the Lancet, respectively, in 2005–2006 [983]. These findings confirm that some leading medical journals, and by consequence their publishers, strongly depend on the pharmaceutical industry. Influential industry-sponsored trials not only increase the journals’ impact factors, they also guarantee the owners of the journals substantial revenues. But the ties between medical journals and the pharmaceutical industry don’t end here.

A final important source of conflicts of interest in medical journals is the financial relationships of journal editors with the industry. Liu and colleagues [984] reported that 51% of editors of influential US medical journals had received general payments in 2014 (including fees for consulting, speaking, travel, lodging, and consumption) from US pharmaceutical and medical device manufacturers. The mean general payment per editor in that year was $27,564. The five largest general payments to individual editors in the calendar year 2014 were $11.0 million, $1.3 million, $554,162, $355,923, and $325,860. The authors also found large differences between general medicine and the various specialties. In high-impact general medicine journals, the mean general payment to journal editors was $3899. In cardiology, the mean general payment per editor was $225,556, in orthopaedics it was $92,828, and in endocrinology $63,612. By contrast, in family medicine it was $690, in paediatrics $397, in surgery $246, in general internal medicine $54, and in pathology only $11. With a mean of $4371 in general industry payments, editors of psychiatry journals ranked somewhere in the middle range. In addition, editors received a mean of $37,330 each in research support, but since research payments don’t count as direct personal income, they were not studied in detail.

An analysis for the year 2015 confirmed these findings [985]. The study found that 46% of editors of influential US medical journals had personal financial ties to the US pharmaceutical and device industry. The median number of general payments per editor was 9 and the median amount of total payments received was $4364. Consulting fees contributed most to the total amount of general payments received. Among US journal editors with industry relationships, 48% received payments of more than $5000 in that year, and altogether 38% made more than $10,000 from consulting fees alone. In sum, about half of US journal editors make a personal income from industry payments, and in about half of these cases the payments are substantial, that is, larger than $5000 a year. A few editors even make a tremendous income from general industry payments, tens of thousands of dollars, mostly from consulting fees. Therefore, various experts have requested that journals disclose the (financial) conflicts of interest of their editors, as is mandatory for authors, so that readers can appraise how strongly journal editors are tied to the industry [980, 986].

Medical Organisations, Medical Education, and Clinical Practice

Just as many academic departments and academic physicians have financial ties to industry, sometimes multiple and very strong ones, so do medical organisations. In 2015, UK professional healthcare organisations received 2189 payments worth a total of $12.5 million from the pharmaceutical and device industry. These payments were mostly contributions to the costs of events (67.6%) and donations and grants (29.7%) [987]. The leaders of medical organisations likewise have strong financial relationships with the industry. Moynihan and colleagues [988] analysed the financial relationships of leaders of US medical associations from 2017 to 2019 and found that, overall, 72% of these leaders had financial ties to the pharmaceutical and device industry (among leaders with a medical degree, the rate was as high as 80%). The median amount of industry payments among leaders was $31,805 for the period 2017–2019. The authors further found large differences between specialties. While the rate of leaders with financial ties to industry was 93% for both the Infectious Diseases Society of America and the Orthopaedic Trauma Association, it was 61% for the American College of Physicians and “only” 37% for the American Psychiatric Association (APA). But let’s look a bit closer at the latter organisation.

As detailed throughout this book, the APA has, quite understandably, frequently and intensively collaborated with the pharmaceutical industry (given that drugs are the centrepiece of psychiatric research and practice). The pharmaceutical companies are also omnipresent at the APA’s annual meetings, where they promote their products in various sponsored symposia and huge exhibition halls [394, 399]. Alarmingly, these drug promotions are quite often in violation of APA or FDA rules, for example due to promotion of off-label prescribing [989]. But for the APA, the partnership with the industry at its annual meetings is highly profitable. With the pharmaceutical companies paying for symposia and exhibition booths and funding social activities, meeting revenues rose from $1 million in 1980 to $3.1 million in 1990 and $11.3 million in 2000, reaching $16.9 million in 2004 and producing a net profit of $9.8 million for the APA in that latter year [30]. In 2000 and 2006, altogether 29% of the APA’s annual revenues came from industry, while in 2008 that percentage dropped slightly to 21%. The APA also has two affiliates, the American Psychiatric Foundation and the American Psychiatric Institute for Research and Education, both of which are heavily supported financially by the pharmaceutical industry [30].

The arguably most influential achievement of the APA is its diagnostic manual, the DSM, which defines which behaviours and feelings are considered pathological and thus in need of treatment. Given that the pharmaceutical industry can legally promote psychiatric drugs only for approved indications as set out by DSM diagnoses, the manual has had, and still has, a strong impact on the prescribing of drugs. Lowering the diagnostic threshold of a given disorder and/or introducing new disorders and diagnostic labels can massively broaden the market for psychiatric drugs and provide opportunities to expand lucrative drug patents [63, 67, 414, 429, 937, 990]. Understandably, the pharmaceutical industry has a huge interest in what the DSM defines as a mental disorder and how.

Although the pharmaceutical industry does not directly fund the DSM, drug company influence is pervasive in the manual [991]. Altogether 57% of the DSM-IV task force members had financial ties to pharmaceutical companies. In the DSM-5, that rate rose even further, to 69%. Financial ties to industry are also the norm among the diagnostic work group members, that is, the experts responsible for the revision of disorder categories and the inclusion of new disorders within a diagnostic category. For instance, 100% of the DSM-IV mood disorder work group members had financial relationships with industry. In the DSM-IV anxiety disorders work group, the rate was 81%, and in the DSM-IV schizophrenia and other psychotic disorders work group the rate was 100%. The rates for the corresponding DSM-5 work groups were 67%, 57%, and 83%. Thus, the proportion of work group members with financial ties to industry dropped slightly from DSM-IV to DSM-5, but the rates remained substantial. Overall, three-fourths of the DSM-5 work groups had a majority of members with financial ties to the pharmaceutical industry [991].

Other highly influential APA publications are its treatment guidelines. Cosgrove and colleagues analysed three major APA clinical practice guidelines applicable in 2008: the schizophrenia, bipolar disorder, and major depressive disorder guidelines. Altogether 90% of guideline authors had a financial relationship with at least one pharmaceutical company. In the bipolar disorder and schizophrenia guidelines, the rate was 100% each, whereas in the major depressive disorder guideline it was 60%. Strikingly, none of the authors’ financial conflicts of interest were disclosed in the clinical practice guidelines. Among the authors with industry relationships, most received research funding (78%) and consultancy fees (72%). Altogether 17% of guideline authors with industry relationships also held equity in a drug company that manufactured the drugs identified in the practice guidelines [992].

In a subsequent study, Cosgrove and colleagues analysed the financial conflicts of interest in 14 major depressive disorder clinical practice guidelines, including the APA’s most recent edition. In 6 guidelines (43%) no author had financial ties to the industry, in 5 guidelines (36%) a minority of authors had industry relationships, and in 3 guidelines (21%) a majority of authors had industry relationships. The latter category included the APA practice guideline, all 6 authors of which had multiple ties to the pharmaceutical industry. In accordance with the scientific evidence, 9 of the 14 guidelines did not recommend antidepressants as first-line treatment in mild depression, but 5 did (among them the APA guideline). Most importantly, while 4 of the 5 guidelines (80%) recommending antidepressants as first-line treatment in mild depression had significant financial conflicts of interest (defined, according to the Institute of Medicine, as a conflicted majority of work group members or a conflicted chair), only 3 of the 9 guidelines (33%) not making such a recommendation had significant financial conflicts of interest [993]. Thus, as consistently demonstrated in the literature detailed above, authors with financial ties to industry more often draw conclusions and make recommendations that favour the pharmaceutical companies’ commercial interests [592, 799, 944, 946–948].

But how extensive are the financial conflicts of interest in the APA depression practice guideline? Below you see the conflicts of interest disclosure of the six authors of the current APA depression practice guideline, Drs. Alan Gelenberg (chair), Marlene Freeman, John Markowitz, Jerrold Rosenbaum, Michael Thase, and Madhukar Trivedi:

“The Work Group on Major Depressive Disorder reports the following potentially competing interests for the period from May 2005 to May 2010: Dr. Gelenberg reports consulting for Eli Lilly and Company, Pfizer, Best Practice, AstraZeneca, Wyeth, Cyberonics, Novartis, Forest Pharmaceuticals, Inc., GlaxoSmithKline, ZARS Pharma, Jazz Pharmaceuticals, Lundbeck, Takeda Pharmaceuticals North America, Inc., eResearch Technology, Dey Pharma, PGxHealth, and Myriad Genetics. He reports serving on speakers bureaus for Pfizer, GlaxoSmithKline, and Wyeth. He reports receiving research grant funding from Eli Lilly and Company, Pfizer, and GlaxoSmithKline. He reports stock ownership in Healthcare Technology Systems. Dr. Freeman reports that she received research support from the Meadows Foundation, the National Institute for Mental Health, the U.S. Food and Drug Administration, the Institute for Mental Health Research, Forest, GlaxoSmithKline and Eli Lilly and Company (investigator initiated trials), and Pronova Biocare (research materials). She received an honorarium for case-based peer-reviewed material for AstraZeneca’s website. She reports consulting for Ther-Rx, Reliant, and Pamlab. She reports receiving an honorarium for speaking at an APA continuing medical education program that was sponsored by Forest and an honorarium for speaking at a continuing medication education program sponsored by KV Pharmaceuticals. She reports receiving an honorarium from Leerink Swann for participating in a focus group. Dr. Markowitz reports consulting for Ono Pharmaceutical Co., Ltd. (2005). He reports receiving research support from Forest Pharmaceuticals, Inc. (2005). He reports receiving grant support from the National Institute of Mental Health (2005–2013), the National Alliance for Research in Schizophrenia and Depression (2005), and MINT: Mental Health Initiative (2005). He reports receiving royalties from American Psychiatric Publishing, Inc. (2005–2010), Basic Books (2005–2010), Elsevier (2005–2010), and Oxford University Press (2007–2010). Dr. Rosenbaum reports attending advisory boards for Bristol-Myers Squibb, Cephalon, Cyberonics, Forest Pharmaceuticals, Inc., Eli Lilly and Company, MedAvante, Neuronetics, Inc., Novartis, Orexigen Therapeutics, Inc., Organon BioSciences, Pfizer, Roche Diagnostics, Sanofiaventis, Shire, and Wyeth. He reports consulting for Auspex Pharmaceuticals, Compellis Pharmaceuticals, EPIX Pharmaceuticals, Neuronetics, Inc., Organon BioSciences, Somaxon, and Supernus Pharmaceuticals, Inc. He reports receiving honoraria from lectureships for Boehringer Ingleheim, Bristol-Myers Squibb, Cyberonics, Forest Pharmaceuticals, Inc., Eli Lilly and Company, and Schwartz Pharma. He was involved in the creation of the Massachusetts General Hospital Psychiatry Academy (MGH-PA) and has served as a panelist in four satellite broadcast programs. MGH-PA programs that have industry support are always multi-sponsored, and curriculum development by the Academy is independent of sponsorship; the curricula from January 2005 to March 2009 included sponsorship support from AstraZeneca, Bristol-Myers Squibb, Cephalon, Eli Lilly and Company, Forest Pharmaceuticals, Inc., GlaxoSmithKline, Janssen Medical Affairs LLC, Ortho-McNeil Pharmaceutical, sanofiaventis, Shire, and Wyeth. He reports equity holdings in Compellis Pharmaceuticals, MedAvante, and Somaxon. Dr. 
Thase reports that he provided scientific consultation to AstraZeneca, Bristol-Myers Squibb, Eli Lilly & Company, Forest Pharmaceuticals, Inc., Gerson Lehman Group, GlaxoSmithKline, Guidepoint Global, H. Lundbeck A/S, MedAvante, Inc., Neuronetics, Inc., Novartis, Otsuka, Ortho-McNeil Pharmaceuticals, PamLab, L.L.C., Pfizer (formerly Wyeth-Ayerst Laboratories), Schering-Plough (formerly Organon), Shire U.S., Inc., Supernus Pharmaceuticals, Takeda (Lundbeck), and Transcept Pharmaceuticals. He was a member of the speakers bureaus for AstraZeneca, Bristol-Myers Squibb, Eli Lilly and Company, GlaxoSmithKline, Pfizer (formerly Wyeth-Ayerst Laboratories), and Schering-Plough (formerly Organon). He received grant funding from Eli Lilly and Company, GlaxoSmithKline, the National Institute of Mental Health, the Agency for Healthcare Research and Quality, and Sepracor, Inc. He had equity holdings in MedAvante, Inc., and received royalty income from American Psychiatric Publishing, Inc., Guilford Publications, Herald House, Oxford University Press, and W.W. Norton and Company. His wife was employed as the group scientific director for Embryon (formerly Advogent), which does business with Bristol-Myers Squibb and Pfizer/Wyeth. Dr. Trivedi reports that he was a consultant to or on speaker bureaus for Abbott Laboratories, Inc., Abdi Ibrahim, Akzo (Organon Pharmaceuticals, Inc.), AstraZeneca, Bristol-Myers Squibb Company, Cephalon, Inc., Cyberonics, Inc., Eli Lilly and Company, Evotec, Fabre Kramer Pharmaceuticals, Inc., Forest Pharmaceuticals, GlaxoSmithKline, Janssen Pharmaceutica Products, L.P., Johnson & Johnson P.R.D., Meade-Johnson, Medtronic, Neuronetics, Otsuka Pharmaceuticals, Parke-Davis Pharmaceuticals, Inc., Pfizer, Inc., Sepracor, Shire Development, Solvay Pharmaceuticals, VantagePoint, and Wyeth-Ayerst Laboratories. He received research support from the Agency for Healthcare Research and Quality, Corcept Therapeutics, Inc., Cyberonics, Inc., Merck, National Alliance for Research in Schizophrenia and Depression, National Institute of Mental Health, National Institute on Drug Abuse, Novartis, Pharmacia & Upjohn, Predix Pharmaceuticals (Epix), Solvay Pharmaceuticals, Inc., and Targacept”. [231]

The massive number of industry relationships among the authors of the APA major depressive disorder clinical practice guideline is by no means an exception. A systematic review found that the majority of clinical practice guidelines had authors with industry affiliations, including consultancies (proportion of authors with such a relationship ranging from 6% to 80% across guidelines), research support (4–78%), equity/stock ownership (2–17%), or any financial conflict of interest (56–87%) [994]. In a seminal study by Choudhry and colleagues from 2002, 192 authors of 44 North American and European clinical practice guidelines on various conditions were surveyed. Altogether 87% of authors had financial relationships with the pharmaceutical industry. On average, the guideline authors had ties to 11 different companies. Yet in the vast majority of cases these financial conflicts of interest were not disclosed in the guidelines. Moreover, while only 7% of the guideline authors thought that their own relationship with the pharmaceutical industry influenced the recommendations they made, 19% of authors (i.e. more than double) thought that their co-authors’ recommendations were influenced by industry relationships [995].

A more recent analysis of 114 clinical practice guidelines from various countries by Kung and colleagues showed that the scientific quality of most clinical practice guidelines is poor [996]. Fewer than half of the guidelines surveyed met more than 50% of the Institute of Medicine quality standards. For instance, scientific evidence supporting the recommendations was lacking in 35% of guidelines, 76% failed to include an information scientist, and a formal rating of the quality of evidence was missing in 24%. Moreover, financial conflicts of interest were pervasive (71% of chairs and 91% of co-chairpersons had industry relationships) and often not disclosed. Thus, even when guidelines contain author conflict of interest disclosures, these are all too often incomplete. A recent analysis of 18 clinical practice guidelines providing recommendations for 10 high-revenue medications found that 26% of guideline authors who received payments from industry did not fully disclose these payments. Altogether 7.5% of authors declared no financial conflicts of interest but were found to have industry relationships [997]. In another analysis, of North American practice guidelines for hyperlipidaemia or diabetes, it was shown that among guideline authors who formally declared no conflicts of interest, 11% had one or more industry relationships [998]. Financial relationships between the medical organisations that produce the guidelines and the industry are also very common but rarely declared in the guidelines [999, 1000].

Bindslev and colleagues examined 45 guidelines from 14 Danish specialty societies published between July 2010 and March 2012 and found that 96% of guidelines had one or more authors with a conflict of interest. Of 254 guideline authors, 53% had a conflict of interest. The most common conflicts of interest were being a consultant, an advisory board member, or a company employee. Disturbingly, only one guideline (2%) disclosed author conflicts of interest, and the quality of the guidelines was generally poor [1001]. The situation is no better in the depression domain. Based on their evaluation of 11 clinical practice guidelines for major depressive disorder, Zafra-Tanaka and colleagues concluded: “Most of evaluated CPGs [clinical practice guidelines] did not take into account the patient’s viewpoints, achieved a low score in the rigor of development domain, and did not clearly state the process used to reach the recommendations” [1002]. Similarly, Bennet and colleagues found that only 4 of 17 (24%) practice guidelines for depression in children and adolescents met minimal quality standards and only 2 (12%) were rated high quality [1003].

Among the few notable exceptions of high quality is the NICE depression guideline, for both adults and youth. It has comparably few conflicts of interest and adheres to high quality standards, including the GRADE approach (Grading of Recommendations Assessment, Development and Evaluation), a risk of bias assessment, and a discussion of the clinical significance of treatment effects [1002, 1003]. But unfortunately, the norm is highly conflicted guidelines of inadequate scientific rigour. A prototypical example of these poor-quality documents is the APA major depressive disorder clinical practice guideline [231]. It is among the many guidelines with massive financial conflicts of interest (see above). It is also of poor scientific quality: the methodology used to reach recommendations and to grade their strength is based on expert consensus (which is problematic in general and especially in view of the authors’ extensive ties to the pharmaceutical industry), a literature search strategy is not mentioned, a list of included studies is not available, and a risk of bias assessment is not provided. Moreover, the APA depression guideline does not evaluate or discuss the clinical significance of treatment effects, and 20% of the references are incongruent with the recommendations [1002, 1004]. But there are even more limitations.

All six authors of the APA depression guideline are professors of psychiatry whose main research interests lie in psychopharmacology and biological psychiatry. General practitioners, who treat the majority of patients with depression, were not included in the panel. Also missing from the panel were methodologists, public health experts, nurses, and patients, as well as non-medical practitioners such as psychologists and social workers. Altogether 9% of all cited research and 13% of the references supporting the recommendations were co-authored by the six guideline authors. Moreover, the independent panel that reviewed the guideline for bias had undisclosed financial ties to pharmaceutical companies that manufacture antidepressants [1004]. Unsurprisingly, the APA depression guideline recommends antidepressants as first-line treatment much more often than other depression guidelines [1002], even for mild depression, where the effectiveness of antidepressants has not been demonstrated. In fact, the APA depression guideline is the only one among the 14 guidelines studied that makes an explicit recommendation for antidepressants in mild depression, paradoxically giving this recommendation the highest level of certainty [1004]. Finally, the paediatric depression guideline of the American Academy of Child and Adolescent Psychiatry [1005] is equally hampered by massive conflicts of interest and poor quality standards [1003].

In sum, although clinical practice guidelines have become very influential in modern evidence-based medicine, they typically are of poor scientific quality and are subject to pervasive (and often undisclosed) conflicts of interest [996, 1000–1003]. Understandably, various experts expressed concern about the proliferation of and adherence to such unreliable practice guidelines [990, 1006, 1007]. As aptly summarised by Shaneyfelt and Centor in a JAMA editorial in 2009,

“The most widely recognized bias is financial. Guidelines often have become marketing tools for device and pharmaceutical manufacturers … Other biases are also important. The specialty composition of a guideline panel likely influences guideline development. Specialty societies can use guidelines to enlarge that specialty’s area of expertise in a competitive medical marketplace. Federal guideline committees may focus on limiting costs; committees influenced by industry are more likely to shape recommendations to accord with industry needs. Guidelines have other limitations. Guidelines are often too narrowly focused on single diseases and are not patient focused. Patients seldom have single diseases, and few if any guidelines help clinicians in managing complexity … Guidelines are not patient-specific enough to be useful and rarely allow for individualization of care. Most guidelines have a one-size-fits-all mentality and do not build flexibility or contextualization into the recommendations … Only when likely biases of industry and specialty societies have been either removed or overcome by countervailing interests can impartial recommendations be achieved … If all that can be produced are biased, minimally applicable consensus statements, perhaps guidelines should be avoided completely. Unless there is evidence of appropriate changes in the guideline process, clinicians and policy makers must reject calls for adherence to guidelines. Physicians would be better off making clinical decisions based on valid primary data”. [1008]

More recently, Dr. Ioannidis, a leading expert in evidence-based medicine, echoed these obvious limitations and biases of clinical practice guidelines:

“Thus, these guidelines writing activities are particularly helpful in promoting the careers of specialists, in building recognizable and sustainable hierarchies of clan power, in boosting the impact factors of specialty journals and in elevating the visibility of the sponsoring organizations and their conferences that massively promote society products to attendees. However, do they improve medicine or do they homogenize biased, collective, and organized ignorance? Well-conducted unbiased guidelines can be useful. However, most published guidelines have one or more red flags that either make them overtly unreliable or should at least raise suspicion among potential users. The list of red flags includes sponsoring by a professional society with substantial industry funding, conflicts of interest for chairs and panel members, stacking, insufficient methodologist involvement, inadequate external review, and noninclusion of nonphysicians, patients, and community members”. [1009]

Above I have detailed how the pharmaceutical industry can influence physicians’ prescribing behaviour by supporting academic departments, senior researchers, medical organisations, and both the clinical practice guidelines and diagnostic manuals they produce. Another powerful avenue for drug companies to influence prescribing is to sponsor continuing medical education and the speakers at these events, and to promote their products directly to physicians through marketing lectures (typically sponsored lunches/dinners with slide presentations) and office visits from pharmaceutical sales representatives [428, 588, 1010–1012]. These are the topics I will now turn to.

To remain current with rapidly changing healthcare practices and medical treatments, physicians regularly attend continuing medical education. The aim of these educational events is to ensure that physicians are up to date with the best practices in modern healthcare. The best practices (or standards of care) should of course be in the interest of patients and the public. Alas, continuing medical education is not exempt from undue influences that serve the commercial interests of the pharmaceutical and device industry rather than the best interests of patients and the public. Corporate bias in continuing medical education is introduced through the industry’s financial support of educational events, of the invited speakers at these events, and of the academic departments, medical societies, or specialised companies organising these events [428, 1013–1015]. Unfortunately, most physicians are not aware of these influences. As recently stressed by Dr. Fugh-Berman, “Although awareness of individual conflicts of interest and ethical problems with physician-industry relationships has increased, few people realise just how much continuing education is used for product promotion” [1012].

Internal industry documents released through litigation or whistleblowing clearly show that pharmaceutical companies misuse educational events to promote their drugs, including for illegal off-label prescribing [29, 1016]. For instance, both Forest Laboratories and GlaxoSmithKline sponsored educational events to promote off-label prescribing of their antidepressant drugs escitalopram and paroxetine, respectively, for unapproved use in adolescent depression [29]. According to the scientific evidence, there can be little doubt that industry funding of educational events introduces bias, resulting in unbalanced assessments of drug treatments that overstate benefits and minimise harms [428, 1011, 1012, 1017]. Key opinion leaders, the top-ranked academic experts on the industry’s payroll, are the preferred speakers at such educational events. However, their role is highly controversial, for they make a substantial personal income from promoting the pharmaceutical companies’ products to physicians and the public [29, 406, 428]. Various authors thus contend that key opinion leaders significantly contribute to the corruption of medicine [1018] and that, all too often, they risk becoming “drug representatives in disguise” [405]. But what about the many practitioners? Do they also have financial relationships with the biomedical industry? And how are they influenced by the industry?

In 2014, 52% of all US physicians received at least one general payment from the industry, for a total value of $1.94 billion (excluding research support). General payments have declined slightly in recent years but remain considerable. In 2018, 45% of all US physicians received at least one general payment, for a total value of $1.82 billion. The median annual payment per physician was $216 and the mean payment was $4606 [1019]. In addition, US pharmaceutical companies spend almost twice as much money on drug promotion as on research and development [1020]. Marketing to physicians accounts for most of the pharmaceutical industry’s promotional spending. From 1997 through 2016, spending on direct-to-physician marketing increased from $15.6 billion to $20.3 billion, of which $5.6 billion was due to pharmaceutical detailing [433]. The latter is a euphemistic term for drug promotion, which often takes place at sponsored lunches or dinners. Physicians also frequently receive visits from pharmaceutical sales representatives who promote their company’s drugs and distribute free drug samples and gifts. Is this problematic? Yes, it is! Research has consistently shown that these promotions are often unbalanced and that risks/harms are rarely mentioned, even for drugs that carry serious safety warnings [588, 1021, 1022]. Perhaps you remember the promotional material GlaxoSmithKline provided to its sales representatives related to its fraudulent study 329, which stated that paroxetine “demonstrates remarkable efficacy and safety in the treatment of adolescent depression”, despite a lack of meaningful benefits and a significantly increased risk of suicidal behaviour [29]. So you certainly get an impression of how unbalanced and biased such drug promotions by sales representatives often are.

Nevertheless, you may object that everybody (including physicians) knows that promotional messages are typically exaggerated, regardless of whether they come from a car manufacturer, a watchmaker, or a pharmaceutical company, and that most physicians are therefore well aware that pharmaceutical sales representatives exaggerate (or misrepresent) the benefit–harm ratio of the drugs they sell. However, the scientific evidence does not consistently support this view. According to the literature, most physicians perceive pharmaceutical sales representatives and industry-sponsored continuing medical education events as important and accurate sources of drug information and scientific knowledge, although views are divided and some physicians hold sceptical attitudes towards industry influence [1010, 1011, 1017, 1023, 1024]. The main issue is that physicians think they can discern biased promotional messages from accurate scientific evidence, but they often fail to do so [1012, 1025, 1026]. As emphasised by Sah and Fugh-Berman, “although physicians believe they can extract objective information from sales pitches, they routinely fail to distinguish between correct and incorrect information provided by sales representatives” [1027].

As I have detailed elsewhere, it goes without saying that most physicians are not intentionally corrupted by gifts and sponsored meals, even though there are certainly a few who have willingly accepted bribes and kickbacks [1028–1030]. Nevertheless, the rare cases of deliberate overprescribing and/or mistreatment in exchange for money or gifts are not the main public health issue. The crucial question is whether all the shiny gifts, the sponsored dinners at fancy locations, the sponsored conference travel and accommodation, the constant interactions with attractive sales representatives delivering catchy marketing messages, and the regular attendance of industry-sponsored continuing medical education events unconsciously influence physicians’ prescribing behaviour. That is, do physicians unintentionally prescribe more drugs, sometimes inappropriately, because of these pervasive pharmaceutical marketing strategies?

Let us first examine whether physicians believe that their prescribing behaviour is influenced by the pharmaceutical industry. This question is easy to answer, for the scientific evidence is very clear and consistent on this point. Only a minority of physicians think that their interactions with the pharmaceutical industry, including the receipt of gifts and payments, have an impact on their own prescribing behaviour, yet they consider their colleagues more susceptible to pharmaceutical marketing than themselves [588, 1010, 1011, 1024]. For instance, in one study, 39% of physicians agreed that industry promotions and contacts influenced their own prescribing behaviour, but 84%, more than double, believed that other physicians were affected [1031]. As you might remember, we saw the same pattern in relation to perceived bias in clinical practice guidelines, where only 7% of authors thought that their own relationships with the pharmaceutical industry influenced their recommendations, but 19%, again more than double, conceded that their co-authors were influenced by industry ties [995]. Of course, it cannot both be true that most physicians are unbiased and that most other physicians are biased. So what is happening here?

This inconsistency (i.e., “I’m not influenced by industry, but my colleagues are”) is most likely the result of cognitive-motivational biases and suggests that individual physicians considerably underestimate the influence the industry has over their own prescribing behaviour [1027, 1032]. If this is true, then there should be strong and consistent evidence of an association between industry relationships (e.g. gift receipt, contact with pharmaceutical sales representatives) and prescribing behaviour that serves the industry’s commercial interests (i.e. more prescriptions and a preference for costly, patented drugs). But before I detail the scientific evidence, let us briefly recapitulate that pharmaceutical companies spend billions of dollars every year on direct-to-physician marketing [433, 1020]. They would certainly not spend such vast sums if there were no return on investment. That is, their tremendous marketing efforts must result in increased drug sales (and profits); otherwise they would cut their marketing spending. So you probably sense what’s coming …

A compelling body of scientific evidence indeed consistently shows that the more gifts physicians receive, the more often they interact with pharmaceutical sales representatives, and the more frequently they attend industry-sponsored continuing medical education and promotional events, the more drugs they prescribe (often off-label, i.e. for non-approved indications), the costlier the drugs they prescribe (patented drugs rather than much cheaper but equivalent generics), and the more often their prescriptions are inappropriate (i.e. not adherent to best practice) [1033–1037]; for systematic reviews, see [796, 798, 943, 1010].

In conclusion, there can be little doubt that direct-to-physician pharmaceutical marketing alters physicians’ prescribing, increasing the rate of low-value, inappropriate, and unnecessary prescriptions. Pharmaceutical marketing leads to harmful overprescribing and can therefore have a detrimental impact on public health and patient safety, as tragically evidenced by the US opioid epidemic [928, 929, 931]. Although few physicians think that their interactions with the industry influence their treatment decisions, the scientific evidence strongly indicates the opposite. Most physicians are not aware that pharmaceutical companies can change their prescribing behaviour through subtle marketing strategies, indicating that this influence operates unintentionally and largely subconsciously [1012, 1027]. For the same reason, mere disclosure of industry relationships will not prevent (or remove) the biases resulting from financial conflicts of interest [1038]. As succinctly summarised by Mitchell and colleagues in a systematic review recently published in the top-tier journal Annals of Internal Medicine:

“We present evidence that receipt of financial payments from industry is consistently associated with increased prescribing. This association has been identified across a broad range of physician specialties, drug classes, and prescribing decisions. In addition, evidence of a temporal association and dose-responsiveness strongly suggests a causal relationship. We also found evidence, consistent with prior studies, that industry payments are associated with increased use of lower-value drugs. Taken together, our results support the conclusion that personal payments from industry reduce physicians’ ability to make independent therapeutic decisions and that they may be harmful to patients. The medical community must change its historical opposition to reform and call for an end to such payments”. [796]

This leads us directly to the last chapter, “Solutions for reform”.