Medicine is supposed to make progress, to go forward in scientific terms so that each successive generation knows more and does better than previous generations. This hasn’t occurred by and large in psychiatry, at least not in the diagnosis and treatment of depression and anxiety, where knowledge has probably been subtracted rather than added. There is such a thing as real psychiatric illness, and effective treatments for it do exist. But today we’re seeing medicines that don’t work for ill-defined diagnoses of dubious validity. This has caused a crisis in psychiatry … In the rather ineffective drug treatments for depression known as the selective serotonin reuptake inhibitors (SSRIs)—the Prozac-style drugs—and in the triumph of such diagnoses as ‘major depression’ that exist more in the shadowland of artifact than in the world of Nature—academic psychiatry has a lot to answer for [1].

I believe that the enduring skepticism and distorted views about antidepressant drugs are due to the stigma of mental illness and prejudice toward the medical specialty responsible for its study and care. This historical stigma is perpetuated by lay and professional groups, who oppose the use and deny the efficacy of psychotropic medication for ideologic reasons or organizational biases. Psychiatry has the dubious distinction of being the only medical specialty with an anti-movement that is constantly challenging and undermining the field. The other source of opposition comes from anti-scientific or anti-medical groups who draw the battle lines along whether medical or psychotherapeutic treatment modalities can or should be used … Doctors should not be fooled by the pharmacologic naysayers, and no patient with major depressive disorder should be denied the effective treatment that can be hugely beneficial for them [2].

The provocative critique by Dr Edward Shorter, a medical historian, and the typical defence of the drug-centred treatment approach in depression by Dr Jeffrey Lieberman, eminent professor of psychiatry and former president of the American Psychiatric Association (APA), illustrate nicely the controversial questions this book will address. Dr Shorter is by no means the only one who has questioned the validity (and utility) of our current definition of “major” depression, and the opposition to this ill-conceived diagnosis comes from psychiatrists, general practitioners (GPs), and psychologists alike [3–7]. How the concept of depression has changed over time and how this mental disorder—its putative aetiology (causation), phenomenology (experience), prevalence (frequency), course (trajectory), and disability burden (impairment)—is portrayed in contemporary scientific discourse and the media is the first main topic of this book.

Like Dr Shorter, many authors, again psychiatrists, GPs, and psychologists alike, consider the effectiveness of the SSRIs and other new-generation antidepressants poor and of doubtful clinical relevance in the average patient with depression [8–13]. My own analyses of clinical trials and my scrutiny of the scientific literature led me to similar conclusions [14–20]. Yet, I’m fully aware that the scientific evidence is ambiguous and that there is considerable variability in how the data can be interpreted. Like Dr Lieberman in his quote above, most psychiatrists, especially academic leaders in the field, resolutely dismiss negative conclusions regarding the effectiveness of antidepressants as ideologically biased anti-psychiatric propaganda devoid of scrutiny and scientific justification, and they contend that these drugs are highly effective and safe in depression (see, for example [21–23]). These polarised and angry debates over the scientific evidence for (or against) antidepressants in depression are the second main topic of this book.

And then there is the pervasive influence of the pharmaceutical industry, which has made billions of dollars in revenue by marketing both “major” depression as a brain disease and its putative cure, the new-generation antidepressants [9, 14, 24–26]. Every critical analysis of antidepressant prescribing for depression must therefore address conflicts of interest and corporate bias in psychiatry and primary care medicine [9, 27–30]. This is the third main topic of this book.

The Inconvenient Truth about Scientific Research

Over my academic career, I went through different stages of belief and disbelief. I studied clinical psychology and psychopathology at the University of Zurich, Switzerland. Back in the early 2000s, we were taught about psychiatric diagnoses as if they were clear-cut natural disease entities. Our curriculum strictly followed the disorder categories outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the APA. I can’t remember us ever having a critical discussion about the validity and utility of these diagnoses. The DSM was like a bible, an authoritative and definitive resource containing wisdom and ultimate knowledge. When a student asked something about a specific mental disorder, the professors typically answered that it was true, or not, because the DSM said so. The DSM was axiomatic, dogmatic, and conclusive. We also learned that antidepressants were safe and effective, and to my memory there was never any mention of limitations or biases in the evidence base. The validity and reliability of most research findings were barely questioned. In general, we were taught about psychological experiments as if they provided definitive answers to fundamental questions of human existence. Findings from seminal studies were often (though not always) treated as irrefutable truths about human behaviour, cognition, and emotion. I was taught that this is how humans think, behave, and feel because it had been confirmed in this or that study. I never considered that many, perhaps most, of these studies could simply be wrong.

I had my first doubts about the credibility (and quality) of research findings when I was conducting the statistical analyses for my master’s thesis. I had been working on a project about testosterone, sensation seeking, and fatherhood. I soon realised that data from such an observational study are prone to many biases and confounders. I also learned that removing one or two outliers, using different operationalisations of the same construct, or adding certain control variables to a statistical model could produce completely divergent results. I started to wonder whether the authors of the seminal psychological studies we were taught in class would also have come up with completely divergent results had they not deliberately chosen to analyse their data in a specific way. Could it be that they, too, ran multiple statistical models and then selectively reported the results that provided the neatest and most persuasive answer, that is, the results that confirmed their preconceived beliefs?

In spring 2009, soon after I had obtained my Master of Science in clinical psychology and psychopathology, I started to work as a research associate at the Department of General and Social Psychiatry of the Psychiatric University Hospital of Zurich. I began my PhD project on the epidemiology of personality disorders under the supervision of professor of psychiatry Dr. Wulf Rössler, then director of the clinic, and Dr. Vladeta Ajdacic-Gross, a senior researcher at the department. My supervisors quickly noted that I was a talented researcher, and in late summer 2010, in addition to my appointment as a PhD student, I was employed as a research associate for Dr. Rössler. We mostly worked on the data of the prospective Zurich Cohort Study, a community cohort study of young adults from the canton of Zurich followed over 30 years [31, 32]. The same year I also became a research associate for professor Dr. Jules Angst, former research director of the Psychiatric University Hospital of Zurich and lead investigator of the Zurich Cohort Study. One main research interest of Dr. Rössler was sub-clinical psychotic symptoms in the general population [33], so together with Dr. Ajdacic-Gross, I examined whether we could replicate the association between cannabis use and the occurrence of psychotic symptoms reported in the literature [34].

Again, I soon realised that different statistical approaches yielded conflicting results. The association between cannabis use and psychotic symptoms was not evident from the outset, but I was told that I must search the data to find the “truth”. Searching the data meant running various statistical models with changing definitions of predictor and outcome variables and checking the impact of different control variables on the results, thus deriving the “best” model. According to this exploratory approach, the “best” model is the statistical analysis that confirms your hypothesis. Thus, eventually we were able to demonstrate a prospective association between cannabis use and the occurrence of psychotic symptoms, and we published the results in a leading medical journal [35]. But was the “best” model necessarily the most accurate model? Did we really detect a true association, or did we rather adjust our statistical analysis until it confirmed our hypothesis (also referred to as data dredging or p-hacking)? As the saying goes, “if you torture the data long enough, it will confess to anything”. It is now widely accepted that this is poor scientific practice that substantially increases the risk of false positive results, that is, statistical artifacts or chance findings [36–40]. But back in late 2010, as a PhD student, I simply did what I was told by my supervisors (and had been taught at university), even though it didn’t feel right. I was not yet aware of the compelling scientific literature that strongly advised against such questionable research practices.
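To see why this practice is so treacherous, consider the following minimal simulation sketch in Python. It has nothing to do with the actual Zurich Cohort Study analysis; the sample size, variable names, and number of model specifications are purely illustrative assumptions. The simulation generates pure noise, in which no true association exists, then “shops” across several outcome definitions and covariate sets and keeps the smallest p-value, much as described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def smallest_p_value(n=200, n_outcomes=5, n_covariate_sets=4):
    """Smallest p-value for a null exposure effect across many model specifications."""
    exposure = rng.normal(size=n)              # stand-in predictor (no real effect exists)
    p_values = []
    for _ in range(n_outcomes):                # alternative outcome definitions
        outcome = rng.normal(size=n)           # unrelated to the exposure by construction
        for k in range(n_covariate_sets):      # growing sets of control variables
            covariates = rng.normal(size=(n, k))
            X = np.column_stack([np.ones(n), exposure, covariates])
            beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
            resid = outcome - X @ beta
            dof = n - X.shape[1]
            se = np.sqrt(resid @ resid / dof * np.linalg.inv(X.T @ X)[1, 1])
            t = beta[1] / se
            p_values.append(2 * stats.t.sf(abs(t), dof))
    return min(p_values)

false_positive_rate = np.mean([smallest_p_value() < 0.05 for _ in range(1000)])
print(f"False-positive rate after model shopping: {false_positive_rate:.0%}")

With five outcome definitions alone, the chance of at least one spuriously “significant” result is already about 1 − 0.95^5 ≈ 23%, and the covariate shopping pushes the rate higher still, far above the nominal 5%.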

The publication of our cannabis-psychosis paper coincided with the proclamation of the replication crisis in psychology [41–44]. My intuition that psychological and psychiatric research, and by extension research in general, was often unreliable, irreproducible, and systematically biased towards spectacular but most likely false positive findings had suddenly become a hot, prominently discussed topic. I was immediately intrigued by this flood of new research that exposed so many pernicious problems in contemporary psychological research. In my new position as research associate for Drs Rössler and Angst I prepared various research papers. I quickly gained experience in data analysis and acquired in-depth knowledge of statistics and research methodology. I dug deeper into the metascience literature and found deeply concerning evidence for what I had suspected for so long but had never been told as a student and junior researcher: a large portion, presumably a majority, of research findings in psychology, psychiatry, and biomedicine is most likely false or massively exaggerated [45–50]. I learned about p-hacking, the manufacturing of statistically significant results through flexible data analysis [39], about the flaws and limitations of statistical significance testing [51], and about publication bias, the selective reporting of favourable (i.e. hypothesis-confirming) research findings [52].
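Publication bias can be illustrated with an equally simple sketch, again in Python and again with purely hypothetical numbers that do not refer to any specific antidepressant trial programme: many small two-arm trials of a drug with a modest true effect are simulated, but only the trials that reach statistical significance are “published”.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_effect, n_per_arm, n_trials = 0.25, 40, 2000   # modest standardised drug-placebo difference

all_effects, published_effects = [], []
for _ in range(n_trials):
    drug = rng.normal(true_effect, 1.0, n_per_arm)
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    effect = drug.mean() - placebo.mean()
    _, p_value = stats.ttest_ind(drug, placebo)
    all_effects.append(effect)
    if p_value < 0.05:                               # only "positive" trials get published
        published_effects.append(effect)

print(f"True effect:                     {true_effect:.2f}")
print(f"Mean effect over all trials:     {np.mean(all_effects):.2f}")
print(f"Mean effect in published trials: {np.mean(published_effects):.2f} "
      f"({len(published_effects)} of {n_trials} trials significant)")

Because the individual trials are underpowered, only a minority reach significance, and the average effect in the “published” literature comes out at roughly twice the true effect. Selective reporting within a trial works the same way, only at the level of individual outcomes rather than whole studies.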

In 2014, about a year after I had completed my PhD, I left the Psychiatric University Hospital of Zurich to take up a tenured position at the Zurich University of Applied Sciences. Metascience, research methodology, statistics, and the philosophy of science became my primary research interests. I started to write papers about methodological flaws and research biases and how these can easily produce false-positive results in biomedical and psychological research [53, 54]. I also wrote about the systematic biases in psychotherapy research [55]. But my biggest interest was in antidepressants for depression, and this focus changed my career fundamentally.

The Issue with Antidepressant Research

In my first year at the Zurich University of Applied Sciences I searched the scientific literature on publication bias in psychiatric research. This is when I opened Pandora’s box, for when you search the literature on publication bias in psychiatry, among the first research papers you’ll find are the seminal studies from the 2000s that demonstrated how selectively the results of antidepressant trials were published [56–58]. Next you’ll discover how pervasively the pharmaceutical industry has corrupted academic medicine and how drug manufacturers systematically bias the scientific evidence so that their products appear more effective and safer than they really are [28, 59–62]. You will probably also learn about the various flaws in the contemporary definition of depression and other mental disorders [7, 63–65] and how mental disorders were misleadingly marketed by the pharmaceutical industry to increase the sales of psychiatric drugs [26, 27, 66, 67].

I have always been interested in research on depression and have co-authored various papers on mood disorders and negative affectivity (see, for example [68–76]). I also have lived experience of depression as a young adult, when I was doing my military service. I didn’t seek treatment back then, and fortunately the profound sadness, lack of interest and pleasure, and feelings of hopelessness that had persisted for several weeks lifted soon after I was discharged from the army. After my recovery I flourished and, curiously, even felt emotionally stronger than before my depressive episode. I gained a self-esteem and confidence I hadn’t had before. Later I learned that this phenomenon, that is, high functioning after full recovery from a depressive episode, is a widely neglected topic, but it is presumably not as rare as most psychiatrists and clinical psychologists would think [77]. Thus, the concept and outcome of depression interested me not only from a scientific perspective but also because of my personal life story. Finally, in 2017, I published my first research paper on antidepressants for depression, a critical review of methodological limitations in clinical trials, selective reporting of research findings, and corporate bias in the evidence base [14].

To my surprise (or naivety), this publication, and the many others that would follow, also completely changed my research career and my position within the scientific community. There were already several papers on these issues in the scientific literature, most of them published in leading general medical and psychiatric journals (see, for example, [57, 78, 79]), and various books had been written about them (see, for example [9, 11, 80]). So I didn’t think that my paper was really big news or that it would cause a stir. But I was proved wrong, and in hindsight I also realise how naïve I was. Before long, other researchers contacted me and congratulated me on my first antidepressant paper. This is how I met Dr. Martin Plöderl, psychotherapist and senior researcher at the Christian Doppler Clinic, Paracelsus Medical University in Salzburg, Austria, who over time became a friend and close collaborator. Bloggers and journalists wrote and asked for interviews. With each additional paper on antidepressants I published, there were more interview requests and invitations to speak at scientific meetings. Service users also wrote to me, mostly people harmed by antidepressants, who thanked me for my educational work on the risks of antidepressants and conflicts of interest in psychiatry. Soon I was also communicating with influential researchers in this field, including, among others, Drs David Healy, Peter Gøtzsche, Irving Kirsch, Joanna Moncrieff, Mark Horowitz, Janus Jakobsen, Tom Bschor, John Read, James Davies, and Giovanni Fava.

But there was not only delight and appreciation for my critical research on the benefit–harm ratio of antidepressants. Some people, especially psychiatrists, attacked me fiercely, on social media but also in their reviewer comments on articles I submitted for publication. Some accused me of spreading conspiracy theories and misinformation. Like Dr Lieberman in his blanket condemnation, quoted at the beginning of this book, of researchers who question the clinical significance of antidepressants [2], they alleged I was running an anti-scientific, ideologically biased crusade against psychiatry. My former PhD supervisor, Dr. Wulf Rössler, who also supported me as co-author on two controversial papers, once warned me that some leaders in the field would push back hard. He cautioned that by pursuing this kind of research I would soon make powerful enemies. He was right. There were times I felt exhausted and crestfallen, demoralised by insults on social media and irritating ad hominem attacks by anonymous reviewers.

I soon realised that the debate was highly polarised and hateful. Even scientific arguments could quickly turn into scathing and discrediting accusations. But there was also interest in my arguments and a genuine willingness to discuss the scientific evidence. A few debates were indeed constructive, especially when opponents were willing to engage with my scientific arguments instead of simply attacking me as a person for the kind of research I do. I also learned that many practicing psychiatrists and GPs were simply not aware of these pervasive issues that, to me, seemed so evident after my scrutiny of the literature. Some also admitted to me in confidence that they were not allowed to raise these issues in the clinic, fearing disapproval and rejection from their colleagues and supervisors.

After I completed my habilitation (the qualification for a professorship and the highest academic degree in Switzerland and various other countries) at the medical faculty of the University of Zurich, I gave my inaugural lecture at the university in February 2019. I spoke about threats to evidence-based antidepressant prescribing. I detailed the selective reporting of efficacy and safety outcomes in antidepressant trials, which leads to a systematic overestimation of benefits and underestimation of harms. This is not just a provocative, misinformed opinion (as some psychiatrists had suggested); it is a well-established scientific fact consistently replicated over many years in antidepressant research [56–58, 81–83] and across various other therapeutic domains in general medical research [84–90]. The lecture was attended by various colleagues from the Psychiatric University Hospital of Zurich, most of whom were psychiatrists. After the talk, several came to me and confessed how concerned they were about these revelations. Never had they thought that the scientific literature was that biased and unreliable. Never during their training and continuing medical education had they heard about these seminal studies. I realised then that this needed to change.

Since the publication of my first paper on antidepressants in 2017, I have given several talks at general hospitals, psychiatric clinics, and scientific conferences about the corruption of evidence-based medicine and the systematically biased benefit–harm evaluations of antidepressants and other drugs. The reactions were always the same. The audiences, mostly physicians from various specialties and other healthcare providers (e.g. psychologists and social workers), were shocked by these findings. Quite often they were also embarrassed that they had been so unaware of these pervasive issues. Several people asked me why I didn’t write a book about it. I hesitated for a long time, as there are already several books and many research papers about the medical construct of depression, corporate bias, and antidepressant over-prescribing. However, most practitioners, as well as researchers, service users, advocacy groups, and policymakers, are evidently not aware of the multiple flaws in the way we define, diagnose, and treat depression. That’s why I wrote this book on antidepressant prescribing in depression.