1 Introduction

1.1 Tuskegee Syphilis Experiment

In 1932, the US Public Health Service and the Tuskegee Institute in Alabama began an observational study of syphilis in African American men [6]. Called the “Tuskegee Study of Untreated Syphilis in the Negro Male,” the study was intended to demonstrate the need for a syphilis treatment program [6]. Approximately 600 subjects, of whom 400 had syphilis, were told nothing of their disease. Despite the availability of bismuth, arsenic, and mercury, and later penicillin, as therapies, the subjects were offered no treatment [11]. Subjects suspected of receiving injections of arsenic or mercury were immediately replaced [32]. As reported in a paper read before the 14th Annual Symposium on Recent Advances in the Study of Venereal Diseases in January 1964, “Fourteen young, untreated syphilitics were added to the study to compensate for this” [32]. Following media outrage, the study was halted in 1972 after a nine-person panel found that no information had been provided to subjects before they agreed to participate [6]. This 40-year experiment on non-consenting, medically neglected subjects became the longest nontherapeutic study of humans in medical history [11]. The Tuskegee Syphilis Experiment illustrated the exploitation of vulnerable patients, the need for informed consent, and the misrepresentation of minority populations in clinical studies. Since then, great care has been taken at all levels of clinical research to employ ethical guidelines and regulatory committees that oversee studies involving human subjects.

2 Regulatory and Ethical Guidelines in Clinical Research

When proposing and conducting experiments involving human subjects, researchers must comply with international, federal, and institutional guidelines to protect participants. The US Department of Health and Human Services’ “Common Rule,” the Institutional Review Boards (IRBs) at individual institutions, and the Health Insurance Portability and Accountability Act of 1996 (HIPAA) are three primary regulatory measures for ethics in clinical research. All three were enacted after the 1964 adoption of the World Medical Association’s Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects.

2.1 Declaration of Helsinki

The Declaration of Helsinki (DOH) has long been considered the gold standard for ethics in clinical research [26]. In its current form, the DOH applies to research involving human subjects, including identifiable human data and biological material (WMA). Central tenets of the DOH protect the health and rights of all patients involved in clinical research and advocate for the continuous evaluation of the safety, effectiveness, efficiency, accessibility, and quality of human subject research [46]. The declaration was originally composed of 14 short statements outlining ethical guidelines for conducting human subject research [47]. Since its inception, the DOH has been revised seven times, in 1975 (Tokyo), 1983 (Venice), 1989 (Hong Kong), 1996 (Somerset West, South Africa), 2000 (Edinburgh), 2008 (Seoul), and 2013 (Fortaleza, Brazil), and clarified twice [46]. By 2014, the DOH included 37 detailed principles [46]. The declaration stems from the Nuremberg Code [5]. This seminal code of ethics was established at the conclusion of the Nuremberg trials for Nazi war crimes, which included horrifically violent medical experiments on Holocaust victims [34].

2.2 Common Rule

In the United States, the Department of Health and Human Services (HHS) issues regulations on the ethical conduct of research on humans [22]. The HHS regulations at Title 45 of the Code of Federal Regulations, Part 46 (Protection of Human Subjects), were developed in 1981 and updated in 2009 [40]. The policy is known colloquially as the “Common Rule” and protects human subjects in research conducted or supported by a federal department or agency [40]. It requires researchers to obtain written informed consent, to provide full disclosure of the benefits and foreseeable risks of the proposed study, and to include a statement addressing subjects’ right to refuse participation at any point [40]. Considered vulnerable populations, pregnant women, human fetuses (defined as the “product of conception from implantation until delivery”), neonates, prisoners, and children are afforded additional protections under the Common Rule [40]. Revisions to the Common Rule were proposed in 2015 by HHS and 15 other federal departments and agencies [22]. The updates are designed to reflect changes in research over the 35 years since the inception of the Common Rule [22]. Goals include enhancing respect for participants and strengthening informed consent, particularly for the long-term use of de-identified biospecimens; enhancing safeguards by specifying privacy and security measures concerning identifiable information; streamlining Institutional Review Board (IRB) review by clarifying levels of risk and the IRB process for multisite studies; and calibrating oversight [40]. HHS accepted public comments on the proposed revisions until January 2016 [42].

2.3 Institutional Review Board

The Common Rule also states that research conducted or supported outside a federal department or agency must comply with the host institution’s Institutional Review Board (IRB) [40]. Like the Common Rule, IRBs aim to provide ethical and regulatory oversight for research with human subjects; at the institutional level, they ensure compliance with external laws, policies, and regulations [27]. Both the Common Rule and IRBs operate to uphold the ethical principles defined in the 1979 Belmont Report [40, 27]: respect for persons, beneficence, and justice [27]. Research approved by an IRB is subject to annual continuing review.

2.4 HIPAA

In addition to protecting subjects themselves, researchers must protect information that identifies them. The Health Insurance Portability and Accountability Act of 1996 seeks to increase privacy protection for human research participants [10]. This includes the privacy and security of information that could be used to identify subjects in a particular study, such as name, medical record number, birthdate, social security number, address, or identifying photograph. As electronic medical records become more prevalent and security and privacy concerns extend to the online storage of identifiable data, regulations must also change. Accordingly, HIPAA was modified and extended in 2000, 2004, 2009, and 2013 to reflect technological advances [41].
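To make the category of identifiable information concrete, the following minimal Python sketch strips the direct identifiers listed above from a hypothetical participant record before the remaining data are shared for analysis. It is illustrative only, not a compliance tool; the record structure and field names are assumptions made for this example rather than anything specified by HIPAA or the sources cited here.

  # Illustrative sketch only: remove the direct identifiers named above
  # (name, medical record number, birthdate, social security number,
  # address, photograph) from a hypothetical participant record.
  # Field names are assumptions for this example, not a HIPAA specification.
  DIRECT_IDENTIFIERS = {
      "name", "medical_record_number", "birthdate",
      "social_security_number", "address", "photograph",
  }

  def deidentify(record):
      """Return a copy of the record with direct identifiers removed."""
      return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

  participant = {
      "name": "Jane Doe",
      "medical_record_number": "00123456",
      "birthdate": "1980-01-01",
      "social_security_number": "123-45-6789",
      "address": "1 Main St, Springfield",
      "photograph": "jane_doe.jpg",
      "study_arm": "treatment",
      "outcome_score": 42,
  }

  print(deidentify(participant))  # only study_arm and outcome_score remain

Real de-identification under HIPAA is considerably more involved (dates, free text, and small populations all complicate matters), so a sketch like this only indicates the kind of information the Privacy and Security Rules are meant to protect.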

The HIPAA Privacy Rule, which protects identifiable health information, was introduced in 2000 and mandated nationally in 2003 [29]. By governing the ownership and transfer of specified protected health information (PHI), the Privacy Rule aims to safeguard health information associated with individuals while facilitating the data flow needed to maximize the quality of health care [43]. PHI comprises any information that can be used to identify a study participant [43]. The implementation of the HIPAA Privacy Rule in clinical research has also provoked criticism. In 2007, JAMA published a study in which more than two-thirds of surveyed epidemiologists perceived a “substantial, negative influence on the conduct of human subjects health research” after the implementation of the HIPAA Privacy Rule [29]. To encompass the protection of electronic PHI (e-PHI), the HIPAA Security Rule (HSR) was enacted in 2003 [44].

3 Ethical Population Representation

The lack of diverse racial and ethnic representation in clinical research hinders the achievement of the best possible disease outcomes in heterogeneous populations [30]. To address this issue, Congress passed the National Institutes of Health (NIH) Revitalization Act of 1993, intended to catalyze the diversification of participants in clinical research [28]. With minimal exceptions, the Revitalization Act requires NIH-funded clinical research to include women and members of minority groups [28].

Twenty years after the introduction of regulatory laws intended to diversify participation in clinical research, minority participation in cancer clinical trials remained minimal [7]. The lack of minority representation persists across many other fields of clinical research, including cardiovascular disease, respiratory disease, mental health services, and substance abuse [2, 24, 33, 3].

Despite this legislation, barriers embedded in inclusion criteria may still prevent trials from enrolling minority populations. English-language fluency is required for clinical trial participation with increasing frequency [16]. Mistrust of healthcare professionals and limited understanding of clinical research also contribute to low rates of minority participation [2], and minorities’ reduced access to the academic institutions that conduct clinical research further constrains recruitment diversity [14].

4 Ethical Publishing Practices

4.1 Data Fraud and Misconduct

One of the most highly publicized fraudulent studies is Dr. Andrew Wakefield’s case series suggesting an association between the measles, mumps, and rubella (MMR) vaccine and autism [45]. Published in The Lancet, a British medical journal, in 1998, Dr. Wakefield’s paper caught the attention of the mainstream media [31]. Consequently, the MMR vaccination rate among toddlers in the United Kingdom decreased from 83.1% in 1997 to 69.9% in 1998 [39]. A one-page commentary titled “Retraction of an Interpretation” was published in 2004 by 10 of the 13 authors of the original article [25]. Simultaneously, editors at The Lancet acknowledged a lack of financial disclosures by Dr. Wakefield et al. and reaffirmed “the paper’s suitability, credibility, and validity for publication” [21]. In 2010, The Lancet retracted Dr. Wakefield’s article.

This example demonstrates many important facets of the ethics of clinical research. The researchers failed to report accurate findings and drew speculative conclusions from a small, nonrepresentative case series [31]. These unethical actions by the authors were compounded by irresponsible publishing practices at The Lancet. The publishers failed to require proper disclosure of conflicts of interest, specifically the financial gains Dr. Wakefield stood to make from the research conclusions [12]. Furthermore, The Lancet did not retract the fraudulent article until 12 years after its initial publication [15]. After the retraction, investigative journalist Brian Deer published multiple articles in the BMJ revealing Dr. Wakefield’s connection to the lawyer Richard Barr [12]. Barr, who was working to file a lawsuit against vaccine manufacturers, provided Dr. Wakefield with £400,000 through the Legal Aid Fund while also representing the anti-vaccine organization Justice, Awareness, and Basic Support (JABS) [13]. Barr used his connection to JABS to find patients for Dr. Wakefield’s study [12].

However, cases such as Dr. Wakefield’s are not common. Although data fraud is difficult to monitor and likely underreported, confirmed cases of data fabrication, falsification, and plagiarism exist among 0.01% of scientists, according to the US Public Health Service [19]. Dr. Wakefield’s willful deceit through data falsification is classified as fraud, while misconduct refers to honest errors in ethical research practices [19]. Beyond data fraud and misconduct, there are many other aspects of clinical research in which good clinical practices must be observed, including disclosure of conflicts of interest, self-citation, and the identification of predatory journals.

4.2 Conflict of Interest

Conflicts of interest in clinical orthopedic research are any instances of overlapping personal, financial, or academic involvement that may bias or influence a participant’s work. Investigators are required to submit conflict of interest statements for project proposals, manuscript publications, and presentations at conferences. This benefits consumers of the research by providing context for the circumstances under which it was conducted. It is the responsibility of authors, editors, peer reviewers, and any other staff members who play a role in determining the publication or presentation of a study to disclose any relevant conflicts of interest [23]. Internationally, many orthopedics journals follow the “Recommendations for the Conduct, Reporting, Editing and Publication of Scholarly Work in Medical Journals” established by the International Committee of Medical Journal Editors (ICMJE). The ICMJE conflict of interest form covers financial activities related to the submitted work as well as relevant financial activities outside it. Relevant financial activities may include relationships with a pertinent entity such as a government agency, foundation, academic institution, or commercial sponsor; grants; personal fees; royalties; leadership positions; and nonfinancial support [23].

4.3 Self-Citation

Self-citation refers to a journal’s referencing of an article from that same journal [8]. The self-citation rate of a medical journal is defined as the number of self-citations divided by the total number of references made by that journal in a specified time period [20]. In orthopedics journals, many factors influence differences in self-citation rates. Self-citation rates are highest in sub-specialized journals owing to their narrow scope [36]. The specialized orthopedics journals Spine, Arthroscopy, and FAI have self-citation rates two and three times higher than those of the general orthopedics journals CORR and JBJS (Am), respectively [36]. Rates of self-citation are categorized as “high” if they are at or above 20% (JCR).
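As a worked illustration with invented numbers, a journal that made 4,000 total references in a given period, 900 of which cited its own articles, would have a self-citation rate of

\[
\frac{\text{self-citations}}{\text{total references}} = \frac{900}{4000} = 0.225 = 22.5\%,
\]

placing it above the 20% threshold for a “high” rate.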

Self-citation rates are relevant in the calculation of medical journal impact factors [17]. According to the Science Citation Index, the impact factor for medical journals “measures the average number of citations received in a particular year by papers published in the journal during the two preceding years” [8]. Manipulation of self-citation rates introduces bias into impact factors [20]. For journals in which self-citations dominate the references, the true contribution to the journal’s discipline may be misrepresented [18].
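Written out, the two-year impact factor described in the quotation above, for an illustrative census year Y, is

\[
\text{IF}_{Y} = \frac{\text{citations received in year } Y \text{ to articles published in years } Y-1 \text{ and } Y-2}{\text{number of citable articles published in years } Y-1 \text{ and } Y-2},
\]

and because journal self-citations are counted in the numerator, an inflated self-citation rate can raise the reported value.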

4.4 Predatory Journals

An increase in open-access publications with minimal or no peer review has been linked to the rise of spam email solicitations [4]. Known as “predatory journals” [1], these publications often require authors to pay high fees for processing and peer review that is never carried out [4]. Jeffrey Beall, a former Scholarly Initiatives Librarian at the University of Colorado Denver, coined the term and proposed the first list of predatory journals [1]. He warned that these publishers “exploit the author-pays model, damage scholarly publishing, and promote unethical behavior by scientists” [1]. In 2017, Beall’s list of predatory journals, which had been used as a government standard, was removed from his webpage [38]. However, institutions such as the Yale University Library system continue to recommend it and other similar lists [48].

Articles submitted to predatory journals are published quickly due to the limited or nonexistent review process [35]. Additionally, published articles are often non-indexed despite advertisements to the contrary [9]. Non-indexed articles cannot be retrieved through searches of the standard literature databases [35].

Human behavioral scientists in Poland aimed to shed light on the issue of predatory journals through a systematic study in which they created a profile for an imaginary scientist and applied for editorial positions at 360 journals [37]. False online accounts, journal and book publications, and faculty positions, none of which could be verified, were compiled into the fake application. Of the 360 journals to which the application was submitted, 120 were indexed by Journal Citation Reports (JCR), 120 were listed in the Directory of Open Access Journals (DOAJ), and 120 were included in Beall’s list of predatory journals. Acceptances came from 40 journals on Beall’s list and 8 journals listed in the DOAJ [37]. Fittingly, the fake editor’s name was Dr. Anna O. Szust, a play on oszust, the Polish word for “fraud.” All offers of editorial positions were declined [37].

Take-Home Message

  • Ethical considerations are important at many levels and in many processes of clinical research.

  • Investigators, ethical review board members, publishers, and peer reviewers all share responsibility for maintaining high ethical standards in clinical research.