Abstract
The disclosure of cases of research misconduct in clinical trials, conventionally defined as fabrication, falsification or plagiarism, has been a disturbingly common phenomenon in recent years. Such cases can potentially harm patients enrolled on the trials in question or patients treated based on the results of those trials, and can seriously undermine scientific and public trust in the validity of clinical trial results. Here, I review what is known about the prevalence of research misconduct in general and the contributing or causal factors leading to it. The evidence on prevalence is unreliable, fraught with definitional problems and study design issues. Nevertheless, the evidence taken as a whole suggests that the most serious types of misconduct, fabrication and falsification (i.e., data fraud), are relatively rare but that other questionable research practices are quite common. Many individual, institutional and structural factors have been proposed as contributors to misconduct but, as with estimates of prevalence, reliable empirical evidence on their strength and relative importance is lacking. It seems clear, however, that viewing misconduct simply as the product of aberrant or self-delusional personalities likely underestimates the role of these other factors and inhibits the development of effective prevention strategies.
Introduction
Cases of data fraud in clinical trials, defined as the fabrication or falsification of data, are uncovered on a regular basis [1, 2]. Some prominent recent examples are summarized in Table 1.
These cases include Roger Poisson, who falsified eligibility data for patients entered on multi-center breast cancer trials sponsored by the National Surgical Adjuvant Breast and Bowel Project (NSABP) [3, 4]; Werner Bezwoda, who reported strikingly positive findings from a single-institution trial of high-dose chemotherapy with stem cell rescue in patients with high-risk breast cancer, findings whose underlying data could not be verified in an independent audit [5, 6]; Robert Fiddes, a lead investigator on a large number of clinical trials sponsored by pharmaceutical companies, who was discovered to have committed a wide range of fraud and misconduct in these trials over many years [7, 8]; Harry Snyder and Renee Peugeot, a husband-and-wife team who falsified data on a clinical trial of a topical agent for the treatment of psoriasis and cutaneous T-cell lymphoma [9, 10]; Yoshitaka Fujii, an anesthesiologist who fabricated data in a large number of clinical trials of agents used to control postoperative nausea and vomiting in humans and animals [11–16]; Anil Potti, who developed predictive models for therapeutic agents in cancer that were used in subsequent clinical trials but whose development could not be independently validated [17]; and Hiroaki Matsubara, who resigned his university position in the wake of allegations of data fabrication and falsification in clinical trials of valsartan [18, 19].
Since clinical trials are a special type of research study, such cases are part of the general problem of research misconduct, with the added risk of potentially serious consequences for patients treated on trials or treated based on the results of those trials. Here, I discuss the definitions of misconduct, ranging from the narrow definition of ‘fabrication, falsification or plagiarism’ to wider definitions which include other questionable research practices; evaluate the available evidence on the prevalence or incidence of misconduct; and discuss potential contributing or causal factors leading to misconduct and the implications for preventive measures.
Definitions of research misconduct and data fraud
A single universally accepted definition of research misconduct does not exist among the various professional societies, scientific journals, government agencies and regulatory bodies concerned with the issue. However, fabrication, falsification and plagiarism are so egregious that all definitions include these practices, implicitly or explicitly. The US Public Health Service definition of research misconduct is limited specifically to these practices [20]:
“Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.
- (a)
Fabrication is making up data or results and recording or reporting them.
- (b)
Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.
- (c)
Plagiarism is the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.
- (d)
Research misconduct does not include honest error or differences of opinion.”
The National Institutes of Health [21], National Science Foundation [22], American Psychological Association [23] and others use identical or nearly identical definitions. This is a very narrow definition, covering only the most serious unethical behaviors. For clinical investigators, the US Food and Drug Administration uses a much broader definition of investigator misconduct, targeting practices that might create a safety risk for patients, including:
“Failure to report serious or life-threatening adverse events; serious protocol violations, such as enrolling subjects who do not meet the entrance criteria because they have conditions that put them at increased risk from the investigational drug, or failing to carry out critical safety evaluations; repeated or deliberate failure to obtain adequate informed consent, including falsification of consent forms or repeated or deliberate failure to disclose serious risks of the investigational drug in the informed consent process; falsification of study safety data; failure to obtain IRB review and approval for significant protocol changes; failure to adequately supervise the clinical trial such that human subjects are or would be exposed to an unreasonable and significant risk of illness or injury” [24].
There are also other organizations that take a broader perspective, including the Council of Scientific Editors, who in a white paper on integrity in scientific publications [25], defined research misconduct as
“Behaviour by a researcher, intentional or not, that falls short of good ethical and scientific standard.”
Similarly, Universities UK defines research misconduct to include “behaviour or actions that fall short of the standards of ethics, research and scholarship required to ensure that the integrity of research is upheld” [26]. These definitions are both broader and vaguer than the PHS definition, leaving open the definition of ‘good ethical and scientific standard’ or ‘research and scholarship standards’ and what exactly constitutes falling short of those standards.
There has been much discussion in the literature on questionable research practices other than fabrication, falsification or plagiarism that may nevertheless result in unreliable results and other serious problems [27–34]. Some of these practices relevant to clinical trials are listed in Table 2.
In Table 2, these questionable practices are grouped into several categories: design and analysis, such as the use of improper design or analysis techniques, misrepresentation of the methodology used, or selective reporting; publication and authorship, such as failure to publish or gift authorship; patient safety, such as failure to follow protocol safety requirements or failure to obtain proper informed consent; and other practices, such as misuse of funds, conflicts of interest, or refusal to share data.
One view of data fraud in clinical trials is that it represents the extreme end of a spectrum of sources of data errors, ranging from inevitable honest errors at one end to data fraud at the other, with misunderstandings, incompetence and sloppiness in between. This spectrum is illustrated graphically in Fig. 1, where a clear dividing line, defined by intent, separates data fraud from the other sources of error. Other sources of data errors are regrettable, but data fraud involves a deliberate intent to deceive, an ‘intent to cheat’, making it a qualitatively different source of data errors.
It is arguable that, in aggregate, more damage is caused by the less serious forms of questionable research practices and by sloppiness or incompetence than by data fraud, largely because these other sources of data errors are far more common.
Prevalence
There are fundamental difficulties in trying to estimate the prevalence of research misconduct in science in general and clinical trials in particular. First, there are definitional problems. Does ‘misconduct’ include only fabrication, falsification and plagiarism as in the PHS definition or should it include some of the other types of questionable research practices listed in Table 2?
Second, there are difficulties in the assessment of prevalence when applied to research misconduct. In epidemiology, prevalence is defined as the proportion of people in a defined population with a given condition at a specific time (point prevalence), or that have (or had) the condition during a specified time period (period prevalence), or that have ever had the condition at any time (lifetime prevalence). For assessing the prevalence of research misconduct there needs to be clarity in the type of prevalence being assessed as well as a clear statement of the population being studied, the defined population. Is it everyone engaged in research, or just primary investigators, or some other defined group of individuals? Even if the population can be defined in principle, how are the numbers of people in the population estimated? And how can we take a reasonable random sample, or some other reasonably representative sample, from the population in order to construct an estimate of prevalence?
Lastly, there is an ascertainment problem. Accurate responses to questions about misconduct may be difficult to obtain. This is a well-known problem when attempting to elicit responses from individuals for questions about behavior that is embarrassing, illegal or that is otherwise liable to result in evasive answers.
Because of these difficulties, the true prevalence of research misconduct in general, or in clinical trials in particular, is unknown, perhaps even unknowable. Nevertheless, there have been many attempts to address the issue. These efforts may be classified into studies providing indirect estimates of prevalence by assessing detected cases [35–37] and surveys providing direct evidence by asking a supposedly representative sample of subjects about known misconduct by others [28, 38–42] or about the respondents’ own behavior [29, 40, 43, 44]. The detected cases are necessarily fewer than the actual cases and thus yield an unreliable, downwardly biased estimate of prevalence. Such indirect evidence has generated speculation ranging from the ‘tip of the iceberg’ metaphor often favored by science journalists at one extreme to the conclusion that fraud is extremely rare at the other (‘99.9999 % of all reports are accurate and truthful’ [45]).
The evidence from sample surveys has the advantage of producing a direct estimate of prevalence, despite the caveats noted earlier. However, these surveys differ greatly in study designs, sample sizes, questions asked, and other features, resulting in inconsistent outcomes. In addition, as noted above, the surveys ask questions about topics for which respondents might be expected not to be truthful. Although there is a 50-year history of using randomized response designs in this setting to minimize this problem [46, 47], only one survey of misconduct actually used this type of design to address the issue [48].
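The idea behind a randomized response design is that each respondent privately randomizes which of two complementary questions they answer, so the interviewer never learns whether any individual admitted to the sensitive behavior, yet the population prevalence remains estimable. The following minimal simulation of Warner's original two-question version is illustrative only; the prevalence value, probabilities and function name are hypothetical, not drawn from any of the surveys cited here.

```python
import random

def simulate_warner(true_prevalence, p, n, seed=0):
    """Simulate a survey using Warner's randomized response design.

    With probability p each respondent truthfully answers the sensitive
    question ("Have you ever falsified data?"); otherwise they answer
    its complement ("Have you never falsified data?"). The interviewer
    records only "yes"/"no" and never learns which question was asked,
    so no single answer is incriminating.
    """
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        guilty = rng.random() < true_prevalence
        asked_sensitive = rng.random() < p
        yes += guilty if asked_sensitive else not guilty
    lam = yes / n  # observed proportion answering "yes"
    # E[lam] = p*pi + (1 - p)*(1 - pi); solve for pi (requires p != 0.5)
    return (lam - (1 - p)) / (2 * p - 1)

# A 2 % true prevalence is recovered without any identifiable confession
print(round(simulate_warner(true_prevalence=0.02, p=0.7, n=100_000), 3))
```

The price of this privacy protection is a larger variance than a direct question of the same sample size, which is one practical reason the design has rarely been used in misconduct surveys.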
In order to make some sense of the published survey results, Fanelli [49] conducted a meta-analysis of 21 surveys published in 1987–2008, restricting attention to those surveys asking direct questions about the misconduct of researchers or about the misconduct of their colleagues. Only studies addressing fabrication, falsification or other questionable research practices that could produce biased or misleading results in the analysis were included (e.g., plagiarism was not included). In addition, only studies that clearly separated fabrication/falsification from other questionable practices were included.
Figure 2 gives a Forest plot of the results from the meta-analysis of self-reported fabrication or falsification. Figure 3 gives similar results for personal knowledge of fabrication or falsification by others (i.e., of the respondent’s colleagues).
The pooled weighted estimate of the self-reported admission rate, rounded to one decimal, was 2.0 % (95 % CI 0.9–4.5), and the pooled weighted estimate of reported fabrication or falsification by others was 14.1 % (95 % CI 9.9–19.7), with significant heterogeneity among the surveys. The self-reported admission rates are likely biased downward for the reasons noted above.
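The pooling behind such estimates is standard meta-analytic machinery: each survey's proportion is transformed to the logit scale, weighted by the inverse of its variance, combined, and back-transformed. The sketch below is a simplified fixed-effect version with invented admission counts (Fanelli's analysis used a random-effects model on real survey data; nothing here reproduces his numbers).

```python
import math

def pool_logit(events_totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale. Input: (events, total) pairs, one per survey. Returns the
    pooled proportion with an approximate 95 % confidence interval."""
    num = den = 0.0
    for k, n in events_totals:
        # 0.5 continuity correction guards against k = 0 or k = n
        p = (k + 0.5) / (n + 1.0)
        logit = math.log(p / (1 - p))
        var = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)  # approx. logit variance
        w = 1.0 / var
        num += w * logit
        den += w
    pooled, se = num / den, math.sqrt(1.0 / den)
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))  # back-transform to proportion
    return inv(pooled), inv(pooled - 1.96 * se), inv(pooled + 1.96 * se)

# Three hypothetical surveys: (admissions, respondents)
est, lo, hi = pool_logit([(4, 200), (7, 350), (2, 150)])
print(f"{100 * est:.1f} % (95 % CI {100 * lo:.1f}-{100 * hi:.1f})")
```

A random-effects model would additionally estimate a between-survey variance component, widening the interval when the surveys are heterogeneous, as they were in the meta-analysis described above.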
Other questionable research practices are likely to be much more prevalent than fabrication, falsification or plagiarism. For example, in a large survey of US scientists funded by the National Institutes of Health, Martinson et al. [29] reported that 15.5 % of the respondents admitted to ‘changing the design, methodology or results of a study in response to pressure from a funding source’ and 33 % admitted to at least one of the ‘top 10’ questionable practices.
Causal factors and prevention
“Why does research misconduct happen? The answer that researchers love is ‘pressure to publish’, but my preferred answer is ‘Why wouldn’t it happen?’ All human activity is associated with misconduct. Indeed, misconduct may be easier for scientists because the system operates on trust. Plus scientists may have been victims of their own rhetoric: they have fooled themselves that science is a wholly objective enterprise unsullied by the usual human subjectivity and imperfections. It is not. It is a human activity.”
R. Smith [50]
Unfortunately, in common with estimates of prevalence, reliable data concerning the possible causes of misconduct do not exist [30]; we are left largely with expert opinion and speculation. This lack of data hampers the formulation of effective prevention strategies. Nevertheless, in most cases of research misconduct it is reasonable to assume that the perpetrator’s motivation lies at least partly in the potential for personal gain. In some cases there may be financial advantages, either direct personal financial gain or indirect gain through research funding. In other cases, promotion, tenure or scientific prestige may be the primary motivating factors. Finally, there is always the possibility of some psychiatric condition or illness behind the misconduct. In any specific case, even when the perpetrator has admitted to the misconduct and offered explanations for it, the testimony itself may be unreliable and the true motivating factors may remain unclear. Generalizations from individual cases are unreliable at best.
Despite the lack of reliable empirical evidence, there is a considerable literature addressing the contributing factors in misconduct, with three broad general narratives about three primary contributing factors—individual traits, institutional issues, and structural problems in science itself [28, 51, 52].
Individual traits include characteristics of the individual researcher that may lead to misconduct, including the inability to handle the ‘publish or perish’ and other competitive pressures, personal ambition, the desire for personal recognition or the wish for direct or indirect financial gain [53]. Some ascribe the presence of research misconduct or fraud primarily to ‘bad apples’ since, as in all human endeavor, there are individuals who violate established norms of behavior. Some of these individuals may have self-delusional, even self-destructive, tendencies.
Institutional issues include the ‘publish or perish’ pressures inherent in the promotion and tenure requirements, inadequacies of training and mentoring, lack of detailed oversight of research, competition for federal support and other issues [54, 55]. Structural issues in the way modern science is conducted may also contribute to the problem [51].
In discussing the causes of research misconduct and their implication for potentially effective prevention methods, information from the broader context of other illegal, immoral, inappropriate or unethical behavior in society may be useful. Adams and Pimple [56] address this directly. Reduction of criminal/deviant behavior has proven resistant to strategies based on rational decision analysis and to setting appropriate norms and values, however laudable these may be. This approach can be called the ‘individualist’ approach, based on the ‘bad actor’ or ‘bad apple’ assumption. A new approach from recent theories in criminology, ‘opportunity theory’, starts with the assumption that the population of potential offenders is essentially everyone (see Ariely [57] for support of this assumption). If this is so, then we need to create physical settings or situations that reduce the opportunity for misconduct and encourage appropriate behavior, especially via effective supervision and internal controls. As Adams and Pimple state: “It is sometimes far easier and more effective to control or change situations than it is to control or change individuals” [56].
In the case of clinical trials, especially multi-center clinical trials, institutional issues and structural issues in science in general are likely to be less important than individual factors. In addition, with a few exceptions such as the Fiddes case and the Snyder-Peugeot case noted in the introduction, direct financial gain does not appear to be a major motivating factor. One intriguing suggestion is that physician-scientists simply may be less rigorous than other scientists in their approach to clinical trials:
“It is our sense, primarily experiential and impressionistic in nature, that honesty in research work as a fundamental rule is valued more strongly among scientists than among physicians… Physicians tend to evaluate research in terms of harm or benefit to patients rather than in terms of adherence to the rigorous norms of scientific investigation” [58].
Years after this speculation was published, some support for this view was inadvertently supplied by Poisson in his explanation of why he falsified eligibility data on NSABP trials:
“I believed I understood the reasons behind the study rules, and I felt that the rules were meant to be understood as guidelines and not necessarily followed blindly. My sole concern at all times was the health of my patients. I firmly believed that a patient who was able to enter into an NSABP trial received the best therapy and follow-up treatment… Maintaining the proper balance between good clinical care and rigid research methods is not an easy task” [59].
In addition to the usual suggestions for preventing misconduct implied by consideration of the various factors involved (training in the ethics of research, improved mentoring, increased supervision, etc.), none of which have been proven to be effective, statistical procedures may also play a role. In particular, central statistical monitoring, an effective tool for detecting data fraud in clinical trials as part of a general data quality assurance program, may also function as a deterrent to committing such fraud in the first place for trials in which such monitoring is known to be in place [1, 60, 61]. Such procedures should be applied more commonly in multi-center clinical trials.
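To make the idea of central statistical monitoring concrete, here is a deliberately minimal sketch of one classic check (the data, thresholds and function name are hypothetical, not taken from any real monitoring system): fabricated data tend to be too regular, so a center whose within-center variability is far below that of its peers is flagged for on-site audit.

```python
import statistics

def flag_low_variance_centers(center_values, ratio_threshold=0.25):
    """One simple central-monitoring check: flag any center whose
    within-center variance of a key variable falls far below the median
    variance across centers. Real monitoring systems combine many such
    tests (digit preference, inlier/outlier patterns, calendar checks);
    this is only an illustration of the general approach.
    """
    variances = {c: statistics.pvariance(v) for c, v in center_values.items()}
    median_var = statistics.median(variances.values())
    return [c for c, v in variances.items() if v < ratio_threshold * median_var]

# Hypothetical systolic blood pressure readings from three centers
data = {
    "center_A": [120, 135, 118, 142, 127, 151],
    "center_B": [129, 131, 130, 130, 129, 131],  # suspiciously uniform
    "center_C": [110, 148, 125, 139, 117, 144],
}
print(flag_low_variance_centers(data))  # ['center_B']
```

A flag from such a test is a trigger for closer review, not proof of fraud; legitimate differences in equipment or patient mix can also reduce variability.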
Summary
Despite the large and growing literature on the prevalence, causes and prevention of research misconduct in science in general and in clinical trials in particular, reliable empirical evidence to support the discussion remains in short supply. This stems partly from definitional difficulties and partly from the difficulty of designing and conducting studies in this area. However, the available evidence taken as a whole suggests that the most serious forms of misconduct, fabrication and falsification, are relatively rare, albeit perhaps more common than most scientists assume, whereas other questionable research practices are quite common. In addition, most discussions of the causal factors in misconduct are not based on reliable empirical evidence, so prevention measures premised on assumptions about those factors are also liable to be misguided. More rigorous studies of the prevalence, causal factors and potential prevention strategies for research misconduct are needed.
References
George SL, Buyse M (2015) Data fraud in clinical trials. Clin Investig (Lond.) 15(2):161–173
George SL (1997) Perspectives on scientific misconduct and fraud in clinical trials. Chance 10(4):3–5
Fisher B, Redmond CK (1994) Fraud in breast-cancer trials. N Engl J Med 330(20):1458–1460
Weir C, Murray G (2011) Fraud in clinical trials. Significance 8(4):164–168. doi:10.1111/j.1740-9713.2011.00521.x
Horton R (2000) After Bezwoda. Lancet 355(9208):942–943. doi:10.1016/s0140-6736(00)90006-0
Weiss RB, Rifkin RM, Stewart FM et al (2000) High-dose chemotherapy for high-risk primary breast cancer: an on-site review of the Bezwoda study. Lancet 355(9208):999–1003
Eichenwald K, Kolata G (1999) A doctor’s drug studies turn into fraud. The New York Times on the Web, A1–A16
Swaminathan V, Avery M (2012) FDA enforcement of criminal liability for clinical investigator fraud. Hastings Sci Tech Law J 4:325–356
Birch DM, Cohen G (2001) How a cancer trial ended in betrayal. http://www.baltimoresun.com/bal-te.research24jun24-story.html#page=1. Accessed 12 Jan 2015
Grant B (2009) Biotech’s baddies. Scientist 23(4):48
Carlisle J (2012) The analysis of 168 randomised controlled trials to test data integrity. Anaesthesia 67(5):521–537
Fujii Y (2000) Reply to “Reported data on granisetron and postoperative nausea and vomiting by Fujii et al. are incredibly nice!” [letter]. Anesth Analg 90(4):1004
Fujii Y (2012) Reply to “The analysis of 168 randomised controlled trials to test data integrity” [letter]. Anaesthesia 67(6):669–670
Kranke P, Apfel CC, Roewer N (2000) Reported data on granisetron and postoperative nausea and vomiting by Fujii et al. are incredibly nice! [letter]. Anesth Analg 90(4):1004
Miller D (2012) Retraction of articles written by Dr. Yoshitaka Fujii. Can J Anesth/J Can Anesth 59(12):1081–1088. doi:10.1007/s12630-012-9802-9
Normile D (2012) A new record for retractions? http://news.sciencemag.org/education/2012/04/newrecord-retractions. Accessed 13 Aug 2015
Baggerly KA, Coombes KR (2009) Deriving chemosensitivity from cell lines: Forensic bioinformatics and reproducible research in high-throughput biology. Ann Appl Stat 3:1309–1334
Oransky I (2014) Novartis Diovan scandal claims two more papers. http://retractionwatch.com/2014/04/02/novartis-diovan-scandal-claims-two-more-papers/. Accessed 4 May 2015
Husten L (2013) Diovan data was fabricated, say Japanese Health Minister and university officials. http://www.forbes.com/sites/larryhusten/2013/07/12/diovan-data-was-fabricated-say-japanese-health-minister-and-university-officials/. Accessed 3 July 2015
Federal Register (2005) Public health service policies on research misconduct final rule (42 CFR part 93.103). http://www.ecfr.gov/cgi-bin/text-idx?SID=0b07ed68cf889962cae6c2b45d89150b&node=pt42.1.93&rgn=div5. Accessed 4 July 2015
National Institutes of Health (2015) Research integrity. http://grants.nih.gov/grants/research_integrity/research_misconduct.htm. Accessed 4 July 2015
Federal Register (2002) National science foundation policies on research misconduct (45 CFR part 689) http://www.nsf.gov/oig/regulations/. Accessed 4 July 2015
American Psychological Association (2015) Research misconduct. https://apa.org/research/responsible/misconduct/index.aspx. Accessed 4 July 2015
U.S. Food and Drug Administration (2004) Guidance for industry: the use of clinical holds following clinical investigator misconduct. http://www.fda.gov/downloads/regulatoryinformation/guidances/ucm126997.pdf. Accessed 25 July 2015
Scott-Lichter D, Editorial Policy Committee, Council of Scientific Editors (2006) CSE’s white paper on promoting integrity in scientific journal publications. CSE, Reston
Universities UK (2012) The concordat to support research integrity http://www.universitiesuk.ac.uk/highereducation/Pages/Theconcordattosupportresearchintegrity.aspx#.VZgN03J3EdU. Accessed 4 July 2015
Breen KJ (2003) Misconduct in medical research: whose responsibility? Intern Med J 33(4):186–191
Habermann B, Broome M, Pryor ER et al (2010) Research coordinators’ experiences with scientific misconduct and research integrity. Nurs Res 59(1):51–57
Martinson BC, Anderson MS, De Vries R (2005) Scientists behaving badly. Nature 435(7043):737–738. doi:10.1038/435737a
Weed DL (1998) Preventing scientific misconduct. Am J Public Health 88(1):125–129
Wilmshurst P (1997) The code of silence. Lancet 349(9051):567–569
Sarwar U, Nicolaou M (2012) Fraud and deceit in medical research. J Res Med Sci 17(11):1077–1081
Steneck NH (2006) Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics 12(1):53–74
Claxton LD (2005) Scientific authorship: part 1. A window into scientific fraud? Mutat Res 589(1):17–30
Hone J (1993) Combating fraud and misconduct in medical research. Scrip Mag 14(March):14–15
Reynolds SM (2004) ORI findings of scientific misconduct in clinical trials and publicly funded research, 1992–2002. Clin Trials 1(6):509–516. doi:10.1191/1740774504cn048oa
Weiss RB, Vogelzang NJ, Peterson BA et al (1993) A successful system of scientific data audits for clinical trials. A report from the Cancer and Leukemia Group B. J Am Med Assoc 270(4):459–464
Hamilton D (1992) In the trenches, doubts about scientific integrity. Science 255(5052):1636. doi:10.1126/science.11642983
Ranstam J, Buyse M, George SL et al (2000) Fraud in medical research: an international survey of biostatisticians. Control Clin Trials 21(5):415–427. doi:10.1016/s0197-2456(00)00069-6
Kalichman MW, Friedman PJ (1992) A pilot study of biomedical trainees’ perceptions concerning research ethics. Acad Med 67(11):769–775
Swazey JP, Anderson MS, Lewis KS (1993) Ethical problems in academic research. Am Sci 81:542–553
Titus SL, Wells JA, Rhoades LJ (2008) Repairing research integrity. Nature 453(7198):980–982
John LK, Loewenstein G, Prelec D (2012) Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci 23(5):524–532. doi:10.1177/0956797611430953
Martinson BC, Crain AL, Anderson MS et al (2009) Institutions’ expectations for researchers’ self-funding, federal grant holding, and private industry involvement: manifold drivers of self-Interest and researcher behavior. Acad Med 84(11):1491–1499
Koshland DE (1987) Fraud in science. Science 235:141–142
Blair G, Imai K, Zhou Y-Y (2015) Design and analysis of the randomized response technique. J Am Stat Assoc (in press)
Warner SL (1965) Randomized response: a survey technique for eliminating evasive answer bias. J Am Stat Assoc 60(309):63–69
List JA, Bailey CD, Euzent PJ et al (2001) Academic economists behaving badly? A survey on three areas of unethical behavior. Econ Inq 39(1):162–170
Fanelli D (2009) How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 4(5):e5738. doi:10.1371/journal.pone.0005738
Smith R (2006) Research misconduct: the poisoning of the well. J R Soc Med 99(5):232–237. doi:10.1258/jrsm.99.5.232
Sovacool BK (2008) Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? Bioeth Inq 5(4):271–282. doi:10.1007/s11673-008-9113-6
Davis MS, Riske-Morris M, Diaz SR (2007) Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics 13(4):395–414. doi:10.1007/s11948-007-9045-2
Chubin DE (1985) Misconduct in research: an issue of science policy and practice. Minerva 23(2):175–202. doi:10.1007/bf01099941
De Vries R, Anderson MS, Martinson BC (2006) Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Eth 1(1):43–50
Gaddis B, Helton-Fauth W, Scott G et al (2003) Development of two measures of climate for scientific organizations. Account Res 10(4):253–288
Adams D, Pimple KD (2005) Research misconduct and crime lessons from criminal science on preventing misconduct and promoting integrity. Account Res 12(3):225–240
Ariely D (2012) The honest truth about dishonesty. Harper Collins Publishers, New York
Swazey JP, Scher SR (1982) Whistleblowing in biomedical research. Government Printing Office, Washington
Poisson R (1994) Fraud in breast-cancer trials [letter]. N Engl J Med 330(20):1460
Buyse M, George SL, Evans S et al (1999) The role of biostatistics in the prevention, detection and treatment of fraud in clinical trials. Stat Med 18(24):3435–3451
Knatterud GL, Rockhold FW, George SL et al (1998) Guidelines for quality assurance in multicenter trials: a position paper. Control Clin Trials 19(5):477–493. doi:10.1016/s0197-2456(98)00033-6
Ethics declarations
Conflict of interest
The author declares that he has no conflict of interest.
George, S.L. Research misconduct and data fraud in clinical trials: prevalence and causal factors. Int J Clin Oncol 21, 15–21 (2016). https://doi.org/10.1007/s10147-015-0887-3