Abstract
Psychological injuries concern conditions produced by negligent actions, such as in motor vehicle accidents, that result in claims for damages, such as in tort. In the psychological injury context, malingering refers to fabrications or gross exaggerations of psychological conditions for purposes of monetary gain. The diagnoses that might result in such cases (e.g., posttraumatic stress disorder (PTSD), chronic pain, and persistent postconcussive syndrome (PPCS) after a mild traumatic brain injury (mTBI)) are considered contentious, and so is the attribution of malingering and related negative response biases. This paper reviews the literature since the publication of the book by Young (2014) on malingering in the context of psychological injury cases. In particular, it examines the recent literature on the definition of malingering and on its prevalence, or base rate, in the forensic disability and related context. The paper reviews not only recent articles but also the 2015 Institute of Medicine book on the use of validity tests in social security disability examinations. It examines the seminal work of Larrabee, Millis, and Meyers (2009) on the prevalence of malingering and indicates its inconsistencies. The paper concludes that (a) the definition of malingering can be improved and (b) the prevalence of malingering according to the recent research, as well as in Young’s 2014 book on the topic, is less than the catchy figure of 40 ± 10 % that is prominent in some circles in the literature. More likely, it gravitates around 10 to 20 %, so that, instead of 40 ± 10 %, the most appropriate percentage of malingering and related negative response biases in the disability and forensic context, as well as the clinical context, might be 15 ± 15 %, in general, with the percentage possibly being more than that for cases such as mTBI involving PPCS. These findings are pertinent to practice and for court.
The article presents recent conceptualization and research related to the area of malingering in forensic disability and related cases. This area has been referred to as involving psychological injuries and law. Psychological injuries refer to liable harms that result in actionable claims for which legal suits are launched because of the negligence involved, the aim of which is to obtain damages (Young & Drogin, 2014). The critical complicating factor in such cases concerns the possibility of malingering (Young, 2014). However, there is no universally accepted definition of malingering, nor a conclusive estimate of its prevalence in these types of cases.
Complicating matters, the venues involved in disability determinations are confronted with a huge number of cases, for example, involving PTSD in disability tort cases, in worker compensation, among military veterans, and within the social security administration files (e.g., Bass & Halligan, 2014; Chafetz & Underhill, 2013; Russo, 2014).
There is no gold standard or best way to determine its presence, nor agreement on the best tests to use toward its determination or on the best malingering detection systems that collate information across tests. Despite this uncertainty in the field, a widely cited estimate of the prevalence of malingering is 40 ± 10 % (Larrabee et al., 2009; Larrabee, 2012). Aside from reviewing the literature on the definition and prevalence of malingering in forensic disability and related contexts, an important goal of the present paper is to show that this widely cited estimate of malingering is more “murky” than the stated “magical” quality attributed to it.
For relevant background information on malingering, the reader should consult Rogers (2008) and Carone and Bush (2013). Also, Young (2014, 2015c) has developed a malingered PTSD detection system that might be helpful. It was constructed based on prior malingering detection systems for the other major psychological injuries (Slick, Sherman, and Iverson (1999) for malingered neurocognitive dysfunction (MND) and Bianchini, Greve, and Glynn (2005) for malingered pain-related disability (MPRD)), but it stands as the first one dedicated to the detection of malingered PTSD.
Malingering Definition and Prevalence in Young (2014)
Definition
Review
The DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision; American Psychiatric Association, 2000) defines malingering as the “intentional production” of “grossly exaggerated” or “false” “psychological” and “physical” symptoms that derives from “motivation by external incentives,” for example, in obtaining financial compensation. However, Kane and Dvoskin (2011) argued for the separation of mild exaggeration from malingering, which is not the universal approach (e.g., Mittenberg, Patton, Canyock, & Condit, 2002). For Kane and Dvoskin, exaggeration concerns a “relatively mild overstatement” of injury sequelae, and it is not the same as the gross exaggeration that is part of the DSM definition.
According to Young (2014), an improved definition of malingering would remove the term “production” and replace it with “presentation.” Therefore, malingering should be defined as: the intentional presentation of false or grossly exaggerated symptoms [physical, mental health, or both; full or partial; mild, moderate, or severe], for purposes of obtaining an external incentive, such as monetary compensation for an injury and/or avoiding/evading work, military duty, or criminal prosecution.
Comment
On the question of defining malingering, Miller (2015) adopted a very similar position to that of Young (2014) and of Kane and Dvoskin (2011). Miller (2015) posited that malingering could be: (a) outright fabrication without any real symptoms; (b) exaggeration so that symptoms are far worse than they really are; (c) false “extension” of recovered/improved symptoms as still being present, e.g., at initial or even worse levels; and (d) maintaining that genuine symptoms are due to a compensable negligent act when there is no linkage to it. Clinically, malingered PTSD might present with dramatic flashbacks, atypical nightmares (e.g., stereotypic), and exaggerations/contradictions, among other indicators. The approach taken by Miller (2015) underscores several important issues. First, malingering might involve an absence of “real symptoms.” Second, if exaggeration of real symptoms is involved, it is at a level that makes them “far worse.” The American Academy of Psychiatry and Law (2015) took a similar position.
The 2015 Institute of Medicine book authors (IOM 2015) defined malingering as the intentional presentation of false or exaggerated symptoms, or intentionally poor performance, or both, for purposes of external incentives. This definition is consistent with my approach to define malingering in terms of presentation rather than production (Young, 2014). However, it conflates outright malingering with any level of exaggeration, which may set the bar too low, and so lead to false positives.
Base Rate
Review
Mittenberg et al. (2002) conducted a survey of forensic workers in the field on their estimates of malingering. They surveyed practitioners about their approaches to the matter, and the respondents had dealt with over 30,000 cases of neuropsychological assessment that took place in the prior year. In describing Mittenberg et al. (2002), Boone (2011) noted that the rate at issue in Mittenberg et al. could be up to 41 % for cases of mild traumatic brain injury (mTBI). Inspection of Mittenberg et al. (2002) indicates that this percentage is an adjusted rate, with the unadjusted one being 39 %. Also, for personal injury cases and disability/worker compensation cases, the respective reported and adjusted rates are 29 and 30 %, and 30 and 33 %.
Evidence
Reference to the actual research conducted by Mittenberg et al. (2002) revealed several inconsistencies. For example, in the survey, the definitions of malingering and exaggeration were not provided to the respondents. Moreover, not only was malingering conflated with exaggeration in the study but also exaggeration was not specified for severity. Further, the survey involved questions about “probable” exaggeration or malingering.
Furthermore, reference to other research after the publication by Mittenberg et al. (2002) on the topic of the base rate of malingering and related negative response biases in the forensic disability and related context gives a mixed picture. In this regard, Chafetz (2011) examined the performance of social security disability claimants. Of 161 claimants in the sample, 38.5 % were classified as either probable or definite malingerers. However, I examined the breakdown of the two categories, which revealed that only 15 % were classified as definite malingerers (N = 24).
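The breakdown can be sketched with a quick calculation (figures as reported above; the combined probable-plus-definite count is inferred from the 38.5 % figure, so it is an approximation):

```python
# Sketch of the rate comparison discussed above. Figures are from Chafetz (2011)
# as reported in the text; the combined count is inferred from the 38.5 % figure.
n_claimants = 161
n_definite = 24                                       # definite malingerers
n_probable_or_definite = round(0.385 * n_claimants)   # inferred: 62 claimants

combined_rate = n_probable_or_definite / n_claimants  # ~38.5 %
definite_rate = n_definite / n_claimants              # ~14.9 %

print(f"combined (probable + definite): {combined_rate:.1%}")
print(f"definite only:                  {definite_rate:.1%}")
```

The point of the sketch is that reporting only the combined category more than doubles the apparent rate relative to definite malingering alone.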
In Young (2014), I sought the prevalence rate for malingering in more recent research (into 2013) that offered percentages on the question in forensic disability and related examinations. I had to retrieve the percentages from the data within the studies because getting these percentages did not constitute a primary goal of the studies.
In this regard, Greve, Ord, Bianchini and Curtis (2009b) examined the prevalence of malingered disability in compensation-seeking chronic pain patients. Of the 508 patients, up to 36 % were classified as probable or definite malingerers, but with only 10 % as definite malingerers.
Wygant, Anderson, Sellbom, Rapier, Allgeier, and Granacher (2011) examined the results of 251 individuals who had undergone compensation-seeking evaluations. The percentage of definite malingering was only 8 % in this study.
Lee, Graham, Sellbom, and Gervais (2012) investigated claimants who had undergone non-neurological medico-legal disability assessments. Of 1209 patients who met inclusion criteria, only 19 met the criteria for definite malingering. This works out to a percentage of about 2 %. These results were not the main ones to which the study was aimed, and I had to calculate the latter percentage myself, as I had done for Wygant et al. (2011).
Rogers and Bender (2012) noted that in a survey of National Academy of Neuropsychology (NAN) members, of the respondents, Sharland and Gfeller (2007) found that the median for definite malingering was only 1 % (in their Table 3). Similarly, Slick, Tan, Strauss, and Hultsch (2004) surveyed published researchers on malingering. Only 13 % rated the prevalence of the category “definite” malingering at 30 % or more.
Comment
Of the percentages listed in the prior research analyzed in Young (2014) for the base rate of malingering or related feigning, the average involved is only 7 %. Larrabee et al. (2009) had argued that the standard malingering base rate in the field of forensic disability assessments could be characterized as 40 ± 10 %. Furthermore, Larrabee, Greiffenstein, Greve, and Bianchini (2007) had even argued that, in neuropsychological assessments of mTBI in which neuropsychological deficits persist, the rate of malingering might be as high as 88 %!
However, the evidence reviewed in this paper does not suggest such extreme proportions for the base rate of malingering. Nevertheless, in terms of problematic cases, in general, the percentage might be higher than 40 ± 10 %! Young (2014) concluded that the estimates of the rate of outright malingering could be as high as 15 %, with problematic presentations and performances to lesser degrees than outright malingering higher than that.
To buttress my conclusion that the estimate of the base rate of malingering in Larrabee’s work is much higher than what is found in the actual data in the field, I returned to the original Larrabee sources and carefully analyzed the literature he cited and the conclusions he based on it. This analysis, presented in the next section, shows that 40 ± 10 % is not the magical number for the base rate of malingering that has been attributed to it. Rather, it is a murky number that is unjustified and exaggerated, and that can lead to errors and distortions in court.
The Murky Prevalence Estimate of Malingering of 40 ± 10 %
Larrabee et al. (2009) heralded a new era in the estimate of malingering prevalence in forensic disability and related assessments by referring to the new magical number of 40 ± 10 %. In support of this claim, they referred to the survey of forensic neuropsychologists conducted by Mittenberg et al. (2002), which, as mentioned, showed that these practitioners estimated probable malingering or response exaggeration as up to 41.2 % for evaluees assessed for mild head injury (the estimate corrected for referral source). Other categories of evaluees also showed values in this range (29 % for personal injury, 30 % for disability).
Next, Larrabee et al. (2009) cited the review of 11 studies by Larrabee (2003) on mTBI cases in which 40 % “failed SVTs” (symptom validity tests). Larrabee (2012) referred to the performance of the evaluees in these studies as “motivated performance deficit suggestive of malingering.” Larrabee et al. (2009) then cited other research since the studies cited in Larrabee (2003) to support their contention that the base rate of malingering is the magical number of 40 ± 10 %. In this regard, I note that in the cited research the percentage of malingerers was either very low (6.7 %) or based on failing even just one SVT, which is a highly problematic decision.
In this regard, Miller, Boyd, Cohn, Wilson, and McFarland (2006) evaluated Social Security Administration (SSA) disability applicants and found that 54 % failed one or the other of two SVTs that were administered. Van Hout, Schmand, Wekking, and Deelman (2006) evaluated effects of suspected neurotoxic injury, and 57 % failed one or more of three SVTs that were administered. Using the MND criteria of Slick et al. (1999), Greve, Bianchini, Black, Heinly, Love, Swift, and Ciota (2006a) found a rate of 6.7 % of definite malingering in evaluees who had been exposed to occupational/environmental substances. Larrabee et al. (2009) arrived at a prevalence of malingering of 40 % for that study because they added in the 33.3 % of the sample that scored at the level of probable malingering on the MND. They concluded that for “invalid neuropsychological data/probable malingering,” over the studies cited, there is a “remarkable consistency” in finding a base rate of “40–50 % or more.”
To conclude, I examined the original data in Larrabee (2003), both for his own study described there and for the 11 studies cited. According to him, in the sample of 95 cases scrutinized in his personal study on the matter, the malingering base rate was 43 %. However, of the 41 cases labeled as malingerers, I note that 24 were definite according to the MND and 17 were probable. Therefore, the percentage for definite malingering in this study works out to 24/95, or 25.3 %, and not 43 %!
In terms of the 11 studies that Larrabee (2003) cited in his literature review, although the average was 40 % of evaluees who performed with “motivated performance deficit suggestive of malingering,” the range across the studies was 15 to 64 %. Further, I note that Larrabee (2003) did not describe the studies in detail or how his percentages of the base rate of malingering were derived from them. However, given the inconsistencies found, in general, in how he and others define malingering and establish its base rate, it is probable that the percentage of alleged malingering in the 11 cited studies was either not directly assessed or was assessed in a manner that conflated it with other negative response biases, such as counting even one SVT failure or the mildest of exaggeration. To summarize, the research that Larrabee has conducted or cited to justify his estimated prevalence rate of malingering of 40 ± 10 % does not support this value.
Literature Review in the IOM (2015) Book
Introduction
The Institute of Medicine (IOM, 2015) examined the utility of the validity tests of SVTs and performance validity tests (PVTs) in SSA disability examinations. The authors were Pardes, Barsky, Daly, Geisinger, Gerber, Jette, Koop, Suzuki, Twamley, Ubel, and Wall. With respect to the question of the prevalence of malingering, the literature reviewed gave one impression, my analysis of it another, and their conclusions a third.
Malingering
In terms of the prevalence of malingering in SSA disability examinations according to the 2015 IOM book authors, the studies reviewed suggested a range from 14 to 60 % for either performing below capacity according to PVTs or inaccurately reporting one’s symptoms, or according to other indicators. In particular, in Griffin, Normington, May, and Glassmire (1996), the percentage for malingering was 19 %. In Chafetz and Abrahams (2005), below-chance test failure was 14 % and failing two or more validity indicators was 59 %. For Miller et al. (2006), the test failure rate was 54 %. For Chafetz, Abrahams, and Kohlmaier (2007), a malingering index over tests gave percentages of 21–30 % for below-chance performance and 52–59 % for test failure. In Chafetz (2008), the percentages for two test failures and below-chance performance were 46 to 60 % and 37 to 47 %, respectively.
Chafetz and Underhill (2013) noted that the frequency of feigning of disabling illness in evaluation of adult disability compensation in the Social Security Disability (SSD) is 46 % to 60 %.
Comment
Despite the literature review data, the IOM authors preferred to estimate that about 10 % of applicants would be excluded from benefits should more rigorous screening be undertaken for malingering, using validity tests and consideration of the whole files involved. This percentage is consistent with the one in Young (2014), as well as the one found in the present review, as per the following.
2014–2015 Literature Review
Prevalence: Preparing the Analysis Undertaken
Introduction
Next, I examine all the recent research that relates to validity testing in the area of forensic disability and related determinations, including presentation of 13 recent studies not yet mentioned in the article that give percentages of malingering or related negative response biases. The primary goal of these latter studies was not to arrive at the percentage of malingering in forensic disability and related evaluations, so I often had to ferret out the percentages from the data they reported. Before giving these percentages, I review the studies for their primary goals, as well as others that speak to the issue of malingering and its detection.
Review
Buddin, Schroeder, Hargrave, Von Dran, Campbell, Brockman, Heinrichs, and Baade (2014) argued that the Test of Memory Malingering (TOMM; Tombaugh, 1996) needs a measure of response consistency. Gunner, Miele, Lynch, and McCaffrey (2012) developed the Albany Consistency Index (ACI). Buddin et al. (2014) created the Invalid Forgetting Frequency Index (IFFI) for the same purpose.
Kulas, Axelrod, and Rinaldi (2014) investigated the new indices that have been developed for the TOMM. They examined the performance of these measures in a mixed clinical sample of military veterans. All five TOMM measures (e.g., just using Trial 1 or a reduced number of items (Denning, 2012)) helped discriminate examinees who had failed two/three alternate measures of performance validity.
Bashem, Rapport, Miller, Hanks, Axelrod, and Millis (2014) examined the discriminability of five PVTs: the TOMM, Medical Symptom Validity Test (MSVT; Green, 2004), Reliable Digit Span (RDS; Schroeder, Twumasi-Ankrah, Baade, & Marshall, 2012), Word Choice Test (WCT; Wechsler, 2009), and California Verbal Learning Test – Forced Choice (CVLT-FC; Delis, Kramer, Kaplan, & Ober, 2000). The results showed better discrimination ability for the TOMM, MSVT, and CVLT-FC relative to the WCT and RDS.
Mossman, Miller, Lee, Gervais, Hart, and Wygant (2015) developed a Bayesian approach to mixed group validation that did not involve groups as in prior designs. The findings were consistent with the ones in the general literature on the value of the TOMM.
Axelrod, Meyers, and Davis (2014) examined three different scoring systems for the Finger Tapping Test (FTT; Reitan & Wolfson, 1985). They determined the test could function as an index of performance validity in neuropsychological assessments of veterans and of evaluees in IMEs (independent medical examinations).
Guise, Thompson, Greve, Bianchini, and West (2014) examined traumatic brain injury (TBI) claimants (mild, severe; designated as MND, not) and non-head injured patients for performance validity according to measures on the Stroop Color and Word Test (Stroop, 1935) given as part of neuropsychological examination. Word (more than Color and Word-Color) residual raw scores differentiated best examinees having been categorized as malingerers and those not so categorized.
Henry, Heilbronner, Mittenberg, Hellemann, and Myers (2014) developed a new 13-item cognitive complaints scale on the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2; Butcher, Graham, Ben-Porath, Tellegen, Dahlstrom, & Kaemmer, 2001) as an embedded neuropsychological measure of symptom validity. The subscale differentiated mTBI patients who failed or passed performance validity indices (controls also were tested).
Lindley, Carlson, and Hill (2014) found that, among 30 Vietnam combat veterans with severe and chronic PTSD, 20 expressed psychotic-like symptoms. Among the measures used in the study, the PCL-C (PTSD Symptom Checklist-Civilian; Weathers, Litz, Herman, Huska, & Keane, 1993) was used to help assess PTSD.
Nguyen, Green, and Barr (2015) found that the F family of tests on the Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) differentiated pass and fail groups according to the malingered detection system MND (Slick et al., 1999). The F-r (Infrequent Responses), FBS-r (Symptom Validity), and RBS (Response Bias Scale) differentiated the pass-fail groups within two of the three evaluee groups (neurological, psychiatric; not medical complaints), with the Fs scale differentiating those who passed and failed the MND for the psychiatric group only.
Proto, Pastorek, Miller, Romesser, Sim, and Linck (2014) examined veterans with claimed mTBI both with PVTs and a neuropsychological battery. Even failing one PVT led to differential neuropsychological performance, and failing two of them gave almost identical results.
Whiteside, Kogan, Wardin, Phillips, Franzwa, Rice, Basso, and Roper (2015) studied the Boston Naming Test (BNT) and the Verbal Fluency test (FAS and Animal Fluency) as PVTs in neuropsychological assessment using a compensation-seeking mTBI sample that was compared especially to a serious TBI one. Only a combined measure derived by logistic regression was acceptable in differentiating mTBI cases who had failed two or more PVTs from serious TBI counterparts who had not failed any.
Whitney and Davis (2015) studied the ability of measures on the Rey Auditory Verbal Learning Test (RAVLT) to predict credible/non-credible neuropsychological test performance in veterans. The recognition score of the test produced better classification accuracy than its “non-credible” score. The non-credible group involved meeting MND criteria while failing either the TOMM or MSVT (except if the test gave a Genuine Memory Impairment Profile, GMIP).
Comment
Overall, these studies reveal a rapid proliferation of quality research aimed at establishing valid test practices for differentiating credible and non-credible performance in forensic disability and related assessment contexts. Commonly, these studies use the MND or a variant as a malingering detection system and two or more PVTs in the assessments.
Some would argue that even one PVT failure is informative (e.g., Proto et al., 2014), while others refer to the criterion of at least two such failures (e.g., Slick et al., 1999), if not three of them (e.g., Boone & Lu, 2007). Also, context or type of forensic disability-related evaluation is important to consider (Cottingham, Victor, Boone, Ziegler, & Zeller, 2014). There are as yet no gold standards in the field.
One critical variable in establishing whether an examinee is putting forth sufficient effort on PVTs is the cut score used for the measure at issue. Recommendations of where the critical dividing line (or lines) should be set can vary even within one instrument, depending on the study, as shown in the following.
Cut Scores
Introduction
Haynes, Smith, and Hunsley (2011) defined cut scores as specific scores on a measure that serve to divide the distribution of scores for the measure into categories (two or more, depending on needs). Cut scores might be statistically derived but also could be rationally derived (experientially), and they might change as the research on the measure at issue evolves. Haynes et al. (2011) added that cut score determination is not “wholly objective”; it requires judgment, and cut scores are “conditional.” Their goal is to facilitate accurate decision-making. The literature does not provide “firm” guidelines for establishing them. Ben-Porath, Greve, Bianchini, and Kaufmann (2009) added that, in the forensic context, cut scores might vary with the “facts” of the evaluation and with the context.
To illustrate the difficulties in establishing cut scores, consider the two studies reviewed in the next section, which sought to decipher, for the same two tests, the best cut scores to use in forensic disability and related evaluations. The results were not the same. They show that different cut scores can be obtained based on the context, population, and so on. Moreover, these scores will vary based on the respective value placed on false positives vs. false negatives.
Review
Crighton, Wygant, Applegate, Umlauf, and Granacher (2014) asked whether two brief measures, the Modified Somatic Perception Questionnaire (MSPQ; Main, 1983) and the Pain Disability Index (PDI; Pollard, 1984), screen effectively for malingering in relation to the MPRD criteria. The authors concluded that in screening in clinical settings individuals in evaluation for disability for pain, scores of ≥14 on the MSPQ or ≥54 on the PDI should be used.
Bianchini, Aguerrevere, Guise, Ord, Etherton, Meyers, Soignier, Greve, Curtis, and Bui (2014) examined the accuracy of the MSPQ and PDI in relation to classification of examinees according to the MPRD. Their Table 7 showed the following cut scores on the MSPQ and PDI as screeners for comprehensive psychological evaluation and/or functional capacity evaluation: MSPQ ≥ 17; PDI ≥ 62.
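As a minimal sketch of why the discrepancy matters, the following applies the two sets of published cut scores to a single hypothetical examinee whose scores fall between them (the scores of 15 and 58 are invented for illustration, not taken from either study):

```python
# Hypothetical illustration (not patient data): the same MSPQ/PDI scores can be
# flagged or not depending on which study's cut scores are applied.
CUTS = {
    "Crighton et al. (2014)":  {"MSPQ": 14, "PDI": 54},
    "Bianchini et al. (2014)": {"MSPQ": 17, "PDI": 62},
}

def screens_positive(mspq, pdi, cuts):
    """Flag if either measure meets or exceeds its cut score."""
    return mspq >= cuts["MSPQ"] or pdi >= cuts["PDI"]

# A hypothetical examinee scoring between the two sets of cuts:
mspq, pdi = 15, 58
for study, cuts in CUTS.items():
    print(study, "->", screens_positive(mspq, pdi, cuts))
# Flagged under the lower cuts but not under the higher ones.
```

The same examinee screens positive under one study's cuts and negative under the other's, which is exactly the practice-level inconsistency discussed in the Comment below.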
Comment
Two recent studies conducted independently on the same question found overlapping but different cut scores for the tests involved. This type of inconsistency in the research could lead to similar inconsistencies in how the tests are used in practice.
Prevalence or Base Rate of Malingering in the 2014/2015 Research Cited
Introduction
In the literature review in the prior section of this article, I have described 13 studies (Table 1). In none of the cases was the intention to address specifically the prevalence or base rate of malingering or a related negative response bias. Therefore, I conducted a careful analysis of what the prevalence of malingering appears to be in these studies. The 13 studies in this analysis are the ones by: Axelrod et al. (2014); Bianchini et al. (2014); Buddin et al. (2014); Crighton et al. (2014); Guise et al. (2014); Henry et al. (2014); Kulas et al. (2014); Larrabee (2014); Lindley et al. (2014); Nguyen et al. (2015); Proto et al. (2014); Whiteside et al. (2015); and Whitney and Davis (2015).
Review
For Axelrod et al. (2014), the percentages related to malingering-type behavior were 9 % for veterans in neuropsychological assessments and 25 % for IMEs, or an average of 17 %. For Bianchini et al. (2014), who examined pain complainants, the percentage was 5 % for definite malingerers. For Buddin et al. (2014), the equivalent group involved the assignment of definite malingering in neuropsychological evaluations, and the percentage for this group was 3 %. As for Crighton et al. (2014), the percentage for a combined group of probable and definite malingering was 24 % for their forensic disability cases. In Guise et al. (2014), the percentage was 33 % among mild and moderate-severe TBI cases, either probable MND or not (41/126). For Henry et al. (2014), the percentage appears to be 50 % among personal injury litigants. Kulas et al. (2014) included a group with suboptimal PVT measure performance, and the percentage of the sample of veterans in neuropsychological assessment at this level was 10 %. For Larrabee (2014), who tested mTBI complainants, the figure rose to 59 % for his definite malingering group. For Lindley et al. (2014), among PTSD cases the percentage of questionable Rey test performance was 6 %. For Nguyen et al. (2015), the average for MND failure was 32 %, with the percentages for the psychiatric, neurological, and medical complaint groups being 17, 42, and 39 %, respectively. In Proto et al. (2014), failure on three PVTs occurred in 16 % of veterans in neuropsychological assessment. For Whiteside et al. (2015), the percentage for mTBI cases failing two or more PVTs was 23 % (the equivalent percentage for serious TBI was not made available). Finally, Whitney and Davis (2015) conducted neuropsychological assessments of veterans, and 37/175 (21 %) failed both the TOMM and MSVT while being designated probable/definite MND.
Comment
If we take the 13 obtained and estimated results for the percentages of definite malingering or its equivalent in these 13 studies in 2014–2015 that have been reviewed and average them, the percentage is 23 %. This average estimate of malingering in the forensic disability and related context seems high relative to the 10 % as estimated as non-credible in the 2015 IOM book but low relative to other estimates toward 40 % or more (e.g., Larrabee et al., 2009).
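The averaging step can be reproduced from the study-level percentages listed in the Review subsection (Axelrod et al.'s 17 % is the mean of its two settings, as noted there):

```python
# Reconstruction of the averaging described above, using the 13 study-level
# percentages reported in the Review subsection.
rates = {
    "Axelrod et al. (2014)":   17,  # mean of 9 % (veterans) and 25 % (IMEs)
    "Bianchini et al. (2014)":  5,
    "Buddin et al. (2014)":     3,
    "Crighton et al. (2014)":  24,
    "Guise et al. (2014)":     33,
    "Henry et al. (2014)":     50,
    "Kulas et al. (2014)":     10,
    "Larrabee (2014)":         59,
    "Lindley et al. (2014)":    6,
    "Nguyen et al. (2015)":    32,
    "Proto et al. (2014)":     16,
    "Whiteside et al. (2015)": 23,
    "Whitney & Davis (2015)":  21,
}
mean = sum(rates.values()) / len(rates)
print(f"mean: {mean:.0f} %")                                    # 23 %
print(f"range: {min(rates.values())}-{max(rates.values())} %")  # 3-59 %
```

The unweighted mean of 23 % and the 3 to 59 % range match the figures discussed in this Comment.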
Note that the range of estimates of malingering or related feigning findings in the 13 studies reviewed in this paper since the review of studies in the book by Young (2014) is 3 to 59 %. This range attests to the differences in methods across the studies, and fuels the controversies associated with malingering in both research and practice.
Overall, the more recent estimates in the literature of the base rate of malingering or related feigning appear to hover around 15 %. In this regard, in the IOM 2015 book, in Young (2014), and in the present 2015 review, the respective percentages that are favored or found are about 10, 15, and 20 (plus) %.
Note that these percentages related to malingering in forensic disability and related cases are not necessarily about malingering itself; generally, they concern either test failures that might indicate it or negative response biases more broadly, but not malingering per se. Moreover, the findings in all these reviews of the literature vary over type of evaluee (e.g., neuropsychological/mTBI; psychiatric), definition of malingering, malingering and related feigning criteria used (e.g., MND, number of PVTs failed), and tests used (which PVTs, SVTs).
Therefore, it is difficult to arrive at one percentage or range of percentages that is definitive about the proportion of malingering found in forensic disability and related examinations. However, the 15 % value could serve as one axis in this regard, with higher percentages possible, given problematic cases such as mTBI leading to PPCS.
For example, the results in Nguyen et al. (2015), as described above, are quite telling in that the estimated rate relative to malingering was 17 % for psychiatric evaluees but 42 % for neurological ones. These two values are quite consistent with the present suggestion to consider malingering as having a 15 % rate on average, with more problematic cases related to mTBI having one towards 40 %.
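Although the source does not present this calculation, the forensic import of which base rate is assumed can be sketched with Bayes' theorem: for a fixed validity-test sensitivity and specificity (the 0.80 and 0.90 figures below are hypothetical, not drawn from any cited study), the probability that a flagged examinee is actually malingering shifts substantially between an assumed 15 % and 40 % base rate.

```python
# Illustrative only: how the assumed base rate changes the probability that a
# flagged examinee is actually malingering (positive predictive value, PPV).
# The 0.80 sensitivity and 0.90 specificity are hypothetical values.
def positive_predictive_value(base_rate, sensitivity=0.80, specificity=0.90):
    true_pos = base_rate * sensitivity           # malingerers correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # credible evaluees flagged
    return true_pos / (true_pos + false_pos)

for base_rate in (0.15, 0.40):
    ppv = positive_predictive_value(base_rate)
    print(f"assumed base rate {base_rate:.0%} -> PPV {ppv:.0%}")
# assumed base rate 15% -> PPV 59%
# assumed base rate 40% -> PPV 84%
```

Under these hypothetical test characteristics, assuming the 40 ± 10 % base rate makes a positive finding look far more conclusive than it would be at 15 %, which is one way an inflated base rate could lead to errors and distortions in court.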
That being said, Wisdom, Pastorek, Miller, Booth, Romesser, Linck, and Sim (2014) examined 134 military veterans with a history of mTBI who had been referred for neuropsychological evaluation. The results showed that WMT failure was associated with worse performance on many of the cognitive measures administered. These results do not refer to malingering, per se, but they do indicate that even one PVT failure can be informative in forensic disability and related contexts.
Conclusions
Forensic disability and related assessments are replete with contested areas that make it difficult to conduct them. Among the most contentious is the area of malingering, which is debated even in terms of its basic definition. Moreover, the research on its prevalence suggests that it is in the neighborhood of 40 ± 10 % in this context, in general, and perhaps higher in assessments of mTBI cases with persistent complaints. However, both in the literature review conducted by Young (2014) and in the present literature review updating it, the percentage of malingering found in the empirical research appears far less than these elevated proportions, which is consistent with the IOM (2015) review of the topic.
Perhaps a good way of summarizing the range of possibilities in various contexts, from the clinical and genuinely hurt to high-stakes neuropsychological examinations involving mTBI leading to PPCS, is that we can expect a rate of malingering generally at the level of 15 ± 15 %, with the percentage possibly higher, especially for mTBI leading to PPCS. Also, the rate of quite problematic presentations and feigning, in general, could be much higher than the rate of malingering, per se.
Bigler (2014) went beyond querying the value of one test or another in malingering determinations by questioning the whole testing enterprise. He argued that neuroimaging results may be “key” to better understanding the meaning of grey-zone performance. However, he supported his contention with case studies. Clearly, further research is warranted, but I do not doubt that PVTs and SVTs will be found useful, as do the authors of the 2015 IOM book.
Finally, the present review has relevant practice implications and innovations because it has examined carefully the purported malingering base rate of 40 ± 10 % and found it severely wanting. My analysis of Larrabee’s (2003) own research shows a rate of 25 %, which is just about the average of the recent research that I analyzed in this article (23 %). Furthermore, this latter value reflects data in prior studies that include problematic presentation, in general, and not just malingering itself. That is, the 40 ± 10 % value of the base rate of malingering is not a magical number but a murky one that has no place in court. Further research directly on the question is needed and, in this sense, there will be nothing magical about the value that is found, because it will be science-informed rather than mysteriously derived.
References
American Academy of Psychiatry and Law. (2015). AAPL practice guideline for the forensic assessment. The Journal of the American Academy of Psychiatry and the Law, 43, S3–S53.
American Psychiatric Association. (2000). Diagnostic and Statistical Manual of Mental Disorders: DSM-IV-TR (4th ed., text rev.). Washington, DC: American Psychiatric Association.
Arnold, G., Boone, K. B., Lu, P., Dean, A., Wen, J., Nitch, S., & McPherson, S. (2005). Sensitivity and specificity of Finger Tapping Test scores for the detection of suspect effort. The Clinical Neuropsychologist, 19, 105–120.
Axelrod, B. N., Meyers, J. E., & Davis, J. J. (2014). Finger Tapping Test performance as a measure of performance validity. The Clinical Neuropsychologist, 28, 876–888.
Babikian, T., Boone, K. B., Lu, P., & Arnold, G. (2006). Sensitivity and specificity of various digit span scores in the detection of suspect effort. The Clinical Neuropsychologist, 20, 145–159.
Bashem, J. R., Rapport, L. J., Miller, J. B., Hanks, R. A., Axelrod, B. N., & Millis, S. R. (2014). Comparisons of five performance validity indices in bona fide and simulated traumatic brain injury. The Clinical Neuropsychologist, 28, 851–875.
Bass, C., & Halligan, P. (2014). Factitious disorders and malingering: Challenges for clinical assessment and management. The Lancet, 383, 1422–1432.
Ben-Porath, Y. S., & Tellegen, A. (2008/2011). MMPI-2-RF: Manual for administration, scoring, and interpretation. Minneapolis, MN: University of Minnesota Press.
Ben-Porath, Y. S., Greve, K. W., Bianchini, K. J., & Kaufmann, P. M. (2009). The MMPI-2 Symptom Validity Scale (FBS) is an empirically validated measure of overreporting in personal injury litigants and claimants: Reply to Butcher et al. (2008). Psychological Injury and Law, 2, 62–85.
Benton, A. L., Sivan, A. B., Hamsher, K. deS., Varney, N. R., & Spreen, O. (1994). Contributions to neuropsychological assessment: A clinical manual (2nd ed.). New York: Oxford University Press.
Bianchini, K. J., Greve, K. W., & Glynn, G. (2005). Review article: On the diagnosis of malingered pain-related disability: Lessons from cognitive malingering research. The Spine Journal, 5, 404–417.
Bianchini, K. J., Aguerrevere, L. E., Guise, B. J., Ord, J. S., Etherton, J. L., Meyers, J. E., ... Bui, J. (2014). Accuracy of the modified somatic perception questionnaire and pain disability index in the detection of malingered pain-related disability in chronic pain. The Clinical Neuropsychologist, 28, 1376–1394.
Bigler, E. D. (2014). Effort, symptom validity testing, performance validity testing and traumatic brain injury. Brain Injury, 28, 1623–1638.
Binder, L. M. (1993). Portland Digit Recognition Test manual (2nd ed.). Portland: Private Publication.
Boone, K. B. (2011). Clarification or confusion? A review of Rogers, Bender, and Johnson’s a critical analysis of the MND criteria for feigned cognitive impairment: Implications for forensic practice and research. Psychological Injury and Law, 4, 157–162.
Boone, K. B., & Lu, P. H. (2007). Non-forced choice effort measures. In G. Larrabee (Ed.), Assessment of malingered neuropsychological deficits (pp. 27–43). New York: Oxford University Press.
Boone, K. B., Lu, P., Back, C., King, C., Lee, A., Philpott, L., ... Warner-Chacon, K. (2002). Sensitivity and specificity of the Rey Dot Counting Test in patients with suspect effort and various clinical samples. Archives of Clinical Neuropsychology, 17, 625–642.
Buddin, W. H., Jr., Schroeder, R. W., Hargrave, D. D., Von Dran, E. J., Campbell, E. B., Brockman, C. J., ... Baade, L. E. (2014). An examination of the frequency of invalid forgetting on the test of memory malingering. The Clinical Neuropsychologist, 28, 525–542.
Busse, M., & Whiteside, D. (2012). Detecting suboptimal cognitive effort: Classification accuracy of the Conner’s Continuous Performance Test-II, Brief Test of Attention, and Trail Making Test. The Clinical Neuropsychologist, 26, 1–13.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Manual for the restandardized Minnesota Multiphasic Personality Inventory: MMPI-2. An interpretive guide. Minneapolis: University of Minnesota Press.
Butcher, J. N., Graham, J. R., Ben-Porath, Y. S., Tellegen, A., Dahlstrom, W. G., & Kaemmer, G. (2001). Minnesota Multiphasic Personality Inventory-2: Manual for administration and scoring (2nd ed.). Minneapolis: University of Minnesota Press.
Carone, D. A., & Bush, S. S. (2013). Mild traumatic brain injury: Symptom validity assessment and malingering. New York: Springer.
Chafetz, M. D. (2008). Malingering on the Social Security disability consultative examination: Predictors and base rates. The Clinical Neuropsychologist, 22, 529–546.
Chafetz, M. D. (2011). Reducing the probability of false positives in malingering detection of social security disability claimants. The Clinical Neuropsychologist, 25, 1239–1252.
Chafetz, M., & Abrahams, J. (2005). Green’s MACT helps identify internal predictors of effort in the social security disability exam. Archives of Clinical Neuropsychology, 20, 889–890.
Chafetz, M., & Underhill, J. (2013). Estimated costs of malingered disability. Archives of Clinical Neuropsychology, 28, 633–639.
Chafetz, M. D., Abrahams, J. P., & Kohlmaier, J. (2007). Malingering on the social security disability consultative examination: A new rating scale. Archives of Clinical Neuropsychology, 22, 1–14.
Cottingham, M. E., Victor, T. L., Boone, K. B., Ziegler, E. A., & Zeller, M. (2014). Apparent effect of type of compensation seeking (disability versus litigation) on performance validity test scores may be due to other factors. The Clinical Neuropsychologist, 28, 1030–1047.
Crighton, A. H., Wygant, D. B., Applegate, K. C., Umlauf, R. L., & Granacher, R. (2014). Can brief measures effectively screen for pain and somatic malingering? Examination of the modified somatic perception questionnaire and pain disability index. The Spine Journal, 14, 2042–2050.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (1987). California Verbal Learning Test: Manual. San Antonio: Psychological Corporation.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California Verbal Learning Test (2nd ed.). San Antonio: Psychological Corporation.
Denning, J. H. (2012). The efficiency and accuracy of the Test of Memory Malingering trial 1, errors on the first 10 items of the test of memory malingering, and five embedded measures in predicting invalid test performance. Archives of Clinical Neuropsychology, 27, 417–432.
Green, P. (2003). Green’s Word Memory Test for Microsoft Windows. Edmonton: Green’s.
Green, P. (2004). Manual for Medical Symptom Validity Test (MSVT) user’s manual and program. Edmonton: Green’s.
Green, P. (2005). Word Memory Test for Windows: User’s manual and program. Edmonton: Green’s Publishing.
Green, P., Allen, L. M., & Astner, K. (1996). The Word Memory Test [manual]. Durham: Cognisyst.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218–224.
Greve, K. W., Bianchini, K. J., Mathias, C. W., Houston, R. J., & Crouch, J. A. (2002). Detecting malingered performance with the Wisconsin Card Sorting Test: A preliminary investigation in traumatic brain injury. The Clinical Neuropsychologist, 16, 179–191.
Greve, K. W., Bianchini, K. J., Mathias, C. W., Houston, R. J., & Crouch, J. A. (2003). Detecting malingered performance on the Wechsler Adult Intelligence Scale: Validation of Mittenberg’s approach in traumatic brain injury. Archives of Clinical Neuropsychology, 18, 245–260.
Greve, K. W., Bianchini, K. J., Black, F. W., Heinly, M. T., Love, J. M., Swift, D. A., & Ciota, M. (2006a). The prevalence of cognitive malingering in persons reporting exposure to occupational and environmental substances. NeuroToxicology, 27, 940–950.
Greve, K. W., Bianchini, K. J., Love, J. M., Brennan, A., & Heinly, M. T. (2006b). Sensitivity and specificity of MMPI-2 validity scales and indicators to malingered neurocognitive dysfunction in traumatic brain injury. The Clinical Neuropsychologist, 20, 491–512.
Greve, K. W., Ord, J. S., Curtis, K. L., Bianchini, K. J., & Brennan, A. (2008). Detecting malingering in traumatic brain injury and chronic pain: A comparison of three forced choice symptom validity tests. The Clinical Neuropsychologist, 22, 896–918.
Greve, K. W., Heinly, M. T., Bianchini, K. J., & Love, J. M. (2009a). Malingering detection with the Wisconsin Card Sorting Test in mild traumatic brain injury. The Clinical Neuropsychologist, 23, 343–362.
Greve, K. W., Ord, J. S., Bianchini, K. J., & Curtis, K. L. (2009b). Prevalence of malingering in patients with chronic pain referred for psychologic evaluation in a medico-legal context. Archives of Physical Medicine and Rehabilitation, 90, 1117–1126.
Griffin, G. A., Normington, J., May, R., & Glassmire, D. (1996). Assessing dissimulation among social security disability income claimants. Journal of Consulting and Clinical Psychology, 64, 1425–1430.
Guise, B. J., Thompson, M. D., Greve, K. W., Bianchini, K. J., & West, L. (2014). Assessment of performance validity in the Stroop Color and Word Test in mild traumatic brain injury patients: A criterion-groups validation design. Journal of Neuropsychology, 8, 20–33.
Gunner, J. H., Miele, A. S., Lynch, J. K., & McCaffrey, R. J. (2012). The Albany Consistency Index for the Test of Memory Malingering. Archives of Clinical Neuropsychology, 27, 1–9.
Hannay, H. J., Levin, H. S., & Grossman, R. G. (1979). Impaired recognition memory after head injury. Cortex, 15, 269–283.
Haynes, S. N., Smith, G. T., & Hunsley, J. D. (2011). Scientific foundations of clinical assessment. New York: Taylor & Francis Group.
Heaton, R. K., Grant, I., & Matthews, C. G. (1991). Comprehensive norms for an expanded Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa: Psychological Assessment Resources.
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtiss, G. (1993). Wisconsin Card Sorting Test manual. Revised and expanded. Odessa: Psychological Assessment Resources.
Henry, G. K., Heilbronner, R. L., Mittenberg, W., Hellemann, G., & Myers, A. (2014). Development of the MMPI-2 cognitive complaints scale as an embedded measure of symptom validity. Brain Injury, 28, 357–363.
Inman, T. H., Vickery, C. D., Berry, D. T. R., Lamb, D. G., Edwards, C. L., & Smith, G. T. (1998). Development and initial validation of a new procedure for evaluating adequacy of effort given during neuropsychological testing: The Letter Memory Test. Psychological Assessment, 10, 128–139.
Institute of Medicine (IOM). (2015). Psychological testing in the service of disability determination. Washington: The National Academies Press.
Iverson, G. L., & Franzen, M. D. (1998). Detecting malingered memory deficits with the Recognition Memory Test. Brain Injury, 12, 275–282.
Iverson, G. L., Lange, R. T., Green, P., & Franzen, M. D. (2002). Detecting exaggeration and malingering with the Trail Making Test. The Clinical Neuropsychologist, 16, 398–406.
Kane, A. W., & Dvoskin, J. A. (2011). Evaluation for personal injury claims. New York: Oxford University Press.
Kulas, J. F., Axelrod, B. N., & Rinaldi, A. R. (2014). Cross-validation of supplemental test of memory malingering scores as performance validity measures. Psychological Injury and Law, 7, 236–244.
Larrabee, G. J. (2003). Detection of malingering using atypical performance patterns on standard neuropsychological tests. The Clinical Neuropsychologist, 17, 410–425.
Larrabee, G. J. (2012). Assessment of malingering. In G. J. Larrabee (Ed.), Forensic neuropsychology: A scientific approach (2nd ed., pp. 116–159). New York: Oxford University Press.
Larrabee, G. J. (2014). False-positive rates associated with the use of multiple performance and symptom validity tests. Archives of Clinical Neuropsychology, 29, 364–373.
Larrabee, G. J., Greiffenstein, M. F., Greve, K. W., & Bianchini, K. J. (2007). Refining diagnostic criteria for malingering. In G. J. Larrabee (Ed.), Assessment of malingered neuropsychological deficits (pp. 334–372). New York: Oxford University Press.
Larrabee, G. J., Millis, S. R., & Meyers, J. E. (2009). 40 plus or minus 10, a new magical number: Reply to Russell. The Clinical Neuropsychologist, 23, 841–849.
Lee, T. T. C., Graham, J. R., Sellbom, M., & Gervais, R. O. (2012). Examining the potential for gender bias in the prediction of symptom validity test failure by MMPI-2 Symptom Validity Scale scores. Psychological Assessment, 24, 618–627.
Lees-Haley, P. R., English, L. T., & Glenn, W. J. (1991). A Fake Bad Scale for the MMPI-2 for personal injury claimants. Psychological Reports, 68, 203–210.
Lindley, S. E., Carlson, E. B., & Hill, K. R. (2014). Psychotic-like experiences, symptom expression, and cognitive performance in combat veterans with posttraumatic stress disorder. The Journal of Nervous and Mental Disease, 202, 91–96.
Main, C. J. (1983). The Modified Somatic Perception Questionnaire (MSPQ). Journal of Psychosomatic Research, 27, 503–514.
Mathias, C. W., Greve, K. W., Bianchini, K. B., Houston, R. J., & Crouch, J. A. (2002). Detecting malingered neurocognitive dysfunction using the Reliable Digit Span in traumatic brain injury. Assessment, 9, 301–308.
Meyers, J. E., & Volbrecht, M. E. (2003). A validation of multiple malingering detection methods in a large clinical sample. Archives of Clinical Neuropsychology, 18, 261–276.
Miller, H. A. (2005). The Miller-Forensic Assessment of Symptoms Test (M-FAST): Test generalizability and utility across race, literacy, and clinical opinion. Criminal Justice and Behavior, 32, 591–611.
Miller, L. (2015). PTSD and forensic psychology: Applications to civil and criminal law. New York: Springer Science + Business Media.
Miller, L. S., Boyd, M. C., Cohn, A., Wilson, J. S., & McFarland, M. (2006, February). Prevalence of sub-optimal effort in disability applicants. Poster session presented at the Annual meeting of the International Neuropsychological Society, Boston, MA.
Millis, S. R. (2002). Warrington’s recognition memory test in the detection of response bias. Journal of Forensic Neuropsychology, 2, 147–166.
Millis, S. R., Putnam, S. H., Adams, K. M., & Ricker, J. H. (1995). The California Verbal Learning Test in the detection of incomplete effort in neuropsychological evaluation. Psychological Assessment, 7, 463–471.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Mossman, D., Miller, W. G., Lee, E. R., Gervais, R. O., Hart, K. J., & Wygant, D. B. (2015). A Bayesian approach to mixed group validation of performance validity tests. Psychological Assessment. doi:10.1037/pas0000085
Nguyen, C. T., Green, D., & Barr, W. B. (2015). Evaluation of the MMPI-2-RF for detecting over-reported symptoms in a civil forensic and disability setting. The Clinical Neuropsychologist, 29, 255–271.
Pollard, C. A. (1984). Preliminary validity study of the Pain Disability Index. Perceptual and Motor Skills, 59, 974.
Proto, D. A., Pastorek, N. J., Miller, B. I., Romesser, J. M., Sim, A. H., & Linck, J. F. (2014). The dangers of failing one or more performance validity tests in individuals claiming mild traumatic brain injury-related postconcussive symptoms. Archives of Clinical Neuropsychology, 29, 614–624.
Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan Neuropsychological Test Battery. Tucson: Neuropsychology Press.
Rey, A. (1941). L’examen psychologique dans les cas d’encéphalopathie traumatique [Psychological examination in cases of traumatic encephalopathy]. Archives de Psychologie, 23, 286–340.
Rey, A. (1964). L’examen clinique en psychologie. Paris: Presses Universitaires de France.
Rogers, R. (Ed.). (2008). Clinical assessment of malingering and deception (3rd ed.). New York: Guilford Press.
Rogers, R., & Bender, S. D. (2012). Evaluation of malingering and related response styles. In I. B. Weiner & R. K. Otto (Eds.), Handbook of psychology (Forensic psychology 2nd ed., Vol. 11, pp. 517–540). New York: Wiley.
Rogers, R., Kropp, P. R., Bagby, R. M., & Dickens, S. E. (1992). Faking specified disorders: A study of the Structured Interview of Reported Symptoms (SIRS). Journal of Clinical Psychology, 48, 643–648.
Rogers, R., Sewell, K. W., & Gillard, N. D. (2010). Structured Interview of Reported Symptoms, professional manual (2nd ed.). Lutz: Psychological Assessment Resources.
Russo, A. C. (2014). Assessing veteran symptom validity. Psychological Injury and Law, 7, 178–190.
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable digit span: A systematic review and cross-validation study. Assessment, 19, 21–30.
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists’ beliefs and practices with respect to the assessment of effort. Archives of Clinical Neuropsychology, 22, 213–223.
Slick, D. J., Hopp, G., Strauss, E., & Thompson, G. B. (1997/2005). Victoria Symptom Validity Test: Professional manual. Odessa, FL: Psychological Assessment Resources.
Slick, D. J., Sherman, E. M., & Iverson, G. L. (1999). Diagnostic criteria for malingered neurocognitive dysfunction: Proposed standards for clinical practice and research. The Clinical Neuropsychologist, 13, 545–561.
Slick, D. J., Tan, J. E., Strauss, E. H., & Hultsch, D. F. (2004). Detecting malingering: A survey of experts’ practices. Archives of Clinical Neuropsychology, 19, 465–473.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
Tombaugh, T. N. (1996). TOMM: Test of Memory Malingering. North Tonawanda: Multi-Health Systems.
Trahan, D. E., & Larrabee, G. J. (1988). Continuous Visual Memory Test. Odessa: Psychological Assessment Resources.
Van Hout, M. S. E., Schmand, B., Wekking, E. M., & Deelman, B. G. (2006). Cognitive functioning in patients with suspected chronic toxic encephalopathy: Evidence for neuropsychological disturbances after controlling for insufficient effort. Journal of Neurology, Neurosurgery and Psychiatry, 77, 296–303.
Weathers, F. W., Litz, B. T., Herman, D. S., Huska, J. A., & Keane, T. M. (1993). The PTSD Checklist (PCL): Reliability, validity, and diagnostic utility. Paper presented at the annual meeting of the International Society for Traumatic Stress Studies, San Antonio, TX.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale (3rd ed.). San Antonio: The Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). San Antonio: Pearson.
Wechsler, D. (2009). Wechsler Memory Scale (4th ed.). San Antonio: The Psychological Corporation.
Whiteside, D., Wald, D., & Busse, M. (2011). Classification accuracy of multiple visual spatial measures in the detection of suspect effort. The Clinical Neuropsychologist, 25, 287–301.
Whiteside, D. M., Kogan, J., Wardin, L., Phillips, D., Franzwa, M. G., Rice, L., ... Roper, B. (2015). Language-based embedded performance validity measures in traumatic brain injury. Journal of Clinical and Experimental Neuropsychology, 37, 220–227.
Whitney, K. A., & Davis, J. J. (2015). The non-credible score of the Rey Auditory Verbal Learning Test: Is it better at predicting non-credible neuropsychological test performance than the RAVLT recognition score? Archives of Clinical Neuropsychology, 30, 130–138.
Wisdom, N. M., Pastorek, N. J., Miller, B. I., Booth, J. E., Romesser, J. M., Linck, J. F., & Sim, A. H. (2014). PTSD and cognitive functioning: Importance of including performance validity testing. The Clinical Neuropsychologist, 28, 128–145.
Wygant, D. B., Anderson, J. L., Sellbom, M., Rapier, J. L., Allgeier, L. M., & Granacher, R. P. (2011). Association of the MMPI-2 Restructured Form (MMPI-2-RF) validity scales with structured malingering criteria. Psychological Injury and Law, 4, 13–23.
Young, G. (2014). Malingering, feigning, and response bias in psychiatric/ psychological injury: Implications for practice and court. Dordrecht: Springer Science + Business Media.
Young, G. (2015a). Psychological injury, law, causality, malingering. Session part of Psychological injuries, law, malingering, and disability presented with I. Schultz at the Continuing Education Workshop presented at the Annual convention of the American Psychological Association, APA, Toronto, August 6.
Young, G. (2015b). Malingering, feigning, and negative response bias in psychological injury and law. Continuing Education Workshop presented at the Annual convention of the Ontario Psychological Association, OPA, Toronto, February 21.
Young, G. (2015c). Detection system for malingered PTSD and related response bias. Psychological Injury and Law, 8, 169–183.
Young, G., & Drogin, E. Y. (2014). Psychological injury and law I: Causality, malingering, and PTSD. Mental Health Law and Policy Journal, 3, 373–417.
Young, J. C., Sawyer, R. J., Roper, B. L., & Baughman, B. C. (2012). Expansion and re-examination of Digit Span Effort Indices on the WAIS-IV. The Clinical Neuropsychologist, 26, 147–159.
Acknowledgment
The author does mostly rehabilitation and some plaintiff work, with isolated insurer cases. This paper was prepared for Continuing Education Workshops on the author’s 2014 malingering book (Young, 2014) and on its update. The workshops involved the American Psychological Association in August, 2015 (Young, 2015a) and the Ontario Psychological Association in February, 2015 (Young, 2015b).
Conflict of interest
The author declares that he has no competing interests.
Cite this article
Young, G. Malingering in Forensic Disability-Related Assessments: Prevalence 15 ± 15 %. Psychol. Inj. and Law 8, 188–199 (2015). https://doi.org/10.1007/s12207-015-9232-4