The article presents recent conceptualization and research related to the area of malingering in forensic disability and related cases. This area has been referred to as involving psychological injuries and law. Psychological injuries refer to harms for which another party is legally liable and that result in actionable claims; legal suits are launched because of the negligence involved, with the aim of obtaining damages (Young & Drogin, 2014). The critical complicating factor in such cases concerns the possibility of malingering (Young, 2014). However, there is no universally accepted definition of malingering, nor a conclusive estimate of its prevalence in these types of cases.

Complicating matters, the venues involved in disability determinations are confronted with a huge number of cases, for example, involving PTSD in disability tort cases, in workers' compensation, among military veterans, and within Social Security Administration files (e.g., Bass & Halligan, 2014; Chafetz & Underhill, 2013; Russo, 2014).

There is no gold standard or best way to determine the presence of malingering, nor agreement on the best tests to use toward its determination or on the best malingering detection systems for collating information across tests. Despite this uncertainty in the field, a widely cited estimate of the prevalence of malingering is 40 ± 10 % (Larrabee et al., 2009; Larrabee, 2012). Aside from reviewing the literature on the definition and prevalence of malingering in forensic disability and related contexts, an important goal of the present paper is to show that this widely cited estimate is more "murky" than the "magical" quality attributed to it.

For relevant background information on malingering, the reader should consult Rogers (2008) and Carone and Bush (2013). Also, Young (2014, 2015c) has developed a malingered PTSD detection system that might be helpful. It was constructed based on prior malingering detection systems for the other major psychological injuries (Slick, Sherman, and Iverson (1999) for malingered neurocognitive dysfunction (MND), and Bianchini, Greve, and Glynn (2005) for malingered pain-related disability (MPRD)), but it stands as the first one dedicated to the detection of malingered PTSD.

Malingering Definition and Prevalence in Young (2014)

Definition

Review

The DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision; American Psychiatric Association, 2000) defines malingering as the "intentional production" of "grossly exaggerated" or "false" "psychological" and "physical" symptoms that derives from "motivation by external incentives," for example, obtaining financial compensation. However, Kane and Dvoskin (2011) argued for the separation of mild exaggeration from malingering, which is not the universal approach (e.g., Mittenberg, Patton, Canyock, & Condit, 2002). For Kane and Dvoskin, exaggeration concerns a "relatively mild overstatement" of injury sequelae, and it is not the same as the gross exaggeration that is part of the DSM definition.

According to Young (2014), an improved definition of malingering would involve removing the term "production" and replacing it with the term "presentation." Therefore, malingering should be defined as: the intentional presentation of false or grossly exaggerated symptoms [physical, mental health, or both; full or partial; mild, moderate, or severe], for purposes of obtaining an external incentive, such as monetary compensation for an injury and/or avoiding/evading work, military duty, or criminal prosecution.

Comment

On the question of defining malingering, Miller (2015) adopted a position very similar to that of Young (2014) and of Kane and Dvoskin (2011). Miller (2015) posited that malingering could be: (a) outright fabrication without any real symptoms; (b) exaggeration so that symptoms are far worse than they really are; (c) false "extension" of recovered/improved symptoms as still being present, e.g., at initial or even worse levels; and (d) maintaining that genuine symptoms are due to a compensable negligent act when there is no linkage to it. Clinically, malingered PTSD might present with dramatic flashbacks, atypical nightmares (e.g., stereotypic), and exaggerations/contradictions, among other indicators. The approach taken by Miller (2015) underscores several important issues. First, malingering might involve an absence of "real symptoms." Second, if exaggeration of real symptoms is involved, it is at a level that makes them "far worse." The American Academy of Psychiatry and the Law (2015) took a similar position.

The 2015 Institute of Medicine book authors (IOM, 2015) defined malingering as the intentional presentation of false or exaggerated symptoms, or intentionally poor performance, or both, for purposes of external incentives. This definition is consistent with my approach of defining malingering in terms of presentation rather than production (Young, 2014). However, it conflates outright malingering with any level of exaggeration, which may set the bar too low and so lead to false positives.

Base Rate

Review

Mittenberg et al. (2002) conducted a survey of forensic workers in the field on their estimates of malingering. They surveyed practitioners about their approaches to the matter, and the respondents had dealt with over 30,000 cases of neuropsychological assessment in the prior year. In describing Mittenberg et al. (2002), Boone (2011) noted that the rate at issue could be up to 41 % for cases of mild traumatic brain injury (mTBI). Inspection of Mittenberg et al. (2002) indicates that this percentage is an adjusted rate, with the unadjusted one being 39 %. Also, for personal injury cases and disability/workers' compensation cases, the reported and adjusted rates are 29 and 30 %, and 30 and 33 %, respectively.

Evidence

Reference to the actual research conducted by Mittenberg et al. (2002) revealed several inconsistencies. For example, in the survey, the definitions of malingering and exaggeration were not provided to the respondents. Moreover, not only was malingering conflated with exaggeration in the study but also exaggeration was not specified for severity. Further, the survey involved questions about “probable” exaggeration or malingering.

Furthermore, reference to other research published after Mittenberg et al. (2002) on the base rate of malingering and related negative response biases in the forensic disability and related context gives a mixed picture. In this regard, Chafetz (2011) examined the performance of Social Security disability claimants. Of 161 claimants in the sample, 38.5 % were classified as either probable or definite malingerers. However, I examined the breakdown of the two categories, which revealed that only 15 % were classified as definite malingerers (N = 24).

In Young (2014), I sought the prevalence rate for malingering in more recent research (into 2013) that offered percentages on the question in forensic disability and related examinations. I had to retrieve the percentages from the data within the studies because getting these percentages did not constitute a primary goal of the studies.

In this regard, Greve, Ord, Bianchini and Curtis (2009b) examined the prevalence of malingered disability in compensation-seeking chronic pain patients. Of the 508 patients, up to 36 % were classified as probable or definite malingerers, but with only 10 % as definite malingerers.

Wygant, Anderson, Sellbom, Rapier, Allgeier, and Granacher (2011) examined the results of 251 individuals who had undergone compensation-seeking evaluations. The percentage of definite malingering was only 8 % in this study.

Lee, Graham, Sellbom, and Gervais (2012) investigated claimants who had undergone non-neurological medico-legal disability assessments. Of the 1209 patients who met inclusion criteria, only 19 met the criteria for definite malingering, which works out to about 2 %. These percentages were not a primary focus of the study, and I had to calculate the latter value myself, as I had done for Wygant et al. (2011).
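The back-calculation involved is straightforward; as a minimal sketch (in Python, for illustration only), the definite-malingering rates for Lee et al. (2012) and Chafetz (2011) can be reproduced from the counts cited above.

```python
# Minimal sketch: back-calculating definite-malingering rates from the
# counts reported in the studies cited above (values taken from the text).

def percent(cases: int, total: int) -> float:
    """Return the percentage of `cases` out of `total`, rounded to one decimal."""
    return round(100 * cases / total, 1)

# Lee, Graham, Sellbom, and Gervais (2012): 19 definite malingerers of 1209 claimants.
print(percent(19, 1209))   # -> 1.6, i.e., about 2 %

# Chafetz (2011): 24 definite malingerers of 161 claimants.
print(percent(24, 161))    # -> 14.9, i.e., about 15 %
```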

Rogers and Bender (2012) noted that, in a survey of National Academy of Neuropsychology (NAN) members, Sharland and Gfeller (2007) found that the median estimate for definite malingering among respondents was only 1 % (in their Table 3). Similarly, Slick, Tan, Strauss, and Hultsch (2004) surveyed published researchers on malingering; only 13 % rated the prevalence of "definite" malingering at 30 % or more.

Comment

Of the percentages listed in the prior research analyzed in Young (2014) for the base rate of malingering or related feigning, the average is only 7 %. Larrabee et al. (2009) had argued that the standard malingering base rate in the field of forensic disability assessments could be characterized as 40 ± 10 %. Furthermore, Larrabee, Greiffenstein, Greve, and Bianchini (2007) had even argued that, in neuropsychological assessments of mTBI in which neuropsychological deficits persist, the rate of malingering might be as high as 88 %!
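One plausible reconstruction of that 7 % figure, assuming it averages the definite-malingering percentages cited above (15, 10, 8, about 2, and 1 %), is sketched below; the exact set of studies averaged in Young (2014) may differ, so the sketch is illustrative only.

```python
# Hypothetical reconstruction: averaging the definite-malingering percentages
# cited in this section (the exact set of studies Young, 2014 averaged may differ).
definite_rates = {
    "Chafetz (2011)": 15,
    "Greve et al. (2009b)": 10,
    "Wygant et al. (2011)": 8,
    "Lee et al. (2012)": 2,
    "Sharland & Gfeller (2007)": 1,
}
mean_rate = sum(definite_rates.values()) / len(definite_rates)
print(f"Mean definite-malingering rate: {mean_rate:.1f} %")  # -> 7.2 %, i.e., about 7 %
```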

However, the evidence reviewed in this paper does not suggest such extreme proportions for the base rate of malingering. Nevertheless, in terms of problematic cases, in general, the percentage might be higher than 40 ± 10 %! Young (2014) concluded that the rate of outright malingering could be as high as 15 %, with problematic presentations and performances that fall short of outright malingering occurring at higher rates than that.

To buttress my conclusion that the estimate of the base rate of malingering in Larrabee's work is much higher than what is found in the actual data in the field, I returned to the original Larrabee sources and carefully analyzed the literature he cited and the conclusions he based on it. This analysis, which is presented in the next section, shows that 40 ± 10 % is not the magical number for the base rate of malingering that has been attributed to it. Rather, it is a murky number that is unjustified and exaggerated and that can lead to errors and distortions in court.

The Murky Prevalence Estimate of Malingering of 40 ± 10 %

Larrabee et al. (2009) heralded a new era in the estimation of malingering prevalence in forensic disability and related assessments by referring to the new magical number of 40 ± 10 %. In support of this claim, they referred to the survey of forensic neuropsychologists conducted by Mittenberg et al. (2002), which, as mentioned, showed that these practitioners estimated probable malingering or response exaggeration at up to 41.2 % for evaluees assessed for mild head injury (the estimate corrected for referral source). Other categories of evaluees also showed values in this range (29 % for personal injury, 30 % for disability).

Next, Larrabee et al. (2009) cited the review of 11 studies by Larrabee (2003) on mTBI cases in which 40 % "failed SVTs" (symptom validity tests). Larrabee (2012) referred to the performance of the evaluees in these studies as "motivated performance deficit suggestive of malingering." Larrabee et al. (2009) then cited other research published since the studies reviewed in Larrabee (2003) to support their contention that the base rate of malingering is the magical number of 40 ± 10 %. In this regard, I note that the percentage of malingerers in that research was either very low (6.7 %) or based on failing even just one SVT, which is a highly problematic decision.

In this regard, Miller, Boyd, Cohn, Wilson, and McFarland (2006) evaluated Social Security Administration (SSA) disability applicants and found that 54 % failed one or the other of two SVTs that were administered. Van Hout, Schmand, Wekking, and Deelman (2006) evaluated the effects of suspected neurotoxic injury, and 57 % failed one or more of three SVTs that were administered. Using the MND criteria of Slick et al. (1999), Greve, Bianchini, Black, Heinly, Love, Swift, and Ciota (2006a) found a rate of 6.7 % of definite malingering in evaluees who had been exposed to occupational/environmental substances. Larrabee et al. (2009) put the prevalence of malingering at 40 % in that study because they added in the 33.3 % of the sample that scored at the level of probable malingering on the MND. They concluded that for "invalid neuropsychological data/probable malingering," over the studies cited, there is a "remarkable consistency" in finding a base rate of "40–50 % or more."

To conclude, I examined the original data in Larrabee (2003), both for his own study described there and for the 11 studies cited. According to him, in the sample of 95 cases scrutinized in his personal study on the matter, the malingering base rate was 43 %. However, of the 41 cases labeled as malingerers, I note that 24 were definite according to the MND and 17 were probable. Therefore, the percentage for definite malingering in this study comes to 24/95, or 25.3 %, and not 43 %!

In terms of the 11 studies that Larrabee (2003) cited in his literature review, although the average was 40 % of evaluees who performed with "motivated performance deficit suggestive of malingering," the range across the studies was 15 to 64 %. Further, I note that Larrabee (2003) did not describe the studies in detail or how his percentages of the base rate of malingering were derived from them. However, given the inconsistencies found, in general, in how he and others define malingering and establish its base rate, it is probable that the percentage of alleged malingering in the 11 cited studies was either not directly assessed or was assessed in a manner that conflated it with other negative response biases, such as even one SVT failure or the mildest of exaggeration. To summarize, the research that Larrabee has conducted or cited to justify his estimated malingering prevalence rate of 40 ± 10 % does not support this value.

Literature Review in the IOM (2015) Book

Introduction

The Institute of Medicine (IOM, 2015) examined the utility of validity tests, both SVTs and performance validity tests (PVTs), in SSA disability examinations. The authors were Pardes, Barsky, Daly, Geisinger, Gerber, Jette, Koop, Suzuki, Twamley, Ubel, and Wall. With respect to the question of the prevalence of malingering, the literature reviewed gave one impression, my analysis of it another, and their conclusions a third.

Malingering

In terms of the prevalence of malingering in SSA disability examinations according to the 2015 IOM book authors, the studies reviewed suggested a range from 14 to 60 % for performing below capacity according to PVTs, inaccurately reporting one's symptoms, or other indicators. In particular, in Griffin, Normington, May, and Glassmire (1996), the percentage for malingering was 19 %. In Chafetz and Abrahams (2005), below-chance test failure was 14 % and failing two or more validity indicators was 59 %. For Miller et al. (2006), the test failure rate was 54 %. For Chafetz, Abrahams, and Kohlmaier (2007), a malingering index over tests gave percentages of 21–30 % for below-chance performance and 52–59 % for test failure. In Chafetz (2008), the percentages for two test failures and below-chance performance were 46 to 60 % and 37 to 47 %, respectively.

Chafetz and Underhill (2013) noted that the frequency of feigning of disabling illness in evaluations for adult disability compensation in the Social Security Disability (SSD) context is 46 to 60 %.

Comment

Despite the literature review data, the IOM authors preferred to estimate that about 10 % of applicants would be excluded from benefits should more rigorous screening be undertaken for malingering, using validity tests and consideration of the whole files involved. This percentage is consistent with the one in Young (2014), as well as the one found in the present review, as per the following.

2014–2015 Literature Review

Prevalence: Preparing the Analysis Undertaken

Introduction

Next, I examine all the recent research that relates to validity testing in the area of forensic disability and related determinations, including 13 recent studies not yet mentioned in this article that give percentages of malingering or related negative response biases. The primary goal of these studies was not to arrive at the percentage of malingering in forensic disability and related evaluations, so I often had to ferret out the percentages from the data they described. Before giving these percentages, I review the studies for their primary goals, as well as others that speak to the issue of malingering and its detection.

Review

Buddin, Schroeder, Hargrave, Von Dran, Campbell, Brockman, Heinrichs, and Baade (2014) argued that the Test of Memory Malingering (TOMM; Tombaugh, 1996) needs a measure of response consistency. Gunner, Miele, Lynch, and McCaffrey (2012) developed the Albany Consistency Index (ACI). Buddin et al. (2014) created the Invalid Forgetting Frequency Index (IFFI) for the same purpose.

Kulas, Axelrod, and Rinaldi (2014) investigated the new indices that have been developed for the TOMM. They examined the performance of these measures in a mixed clinical sample of military veterans. All five TOMM measures (e.g., just using Trial 1 or a reduced number of items (Denning, 2012)) helped discriminate examinees who had failed two or three alternate measures of performance validity from those who had not.

Bashem, Rapport, Miller, Hanks, Axelrod, and Millis (2014) examined the discriminability of five PVTs: the TOMM, Medical Symptom Validity Test (MSVT; Green, 2004), Reliable Digit Span (RDS; Schroeder, Twumasi-Ankrah, Baade, & Marshall, 2012), Word Choice Test (WCT; Wechsler, 2009), and California Verbal Learning Test – Forced Choice (CVLT-FC; Delis, Kramer, Kaplan, & Ober, 2000). The results showed better discrimination ability for the TOMM, MSVT, and CVLT-FC relative to the WCT and RDS.

Mossman, Miller, Lee, Gervais, Hart, and Wygant (2015) developed a Bayesian approach to mixed group validation that did not involve groups as in prior designs. The findings were consistent with the ones in the general literature on the value of the TOMM.

Axelrod, Meyers, and Davis (2014) examined three different scoring systems for the Finger Tapping Test (FTT; Reitan & Wolfson, 1985). They determined the test could function as an index of performance validity in neuropsychological assessments of veterans and of evaluees in IMEs (independent medical examinations).

Guise, Thompson, Greve, Bianchini, and West (2014) examined traumatic brain injury (TBI) claimants (mild and severe; designated as MND or not) and non-head-injured patients for performance validity according to measures on the Stroop Color and Word Test (Stroop, 1935) given as part of a neuropsychological examination. Word residual raw scores (more than Color and Word-Color) best differentiated examinees categorized as malingerers from those not so categorized.

Henry, Heilbronner, Mittenberg, Hellemann, and Myers (2014) developed a new 13-item cognitive complaints scale on the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2; Butcher, Graham, Ben-Porath, Tellegen, Dahlstrom, & Kaemmer, 2001) as an embedded neuropsychological measure of symptom validity. The subscale differentiated mTBI patients who failed or passed performance validity indices (controls also were tested).

Lindley, Carlson, and Hill (2014) found that, among 30 Vietnam combat veterans with severe and chronic PTSD, 20 expressed psychotic-like symptoms. Among the measures used in the study, the PCL-C (PTSD Symptom Checklist-Civilian; Weathers, Litz, Herman, Huska, & Keane, 1993) was used to help assess PTSD.

Nguyen, Green, and Barr (2015) found that the F family of tests on the Minnesota Multiphasic Personality Inventory, Second Edition, Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) differentiated pass and fail groups according to the MND malingering detection system (Slick et al., 1999). The F-r (Infrequent Responses), FBS-r (Symptom Validity), and RBS (Response Bias Scale) scales differentiated the pass-fail groups within two of the three evaluee groups (neurological and psychiatric, but not medical complaints), with the Fs scale differentiating those who passed and failed the MND for the psychiatric group only.

Proto, Pastorek, Miller, Romesser, Sim, and Linck (2014) examined veterans with claimed mTBI both with PVTs and a neuropsychological battery. Even failing one PVT led to differential neuropsychological performance, and failing two of them gave almost identical results.

Whiteside, Kogan, Wardin, Phillips, Franzwa, Rice, Basso, and Roper (2015) studied the Boston Naming Test (BNT) and the Verbal Fluency test (FAS and Animal Fluency) as PVTs in neuropsychological assessment, using a compensation-seeking mTBI sample compared especially to a serious TBI one. Only a logistically derived combined measure was acceptable in differentiating mTBI cases who had failed two or more PVTs from serious TBI counterparts who had not failed any.

Whitney and Davis (2015) studied the ability of measures on the Rey Auditory Verbal Learning Test (RAVLT) to predict credible/non-credible neuropsychological test performance in veterans. The recognition score of the test produced better classification accuracy than its “non-credible” score. The non-credible group involved meeting MND criteria while failing either the TOMM or MSVT (except if the test gave a Genuine Memory Impairment Profile, GMIP).

Comment

Overall, these studies reveal the rapid proliferation of quality research aimed at establishing valid test practices for differentiating credible and non-credible performance in forensic disability and related assessment contexts. Commonly, these studies use the MND or a variant as a malingering detection system and two or more PVTs in the assessments.

Some would argue that even one PVT failure is informative (e.g., Proto et al., 2014), while others refer to the criterion of at least two such failures (e.g., Slick et al., 1999), if not three of them (e.g., Boone & Lu, 2007). Also, context or type of forensic disability-related evaluation is important to consider (Cottingham, Victor, Boone, Ziegler, & Zeller, 2014). There are as yet no gold standards in the field.

One critical variable in establishing whether an examinee is putting forth sufficient effort on PVTs is the cut score used for the measure at issue. Recommendations about the critical dividing line (or lines) can vary even within one instrument depending on the study, as shown in the following.

Cut Scores

Introduction

Haynes, Smith, and Hunsley (2011) defined cut scores as specific scores on a measure that serve to divide the distribution of scores for the measure into categories (two or more, depending on needs). Cut scores might be statistically influenced but also could be rationally derived (experientially), and they might change as the research on the measure at issue changes. Haynes et al. (2011) added that cut score determination is not "wholly objective," that it requires judgment, and that cut scores are "conditional." Their goal is to facilitate accurate decision-making. The literature does not provide "firm" guidelines for establishing them. Ben-Porath, Greve, Bianchini, and Kaufmann (2009) added that, in the forensic context, cut scores might vary with the "facts" of the evaluation and the context.

To illustrate the difficulties in establishing cut scores, consider the next section, which reviews two studies that tried to determine, for the same two tests, the best cut scores to use in forensic disability and related evaluations. The results were not the same; they show that different cut scores can be obtained depending on the context, population, and so on. Moreover, these scores will vary based on the respective value placed on false positives versus false negatives.

Review

Crighton, Wygant, Applegate, Umlauf, and Granacher (2014) asked whether two brief measures, the Modified Somatic Perception Questionnaire (MSPQ; Main, 1983) and the Pain Disability Index (PDI; Pollard, 1984), screen effectively for malingering in relation to the MPRD criteria. The authors concluded that, when screening individuals undergoing disability evaluations for pain in clinical settings, cut scores of ≥14 on the MSPQ or ≥54 on the PDI should be used.

Bianchini, Aguerrevere, Guise, Ord, Etherton, Meyers, Soignier, Greve, Curtis, and Bui (2014) examined the accuracy of the MSPQ and PDI in relation to classification of examinees according to the MPRD. Their Table 7 showed the following cut scores on the MSPQ and PDI as screeners for comprehensive psychological evaluation and/or functional capacity evaluation: MSPQ ≥ 17 and PDI ≥ 62.

Comment

Two recent studies conducted independently on the same question thus produced similar but not identical cut scores for the tests involved. This type of inconsistency in the research could lead to similar inconsistencies in how the tests are used in practice, as illustrated below.
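To make the practical implication concrete, the following minimal sketch applies the two sets of published cut scores to the same hypothetical screening scores. Only the cut scores (MSPQ ≥ 14/PDI ≥ 54 from Crighton et al., 2014; MSPQ ≥ 17/PDI ≥ 62 from Bianchini et al., 2014) come from the studies above; the "flag if either screener meets its cut" rule and the evaluee scores are assumptions for illustration.

```python
# Minimal sketch of how different published cut scores classify the same
# (hypothetical) screening scores. Only the cut scores are taken from the
# studies above; the evaluee scores and flagging rule are for illustration.

def flag(mspq: int, pdi: int, mspq_cut: int, pdi_cut: int) -> bool:
    """Flag a case for further comprehensive evaluation if either screener
    meets or exceeds its cut score (an assumed decision rule)."""
    return mspq >= mspq_cut or pdi >= pdi_cut

CUTS = {
    "Crighton et al. (2014)": (14, 54),
    "Bianchini et al. (2014)": (17, 62),
}

# Hypothetical evaluees with (MSPQ, PDI) screening scores.
evaluees = {"A": (15, 50), "B": (12, 58), "C": (18, 63), "D": (10, 40)}

for label, (mspq_cut, pdi_cut) in CUTS.items():
    flagged = [name for name, (m, p) in evaluees.items()
               if flag(m, p, mspq_cut, pdi_cut)]
    print(f"{label}: flags {flagged}")
# Crighton et al. (2014): flags ['A', 'B', 'C']
# Bianchini et al. (2014): flags ['C']
# The lower cut scores flag more cases (fewer false negatives, more false positives).
```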

Prevalence or Base Rate of Malingering in the 2014/2015 Research Cited

Introduction

In the literature review in the prior section of this article, I have described 13 studies (Table 1). In none of them was the intention to address specifically the prevalence or base rate of malingering or a related negative response bias. Therefore, I conducted a careful analysis of what the prevalence of malingering appears to be in these studies. The 13 studies in this analysis are the ones by: Axelrod et al. (2014); Bianchini et al. (2014); Buddin et al. (2014); Crighton et al. (2014); Guise et al. (2014); Henry et al. (2014); Kulas et al. (2014); Larrabee (2014); Lindley et al. (2014); Nguyen et al. (2015); Proto et al. (2014); Whiteside et al. (2015); and Whitney and Davis (2015).

Table 1 Details of recent studies (2014 and 2015) on prevalence of malingering

Review

For Axelrod et al. (2014), the percentages related to malingering-type behavior were 9 % for veterans in neuropsychological assessments and 25 % for IMEs, or an average of 17 %. For Bianchini et al. (2014), who examined pain complainants, the percentage was 5 % for definite malingerers. For Buddin et al. (2014), the equivalent group involved the assignment of definite malingering in neuropsychological evaluations, and the percentage for this group was 3 %. As for Crighton et al. (2014), the percentage for a combined group of probable and definite malingering was 24 % for their forensic disability cases. In Guise et al. (2014), the percentage among mild and moderate-severe TBI cases classified as probable MND was 33 % (41/126). For Henry et al. (2014), the percentage appears to be 50 % among personal injury litigants. Kulas et al. (2014) included a group with suboptimal performance on PVT measures, and the percentage of the sample of veterans in neuropsychological assessment at this level was 10 %. For Larrabee (2014), who tested mTBI complainants, the figure rose to 59 % for his definite malingering group. For Lindley et al. (2014), among PTSD cases the percentage of questionable Rey test performance was 6 %. For Nguyen et al. (2015), the average for MND failure was 32 %, with the percentages for the psychiatric, neurological, and medical complaint groups being 17, 42, and 39 %, respectively. In Proto et al. (2014), failure on three PVTs occurred in 16 % of veterans in neuropsychological assessment. For Whiteside et al. (2015), the percentage of mTBI cases failing two or more PVTs was 23 % (the equivalent percentage for serious TBI was not made available). Finally, Whitney and Davis (2015) conducted neuropsychological assessments of veterans, and 37/175 (21 %) failed both the TOMM and the MSVT while being designated probable/definite MND.

Comment

Averaging the 13 obtained and estimated percentages for definite malingering or its equivalent in the 2014–2015 studies reviewed yields 23 %. This average estimate of malingering in the forensic disability and related context seems high relative to the 10 % estimated as non-credible in the 2015 IOM book, but low relative to other estimates toward 40 % or more (e.g., Larrabee et al., 2009).
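For transparency, the arithmetic behind this average (and the range noted next) can be reproduced from the percentages extracted above, using the averaged 17 % figure for Axelrod et al. (2014); a minimal sketch follows.

```python
# Minimal sketch: averaging the 13 percentages extracted above (the averaged
# 17 % figure is used for Axelrod et al., 2014).
rates = {
    "Axelrod et al. (2014)": 17,
    "Bianchini et al. (2014)": 5,
    "Buddin et al. (2014)": 3,
    "Crighton et al. (2014)": 24,
    "Guise et al. (2014)": 33,
    "Henry et al. (2014)": 50,
    "Kulas et al. (2014)": 10,
    "Larrabee (2014)": 59,
    "Lindley et al. (2014)": 6,
    "Nguyen et al. (2015)": 32,
    "Proto et al. (2014)": 16,
    "Whiteside et al. (2015)": 23,
    "Whitney & Davis (2015)": 21,
}
values = list(rates.values())
print(f"Mean: {sum(values) / len(values):.0f} %")   # -> 23 %
print(f"Range: {min(values)} to {max(values)} %")   # -> 3 to 59 %
```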

Note that the range of estimates of malingering or related feigning in the 13 studies reviewed in this paper, published since the review of studies in the book by Young (2014), is 3 to 59 %. This range attests to the differences in methods across the studies and fuels the controversies associated with malingering in both research and practice.

Overall, the more recent estimates in the literature of the base rate of malingering or related feigning appear to hover around 15 %. In this regard, in the 2015 IOM book, in Young (2014), and in the present 2015 review, the respective percentages that are favored or found are about 10, 15, and 20 (plus) %.

Note that these percentages related to malingering in forensic disability and related cases are not necessarily about malingering itself; generally, they concern either test failures that might indicate malingering or negative response biases more broadly, but not malingering per se. Moreover, the findings in all these literature reviews vary by type of evaluee (e.g., neuropsychological/mTBI; psychiatric), definition of malingering, the malingering and related feigning criteria used (e.g., the MND, the number of PVTs failed), and the tests used (which PVTs and SVTs).

Therefore, it is difficult to arrive at one percentage or range of percentages that is definitive about the proportion of malingering found in forensic disability and related examinations. However, the 15 % value could serve as one axis in this regard, with higher percentages possible, given problematic cases such as mTBI leading to persistent postconcussive symptoms (PPCS).

For example, the results in Nguyen et al. (2015), as described above, are quite telling in that the estimated rate relative to malingering was 17 % for psychiatric evaluees but 42 % for neurological ones. These two values are quite consistent with the present suggestion of a 15 % rate of malingering on average, with more problematic cases related to mTBI having a rate toward 40 %.

That being said, Wisdom, Pastorek, Miller, Booth, Romesser, Linck, and Sim (2014) examined 134 military veterans with a history of mTBI who had been referred for neuropsychological evaluation. The results showed that failure on the Word Memory Test (WMT) was associated with worse performance on many of the cognitive measures administered. These results do not refer to malingering per se, but they do indicate that even one PVT failure can be informative in forensic disability and related contexts.

Conclusions

Forensic disability and related assessments are replete with contested areas that make them difficult to conduct. Among the most contentious is the area of malingering, which is debated even in terms of its basic definition. Moreover, it has been argued that its prevalence is in the neighborhood of 40 ± 10 % in this context, in general, and perhaps higher in assessments of mTBI cases with persistent complaints. However, both in the literature review conducted by Young (2014) and in the present literature review updating it, the percentage of malingering found in the empirical research appears far lower than these elevated proportions, which is consistent with the IOM (2015) review of the topic.

Perhaps a good way of summarizing the range of possibilities across contexts, from clinical cases of the genuinely hurt to high-stakes neuropsychological examinations involving mTBI leading to PPCS, is that we can expect a rate of malingering generally at the level of 15 ± 15 %, with the higher end of that range applying especially to mTBI leading to PPCS. Also, the rate of quite problematic presentations and feigning, in general, could be much higher than the rate of malingering per se.

Bigler (2014) went beyond querying the value of one test or another in malingering determinations by questioning the whole testing enterprise. He argued that neuroimaging results may be "key" to better understanding the meaning of grey-zone performance. However, he used case studies to support his contention. Clearly, further research is warranted, but I do not doubt that PVTs and SVTs will be found useful, as do the authors of the 2015 IOM book.

Finally, the present review has relevant practice implications and innovations because it has examined carefully the purported malingering base rate of 40 ± 10 % and found it severely wanting. My analysis of Larrabee's (2003) own research shows a rate of 25 %, which is just about the average found in the recent research analyzed in this article (23 %). Furthermore, this latter value reflects data in studies that include problematic presentation, in general, and not just malingering itself. That is, the 40 ± 10 % value for the base rate of malingering is not a magical number but a murky one that has no place in court. Further research directly on the question is needed and, in this sense, there will be nothing magical about the value that is found, because it will be science-informed rather than mysteriously derived.