To the Editor:

Zambon et al. [1] appraised the quality of a series of meta-analyses and found that “Internal validity appeared largely robust, as most (50.5 %) reviews were at low risk for bias.” To conclude that the risk of bias is low, a comprehensive assessment is required, one that considers all potential biases. Otherwise, we might as well notice that some raindrops have missed us as we run through the rain and conclude, on that basis, that we must be dry. This wishful thinking provides a false sense of security that interferes with required reforms, and it is potentially quite harmful.

So how many raindrops were observed to miss us? The authors assessed internal validity based on: (1) search strategies; (2) study selection; (3) inclusion of only (masked) randomized trials; (4) evaluation of study homogeneity; and (5) reporting of conflicts and funding. It is rather unnerving, given that randomized trials are the worst possible design except for all the rest [2], that fully half of the reviews could not meet even these minimal requirements. But what about the ones that did? We are told nothing about how well or poorly the trials were randomized, or even whether they truly were randomized at all. The risk of bias depends critically on the precise methods of randomization [3], and not every trial labeled as randomized actually is [4]. Nor are we told how successful the masking effort was; the risk of bias is clearly high if masking is unsuccessful, and the effort should never be confused with completion of the mission (Section 1.8 of [3]). Beyond that, nothing is said of the myriad other potential biases, including improper enrichment, improper surrogate endpoints, changing endpoints, post-randomization exclusions, and analyses whose validity is predicated on untenable assumptions.

So given what we know, and paying particular attention to what we do not know, what can we conclude? Half of the meta-analyses should be dismissed out of hand, because they could not meet even minimal requirements of validity. The other half are at a rather high risk of bias if we know nothing more about them than that they used the words “randomized” and “masked” without qualification. And, unfortunately, the risk of bias in future trials and systematic reviews can only be expected to increase if we see more articles that turn a blind eye and contort themselves to find something to praise. Valid trials need to address and rule out all of the aforementioned potential biases, and valid appraisals of trial quality and internal validity need to do the same [5].