January 25, 2017

Is absence of evidence evidence of absence? “Negative” findings in trials and systematic reviews

I sometimes feel frustrated when I read a trial in which the authors report that there is no difference between the treatments and that further research is needed. However, the interpretation of these “negative” findings is far from straightforward, and I hope to address this in this post.

It is easy to interpret these “negative” findings as showing that the treatment did not have an effect. While this may be the case, it is not always correct. This has been discussed over many years, and various researchers have pointed out that “absence of evidence does not mean evidence of absence”. In other words, if we do not find a difference in a study, it is not correct to state that the treatment “does not work”. The only thing that we can conclude is that the study did not detect a difference between the treatments.

Why do “negative” findings occur?

I will now consider the possible reasons for “negative” findings. Firstly, the new treatment may indeed be no better than the other treatments under investigation. Alternatively, the study may not have had sufficient power to detect a difference, even if one existed. That is, the study was not well designed.

You may now ask, “how do I know if the study was not sufficiently powered?”

This is relatively straightforward. When you read a trial you should look to see if the authors have carried out a sample size calculation. If they did, then you should look closely at the following.

  • Were the assumptions that they made in their calculation realistic, and was the difference they aimed to detect clinically significant?
  • Did they clearly quote the source of the data that they used in their calculation?
  • Was the sample size based upon the same outcome measure as the one that was tested in the study?

It is surprising how often published trials fall down on these three points. If these factors are not clear, then you may conclude that the study could be underpowered, and this may be a more compelling explanation for the finding of no difference between the treatments under investigation.
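
To make this concrete, here is a minimal sketch (in Python, and not taken from any particular trial) of the standard two-group sample size formula for a continuous outcome. The target difference, standard deviation, significance level and power are precisely the assumptions that the checklist above asks you to scrutinise; the numbers below are purely illustrative.

```python
# A minimal sketch of the usual two-group sample size calculation for a
# continuous outcome (two-sided alpha = 0.05, power = 0.80). The clinically
# significant difference (delta) and the standard deviation (sd) are the
# assumptions a reader should scrutinise; the values below are invented
# for illustration and are not taken from any real trial.
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate number of participants needed in each arm."""
    z_alpha = norm.ppf(1 - alpha / 2)   # threshold for two-sided significance
    z_beta = norm.ppf(power)            # corresponds to 1 - beta
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# Detecting a 2 mm difference with sd = 4 mm needs about 63 patients per arm;
# halving the target difference to 1 mm quadruples that to about 251 per arm.
print(round(n_per_group(delta=2, sd=4)), round(n_per_group(delta=1, sd=4)))
```

The point is simply that small changes in these assumptions change the required sample size dramatically, which is why unrealistic or unsourced assumptions so often lie behind underpowered trials.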

What if the “no difference” finding was true?

I will now consider the situation in which the finding of “no difference” may in fact be true. We may come to this conclusion if the study was sufficiently powered. Nevertheless, we still need to be cautious in our conclusions. If I look back at some of my earlier work on Class II treatment, I concluded that:

“Early orthodontic treatment with the Twin-block appliance followed by further treatment in adolescence, at the appropriate time, does not result in any meaningful long-term differences when compared with one course of treatment started in the late mixed or early permanent dentition”.

If we look at this carefully, I feel that this conclusion is correct, because I stated that we did not detect a difference. It would have been very easy for me to conclude that early orthodontic treatment was not effective. Unfortunately, I know that I said this in several presentations in the early days following our studies, falling into the common mistake that I have described above.

How do we increase our certainty about “negative” findings?

We need to remember that research aims to reduce uncertainty. In this respect, combining the results of several large, well-conducted studies in a systematic review increases the power of the analysis and enables us to be more certain. For example, when several studies in a systematic review provide data that show “no difference”, we can conclude with greater certainty that the treatment was not effective. This was the approach we took in a systematic review of early Class II treatment, in which we concluded:

“There are no advantages for providing a two-phase treatment i.e. early from age seven to 11 years and again in adolescence compared to one phase in adolescence”.
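
To illustrate why combining studies reduces uncertainty, here is a minimal, hypothetical sketch of fixed-effect (inverse-variance) pooling, the kind of calculation that sits behind many meta-analyses. The three study results are invented for illustration and are not the data from the Class II review.

```python
# A hypothetical sketch of fixed-effect (inverse-variance) pooling.
# Each tuple is an invented (mean difference, standard error) from a small
# trial that is individually inconclusive; these are not real review data.
import math

studies = [(0.4, 0.9), (0.1, 0.8), (0.3, 1.0)]

weights = [1 / se ** 2 for _, se in studies]     # precision of each trial
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))          # smaller than any single SE

# Each trial alone has a 95% CI of roughly +/- 1.6 to 2.0 around its estimate;
# the pooled estimate's CI is roughly +/- 1.0, so the combined result tells us
# more precisely how close the true difference is to zero.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled difference = {pooled:.2f}, 95% CI ({low:.2f} to {high:.2f})")
```

Each of these invented trials is too imprecise on its own to rule anything in or out; the pooled interval is noticeably narrower, which is what allows a well-conducted review to turn several individually inconclusive studies into a more certain answer.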

In my last blog post, I discussed a systematic review that found little evidence to underpin early orthodontic treatment. There were very few studies. Again, this raises the question of absence of evidence. We can interpret this in the same way. We cannot conclude that “early treatment does not work”. All we can say is that we have no evidence that it works.

What are the clinical implications?

Finally, it is worth considering the clinical implications of this discussion. When the evidence of “no effect” is clear, we can explain to our patients that one treatment does not have an advantage over another. However, if there are no studies, or the findings are not robust because of bias or lack of statistical power, then we should inform our patients that we do not know whether one treatment is better than another. This information then helps them make an informed decision.

I hope that you find this discussion useful in interpreting “negative” studies.

Have your say!

  1. Hi Kevin,

    Again I find myself agreeing with so much of what you state, except for a key component (or two) that leads me to a different conclusion from your blog.

    So why is that?

    Am I simply wrong – agreed, it’s possible and nobody should think they know it all.

    Are you wrong? Well, you are a very clever fellow for sure, but not infallible, I suppose…

    Or could it even be something else – are we BOTH right in our own ways ??

    Evidence-based Medicine, as endorsed by the Cochrane Collaboration, is very clear that best-practice EBM considers 3 aspects equally valid:
    1. The published scientific Evidence
    2. The Clinician’s Experience + Expertise
    3. The Patient’s choice + wishes

    If No.1 is weak, inconclusive or non-existent, THEN 2+3 become ALL important.

    It is then perfectly acceptable that, say, you and I discuss all the options and point out that the published evidence isn’t much help in a certain case, but MY experience using technique X is over 100 cases and an Audit of my results shows I can treat cases similar to yours in 7.3 months with minimal side-effects, but I don’t have as much experience using technique Y as my colleague KOB, who has also qualified as a Specialist and is even a Professor and a lovely fellow, so I can refer you to him if you’d like.

    You may quite rightly say, “in my experience of treating 1000+ cases using technique Y, an Audit of my results shows I can treat cases similar to yours in 15.6 months, but I don’t have as much experience using technique X as TK, so if you wish I can refer you for X if you’d prefer it to Y.”

    It’s just an example, Kevin, but in the era of EBM, where much (too much?) published evidence is poor, inconclusive or unreliable, such that it’s only a minor or negligible factor in the EBM trilogy of best-practice patient consent and treatment, isn’t the patient best served by what we KNOW works in our experienced hands?!

    So we could be both different and right 😉

    Yours reflectively,

    Tony.

    • Hi Tony,

      I am a little confused by this argument.

      I can understand that two treatments can be the same in outcome and that the degrees of expertise and experience play a role in the success and duration of treatment and ultimately the long term outcome of that treatment.

      I also understand that the patient has a right to choose, given the best available information on the other two factors (evidence and expertise). This is with the proviso that how, and by whom, that advice is given spawns its own bias, and so is producing a whole new range of clinical tools in medicine.

      BUT “If No.1 is weak, inconclusive or non-existent THEN 2+3 become ALL important…” seems to miss the point of the system, which is to reduce bias in the first place.

      In your example you have what is a relatively rare clinical scenario in general practice, where there is an audit of results of some kind. This would still play a role in the top circle of the EBD triad, “evidence”. It is clinical expertise, and it has to be weighed against other evidence which is less prone to the many biases that individuals, and even Presidents of prominent nations, succumb to.

      Throwing evidence away and declaring “in my hands” is the position that medicine, and more slowly dentistry, has been seeking to move away from. Is there a chance that removing evidence from the equation completely is, for some dentists, an atavistic desire to relive those glory days? Though I do realise that is not your aim here.

      I certainly take your larger point that evidence-based practice is a difficult (but fundamental) and complex cognitive clinical skill which is not meant to come up formulaically with one answer. That belongs to the “Hitchhiker’s Guide” and the meaning of life.

      • Hi Paul,

        It is difficult, when we are SO academically trained, to realise, when one ACTUALLY analyses the published scientific evidence in Dentistry generally and Ortho/Restorative specifically, that most (90%+) should NOT be relied upon. It simply does NOT stand up to proper scrutiny. Indeed, apart from the rarer cases, there is simply no good evidence one way or another for ANY approach, so what are we left with, logically???

        I have come across many highly skilled clinicians who swear by direct posts and cores (less time, less complex), while others swear by indirect lab-made posts (longer, with more stages), and both can demonstrate GOOD techniques/results via Audits. Should they change because the ‘evidence’ is inconclusive, poorly designed or frankly unreliable???

        Now a Specialist in Prosthodontics/Restorative may be great at both and a clinical trial may show no clear differences; however, in less experienced but still competent hands it may be the simpler technique/system, with fewer stages and thus less prone to errors, that is more suited to those who treat the more routine cases in Practice, for example. A Specialist may prefer the lab-made approach, with its additional stages and complexities, because their training and additional skills bring a perceived qualitative difference, even IF it takes longer or carries a few higher risks.

        As regards Audit, I agree it’s an under-utilised tool and it need not be complex. I teach this so that our young dentists can complete most Audits in a 30-60 min session. It’s a continuous spiral, not a one-off research project; the latter makes it onerous and thus rarely done.
        What I’m talking about are useful, fast and practical Audit applications that help one monitor objectively and improve as a Professional over time. It is REAL and EVIDENT for the individual and can be applied to clinical and non-clinical domains too!

        So in summary, if No.1, published evidence, is weak or inconclusive, then No.2, Clinical Experience/Expertise, and No.3, Patient Wishes, become dominant in best-practice EBM.

        Yours logically,

        Tony.

        • Hi Tony,
          Sorry, but I am still not getting it, possibly due to my imprecise language and explanation, or perhaps I simply disagree with you.

          I understand the point that a simpler process may work better in less experienced hands, but I don’t see the link to your conclusion, which seems to imply that when published evidence is weak we should ignore it because it does not stand up to scrutiny, yet then rely entirely on expert opinion, which is itself evidence.

          We know expert opinion is generally more biased, usually because of all the cognitive biases, such as confirmation, recency, framing and self-serving bias, that we as humans are subject to. If these biases are not taken into account as part of the decision process, we fall back entirely on heuristics.

          The evidence on endodontic posts is weak, for example. When I graduated, posts were placed to strengthen teeth; we now understand the opposite to be true and remove as little tooth as possible. Ferrule, post types and lengths have been looked at since then and have informed our treatment.
          That understanding came mostly from weak, in vitro studies, but these are weighed against expert opinion in the battle you describe. That weighing occurs at the evidence level, and it does so essentially to reduce bias.

          We assume these changes are leading to less failure, or at least allowing us to communicate the deficiencies of the treatments to patients. Until better clinical data mining is available we may not be truly sure; until then, does it still remain expert opinion?
          Young dentists tend to bloat the “evidence” ring of the EBD triad because they don’t have as much expertise or experience. Older dentists tend to do the opposite because they are not so academically trained but have greater experience. I think we need to be aware that expert opinion varies in objectivity but is still evidence, evidence which is generally very weak and needs to be dealt with mindfully using all three elements of the process.

          The reason I like Kevin’s blog so much is that, as a fairly orthodontically illiterate dentist, it lights the way to not only what is certain but what is not, what is not what it is claimed to be, and where the uncertainties lie. This assists in deciphering the multiple opinions out there, where presentation and spin seem to leave people ever more certain they are right the less evidence they have… other than their own opinion.

          All the best,
          Paul

  2. I want to thank you for your effort in giving us the results of your study. I think it is important to read the materials and methods in order to know what the research means.
    Have a nice day,
    Roberto

  3. Dear Prof O’Brien,

    Thank you once again for an inspiring blog!

    Your text reminds me of something I was once tipped off to read in the Lancet about the often-forgotten limitations of RCTs, particularly regarding the danger of generalising the positive or negative result of an RCT outside of the study context of the RCT in question:
    “Double-blind RCTs, when properly done and analysed, unquestionably provide confidence in the internal validity of the results in so far as the benefits of the intervention are concerned; and the more so if replicated by subsequent studies. Consequently, RCTs are often called the gold standard for demonstrating (or refuting) the benefits of a particular intervention. Yet the technique has important limitations of which four are particularly troublesome: the null hypothesis, probability, generalisability, and resource implications.”

    Rawlins M. Lancet. 2008 Dec 20;372(9656):2152-61
    http://dx.doi.org/10.1016/S0140-6736(08)61930-3
    (Available without paywall at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.548.637&rep=rep1&type=pdf and as a more elaborate version at http://www.amcp.org/WorkArea/DownloadAsset.aspx?id=12451)

    The Cochrane Handbook chapter 12.7.4. also summarizes your point nicely: “A common mistake when there is inconclusive evidence is to confuse ‘no evidence of an effect’ with ‘evidence of no effect’.” and further “Another common mistake is to reach conclusions that go beyond the evidence. Often this is done implicitly, without referring to the additional information or judgements that are used in reaching conclusions about the implications of a review for practice.”

    Respectfully,
    Valter

  4. Dear Prof Kevin
    What about when there is a significant difference but a lack of power? I think interpreting that kind of finding is as important as interpreting the negative finding; I hope you can write a new blog post about this.
    Many thanks
