Do I still believe in evidence based orthodontics?
Every now and then a paper comes along that makes me stop and think about how I practice and what I have taught over the past thirty years. Professor Trisha Greenhalgh and colleagues published one of these papers in the British Medical Journal several weeks ago. I have followed her work for a while because she adopts a brilliant questioning approach to medical research. She also runs a really interesting Twitter account, @trishgreenhalgh, and has written a great book on interpreting research called “How to Read a Paper”.
They based their paper on the content and discussions of a symposium into a reappraisal of evidence based medicine. It is an open access paper, so you should be able to access it.
T Greenhalgh et al. British Medical Journal 2014;348:g3725
Over the years I have spent a large amount of time carrying out trials, agonising over the details of producing systematic reviews and speaking about evidence based orthodontics. I have really done my best to help develop my orthodontic research to a higher level. As a result, when I saw the title of this paper on Twitter, I was concerned that all our efforts had been misdirected. However, when I read the paper, I was reassured that we were heading in the right direction. Nevertheless, some of the issues that they raised were very relevant to evidence based orthodontics and I would like to discuss them in this post.
Firstly, not all of the paper, particularly the section on industry sponsored trials, is relevant to orthodontics. This is because the orthodontic industry does not bother carrying out trials; it just releases the product, advertises the new paradigm and we all buy it! (see this post)
I also read this paper with the knowledge that medicine is light years ahead of orthodontics in trials, evidence and guidelines and the authors spent some time considering these areas. However, other sections of the paper are relevant to the current status of evidence based orthodontics. I shall highlight some of these and make suggestions on what you can do to ensure that we strike the right balance between evidence based orthodontics and the characteristics and values of our individual patients.
The application of evidence based orthodontics
One of their central arguments is that “real EBM has the care of the individual patients as a top priority”. As a result, we should synthesise research evidence that is relevant to the individual patient and explain this to them in terms that they can readily understand. In other words, we should not be so rigid that we simply apply the results of the most recent relevant systematic review to their condition. For example, we have good evidence from trials and systematic reviews on the most effective methods and optimum timing for the treatment of a child with prominent teeth. Nevertheless, we should not simply treat every child with prominent teeth with a functional appliance. We must go back to basic diagnosis and identify the aetiology of the problem. Furthermore, it would be wrong to suggest treatment for all these patients in order to reduce the chances of incisal trauma, without a discussion with our patients of the relative risk and numbers needed to treat.
Another area of our work that is relevant to this discussion is the renaissance of screening children at a young age to detect and intercept developing malocclusion. While these recommendations are laudable, we currently have no strong evidence on the effectiveness of “interceptive” treatment or even how many children we need to screen to intercept one orthodontic problem. As a result, we need to appraise the evidence for this new movement.
You should also be more critical of guidelines and their sources of evidence and not necessarily take them at face value. For example, the Royal College of Surgeons guideline on the management of impacted canines still recommends that we should remove a primary canine to encourage eruption of the permanent canine. But it also states in the same section that “Further well-reported randomised controlled trials are required to assess the full effectiveness of this clinical intervention”. This means we do not know if this intervention is effective, so why do we include it in a guideline?
Training to ask questions of research
The authors of the paper point out that this is essential. I suggest that we all need to learn to interpret research. I cannot help feeling that we are too concerned about data presented as means of only a few millimetres or degrees. Consequently, we do not consider the effect size and confidence intervals. Unlike most high quality medical journals, I am not sure that any of our journals make the reporting of confidence intervals a requirement of publication. As a result, we tend to take the numbers at face value, without assessing whether these small “differences” excite anyone but us, and without evaluating the level of uncertainty. We simply cannot practice evidence based care if we do not take these into account.
Interestingly, they also emphasise that we need to pay more attention to the individual patient and that we can learn from case discussions. One way for us to do this is by asking journal editors to publish discussions on “real world” cases. This would take us away from marvelling at the perfectly treated “collections of precious things” in our journals, which simply signify that the operator was brilliant or simply got lucky!
I think that we all should ask the publishers to raise the bar. I wonder if the journals are still too full of retrospective studies and cephalometric tables that result in confusing, low quality evidence that fails to answer the “so what” question. It would be a major step if we could persuade the journal editors to publish only high quality trials and systematic reviews, in a form that was understandable to both our patients and us. But if we took this step, would we have too few papers to read and a few fewer journals? And would this be a bad thing? We are interested in a small, but important, part of the human body, yet we have many journals reporting studies of mixed quality and doubtful relevance. This simply adds to the confusion of evidence based orthodontics.
Get the researchers to be more ambitious and imaginative
Their final point was that more imaginative research is needed. This was a great section of the paper. Essentially, we need to move away from outcomes that mean very little to anyone but orthodontists, towards outcomes that are relevant to patients. For example, we are wide open to the use of qualitative research techniques, which we have always ignored because you cannot interview plaster models and radiographs. I can guarantee that if a young academic starting their research career concentrated on introducing qualitative methods into their research, they would achieve a high profile and make a difference to patient care very quickly.
In summary, and borrowing very heavily from the paper: I still have a great belief in evidence based orthodontics, but at this stage of our development (miles behind medicine), I think that we need to do the following.
- Interpret our limited high quality literature carefully to avoid treating our patients as a set of teeth characterised by features that we place into categories and treat accordingly.
- Learn to interpret the literature and continuously question the recommendations of both Professors and industry sponsored advocates spouting “evidence”.
- Encourage our journals to raise their quality and not simply concentrate on filling up the space.
- Be prepared to ask our researchers to be more imaginative, and no matter who they are they should be able to answer the “so what” question (me included).
Have a look at the paper and join in any discussion below.
Emeritus Professor of Orthodontics, University of Manchester, UK.