Do orthodontic trials benefit patients?
It is well accepted that randomised trials are the best way of testing treatments. But do they influence patient care?
I have mentioned several times on this blog that the most rigorous method of evaluating orthodontic treatment is the clinical trial. It is, however, disappointing that trials do not always seem to translate into clinical practice and help our patients. I was, therefore, very interested to come across this paper, written by a team from the Centre for Evidence Based Medicine in Oxford. Academics are very clever in Oxford, and I have decided to interpret their points in terms of orthodontic research.
Carl Heneghan et al.
The journal Trials published this paper, and it is open access, so anyone can read it.
In their introduction, they point out that problems with the outcomes of clinical trials fall into four main categories: (i) study design; (ii) methods; (iii) publication; and (iv) interpretation. If a trial has problems in these areas, it is unlikely to lead to patient benefit.
I have decided to interpret their main points with relevant orthodontic examples, where I can! In doing this, I hope that I can provide information that may help you interpret the results of clinical trials.
Use of surrogate outcomes

Investigators use surrogate outcomes in an attempt to predict another, more complex, final outcome. They do this because surrogate outcomes are available before the final outcome. This means that they can publish their findings earlier than if they had to wait for the completion of all treatment. An orthodontic example is measuring the effects of functional appliances at the end of the functional phase. While this is interesting, it may bear no relationship to the final result of treatment. The most useful data collection point is the end of all treatment, because this is what matters to the patient.
Subjective outcomes

This is when an observer uses their judgement to assess an outcome. One example is the assessment of facial attractiveness by a small group of raters. This is particularly important if the group is made up of orthodontists. If the examiners are not blinded to treatment allocation, this effect is even more marked. I highlighted this potential problem in a trial of early Class III treatment.
Lack of relevance to patients
This problem concerns effect size and its confusion with statistical significance. We are great at this in orthodontic research. For example, we are all familiar with the paper that reports small amounts of cephalometric change that the authors get excited about because the difference is “highly statistically significant”. Yet the effect size of the treatment has no clinical relevance. We need to be very aware of this issue.
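A back-of-envelope sketch shows how this happens. The numbers below are purely illustrative, not from any real trial: with a large enough sample, a cephalometric difference of less than one millimetre can produce a very small p-value while the standardised effect size remains trivial.

```python
import math

# Illustrative numbers only: a 0.8 mm mean cephalometric difference,
# SD 4 mm, in a hypothetically very large two-arm trial.
diff_mm = 0.8        # mean between-group difference (mm)
sd_mm = 4.0          # standard deviation (mm)
n_per_group = 800    # participants per group (unrealistically large)

# Two-sample z statistic and two-sided p-value (normal approximation)
se = sd_mm * math.sqrt(2.0 / n_per_group)
z = diff_mm / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p:.5f}")                # p is well below 0.001
print(f"standardised effect size d = {diff_mm / sd_mm:.2f}")  # yet d is only 0.2
```

A p-value this small looks exciting in an abstract, but a standardised difference of 0.2 — here, less than a millimetre of cephalometric change — is unlikely to matter to any patient.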
Missing data

All studies have missing data. This results in a loss of statistical power and potentially false conclusions. In their paper, the authors reminded me of the “5 and 20 rule”. This simple guide states:
“if more than 20% of the data is missing then the study is biased. If less than 5% is missing, there is low risk of bias”.
We can all apply this when we are reading papers. A good example of this issue is a low response rate to a survey that measures an outcome in a trial.
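The rule of thumb above is easy to apply when reading a paper. Here is a minimal sketch — the function name is my own, and the thresholds are simply those of the "5 and 20" rule:

```python
def missing_data_risk(n_missing, n_total):
    """Rough risk-of-bias label based on the '5 and 20' rule of thumb."""
    pct = 100 * n_missing / n_total
    if pct > 20:
        return "high risk of bias"
    if pct < 5:
        return "low risk of bias"
    return "indeterminate - judge in context"

# e.g. a trial survey returned by only 70 of 100 participants
print(missing_data_risk(30, 100))  # 30% missing -> "high risk of bias"
```

A survey-based outcome with a 70% response rate, as in the example above, sails past the 20% threshold and should make us cautious about the trial's conclusions.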
Publication bias

This occurs when a trial is not published because the authors, or funders, do not like the conclusions. This is one of the main reasons why all trials should be registered in a clinical trials registry before they start. If you go to ClinicalTrials.gov and search for orthodontic studies that are completed, or just follow this link (https://clinicaltrials.gov/ct2/results?cond=Orthodontics), you will see that there are 55 trials completed and potentially awaiting publication. It is not possible to work out how many of these are not published, but you can see that some have been completed for some time and may not be published.
Under-reporting of adverse events
This is simple: all trials should report the benefits and harms of treatment. There are many orthodontic trials that report only the benefits of treatment and do not evaluate the harms. These are important to patients and should always be recorded and reported. I have been guilty of this in what I think have been my best papers. We simply did not think about recording harms, and this was a mistake.
Spin

This is when the authors present their data in a more positive way than the actual results justify. The best recent examples in orthodontics are the papers that claim a certain percentage increase in the rate of tooth movement with MOPs or vibration. For example, in a study on MOPs the authors reported a 62% increase in the rate of tooth movement, but this was only 0.5 mm per month. This difference is not clinically significant.
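The trick is to quote the relative change and stay quiet about the absolute one. A quick sketch makes the point — note that the baseline rate here is an assumption for illustration, not a figure taken from the paper:

```python
# Illustrative only: the baseline rate is assumed, not taken from the paper.
baseline_rate = 0.8        # mm per month in the control group (assumed)
relative_increase = 0.62   # the headline "62% faster" claim

# The absolute gain is what the patient actually experiences.
absolute_gain = baseline_rate * relative_increase
print(f"absolute gain = {absolute_gain:.2f} mm per month")
```

A headline of "62% faster" sounds impressive; an extra half a millimetre of tooth movement per month does not.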
Multiple outcomes

This is when a trial reports multiple outcomes that are frequently related. We need to remember that the greater the number of outcomes, the greater the chance of finding a false positive result.
Again, we are great at this. The best example is the cephalometric festival of multiple statistical tests across 20-30 uninteresting measurements. If you open any journal, you will find one of these. When we run multiple simple tests, we are going to find at least one significant difference, and then we can write excitedly about it. The authors do not acknowledge that these findings could have occurred by chance.
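The arithmetic behind this is straightforward. If every null hypothesis is true and the tests are independent (a simplifying assumption, since cephalometric measurements are usually correlated), the chance of at least one false positive grows quickly with the number of tests:

```python
# Probability of at least one spurious "significant" result at alpha = 0.05
# across k independent tests when all null hypotheses are true.
alpha = 0.05
for k in (1, 10, 25):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests: P(at least one false positive) = {p_any:.2f}")
```

With 25 cephalometric measurements, the chance of at least one "significant" finding is roughly 72% — better than even, before any real treatment effect exists at all.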
What is the solution?
Orthodontic clinical trials have come a long way in the last 20 years. Currently, investigators are doing some really good trials. But as with most things in life, we need to improve. We can do this by:
- Registering and publishing all trials
- Using outcomes that are relevant to patients
- Identifying one or two cephalometric measures that have clinical meaning, if we must use them
- Not putting a spin on the findings
- Reporting outcomes at the end of treatment, including both benefits and harms.
If we simply did this, we would make research easier to read and understand, and it would make a greater difference to both orthodontists and our patients.