June 18, 2018

Do orthodontic trials benefit patients?

It is well accepted that randomised trials are the best way of testing treatments.  But do they influence patient care?

I have mentioned several times on this blog that the most rigorous method of evaluating orthodontic treatment is the clinical trial. It is, however, disappointing that trial results do not always translate into clinical practice and help our patients.  I was, therefore, very interested to come across this paper, written by a team from the Centre for Evidence-Based Medicine in Oxford.  Academics are very clever in Oxford, and I have decided to interpret their points in terms of orthodontic research.

Why clinical trial outcomes fail to translate into benefits for patients

Carl Heneghan et al.

Trials 2017; 18:122

https://doi.org/10.1186/s13063-017-1870-2

Trials journal published this paper and it is open access, so anyone can read it.

In their introduction, they point out that problems with the outcomes of clinical trials can be grouped into the following main categories: (i) study design, (ii) methods, (iii) publication, and (iv) interpretation. Consequently, if a trial has these problems, it is unlikely to lead to patient benefit.

I have decided to interpret their main points with relevant orthodontic examples, where I can!  In doing this, I hope that I can provide information that may help  you interpret the results of clinical trials.

Surrogate Outcomes

Investigators use surrogate outcomes in an attempt to predict another, more complex or final, outcome. They do this because surrogate measures are available before the final or ultimate outcome. This means that they can publish their findings earlier than if they had to wait for the completion of all treatment. An orthodontic example is measuring the effects of functional appliances at the end of the functional phase. While this is interesting, it may not bear any relationship to the final result of treatment. The most useful data collection point is at the end of all treatment, because this is what matters to the patient.

Subjective Outcomes

This is when an observer uses their judgement to assess an outcome. One example is the assessment of facial attractiveness by a small group of raters.  This is particularly important if the group is made up of orthodontists.  If the examiners are not blinded to treatment allocation, this effect is even more marked. I highlighted this potential problem in a trial of early Class III treatment.

Lack of relevance to patients

This problem is concerned with confusing statistical significance with the size of the effect. We are great at this in orthodontic research.  For example, we are all familiar with the paper that reports small amounts of cephalometric change that the authors get excited about because the difference is “highly statistically significant”. Yet the effect size of the treatment has no clinical relevance.  We need to be very aware of this issue.
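
To see how this happens, here is a rough sketch with entirely hypothetical numbers (not taken from any real trial) of how a clinically trivial cephalometric difference can still produce an impressive p-value once the sample is large enough.

```python
# Minimal sketch: a tiny, clinically trivial difference can still come out
# "highly statistically significant" when the sample is large enough.
# All numbers are hypothetical and chosen only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

n = 400                  # patients per arm (large for an orthodontic trial)
true_difference = 0.3    # degrees of cephalometric change - clinically trivial
sd = 1.5                 # between-patient standard deviation, degrees

control = rng.normal(0.0, sd, n)
treated = rng.normal(true_difference, sd, n)

t, p = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f} degrees, p = {p:.4f}")
# With samples this size the p-value is usually well below 0.05,
# yet a difference of 0.3 degrees has no clinical relevance at all.
```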

Missing data

All studies have missing data. This results in a loss of statistical power and potentially false conclusions.  In their paper, the authors reminded me of the “5 and 20 rule”.  This simple guide states:

“if more than 20% of the data is missing then the study is biased. If less than 5% is missing, there is low risk of bias”.

We can all apply this when we are reading papers.  A good example of this issue is a low response rate to a survey that measures an outcome in a trial.
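
For anyone who likes to see the rule in action, here is a minimal sketch of how it might be applied to a trial's follow-up figures. The patient numbers are hypothetical and purely for illustration.

```python
# Simple sketch of the "5 and 20 rule" quoted above, applied to a trial's
# follow-up numbers (hypothetical figures for illustration only).
def missing_data_risk(randomised: int, analysed: int) -> str:
    """Classify risk of bias from the proportion of missing outcome data."""
    missing_pct = 100 * (randomised - analysed) / randomised
    if missing_pct > 20:
        return f"{missing_pct:.0f}% missing: the study is likely to be biased"
    if missing_pct < 5:
        return f"{missing_pct:.0f}% missing: low risk of bias"
    return f"{missing_pct:.0f}% missing: somewhere in between - judge with care"

print(missing_data_risk(randomised=120, analysed=87))   # ~28% missing
print(missing_data_risk(randomised=120, analysed=116))  # ~3% missing
```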

Publication bias

This occurs when a trial is not published because the authors, or funders, do not like the conclusions. This is one of the main reasons why all trials should be registered in a clinical trials registry before they start.  If you go to ClinicalTrials.gov and search for completed orthodontic studies, or just follow this link (https://clinicaltrials.gov/ct2/results?cond=Orthodontics), you will see that there are 55 trials completed and potentially awaiting publication. It is not possible to work out how many of these will not be published, but you can see that some have been completed for some time and may never be published.

Under-reporting of adverse events

This is simple: all trials should report the benefits and harms of treatment. There are many orthodontic trials that report only the benefits of treatment and do not evaluate the harms. These are important to patients and should always be recorded and reported. I have been guilty of this in what I think have been my best papers. We simply did not think about recording harms, and this was a mistake.

Spin

This is when the authors present their data in a more positive way than the actual results warrant. The best recent examples in orthodontics are those papers that claim a certain percentage increase in the rate of tooth movement with MOPs or vibration.  For example, in a study of MOPs the authors reported a 62% increase in the rate of tooth movement, but this amounted to only 0.5 mm per month. This is a difference that is not clinically significant.
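
A quick back-of-the-envelope sketch shows how the same data can support both headlines. The monthly rates below are hypothetical, chosen only to reproduce the figures quoted above.

```python
# Sketch of how a relative figure can put a "spin" on a small absolute difference.
# Rates are hypothetical, chosen only to reproduce the numbers quoted above.
control_rate = 0.80   # mm of tooth movement per month without MOPs
treated_rate = 1.30   # mm of tooth movement per month with MOPs

absolute_gain = treated_rate - control_rate
relative_gain = 100 * absolute_gain / control_rate

print(f"absolute gain: {absolute_gain:.1f} mm per month")  # 0.5 mm per month
print(f"relative gain: {relative_gain:.1f}%")               # about 62%
# "62% faster" and "half a millimetre a month" describe exactly the same result.
```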

Multiplicity

This is when a trial reports multiple outcomes that are frequently related. We need to remember that the greater the number of outcomes, the greater the chance of finding a false positive result.

Again, we are great at this.  The best example is the cephalometric festival of multiple statistical tests across 20-30 uninteresting measurements.  If you open any journal you will find one of these. When we run many simple tests, we are almost certain to find at least one significant difference, and then we can write excitedly about it. The authors do not acknowledge that these findings could have occurred by chance.
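
A simple calculation, assuming for illustration that the tests are independent and run at the conventional 5% significance level, shows how quickly a chance "significant" finding becomes almost guaranteed.

```python
# Sketch of why a "cephalometric festival" almost guarantees a positive finding.
# Assumes independent tests at the usual 5% significance level, for illustration.
alpha = 0.05

for k in (1, 5, 10, 20, 30):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:>2} measurements: {p_at_least_one:.0%} chance of at least one "
          f"'significant' result when no real difference exists")
# With 20-30 measurements the chance is roughly 64-79%, purely by chance.
```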

What is the solution?

Orthodontic clinical trials have come a long way in the last 20 years. Currently, investigators are doing some really good trials. But as with most things in life, we need to improve. We can do this by:

  • Registering and publishing all trials
  • Using outcomes that are relevant to patients
  • Identifying one or two cephalometric measures that have clinical meaning, if we must use them
  • Not putting a spin on the findings
  • Reporting outcomes at the end of treatment, including both benefits and harms

If we simply did this, we would make our research easier to read and understand, and it would make a greater difference to both orthodontists and our patients.



Have your say!

  1. It is my opinion that many scientific papers are published more to demonstrate the abilities of the researchers and their skills in statistical analysis than for real clinical benefit to patients. Hence the multiplicity of outcomes that are published in long, difficult tables and tested with multiple statistical instruments with little eventual clinical meaning. That is not to take away from the very well designed, concise and to-the-point RCTs that add much knowledge and advance clinical orthodontic practice.

  2. I would add that trials need to be carried out that are set up to answer useful questions. It would be great if researchers and clinicians could get together and decide what needs answering, as well as the best ways to do so.
    I would also add that there is a lot of inertia amongst clinicians when clinically useful results are published. Part of this is due to there being many ways to skin a cat, but also a reluctance to change what apparently works. I suspect we are all guilty of this to some extent. The functional appliance question is a good example of this (we still call them FUNCTIONAL appliances, for goodness' sake); it will probably take a long time until the profession stops thinking that they grow jaws, and maybe even stops using them.

  3. RCTs tend to be rather narrow-minded, poorly or subjectively conceived, and of remarkably low quality, leading to all results being useless and inconclusive. In that absence we have ‘evidence based medicine’, where the practitioner does whatever is their preference, because there is no evidence.
