Fixed retainers fail a lot!
Fixed retainers are a routine part of orthodontic practice. However, do we know the failure rates for this essential component of treatment? This was examined in a new systematic review. The findings were both surprising and disappointing.
When I had practiced clinically for a few years, my preferred form of retention was fixed retainers. After a while, I became aware that I was keeping many of my patients under review to check their retainers. This was one of the factors that persuaded me to switch to vacuum-formed retention. Another factor was the failure rate. I estimated this was about 5% for the retainers I fitted. But it was just my guess, and we all know we can be overly optimistic about our clinical performance. Over the past few years, several trials have evaluated the performance of fixed retainers. This new systematic review brings us up to date on this clinical problem.
A team from the great city of Manchester, in the North of England, conducted this review. The EJO published the paper, and it is open access!
Su Thae Aye, Shiyao Liu, Emer Byrne and Ahmed El-Angbawi
EJO advanced access: DOI: https://doi.org/10.1093/ejo/cjad047
I would like to declare a conflict of interest, as I know the investigators.
What did they ask?
They asked this simple question.
“What influences the risk of failure of the retainers?”
What did they do?
They did a standard systematic review: searching the literature, selecting papers, assessing the risk of bias, extracting data, and conducting the relevant meta-analyses.
The PICO was:
Participants: Orthodontic patients of any age who completed a course of orthodontic treatment.
Interventions: All forms of bonded retainer
Outcomes: The primary outcome was the failure rate of the retainers
They included randomised clinical trials and prospective clinical trials.
They assessed the risk of bias with the Cochrane Risk of Bias tool for the RCTs and the Newcastle-Ottawa scale for the prospective trials.
Finally, they carried out the relevant meta-analysis to combine the data.
What did they find?
They presented a massive amount of data derived from 34 studies. Twenty-five of these were RCTs, and nine were prospective clinical studies. This included data on 3484 participants with 4580 fixed retainers.
Most of the studies (26) were done in dental schools, and eight were done in dental practice settings.
As with most systematic reviews, I do not have the space to go into all the data. So, I will limit my discussion to the RCTs and failure rates.
Firstly, I looked at the risk of bias of these studies. The authors rated 12 studies as having some concerns, and 13 as having a high risk of bias.
They found the following failure rates when they excluded the non-RCTs and the RCTs at high risk of bias:
- 35.22% (95% CI = 27.46-42.98)
- 37.53% (95% CI = 27.73-47.32)
- 38.67% (95% CI = 31.0-46.3)
The overall quality of evidence was low and should be interpreted cautiously.
They also presented data on the short-term (up to 1 year), medium-term (1-4 years), and long-term (5-6 years) failure rates. I felt the most critical period to look at was the long term. Unfortunately, the team could not calculate the maxillary failure rate because of the high risk of bias. However, they reported that for the mandibular teeth the failure rate was 53.85% (95% CI= 40.31-67.9). Unfortunately, they obtained this data from just one study.
Importantly, they showed that approximately 25% of fixed retainers fail during the first 12 months.
My interpretation of their overall conclusions was
“The failure rate of fixed bonded retainers is high”.
What did I think?
This systematic review was an incredible amount of work, and they reported many aspects of fixed retainers. The team did the review well and followed the standard systematic review protocols for carrying out and reporting the study.
I was disappointed to see the high failure rates in the included studies. They were certainly higher than my own failure rates. Or were they? Was I wrong to believe that I was great at fitting retainers and that they nearly all stuck? This disparity between estimated and actual failure rates is large. It would make me rethink my retention regime, and I would not use bonded retainers because of the high failure rate. If I did continue using bonded retainers, I would audit my failure rates to identify whether I had a problem.
Another thought is that the research could be wrong, allowing us to plug into our personal “pyramid of denial”. I think we can dismiss this argument because of the large number of trials they included.
Nevertheless, we must consider that most of the studies were done in dental schools, and the operators varied from experienced orthodontists to residents. We know that failure rates tend to drop with experience. As a result, the findings of this study may not be relevant to experienced specialist practices. But we cannot simply dismiss the findings.
I also tried to compare the results of this review to the recent Cochrane Review into retention. I was surprised that this was not possible because the authors of this paper reported on percentage failure, and the Cochrane group reported the hazard ratio. The two groups differed in their assessment of the bias of the included studies. My head also spun as I tried to compare the reviews because of the large amount of data each team reported. Yet, they both concluded that the strength of evidence was not high, and more research was needed!
I also wonder if the size and complexity of these reviews, in which the authors have tested many combinations of retainers, represent the real problem with retention. The real problem might be that we do not know the best methods of retention, so we try many different approaches in an endless search for something that is effective. Perhaps the two teams can get together and come up with a concise report on their excellent reviews. Both teams are primarily located in the Northern city of Manchester or its smaller neighbour, Leeds. So, this shouldn’t be too difficult?
I asked Simon Littlewood to comment on this post and these systematic reviews. He sent me these great comments, which clarify some of my points, and I have pasted them here.
Differences in reporting
“When we look at the difference in the way of reporting failure rates (hazard ratios in the Cochrane review, percentage failure rates in the recent EJO review), I think they both have a place. Perhaps they are just reporting different versions of the same data. As you know, the hazard ratio is required by Cochrane and compares the risk of failure to that of the comparator intervention…while the percentage failure rate just states the failure rate for that particular retainer. It feels like the first is more scientific, but the second is something the average orthodontist can relate to better!
High failure rate
As for the disappointingly high failure rates of bonded retainers that the higher quality studies seem to identify, I think there are a few things:
- I think bonded retainers are the single most technique-sensitive thing we do in orthodontics, so technique plays a massive role. As a result, trials with multiple operators are more helpful for me.
- As we always say…RCTs expose poorer results than we all like to think we have, with all patients followed up and all data recorded.
- The term “failure” is a problem when it comes to bonded retainer research. In the trials, if one tooth becomes debonded from the composite (the commonest problem with bonded retainers), we record it as a failure. However, in real life, this is often picked up before any relapse occurs and is easily repaired, and neither the patient nor the clinician would regard it as a true “failure”. So a 30% failure rate sounds like 1 in 3 patients have terrible issues with their retainers…whereas in reality most will have had a simple repair with no harm done”.
Emeritus Professor of Orthodontics, University of Manchester, UK.