February 17, 2025

Do CAD/CAM brackets shorten treatment time by 4 months? A new systematic review.

There are many exciting innovations in orthodontics at the moment. One of these is CAD/CAM brackets designed for each patient. The main claimed advantage of these appliance systems is reduced treatment time because of their increased efficiency. However, these are early days in their development, and very few trials have been conducted.  This new systematic review looks at the current literature. It comes to the bold conclusion that CAD/CAM brackets reduce treatment time by 4 months.

As regular readers of this blog will know, I decided to reduce the number of systematic reviews I posted about. This was because many recent reviews have been of variable quality and do not add to our knowledge – primarily because of the poor primary evidence base. Nevertheless, I also decided to look closely at reviews making claims that may influence our clinical practice. This review falls nicely into the latter group.

A team from Mashhad, Iran, and UCLA, California, did the review. The EJO published the paper.

What did they ask?

They did the review to

“Explore and assess the effectiveness of CAD/CAM bracket systems in terms of treatment efficacy, duration, number of appointments, clinical outcomes, patient-reported outcomes, and adverse effects”.

What did they do?

The team did a systematic review and followed all the classic stages: literature search, paper identification, filtering of studies against their inclusion criteria, risk-of-bias assessment, data extraction, analysis, and conclusions.

The PICO for the review was

Participants

Patients having fixed appliance treatment

Intervention

CAD/CAM brackets

Comparison

Conventional orthodontic brackets

Outcomes

Treatment efficiency, duration, number of appointments, clinical outcomes.

They included all study types; this means both RCTs and retrospective studies.

They assessed the risk of bias for the trials with the Cochrane Risk of Bias tool. For the non-randomised studies, they used the Newcastle-Ottawa scale.

The primary outcome was the effectiveness of treatment, measured by the ABO Cast-Radiograph Evaluation.

They carried out a meta-analysis that included an evaluation of heterogeneity.

What did they find?

After the literature searches, they identified seven papers for inclusion in the review. These consisted of four retrospective studies and three trials. The meta-analysis revealed the following.

“There was no difference between CADCAM and conventional brackets for treatment outcome and the number of appointments. However, there was a substantial clinically and statistically significant difference in treatment duration. This was 4 months shorter for CAD/CAM”.  

This is a remarkable finding.

As this finding is important, let’s look closely at this review.

When they examined the risk of bias in the trials, they concluded that they had concerns about one trial but no concerns about the other two. Similarly, they found that the risk of bias in the retrospective studies was high in one, medium in one, and low in the remaining two.

What about the papers?

However, several important matters became apparent when I examined the papers they included. On careful reading, I found that they combined the information from two separate trials. These were by a Belgium-based team, and I will call the papers Charavet (2016) and Charavet (2019).

Firstly, it is important to point out that these were not trials of CAD/CAM v conventional brackets. 

Charavet (2016) was a trial of self-ligating brackets in which patients were randomised to piezocision or not.

Charavet (2019) was a trial of CAD/CAM brackets, again with patients randomised to piezocision or not.

Two years after they published the last of these trials, the Charavet team published a further paper that combined the data from the two earlier studies (Charavet 2021). In effect, they compared the data from the groups that did not have piezocision. This meant that the subjects were not randomly selected from the same population. This paper (Charavet 2021) is included in the systematic review, yet it is a retrospective comparison of case series, not a trial.

This situation is compounded further because one of the authors of these papers then produced another paper (Jackers et al 2021) using this data. In this paper, they took the data from the original trials for the 12 patients who had CAD/CAM brackets and no piezocision.

Unfortunately, the authors of the systematic review were perhaps unaware of this complexity and included this data in their review. As a result, they included the data from the same 12 patients twice in the current systematic review.

I know this is difficult to explain, and I hope you can follow it.

Another problem is the inclusion of the paper by Weber. A well-respected research team from the University of North Carolina wrote this paper. Notably, the authors pointed out that it was a pilot study with a small sample size and very low, variable occlusal index scores, and that randomised trials with larger sample sizes were needed. The current systematic review team also identified that this study was at high risk of bias, yet they still included this small pilot study.

The meta-analysis

Finally, I would like to look at the meta-analysis for treatment duration.

When you read a meta-analysis, you should check for heterogeneity. This is a measure of inconsistency between studies, usually reported as the I² statistic. If heterogeneity is greater than 75%, it is considered considerable, and it is likely that the result is not valid. In this systematic review, the heterogeneity for treatment duration was 97%. As a result, there is considerable doubt about the findings.
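For readers who want the mechanics, I² is derived from Cochran's Q statistic (the numbers below are illustrative; the review does not report its Q value):

\[ I^2 = \max\!\left(0,\ \frac{Q - \mathrm{df}}{Q}\right) \times 100\% \]

For example, a hypothetical Q = 66.7 with df = 2 (three studies) gives I² = (66.7 − 2)/66.7 ≈ 97%, the sort of value reported here. In plain terms, almost all of the variation between the study estimates reflects genuine inconsistency rather than chance.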

The review’s authors then did a sensitivity analysis and removed the Weber paper. This only reduced the heterogeneity to 93%, and the treatment duration still favoured CAD/CAM with a mean difference of 3.2 months. However, the 95% confidence interval was very wide (−6.4 to −0.04); this almost includes zero, suggesting that the result is close to being non-significant.
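As a rough check (my own arithmetic from the reported interval, not a figure from the review), the standard error and p-value implied by that confidence interval are:

\[ SE \approx \frac{-0.04 - (-6.4)}{2 \times 1.96} \approx 1.62, \qquad z \approx \frac{3.2}{1.62} \approx 1.97, \qquad p \approx 0.048 \]

In other words, the result only just scrapes under the conventional 0.05 threshold.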

When I examined the meta-analysis closely, it was clear that Charavet and Jackers also favoured the CAD/CAM treatment. Meanwhile, Penning’s well-done RCT reported no effect. If we removed the flawed data from the Charavet and Jackers studies, I am sure the difference in treatment duration would disappear.

Final comments

After careful consideration, I’m sorry to conclude that I cannot agree with the conclusion of this review because of two main issues.

Firstly, they included low-quality retrospective pilot studies, which weakens the strength of the evidence. 

Secondly, they included data from several related studies that did not directly compare CAD/CAM with conventional brackets. More importantly, I found issues with data duplication in the papers included in the review.

Indeed, this is an excellent example of how including low-level evidence papers and multiple studies, “salami sliced” from a single research effort, can lead to problematic conclusions in a systematic review. 

Have your say!

  1. First off, I would agree with Kevin’s comments about heterogeneity. For instance, an easy case with slow-erupting teeth will inherently take more time than a fully erupted, more difficult case, so it’s almost impossible to measure efficacy unless you have huge sample sizes. I would also like to share my personal experience with LightForce (unbiased end user, not a KOL). The first 50-100 cases require changes in bonding technique and materials (trial and error), but once mastered, we saw dramatic improvement in timely results. Is it statistically significant? I don’t know, but our last 50 cases were better and faster than our first 50! It really boils down to lifestyle over upfront costs. I would prefer to delegate more to trained clinical assistants while I focus on diagnosis and treatment planning. Plus, I happen to live in an area where myself, patients and their parents would rather be anywhere but our office, yet they want clear braces with fewer, not more, visits and are willing to pay more for that type of treatment. Just my honest opinion!

  2. Thank you Kevin for your analysis. I would love to hear from reviewers of this paper. Now that this has been published in a highly respected journal, it will forever be quoted for the reduction in treatment time. Sad!

  3. Thank you Kevin Obrien for your expert contributions for so many years to educate members of the specialty about possible bias in orthodontic research due to personal financial influence or monetary gain resulting from potentially false claims of advantages of old and new brackets and techniques.
    Less “snake oil” is good for the orthodontic profession.

  4. Typical case of “if it seems too good to be true, it probably is”.

  5. Thank you Kevin, excellent review and analysis, but leaves me concerned regarding publications.

    How could this problem of duplicate data have been discovered, and where do you think the fault lies (if any)?

    Is it the authors of the systematic review, and would it have been possible from the papers alone to deduce that the data was duplicated in different papers?

    Is it academic malpractice by the authors who have used the same data for different papers (however, if differing research questions are asked of the same data, should it matter)?

    Is the peer review process accountable?

    My concern is, Kevin, that without your due diligence in investigating the primary studies, the average reader, like myself, would be misinformed by the paper, even after applying critical appraisal tools to the systematic review.

    • Hi Farooq, thanks for the comments. I thought very carefully before I wrote this post. This is because there are many issues with the current crop of systematic reviews. With this paper, these arose from one study team producing multiple papers by salami slicing a study. These papers were then included in the systematic review by an inexperienced team. They simply made a mistake, and this is easily done when looking at multiple papers. This whole situation was then compounded by the referees’ review process, which missed the problems with the data, etc.

      As with most major errors, this has arisen from several interrelated events which compounded each other. This is why we need to read all papers carefully. We all need to remember that just because something is published does not mean it is good!
