Are orthodontic systematic reviews still helpful?
Introduction
Systematic reviews are a core component of our ability to practise evidence-based care. There is no doubt that these reviews have been a tremendous development and have changed orthodontic clinical practice. Unfortunately, I am beginning to tire of them. This post sets out my thoughts on the current state of systematic reviews.
I was prompted to write this post after reading an excellent editorial in the AJO-DO. A UK/Ireland-based team of well-known researchers wrote the article.
Systematic reviews in orthodontics: A fresh look to promote renewal and reduce redundancy
Declan Millett et al. Am J Orthod Dentofacial Orthop 2022;162:1-2
The rise of systematic reviews
I will start this discussion by going back to the mid-1990s. Bill Shaw asked me to come to a meeting about a potential research opportunity. I turned up and found that the meeting was with Iain Chalmers, one of the founders of the Cochrane Collaboration. He wanted to discuss the development of a Cochrane Oral Health Group. We were utterly blown away by the concept of systematic reviews as a research method.
At that time, I had just started the UK early Class II treatment trial. As a result, I was too busy to take anything else on, and I decided to concentrate on setting up this study. Consequently, I simply watched the great work Bill did setting up the Cochrane Oral Health Group.
Nevertheless, I got involved with some editorial work and contributed to several systematic reviews. It was a great time for orthodontic research. Many people started reviews that condensed our evidence base and changed practice. Systematic reviews were great, and we were going to change the world!
Now, many years later, I wonder whether researchers are doing too many systematic reviews. Unfortunately, we are all too familiar with the standard conclusion of many reviews:
“The overall quality of the evidence was low. There is a need for high-quality RCTs in this area”.
All good things must pass
The cynic in me wonders if the systematic review has simply become a method to produce a paper and build a CV. This would be similar to the multitude of bonding studies that were done in the mid-1980s. There is nothing wrong with these papers. In fact, my first paper was a bonding study. But there comes a time when we need to take stock and consider whether a type of research adds to clinical knowledge. I wonder if we are at this point with orthodontic systematic reviews.
I feel this way because, unfortunately, many reviews include low-quality papers and retrospective studies full of bias. The authors seem to include these papers because they could not find any trials. This simply dilutes the quality of the review, no matter which convenient risk-of-bias tool is used. As a result, the reviews are bound to conclude that the evidence is not strong, thus adding to research waste.
This problem has now reached the point where I don’t read many systematic reviews. We certainly have reduced the number of reviews discussed in this blog.
Solutions from the editorial?
The editorial authors discussed this situation much more politely than I did. Importantly, they highlighted the research time wasted by many reviews. I was, therefore, pleased to read that they put forward some solutions to this problem.
The central solution that they proposed was straightforward. Instead of simply stating that “more evidence is required”, the authors suggested that researchers should develop a protocol for a trial that addresses the deficiencies they identified. Importantly, they should publish this protocol in one of the journals that publish protocols. This would lead to the development of much-needed research.
Final comments
In my general grumpy state, I would go further and suggest that a journal should not publish a systematic review without outlining future trials. Neither should they publish reviews that include mostly retrospective studies. This, of course, is up to the journal editors, who are the gatekeepers to the publication of papers.
As the editorial’s authors suggested, this will lead to the “nirvana” of reviews being carried out, studies being designed, and collaborators being found to carry out the trials. Importantly, this would move orthodontic research forwards to address some of the clinical problems we face. Furthermore, it would allow us to build our evidence base. So let’s adopt the suggested principles and look forward to improving our orthodontic research.
Emeritus Professor of Orthodontics, University of Manchester, UK.
We are certainly at a point where there is significant unnecessary duplication of effort in publishing systematic reviews. In some orthodontic areas, there are more reviews than clinically meaningful primary clinical studies! Adding to the problem, many online publication venues offer to “freely” publish our work, regardless of impact and quality, to expose it to everyone in the world. Without adequate quality control, almost any review gets published. So even if the editors of reputable orthodontic journals increase their efforts to filter for potentially impactful systematic reviews, the community at large will still see an unmanageable number of reviews published elsewhere.
Another point is the sometimes careless incorporation of meta-analysis in systematic reviews, which misleads readers into believing that there is a clear mean response to a given therapeutic approach. Most of the time, the included studies had a high risk of bias and were methodologically and clinically different. Everything goes into a blender, and a “magic” summary appears.
Nevertheless, to finish this commentary on a positive note, it has to be reinforced that systematic reviews were a step in the right direction over the last two decades. They exposed readers to primary studies that were not widely read before. They provide summaries, like old, condensed book chapters, to many readers and, ideally, make the readership aware of flaws (risk of bias) in studies that continue to inform clinical practice decisions. I agree that systematic reviews now have a place in helping to justify future meaningful primary studies – but only if the authors of those systematic reviews actually follow through on their implied promise to facilitate or participate in future clinical trials.
All good points, well made, Kevin. The days of concluding “more research is needed in this area”, without suggesting what form that research should take, are hopefully finally over.
Kevin, I appreciated the latest blog on systematic reviews and the need for more RCTs, but I would also like to ask a simple question. How likely is it now that clinicians such as Tweed, Steiner, Schudy, Creekmore, Williams, Mulligan, Holdaway, etc. could ever publish in one of the main orthodontic journals, such as the AJO-DO or the Angle Orthodontist? With the current protocols adopted, those chances would be zero. Yes, more accurate and relevant RCTs are needed, and by necessity they would need to be done in universities, but ignoring thoughtful clinicians who have valuable experience and knowledge would be a preventable calamity.
Larry W. White, Dallas, TX
I agree! The “more is better” attitude that has taken root in the orthodontic literature is part good and part bad. Clinicians with something to offer are held at arm’s length by the publishers unless they bow to centralized opinions and methodologies, sometimes to a fault.
Look at the medical field that was so taken over by Big Pharma during the phony pandemic, with payoffs and threats from centralized authorities. Now, as the truth is slowly being revealed, the medical field will be seeing that it has lost all trust worldwide and will have to try and regain its previous posture for the next decade or two.
Warts and all, decentralization of publishing criteria should be part of our profession. This way, truth will always have a path.
Can you elaborate on this phony pandemic? (Serious question)
As a “wet-fingered” orthodontist who has been in practice for 45+ years, and as a part-time professor at a highly respected residency program for 30+ years, I would like to see an international agreement on rating systematic reviews; for example, 5 stars = highly reliable and 1 star = questionable. We generally assume that the editors of the refereed journals filter the articles, but sometimes I wonder how well that is done. Readers are often left wondering how to evaluate a journal to decide whether it is worth perusal. Your thoughts, Kevin et al.?
I agree, there is too much poor-quality research being published at the moment. I like your idea of a quality measure for journals. I will do a blog post on this, but it won’t be popular!
If you think systematic reviews are passé, what about AI with big data mining?
I think the main issue I have is that SRs are replacing actual research and now add little, if anything, to the body of knowledge. I’m afraid it looks like laziness, especially when they are used for specialist programmes, where really the person should produce something of actual value.
I personally have been frustrated by the quality of some published research that is flawed at the study design stage. For example, a recent paper accepted by a major orthodontic journal was retrospective, which can be acceptable IF some form of matching, such as a discriminant analysis, is done. However, this was NOT done, and no test of pre-treatment equivalence was performed. When you examine the results, the plots clearly show a difference between the groups, and the findings of the study could well be due to these pre-treatment differences. Accounting for them might well have produced no significant difference between the groups, rather than the reported statistical and possibly clinical significance. This muddies the waters rather than clarifying the topic and therefore does not help the profession advance, IMHO. The same problems can be seen in some RCT designs. This then leads to the inevitable conclusion: “The overall quality of the evidence was low. There is a need for high-quality RCTs in this area.”
Study design for both RCTs and retrospective studies is critical. Perhaps it should be a requirement that ALL studies are vetted, with feedback given and acted on, before they commence if they are later to be considered for publication in at least the major journals; this would reduce, if not eliminate, these potential issues. Perhaps we should have only one journal that accepts only the highest quality of evidence… but then someone already spoke about that in 2005!
Hi, yes, I agree on the one journal. I think that we have too many journals, and the editors are put under pressure to fill them. This means that they tend to accept papers that should not be published. But which ortho societies are going to give up their journals?
Just putting the last changes into the Class III review, so I will take note of your comments. So far, it looks as if the reverse TB may be worthy of further investigation, and trials need to consider key outcome measures.
I agree journal submissions are being overwhelmed by SRs. May be a COVID effect.
Thanks for raising the issue & I hope it will stimulate change.
Hi Jayne, thanks for the comments. I look forward to seeing the updated class III review. As I remember it is a Cochrane review and will not suffer from the inclusion of retrospective studies. Yes, this could be a COVID effect of work being done remotely in lockdowns.
Ahhh… Thank you, Kevin! I am planning a literature review session with some postgrad students on the topic of “Invisalign”. To illustrate your point, go to PubMed and search the last 5 years for Clinical Trial and RCT – you get 43 hits. Now change the search to only Systematic Review and Meta-Analysis over the last 5 years… 55 results. (A scripted version of this kind of search is sketched below.)
Love your suggestion for concluding these “papers” with a study protocol – I have suggested this previously on this blog when asking for objective data on the clinical efficacy of aligners vs fixed appliances. Give me the definitive protocol/papers that have measured the accuracy, efficacy, treatment outcome, or anything objective and valid for fixed appliances in the real world, not only in university departments with small case numbers, and we can then simulate with aligners. I have also mentioned before that the closest I have come is Bill Shaw and Stephen Richmond’s amazing efforts (and you were there too, Kevin…)
VV speaks for Align Tech, and will absolutely disclose my bias to the residents!! :)
Thanks for the great comments. Yes, you are right: in many areas of ortho we have more systematic reviews than trials.
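For readers who want to repeat this kind of count for themselves, below is a minimal sketch, assuming the public NCBI E-utilities esearch endpoint and the Python requests library. The topic, publication-type filters, and date range are illustrative assumptions only, so the counts it returns will not exactly match the figures quoted in the comment above.

```python
# Illustrative sketch: counting PubMed hits for a topic by study type,
# using the public NCBI E-utilities "esearch" endpoint.
# The search term, filters, and date range below are example assumptions.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])


topic = "Invisalign"          # example topic
dates = "2017:2022[dp]"       # example date-of-publication range

trials = pubmed_count(
    f'{topic} AND ("randomized controlled trial"[pt] OR "clinical trial"[pt]) AND {dates}'
)
reviews = pubmed_count(
    f'{topic} AND ("systematic review"[pt] OR "meta-analysis"[pt]) AND {dates}'
)

print(f"Trials/RCTs: {trials}")
print(f"Systematic reviews/meta-analyses: {reviews}")
```

Changing the topic string lets you compare review and trial output for any other area of orthodontics in the same way.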
As you reported in your blog post of June 15, 2017, “Big Data and Orthodontic Research”:
“Seeing the results being generated through large datasets has made me think about the numbers of people worldwide who have orthodontic treatment. If we were able to collect just a fraction of the information about these patients then surely this would help us answer some of the questions that we face as clinicians and that are regularly discussed and debated in this blog.”
I think that gathering real-world data (RWD) is the solution.
Yes, this would be a good step to take. This approach to big data was suggested by Phil Benson from Sheffield, UK. It would certainly be more useful than multiple systematic reviews.
What exactly are the big questions that we need answering? All the researchers around the world should get together, decide, and then publish in one journal. That might help get rid of all the rubbish, though it probably won’t happen.
Here’s a start: Can we speed up tooth movement somehow? Does any headgear actually work? Does anchorage reinforcement make a difference? (I think jaw growing has been settled, except for Class III cases.) Erm, there must be a few others.
I have a list of “persistent challenges” that I compiled in my 19th & 20th year in practice. These are clinical issues I encountered fairly frequently that don’t seem to be well addressed in the literature (unless I just don’t know where to look). If anyone wants to collaborate with me on investigating these topics, let me know!
I still yearn for any orthodontic study that makes a true real-world contribution to clinical practice.
I would recommend the book “Can Medicine Be Cured?” by Seamus O’Mahony. A brilliant polemic on, amongst other things, medical research and its associated fallacies.