October 10, 2013

The Randomised Trial and the Retrospective Investigation: Looking Forward or Back?


At the last European Orthodontic Congress, a debate was held on the value of the randomised trial versus the retrospective investigation.

When I first heard about this, I felt it was a redundant topic, as it has been debated many times in the past.  I could not attend the meeting, but I was very pleased to see that the EOS has made the debate openly available on its website.

The debate was called “Randomized Controlled Trial: The gold standard or an unobtainable fallacy?” and it can be found here:  http://www.oxfordjournals.org/our_journals/eortho/ejovideo.html

The two speakers who led the discussion were Lars Bondemark, who has carried out several trials and spoke for their value, and Sabine Ruf, who has reported many retrospective case-control studies and spoke for the use of that approach.

I thought that the debate was very interesting, and the speakers went over many of the “classic” arguments for and against trials.  In summary, the conclusions were that trials should be carried out to minimise bias and that, although they are difficult to run successfully, this does not mean that we should not do them!  The opposing view, put forward by Sabine, was that while retrospective investigations do have high levels of bias, they are easier to run and inexpensive, and if we acknowledge their limitations and ensure that our collection of records is good, they provide us with useful information.  I agree with both of these viewpoints.  However, we need to be cautious when we consider case-control studies, because the ideal situation suggested by Sabine very rarely exists, owing to the way that we collect the records that we include in retrospective studies.  I shall illustrate this further.

 

The classic retrospective study


In my early research years I carried out several retrospective studies.  This was simply the way we did research then, and we had not really considered carrying out trials, so I have first-hand experience of these methods.  It is also clear that this type of study tends to be done at Masters level, as it can be completed in a short length of time.

Classically, the method adopted is to develop a study question based on the availability of records that have been previously collected.  While this is convenient, there is a problem, because the quality of the study relies on the completeness of the records.  This is very relevant when we consider that orthodontists tend to collect records of the patients whose treatment went well, as we like to collect the “precious things” and look at them from time to time.  Furthermore, we do not always collect records for the patients whose treatment did not go well.  As a result, unless we can guarantee that records have been collected for every patient who is eligible for the study, we have to assume that bias is present.  It could be suggested that this bias will be towards the positive, because the records do not include those treatments that “did not do so well”.

Can I prove this?

I fully appreciate that over the years I have put forward this argument based on the results of studies carried out in other areas of health care.  So, does this happen in orthodontics?  I have done a small pilot study to illustrate this concept and, like all pilot studies, it provides a very low level of evidence.  Nevertheless, this is what I did.

I took the data on overjet change from the three classic RCTs on the effectiveness of early treatment for Class II malocclusion. These were the studies carried out in North Carolina, Florida and Manchester.  Details of these are found in the Class II Cochrane Review http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD003452.pub2/abstract

I then took the same outcome from three retrospective case-control studies of the same question and combined the results in a simple meta-analysis.  These studies can be found at:

http://www.sciencedirect.com/science/article/pii/S0889540600386759

http://www.angle.org/doi/pdf/10.1043/0003-3219%282003%29073%3C0221%3ALEATTF%3E2.0.CO%3B2

http://ejo.oxfordjournals.org/content/29/2/140.short

The analysis revealed that the mean overjet reduction was 3.2 mm for the RCTs and 4.3 mm for the retrospective studies.  This is a difference of only about 1 mm, but we need to remember that orthodontic research deals with measurements of this magnitude.  I am going to expand this study in a more systematic way, as I am sure that it will lead to some interesting data.  I will report back in due course on this blog and in the published literature.
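
For readers who are unfamiliar with how such a comparison is pooled, the sketch below shows one simple way of combining per-study estimates with inverse-variance (fixed-effect) weighting. The figures in it are illustrative placeholders, not the data from the trials or case-control studies above, and the function name is my own.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling of study means.
# All numbers are hypothetical placeholders, NOT the data from the studies
# discussed in this post.

def pooled_mean(means, std_errors):
    """Inverse-variance weighted mean of per-study estimates."""
    weights = [1.0 / se ** 2 for se in std_errors]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Hypothetical overjet reduction (mm) and standard errors for three studies each
rct_means, rct_ses = [3.0, 3.3, 3.4], [0.30, 0.40, 0.35]
retro_means, retro_ses = [4.1, 4.4, 4.5], [0.30, 0.35, 0.40]

print("Pooled RCT estimate:           %.1f mm" % pooled_mean(rct_means, rct_ses))
print("Pooled retrospective estimate: %.1f mm" % pooled_mean(retro_means, retro_ses))
print("Difference:                    %.1f mm" %
      (pooled_mean(retro_means, retro_ses) - pooled_mean(rct_means, rct_ses)))
```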

 

What about a collection of completed cases?

In the question and answer session of the EOS debate, several people suggested that we should all collect the records of completed treatments and enter them into a database, which could then be used for research purposes.  One clear advantage of this approach is that it does not involve randomisation and is an observational study of treatments that are already being provided by experienced clinicians.  However, the only advantage over the retrospective study would arise if the patients were enrolled at the start of their treatment; in reality, this would be a prospective cohort investigation.  I have been involved in several of these studies, when it was not possible to randomise, and they involve a similar amount of work to a trial, as patients need to be entered and followed through to completion.  This methodology clearly has a place when it is not ethical to randomise; unfortunately, such studies are subject to bias because the treatment is not allocated randomly.  As a result, they are not an alternative to a trial when the operators are in equipoise and are prepared to randomise.

What are the other advantages of the prospective investigation?

Aside from the compelling advantage that, in a trial, the study question is clearly defined and drives the randomisation and data collection to minimise bias, another advantage of a prospective study is that we can plan it in advance to include patient values.  We can measure these using standard patient questionnaires on important variables, such as self-esteem, and it is also possible to build a qualitative component into trials.  This can yield incredibly useful information that is relevant to our patients.  In a retrospective study, by contrast, it is not possible to obtain this information, as all we have are study models and cephalograms.  I know that some orthodontists treat their collection of models as their closest friends, but they cannot talk to them!

 

Importantly, if we include patient values in our research, we can move orthodontic research away from the collection and reporting of tables of cephalometric figures that lack meaning, towards research that informs us of our patients’ perceptions and feelings.  This can only be done in prospective investigations, which convinces me that the question we considered earlier, on the value of trials versus retrospective studies, is even more redundant.

 

I know that my views are not universally accepted, and it would be great to have some comments back so that we can have a debate on this blog.


Have your say!

  1. I am very much in favour of randomised trials as the gold standard for gathering data. This is because in an RCT the treatment guidelines are prospective and strongly adhered to. The problem with retrospective studies is that you cannot frame a meaningful hypothesis that will stay constant during the study, because treatment procedures frequently vary between cases or clusters of cases. The end objective of each case (in a retrospective study) is to provide the best treatment outcome using a particular treatment method. This means that there are no guidelines to be adhered to between cases, e.g. extractions halfway through treatment, or case drop-outs due to non-compliance. I don’t see how you can test a good hypothesis by looking in the rear-view mirror. Some would argue that the science of palaeontology or archaeology relies precisely on this method of analysis. This is a fallacious argument, because ‘god’ does not manipulate the fossil data halfway through evolution to suit his end objectives.

  2. Thanks for the comment. I completely agree with the points that you have made. Again, looking back at my experience of carrying out retrospective studies, I was involved in several where we were tempted to change the question that we were asking as the nature of our retrospective data changed. However, we should also remember that there are some problems with trials in terms of outcome reporting bias. This occurs when the investigators change the primary outcome measure because the one that they had originally chosen was considered to result in a “negative” study. I will be posting blogs on “how to interpret an RCT” and shall be dealing with these issues then.

    • Did you mean negative results or a negative study, with no data? Negative results are good and, as scientists, we are always glad to receive robust negative results. This allows us to move forward to frame new hypotheses, and hopefully we will get more negative results (you know, the whole Popper thing). For example, if a proper study were performed to demonstrate without doubt that self-ligating brackets perform no better than conventional brackets, then the debate could be settled once and for all and we could move on. This would be a good negative result (note, I personally use self-ligating brackets). The problem is that most trials of self-ligation versus conventional brackets are not comprehensive enough, nor are the framed hypotheses sufficiently rigorous, to give us robust answers. On the other hand, a negative study, with no prospect of gaining meaningful data, is a plain waste of time and should never have been commissioned.

      • Hi again, yes I agree, and this is why I wrote “negative”. This term is commonly used for studies that do not show differences between interventions. These provide information that is of equal value to studies that show clear differences. I was interested in your comments on self-ligating brackets, and I have followed that literature very closely. I am not sure that I agree with your comments on these studies; I have always thought that they were comprehensive and certainly tested their hypotheses.

  3. I would be grateful for your thoughts on the below – these are comments made by one of the people who is working with one of the short-term orthodontics systems. I have discussed clinical evidence with them, and all they state is that there are thousands of cases being treated successfully. These are the other comments that they have made directly to me:

    “I’m sorry Ian but that is a gross mis-representation – hiding behind EBD?
    You consistently give NO evidence for round wire systems when asked.
    I am NOT trying to change Specialists from the systems they already do, BUT a lot of Traditional Ortho approaches by many DO need dragging into the 21st Century IMHO and yes that is EBD too!!!
    Rectangular wire systems ARE superior, FACT!
    Brackets are Superior, FACT!
    Moving the Roots into the correct/stable Position is Superior, FACT!
    FastBraces can achieve this for 80% of cases in a shorter timescale, with less discomfort and less complications like apical resorption in GDPs hands, FACT”!

    • Ian, thanks for sending this in. I find the statements made by the short-term ortho person rather confusing. They imply that fixed appliance treatment is carried out on round wires, and that new techniques are so much better than the orthodontics practised by trained orthodontists. They even seem to miss the point that contemporary orthodontic systems use rectangular wires!

      The thing that I find really strange about the claims for STO is that none of these “new” bracket systems is “new”; they are simply being recycled from many years ago and then sold as a system for the general practitioner. I will return to STO in future blog posts…when I have found out a lot more about them.

  4. Hi Kevin and Seong-Seng,
    Interesting discussion and I agree.
    Patient-centred outcomes should be the focus, and I have been very disappointed by the continued compilation of meaningless cephalometric comparisons interpreted solely through p-values!

    Regarding the EOS debate – which I also watched online: I found it a little misleading and not placed in the right context for the audience. Different questions require different designs (sometimes overlapping). Obviously, we should strive for RCTs where applicable, but more importantly we should understand that all studies command a place in the evidence pyramid and have pros and cons. Understanding the issues associated with each design allows us to place the evidence at the correct level. There are observational studies better designed than RCTs and vice versa; however, this does not mean that we should abandon one design in favour of the other. It is not a black-and-white situation, but rather pieces of the evidence puzzle that should be assigned the correct weight and considered in the appropriate context.
    Your study idea is interesting; as I am sure you know, it has been shown, at least in medicine, that risk of bias is associated with exaggerated treatment effects.
    Best wishes,
    Nick

    • Hi Nick and thanks for the comments. You have made good points. I certainly agree very strongly about the reporting of values that are important to patients. I really look forward to the day when an orthodontic trial is reported without any cephalometric data!

      As regards your comments on study design, again I agree. There is a perception that all studies that are not trials should be ignored. Certainly, when I was starting to work on trials, I was rather a zealot about this; now, having matured, I wonder if that approach was wrong. Information on treatment can be obtained from other study designs, particularly when randomisation is not ethically possible. We should not even ignore retrospective investigations, but we should appreciate their inherent biases and take these into account when we consider our treatments. Sadly, with many orthodontic case-control studies, the biases are so marked and uncontrolled that they are of doubtful value. I am currently writing a more “academic” post about reducing uncertainty in orthodontic research, and I will post it in a few weeks, when I have hopefully completed it.

      • Kevin,
        I agree on all counts, including the response to Ian’s questions. The thing people do not understand is that, as specialists, we can do faster and better whatever the GP can do with the next miracle appliance. The difference in treatment duration has to do with the objectives of the therapy set by the practitioner and not with the appliance. Most cases will align in less than 6 months, and if that were the only objective of orthodontic treatment then there would be no difference between “high tech” and “low tech”.
        The main problem is that it is often easier to rely on what someone else tells you than to do your own thinking.
        Best wishes,
        Nick

  5. Dr. O’Brien,

    Great points. I am involved in a clinical study of 60 patients treated with 3 different modalities. It is prospective, but we did not randomize. I am the clinician, so I am aware of the modality – a potential source of bias. I am treating them as ‘honestly’ as possible, seeking the truth. Of course, I know my study is unlikely to be taken seriously due to the lack of randomization. What to do about this? Anything or nothing…

    • Hi Manish, thanks for the message. You are correct. Because the treatment allocation was not random, there may be bias, as other factors may have influenced which treatment each patient received. I think that all you can do is report the study as it stands and draw attention to this bias in your discussion.

      • Hi
        You can also adjust in your statistical analysis for potential confounders.
        This is a common approach in observational studies.
        Different methods are available, some simpler and some more complicated; a minimal illustrative sketch appears after this thread.
        We published a paper on the fact that confounding is often incorrectly ignored in orthodontics: see Spanou et al., “Statistical analysis in orthodontic journals…”, European Journal of Orthodontics, March 3, 2015.
        Therefore, the problem with retrospective studies in our field is due not only to study limitations but also to less appropriate or incorrect data analysis.
        Best wishes
        Nick
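
To illustrate the kind of regression adjustment for confounders that Nick describes above, here is a minimal sketch using simulated data. Everything in it (the variable names, the simulated cohort, and the use of statsmodels) is my own illustrative assumption, not the method of Spanou et al.

```python
# Minimal sketch of regression adjustment for confounders in an
# observational (non-randomised) comparison. All data are simulated
# placeholders; variable names and effect sizes are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

df = pd.DataFrame({
    "age": rng.normal(12, 2, n),              # hypothetical age (years)
    "baseline_overjet": rng.normal(8, 2, n),  # hypothetical baseline severity (mm)
})
# Treatment choice depends on baseline severity: the source of confounding
df["treated"] = (df["baseline_overjet"] + rng.normal(0, 1, n) > 8).astype(int)
# Outcome depends on both treatment and baseline severity
df["overjet_change"] = (1.0 * df["treated"]
                        + 0.4 * df["baseline_overjet"]
                        + rng.normal(0, 1, n))

# Naive comparison vs. regression adjusted for the measured confounders
unadjusted = smf.ols("overjet_change ~ treated", data=df).fit()
adjusted = smf.ols("overjet_change ~ treated + age + baseline_overjet", data=df).fit()

print("Unadjusted treatment effect: %.2f mm" % unadjusted.params["treated"])
print("Adjusted treatment effect:   %.2f mm" % adjusted.params["treated"])
```

Of course, adjustment of this kind only accounts for confounders that have actually been measured, so it reduces rather than removes the bias of a non-randomised comparison.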
