A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies
MacLehose R R, Reeves B C, Harvey I M, Sheldon T A, Russell I T, Black A M S
Record ID 32001000003
English
Authors' objectives:
The aim was to investigate the association between methodological quality and the magnitude of estimates of effectiveness by systematically comparing estimates of effectiveness derived from randomised controlled trials (RCTs) and from quasi-experimental and observational (QEO) studies. Quantifying any such association should help healthcare decision-makers to judge the strength of evidence from non-randomised studies. Two strategies were used to minimise the influence of differences in external validity between RCTs and QEO studies:
- a comparison of the RCT and QEO study estimates of effectiveness of any intervention, where both estimates were reported in a single paper
- a comparison of the RCT and QEO study estimates of effectiveness for specified interventions, where the estimates were reported in different papers.
The authors also sought to identify study designs that have been proposed to address one or more of the problems often found with conventional RCTs.
Authors' recommendations:
The findings of strategy 1 suggest that QEO study estimates of effectiveness may be valid if important confounding factors are controlled for. The small size of discrepancies for high-quality comparisons also implies that psychological factors (e.g. treatment preferences or willingness to be randomised) had a negligible effect on outcome. However, the authors caution against generalising their findings to other contexts, for three main reasons:
- Few papers were reviewed, and the findings may depend on the specific interventions evaluated.
- Most high-quality comparisons studied RCT and QEO study populations that met the same eligibility criteria, which may have reduced the importance of controlling for confounding.
- The literature reviewed is likely to have been subject to some form of publication bias. Authors of papers appeared to have strong a priori views about the usefulness of evidence from QEO studies, and the findings of papers appeared to support these views.
Strategy 2 found no association between study quality and effect size for either intervention, after taking account of study design. The lack of association between quality and effect size could have arisen for a variety of reasons, the most likely being that study quality is not associated with relative risk in a predictable way or that the instrument failed to characterise methodological quality adequately.
There are several possible reasons for the finding that effect size estimates for case-control studies were significantly different from those for RCTs and cohort studies. The inconsistency of the direction of the discrepancy suggests that the direction is unpredictable and may be intervention specific. Case-control estimates of effectiveness should therefore be interpreted with extreme caution.
Several study designs that had been proposed to overcome a range of problems experienced with conventional RCTs were identified, although the reported advantages were rarely substantiated. Discrepancies between RCT and QEO study estimates should not be attributed to factors such as patient preferences by default, since there may be residual confounding. Randomising patients prior to obtaining consent can cause as many problems as it solves, but may be useful when patients have a strong preference for an intervention. Other RCT variants may have a role when the aim is to measure efficacy.
The primary aim of quantifying any association between methodological quality and effect size was thwarted by several obstacles. For objective 1, the authors were unable to draw strong conclusions because of the paucity of evidence and the potentially unrepresentative nature of the evidence they reviewed. For objective 2, the authors were unable to adequately distinguish and measure the variations in different aspects of quality between studies. The authors' recommendations relate directly to these obstacles.
Authors' methods:
Systematic review
Details
Project Status:
Completed
Year Published:
2000
English language abstract:
An English language summary is available
Publication Type:
Not Assigned
Country:
England, United Kingdom
MeSH Terms
- Random Allocation
- Research Design
Contact
Organisation Name:
NIHR Health Technology Assessment programme
Contact Address:
NIHR Journals Library, National Institute for Health and Care Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK
Contact Email:
journals.library@nihr.ac.uk
Copyright:
2009 Queen's Printer and Controller of HMSO