Evidence in Medicine. Iain K. Crombie
different outcomes. For example, in cancer trials possible outcomes would include the average survival time, disease‐free survival or quality of life. Commonly, one outcome measure is designated the primary outcome, with the other outcomes being termed secondary measures. (This is to prevent researchers from analysing many different outcomes, then highlighting the one which looks best.) Selecting the primary outcome involves difficult choices, but it greatly simplifies the interpretation of the results.
Switching Primary Outcomes
Before recruiting patients, many trials report their detailed methods in an international trial register. Several international registers have been established (e.g. ClinicalTrials.gov and the ISRCTN registry) [19, 20]. Many researchers also publish their study protocols in a medical journal. These sources allow other researchers to compare the outcome measures that were initially specified with those that are presented in the publication of the trial results. A review of outcome reporting in high‐quality neurology journals found that, across 180 trials, 21% of the specified primary outcomes had been omitted, 6% of primary outcomes were demoted to secondary outcomes and 34% of trials added previously unmentioned primary outcomes [21]. A similar pattern was seen in trials published in haematology journals, where 40% of primary outcomes had been omitted, 25% of primary outcomes were demoted and many new outcomes were added [22]. The evidence is clear that in trials across the medical specialties, primary outcomes are frequently changed [23–26].
Outcomes may be changed for good reasons, such as replacing a difficult‐to‐measure outcome with a more tractable one. But there may be other motives. Several studies have shown that the effect of substituting outcomes favours the publication of positive findings [21, 26]; that is, a non‐significant primary outcome is demoted and a significant secondary one is promoted to primary outcome. Compared to trials with unchanged outcomes, those with substituted outcomes report an increased effect size [27].
One study explored why authors had omitted or changed outcomes [28]; often this was because the researchers thought that a non‐significant result was not interesting. A review of such studies found that a preference for positive findings, and a poor or flexible research design, were the reasons most commonly mentioned for switching outcomes [29]. It seems likely that outcomes are sometimes changed based on the findings from an initial analysis.
Blinding of Outcome Assessment
A long‐standing feature of trials is that the patient, and the person who measures patient status (the outcome) at the end of the trial, should be unaware of (blinded to) the treatment the participants received. This ensures that knowledge of treatment group does not influence the way the outcome is measured.
In many trials the method of blinding outcome assessment is poor. An evaluation of 20,920 trials included in Cochrane systematic reviews found that, for blinding of participants, 31% of trials were at unclear risk of bias and a further 33% were at high risk [5]. For blinding of outcome assessment, 25% were at unclear risk and 23% at high risk of bias [5].
Another concern is whether blinding is compromised. This can happen when an intervention is sufficiently different from the control (e.g. by taste) that the patient identifies which treatment they have been given, and reveals this to the outcome assessor [30]. Few studies report whether they have assessed the risk that unblinding has occurred [31]. However one study that contacted the authors of published trials found that 43% had assessed this risk without reporting it, and that in 11% of studies it was likely that blinding had been compromised [30].
The impact of poor quality blinding on estimates of treatment effect is unclear, as review studies give conflicting results. Two overview studies found that poor blinding was associated with an increased effect size compared to well‐blinded trials [7, 32]. Another study found that this bias only occurred for subjective outcome measures [2], and a fourth reported an inconsistent effect [18]. The most recent, and largest, study concluded that there was no effect of blinding [33].
Reporting of Adverse Events
A basic principle of pharmacology is that ‘all drugs have beneficial and harmful effects’ [34], with the value of the drug depending on the benefit:harm ratio. Establishing where the balance lies can be difficult because the harms are frequently under‐reported in published clinical trials [35, 36]. One study found that 43% of adverse events recorded in trial registries were not reported in the published study [37]. A review of such studies concluded that, on average, some 64% of harms were not reported [38]. The benefit:harm ratio may often be biased because of the under‐reporting of harms [39].
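The distortion this causes can be sketched with simple arithmetic. The sketch below uses hypothetical event counts (the 64% figure is the average under‐reporting rate cited above; the benefit and harm counts are invented for illustration):

```python
# Illustrative sketch (hypothetical numbers): how under-reporting of harms
# inflates the apparent benefit:harm ratio of a drug.

def benefit_harm_ratio(benefits, harms):
    """Ratio of beneficial to harmful events; higher looks more favourable."""
    return benefits / harms

true_harms = 100          # adverse events actually recorded (hypothetical)
reporting_rate = 0.36     # only 36% reach publication if ~64% go unreported
benefits = 120            # beneficial outcomes (hypothetical)

true_ratio = benefit_harm_ratio(benefits, true_harms)
apparent_ratio = benefit_harm_ratio(benefits, true_harms * reporting_rate)

print(f"true ratio:     {true_ratio:.2f}")      # 1.20
print(f"apparent ratio: {apparent_ratio:.2f}")  # 3.33
```

With these assumed figures, a drug whose harms nearly match its benefits appears almost three times as favourable as it really is, which is the bias described in [39].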
FOLLOW‐UP AND MISSING OUTCOMES
In a trial, patients are followed up for a defined period to determine the effect of treatment. During this period some patients withdraw from the study, and contact is lost with others, so the outcome is often not measured on all patients. A rule of thumb for losses to follow‐up is that <5% loss is unlikely to cause bias and that >20% loss is serious, with losses between 5% and 20% being potentially problematic [40, 41].
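The rule of thumb above can be written out directly; note that the thresholds are notional values from the literature cited, not a formal standard:

```python
# Minimal sketch of the rule of thumb for loss to follow-up:
# <5% unlikely to cause bias, 5-20% potentially problematic, >20% serious.

def classify_loss(randomised, lost):
    """Classify the risk of attrition bias from the fraction of patients lost."""
    fraction = lost / randomised
    if fraction < 0.05:
        return "unlikely to cause bias"
    if fraction <= 0.20:
        return "potentially problematic"
    return "serious"

print(classify_loss(200, 6))    # 3%  -> unlikely to cause bias
print(classify_loss(200, 22))   # 11% -> potentially problematic
print(classify_loss(200, 50))   # 25% -> serious
```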
Extent of Loss to Follow‐up
Loss to follow‐up occurs in almost all clinical trials, although reporting of this is often incomplete. One problem is that some published papers give no information about loss to follow‐up. A study of trials in five leading journals found that 13% failed to report whether or not loss to follow‐up had occurred [42]. Other studies have found that the lack of an explicit statement about missing data occurred in 6.5% [43] and 26% [44] of published papers.
Trials that report the extent of loss to follow‐up vary greatly in the proportion of participants affected. Among 77 trials published in leading medical journals in 2013, 95% reported some missing outcome data: although the median loss was 9%, the highest reported was 70% [45]. Similarly in trials funded by a Health Technology Assessment programme, the median loss was 11%, but this ranged from 0% to 77% in individual studies [46].
In many trials loss to follow‐up exceeds the notional threshold of 20% for serious loss. The extent of this may vary across specialties. Among trials in palliative care, 23% exceeded this threshold [47]; the figure was slightly higher in osteoarthritis (26%) [44] and reached 39% in rheumatoid arthritis [48]. Among obesity trials 74% reported losses >20% [49].
Care is needed in the interpretation of loss to follow‐up rates. In a study of 21 trials, the data reported to the US Food and Drug Administration showed a markedly higher median loss to follow‐up (13%) than that in the corresponding published papers (0.3%) [50]. The authors of this study concluded that the ‘published rates consistently seem to be inadequate representations of the completeness and quality of follow‐up’.
Characteristics of Patients Lost
The type of patient lost to follow‐up is possibly more important than the number lost. If more seriously ill patients are lost, and this loss happens to a greater extent in one of the treatment groups, then substantial bias can occur. Thus when loss to follow‐up occurs, trials need to report not just the magnitude of loss, but the number and characteristics of the patients lost from each of the treatment arms.
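A small simulation makes this mechanism concrete. All numbers below are assumptions chosen for illustration: both arms have identical true outcomes, and the only difference is that sicker patients are far more likely to drop out of one arm:

```python
# Simulation sketch (assumed parameters) of attrition bias: when sicker
# patients drop out more often from one arm, the mean among completers
# overstates that arm's benefit even though the treatment does nothing.
import random

random.seed(1)

def simulate_arm(n, dropout_if_sick):
    """Outcome score ~ N(50, 10); sicker patients (score < 45) may be lost."""
    outcomes = [random.gauss(50, 10) for _ in range(n)]
    followed = [y for y in outcomes
                if not (y < 45 and random.random() < dropout_if_sick)]
    return sum(followed) / len(followed)

# Identical true outcomes in both arms; only the dropout pattern differs.
control_mean = simulate_arm(2000, dropout_if_sick=0.05)
treated_mean = simulate_arm(2000, dropout_if_sick=0.80)  # sick patients lost

print(f"control completers' mean: {control_mean:.1f}")
print(f"treated completers' mean: {treated_mean:.1f}")  # higher purely from attrition
```

The "treated" arm appears to do better by several points even though no treatment effect was simulated, which is why trials need to report the number and characteristics of patients lost from each arm.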
Trials that report the overall loss to follow‐up often do not give the data separately for intervention and control groups. One study of trials in leading medical journals found that 20% did not report the numbers missing in the treatment and control groups separately [42]. However a more recent study found that leading journals had improved, with only 3% of trials failing to report this information [45]. Another study of trials in palliative care found that 13% did not report the numbers lost to follow‐up in both treatment arms [51]. In these studies it is not clear whether treatment effects will be biased by differential loss to follow‐up.
Reporting of the types of patients lost to follow‐up is often poor. In one review, 91% of trials did not compare the characteristics of those lost to follow‐up with those successfully followed up [43]. Among 108 trials in palliative care, none compared the intervention and control groups