THE SMILE TRIAL (part 2)

A trial so flawed as to be worthless.

The second of three blogs on the SMILE trial. The first is here and part three here.

 

The SMILE trial was ‘a pilot randomized trial with children (aged 12 to 18 years) comparing specialist medical care with specialist medical care plus the Lightning Process’.

A report was published in December 2013. The trial has been over for almost four years but the results have yet to be published. A paper was submitted to The Lancet Psychiatry in August 2016 but was unsuccessful. A paper was resubmitted (to an unnamed journal, which may or may not be The Lancet Psychiatry) on 11th May 2017 (revealed in ‘Freedom of Information Request: Reference FOI17 193 – Information from the SMILE study’; decision currently under review).

The report itself, though, reveals a number of flaws in the study:

First, the choice of trial participants meant there was no real randomization.

The trial population was drawn from the Bath and Bristol NHS specialist paediatric CFS or ME service (43 patients were excluded for being too far away). It is debatable how representative such a relatively affluent area is, particularly as a course of the LP normally costs over £600. The charity, the clinic and Phil Parker, the inventor of the LP, are all located there. It’s likely that publicity and word-of-mouth have increased awareness and demand for the course in Bristol and Bath, which in turn may have attracted people with ‘chronic fatigue’ and also made those patients more susceptible to belief in the intervention.

Whether the area is representative or not, trial participants were selected from a clinic led by a paediatrician known for a particular view of the illness and a particular approach to it.

The trial was for ‘mildly to moderately affected’ patients, a group which is more likely to include those with ‘chronic fatigue’ rather than ME. The criteria used were broad:
‘generalized fatigue, causing disruption of daily life, persisting after routine tests and investigations have failed to identify an obvious underlying “cause”. National Institute of Health & Clinical Excellence (NICE) guidelines recommend a minimum duration of 3 months of fatigue before making a diagnosis in children… Children were eligible for this study if they were diagnosed with CFS/ME according to NICE diagnostic criteria.’
There is no mention of post-exertional malaise, which is now recognized as an essential part of ME (see, for example, the report to the US Institute of Medicine), nor of impaired cognitive functioning or even disrupted sleep. In fact, there is no more than ‘generalized fatigue’ for 3 months. Different case definitions have been shown to select disparate groups of participants, and the use of such broad criteria once again increases the likelihood that patients were included who did not have ME but simply ‘chronic fatigue’.

Of the 157 eligible children, 28 declined to participate at the clinical assessment. The majority were ‘not interested’ (15) or said it was ‘too much’ (7).
Patients were only included ‘if the child and his or her family were willing to find out more about the study’.
Only patients who are prepared to join the study can, of course, be included, but such a test immediately filters the participants. Many patients with ME would know the LP to be worthless and so would not want to find out any more about the study.

It’s also true that for any trial involving children parents would need to approve, but again the same sort of bias presents itself. Participants were only chosen if both the parent and the child believed a trial of LP to be valuable. Since they held this belief, they would be invested in the trial and in making it a success, and so more likely to report improvement on self-measurement. Similarly, children may have felt encouraged or even under pressure to take part in the trial and then say they ‘felt better’ afterwards.

Even after this filtering of patients, more self-selection occurs:
59 did not return consent forms.
In other words, over half of those eligible (81/157) showed, either explicitly (by declining to participate) or implicitly (by not returning consent forms), that they did not want to take part in the trial. Then, of the remaining 69 who were contacted, another 13 declined. In all, of the 157 invited to participate, 94 chose not to.

These numbers are devastating and turn the trial into a farce:
Despite claims that patients wanted more information about the LP, most clearly were not interested. The justification for carrying out this misadventure is not supported by the evidence.
The suspicion is that patients with ME, who knew the intervention to be worthless, refused to take part, while those with ‘chronic fatigue’ enrolled.
So many patients excluded themselves, leaving only a small number prepared to go through with the trial, that any claims of randomization are baseless. The participants had self-selected.

Even after the trial started, another three participants allocated to the LP group dropped out. A mere 50 out of a possible 157 (32%) completed the trial.
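To keep the arithmetic in one place, here is the recruitment flow in a few lines of Python (a minimal sketch using only the figures quoted above; the variable names are mine, not the report’s):

```python
# Recruitment figures as quoted above.
eligible = 157   # children eligible at the clinic
opted_out = 94   # declined at assessment, never returned consent forms, or declined later
completed = 50   # actually finished the trial, after further dropouts

print(f"Chose not to take part: {opted_out / eligible:.0%}")  # ~60%
print(f"Completed the trial:    {completed / eligible:.0%}")  # ~32%
```

However the dropouts are broken down, barely a third of the eligible population completed the trial.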

Three in the SMC group left to have the LP outside the trial, which again suggests that most of the patients who took part did so only because they wanted the LP, considered it effective and saw the trial as a way of getting it at someone else’s expense.

In effect, the researchers asked for volunteers from the clinic for a free course of the LP, gave it to some and not others, then asked everyone if they were happy. The likelihood is that those who received it would be grateful and say they were content and those who did not were not. SMILE was not a randomized trial.

Second, the Specialist Medical Care (SMC) group were not properly controlled.

‘Other interventions, such as cognitive behavioural therapy or graded exercise therapy (GET), were offered to children if needed (usually if there were comorbid mood problems or the primary goal was sport-related, respectively).’
In other words, those receiving SMC were also offered other interventions and so were not receiving SMC alone.
While, of course, patients must at all times receive treatments deemed necessary by their care providers, the other interventions mean no proper evaluation can be made of the SMC group relative to the LP group. It is not an SMC group but an SMC-and-sometimes-other-things group.

These interventions were provided ‘if needed’, which again seems reasonable: if patients are deemed to need an intervention they should receive it. But it also undermines the notion of any control. Patients are being assessed during the trial. The intervention of the assessor, one who may offer other possible treatments, is enough to mean the patients are not simply getting SMC.

There was no attempt to replicate with the SMC group the non-specific conditions of the SMC + LP group. All the participants knew full well whether they were receiving a much publicized intervention which they had been told was effective, or not. Indeed, three patients allocated to the SMC group quit the trial and went off to get LP for themselves. This lack of equipoise would have worked both ways: to influence those who were getting the intervention to ‘feel better’ and to make those in the SMC group feel they were missing out and so report negatively.

It has to be acknowledged that some of these flaws were difficult to avoid: no one can be forced to take part in a trial, parents must give their consent, patients are going to know whether they are receiving the intervention or not. But the researchers didn’t take steps to mitigate them, in particular:

Third, the trial was unblinded yet used subjective outcome measures:
‘The following inventories were completed by children just before their clinical assessment (baseline) and follow-up (6 weeks and 3, 6 and 12 months): 11-item Chalder Fatigue Scale; visual analogue pain rating scale; the SF-36; the Spence Children’s Anxiety Scale; the Hospital Anxiety and Depression Scale (HADS); a single-item inventory on school attendance; and the EQ-5D five-item quality-of-life questionnaire.’

The only objective measure was school attendance, which is obviously open to confounding. So obviously, in fact, that partway through the study the measure was dropped.
‘During the study, parents and participants commented that the school attendance primary outcome did not accurately reflect what they were able to do, particularly if they were recruited during, or had transitioned to, A levels during the study. This is because it was not clear what ‘100% of expected attendance’ was. In addition, we were aware of some participants who had chosen not to increase school attendance despite increased activity.’
In a commentary on another trial, Jonathan Edwards, emeritus professor of connective tissue medicine at University College London, is clear:
‘The trial has a central flaw that can be lost sight of: it is an unblinded trial with subjective outcome measures. That makes it a nonstarter in the eyes of any physician or clinical pharmacologist familiar with problems of systematic bias in trial execution.’

Some problems in the trial design may have been difficult to overcome, but this failure to use objective outcome measures could easily have been avoided. There was no reason why they could not have used, for example, actimeters. The choice to use subjective measures renders the whole exercise worthless.
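To see why the combination is fatal, consider a toy simulation (illustrative numbers only, not the trial’s data): give both arms exactly the same true improvement, then add a modest upward shift to the self-reports of the unblinded intervention arm. Reporting bias alone can manufacture an apparently significant ‘treatment effect’:

```python
import random
import statistics

random.seed(1)

# Toy model: 25 children per arm, identical true improvement in both arms.
smc = [random.gauss(2.0, 3.0) for _ in range(25)]
lp_true = [random.gauss(2.0, 3.0) for _ in range(25)]

# Unblinded LP arm: children who know they received the much-publicized
# intervention nudge their self-reported improvement up by ~2 points.
lp_reported = [x + random.gauss(2.0, 1.0) for x in lp_true]

def mean_diff(a, b):
    return statistics.mean(a) - statistics.mean(b)

def permutation_p(a, b, trials=10_000):
    """One-sided permutation test for mean(a) - mean(b)."""
    observed = mean_diff(a, b)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        if mean_diff(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return hits / trials

print(f"Apparent extra improvement with LP: {mean_diff(lp_reported, smc):.1f} points")
print(f"One-sided p-value: {permutation_p(lp_reported, smc):.3f}")
# Both arms improved identically; the 'effect' is reporting bias alone.
```

Any fixed shift in self-reports produces the same pattern. An objective measure, such as actimeter counts, would be immune to that shift, which is precisely why an unblinded trial needs objective outcomes.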

Edwards made his comment referring to another trial, but it applies equally to SMILE. The trial was deeply, fatally flawed. It was a nonstarter.

The report brushes over the conflicts between SMC and the LP where patients are told to use different approaches. It ignores the possibility of group-think in the LP courses. It disregards the failure to recruit sufficient numbers and the self-selection of those who did participate. It ignores the obvious flaws. And it then concludes with a recommendation for a full study.

That would be an even bigger error. Only one person has benefited from SMILE and that person would gain even more from a full study: Phil Parker. How much he has benefited will be shown in part 3.

7 thoughts on “THE SMILE TRIAL (part 2)”

    1. Yes, I wonder if this is because unblinded research with subjective measures is well known to be bad science, so objective measures help please funders and ethics committees at the application/proposal stage. But it is a bit inconvenient when you need statistically significant results from inappropriate therapies, which is necessary for publication. The ideal time to drop inconvenient measures? Midway through the trial, as it isn’t serious enough to get the research stopped 😉

