Toxic Numbers

Soon after the main PACE paper was published in 2011, there were numerous stories in the media about dangerous, militant ME patients who were harassing the researchers. The Science Media Centre was heavily involved in promoting this coverage. Many ME patients have suspected that these stories were part of a campaign to discredit critics of the trial. There is no question that the coverage has affected how ME is reported, part of what has been called the epistemic injustice (here and here) suffered by patients.

One of the main criticisms of the trial was that outcome thresholds were switched after the trial had finished. Some data were released after a First-tier Tribunal hearing. Reanalyses using the original thresholds found:

The claim that patients can recover as a result of CBT and GET is not justified by the data, and is highly misleading to clinicians and patients considering these treatments.

And:

These findings raise serious concerns about the robustness of the claims made about the efficacy of CBT and GET. The modest treatment effects obtained on self-report measures in the PACE trial do not exceed what could be reasonably accounted for by participant reporting biases.

This week, three of the researchers responded to the reanalyses. In the same week, Reuters carried a story saying that one of them, Michael Sharpe, had been driven out of researching the illness because the field is ‘too toxic’. The timing did not seem coincidental.

The article claims that Professor Sharpe is not alone and that ‘there has been a decline in the number of new CFS/ME treatment trials being launched’. It quotes evidence from clinicaltrials.gov:

From 2010 to 2014, 33 such trials started. From 2015 until the present, the figure dropped to around 20.

I was surprised by that figure, as research from a couple of years ago found an increase in the number of papers published on the illness. In line with the trend in medical science generally, more work was being done.

My first thought was that perhaps there was an innocent explanation for this decline: PACE was a massive trial, deemed ‘definitive’. The investigators were seen as the leading experts on this approach and the findings were said to be in line with other trials of the interventions. There would seem little point in repeating the same trial time after time.

I decided though, that I would also go to the website and check the numbers. And this is what I found:

[Chart: ME-CFS trials registered on clinicaltrials.gov, 2000–2019]

The figures given in the article, then, are not false, but they would seem to be misleading. The overall numbers are very low, so a year with more trials than usual will have a large effect. Kelland, the article’s author, chose to compare the last five years with the previous five. But if you compare the last ten years with the decade before (and, after all, the first stories about harassment came out in 2011, so ten years would seem just as reasonable), then there has in fact been a massive increase in trials being launched, not the decline described in the article. Or one could be very mischievous and point out that since Professor Sharpe announced he was quitting the field last year, the number of trials has gone up.
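To make the window arithmetic concrete, here is a minimal sketch. The per-year counts below are invented placeholders, chosen only so that the two totals quoted by Reuters (33 and about 20) come out right; the real figures are on clinicaltrials.gov.

```python
# Sketch: how the choice of comparison window drives the headline.
# The yearly counts are hypothetical placeholders for illustration only;
# only the two window totals quoted by Reuters (33 for 2010-2014 and
# about 20 for 2015 onwards) come from the article.

def window_total(counts, start, end):
    """Sum trial starts for the years start..end inclusive."""
    return sum(n for year, n in counts.items() if start <= year <= end)

counts = {2005: 1, 2006: 2, 2007: 2, 2008: 2, 2009: 3,   # hypothetical
          2010: 5, 2011: 6, 2012: 8, 2013: 7, 2014: 7,
          2015: 4, 2016: 4, 2017: 4, 2018: 3, 2019: 5}

print(window_total(counts, 2010, 2014), window_total(counts, 2015, 2019))
# 33 20 -> the five-year comparison shows a 'decline'
print(window_total(counts, 2000, 2009), window_total(counts, 2010, 2019))
# 10 53 -> the ten-year comparison shows a large increase
```

The same underlying counts support either headline; the only thing that changes is where the window boundaries are drawn.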

There is a further point. The article says that he and others are being driven out of research because some patients do not like their approach as it ‘suggests their illness is psychological’. No researcher investigating ME as a physiological illness has made such complaints. If the claim were true, then, trials taking this behavioural approach should show a decline, while others, looking at possible drug treatments, for example, would be unaffected. As the table shows, though, there is no such trend.*

Finally, the number of trials listed seemed very low to me. I looked at some of them. They include acupuncture studies from China but, as far as I could see, the listings do not include the PACE trial (or SMILE), so the numbers are probably not very reliable anyway.

An article standing up for the PACE researchers uses dodgy numbers.

As for the substance of Professor Sharpe’s accusations, he has himself described what he finds offensive: ‘attempts to damage his reputation and get his papers retracted’. Many would not agree that this amounts to trolling.

It should also be pointed out that he himself has been described as ‘intemperate’; someone who ‘lashes out’ and who ‘cannot tell the difference between disagreement and defamation’.

Claims of bullying and harassment driving researchers from the field are often made by those with power against those without:

It’s a clever strategy, in its way. But it’s also the signet ring of weak people defending weak ideas.

 

Added 17/03/2019:

*It has been suggested that there is in fact a discernible trend in the lower numbers of behavioural trials, but with such small numbers there is no way of knowing whether that is a real trend, a slight blip in 2009, or simply noise from which nothing can be concluded. It could be argued, for example, that one third of all clinical trials for ‘CFS/ME’ in 2018 were behavioural.

The same point about the reliability of the search applies, since neither PACE nor SMILE is listed.

And even if there were some sort of trend, there is no way of knowing the impact, for example, of the IOM report and the change in focus by the NIH.

 

Even the Ethics Committee says the PACE authors should share the patient-level data, so why does PLOS ONE not enforce its regulations and compel them to?

In 2012 a cost-effectiveness analysis of the PACE trial was published by PLOS ONE. One of the conditions of publication was an agreement by the authors ‘to make freely available any materials and data described in their publication that may be reasonably requested for the purpose of academic, non-commercial research’.

A number of requests have been made to the authors for the patient-level data. These requests have been rejected.

PLOS ONE has issued an Expression of Concern, but has not obliged the authors to make the data available. In response, the authors state they are ‘surprised by and question the decision by the journal to issue this Expression of Concern’. They continue to reject all requests for patient-level data. In their Statement, they make it clear why:

‘During negotiations with the journal over these matters, we have sought further guidance from the PACE trial R[esearch] E[thics] C[ommittee]. They have advised that public release, even of anonymised data, is not appropriate.’

The PACE trial REC is a public body and its advice to the authors was a public document covered by the FOIA, so I made a request asking for a copy (available here).

The REC does indeed advise against public release of the data, though for what many would consider a spurious reason and one beyond its remit, namely because of the controversy surrounding the illness (‘a contentious matter upon which there are divergent views’).

However, the REC also made clear that the data should be shared with an independent researcher or institution, which would then ‘receive the anonymised data and produce a report’. In its response to my request the REC summarized its position as follows (email available here):

…from a transparency perspective the sharing of data would be the HRA’s normal recommended course of action. However as suggested in the attached email by the REC Chair, and from our discussions with other parties involved we understand the data may be made accessible via a secure site to other independent researchers wishing to analyse the data.

The REC’s view then is that there should not be public release of the data but that the data should be shared with independent researchers.

So why does PLOS ONE not enforce its conditions of publication and oblige the authors to provide the data to the researchers who have requested them or retract the paper?

Is PLOS ONE saying the researchers who have requested the data, such as Professor Coyne (see also here and here), are not ‘independent’?

If those researchers are not deemed sufficiently ‘independent’, then why doesn’t PLOS ONE nominate an independent scientist and instruct the authors to comply with the regulations and the wishes of the REC and provide that scientist with the patient-level data or retract the paper?

How does this refusal by PLOS ONE meet the claim by its editor to be committed to ‘full transparency’?

Where does PLOS ONE’s failure to abide by its regulations leave all the academics who work for it as unpaid peer-reviewers, bloggers and editors because they support open science?

And why do the PACE authors continue to refuse access to the patient-level data even when the REC says they should provide it to independent researchers?

 

The PACE trial and why the data need to be shared

PACE was the ‘definitive’ trial of two interventions for myalgic encephalomyelitis (ME), sometimes known as chronic fatigue syndrome (CFS): cognitive behavioural therapy (CBT) and graded exercise therapy (GET). The main paper was heavily promoted in the media as providing evidence that CBT and GET could help patients ‘get back to normal’.

A number of criticisms have been made about the design of the trial, in particular: First, the trial used very broad criteria for diagnosis of the illness, which could include patients with general fatigue and/or mental health problems. Such patients are known to respond to the interventions. Second, the investigators failed to guard against the powerful distorting effect of researcher bias. Third, there was a heavy reliance on self-report measures in an unblinded trial (where both patients and therapists knew who was and who was not receiving the interventions). It has long been known that the use of subjective measures in an ‘open-label trial’ yields biased results.

There has also been considerable criticism of the decision, which was taken after the trial had been completed, to deviate from the protocol and make a major change in the thresholds in the outcome measures. The investigators altered how they measured whether an intervention was effective or not. According to the measures they eventually used, a participant could actually get worse in the trial — their score could go down during the period when they were receiving treatment — and yet they could still count as evidence for the effectiveness of the treatment. 13% of patients on this new scoring were already ‘recovered’ on a measure before they had even entered the trial. In other words, they were both ill enough to qualify to participate in the trial and ‘recovered’.
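To see how a participant could be both eligible and ‘recovered’, here is a minimal illustration using the SF-36 physical function thresholds reported in the published critiques: trial entry required a score of 65 or less, the protocol defined recovery as 85 or more, and the revised definition lowered this to 60 or more. Treat the exact numbers as assumptions for the purpose of the sketch.

```python
# Overlap between trial entry and post-hoc 'recovery' thresholds on the
# SF-36 physical function scale (0-100). Thresholds as reported in the
# published critiques of PACE; treat them as assumptions for this sketch.
ENTRY_MAX = 65              # ill enough to enter the trial
PROTOCOL_RECOVERY_MIN = 85  # recovery as pre-specified in the protocol
REVISED_RECOVERY_MIN = 60   # recovery as used in the published papers

def status(score):
    eligible = score <= ENTRY_MAX
    recovered = score >= REVISED_RECOVERY_MIN
    if eligible and recovered:
        return "eligible AND already 'recovered' (revised threshold)"
    return "eligible" if eligible else "not eligible"

for score in (55, 60, 65, 85):
    print(score, "->", status(score))
# Scores of 60-65 fall in the overlap: sick enough to enter the trial,
# yet counted as 'recovered'. With the protocol threshold (>= 85) no
# such overlap is possible.
```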

The investigators, the responsible authorities and the main funder, the MRC, have always said they were prepared to provide access to the data to ‘bona fide researchers’. Patient-level data were shared with a few researchers, who are known to support the interventions, for a Cochrane systematic review (though note criticism of Cochrane reviews of interventions for this illness here, here and here, comments by among others Bob Courtney and Tom Kindlon, and reanalysis by Mark Vink and Alexandra Vink-Niese).

When researchers, patient organizations and patients have asked for the data in order to conduct independent analyses using the measures set out in the protocol, these requests have been rejected. These rejections led the First-tier Tribunal (FTT) in Matthees to express concern that the investigators may have been cherry-picking sympathetic researchers to analyse their data in an attempt to suppress criticism and proper scrutiny of their trial.

An open letter signed by many independent statisticians and scientists, along with MPs, lawyers and patient organizations, has been sent to The Lancet calling for the release of the data for independent analysis. There has been no response.

After a Freedom of Information Act (FOIA) request by Alem Matthees, the FTT did order the release of some data. In the unanimous part of its ruling, the FTT was clear: there is no legal or ethical consideration which prevents release of the trial data; there is a strong public interest in release; making data available advances legitimate scientific debate; and where possible data should be released.

These data were analysed using the outcomes specified by the investigators in the trial protocol. It was found:

When recovery was defined according to the original protocol, recovery rates in the GET and CBT groups were low and not significantly higher than in the control group.

And:

These findings raise serious concerns about the robustness of the claims made about the efficacy of CBT and GET. The modest treatment effects obtained on self-report measures in the PACE trial do not exceed what could be reasonably accounted for by participant reporting biases.

Since the decision in Matthees, the responsible authority, QMUL, has announced a trial data-sharing policy but has rejected a number of FOI requests for access to more data and continues to deny access to anyone critical of the trial, including highly qualified academics.

A Response to Fiona Fox

This is in response to the blog post by Fiona Fox who was reacting to criticism of the Science Media Centre (SMC) for its support of the SMILE trial (1). Although both her blog and this post refer to myalgic encephalomyelitis (ME), the issues are much broader and raise major questions about the organization.

 

Two things immediately stand out. First, in a blog post claiming that she and the SMC are free of bias, Fox reveals her bias: she links ME to climate change and to animal experiments; she makes a loaded comment about ‘vocal critics’; she refers to ‘ME activists’; she contrasts ‘ME activists’ with friends ‘in science’. Second, in an attempt to address questions about SMC bias, Fox responds with anecdotes. ‘I’m not biased: some of my best friends are…’

For Fox’s response to carry any weight, she needed to talk about transparency and procedures. A number of questions remain unanswered:

1. What steps are taken to ensure SMC governors have no influence on day-to-day decisions taken by the SMC?
Simon Wessely, for example, is seen as one of the creators of the ‘false belief’ model of ME. He is also a governor of the SMC. How does the SMC ensure he has no influence, including indirect influence, over anything the SMC has to do with ME?

2. Who decides what research is covered by the SMC? And on what basis?

3. Who decides and on what basis which researchers are invited to the SMC to give a briefing?
Why, for example, was Esther Crawley asked to do a presentation?
Why was Fiona Fox herself at the presentation? Why is Fiona Fox, who is known to have a particular view of ME (apparently one can inoculate oneself against ME by belonging to the Revolutionary Communist Party), allowed to have anything to do with the SMC’s work on ME?

4. Who decides and on what basis which researchers are supported by the SMC?

5. Who decides and according to what criteria who counts as an ‘expert’?

6. Who decides and on what basis which ‘experts’ are asked to respond to any particular piece of research?

7. What steps are taken to ensure that the experts who do respond are not self-selecting?

8. Why are experts with a clear conflict of interest allowed to give reactions to research?
Michael Sharpe is deeply involved in promoting the ‘false belief’ model of ME. Why was he allowed anywhere near the response to the SMILE trial?

Dorothy Bishop has a particular view of ME ‘as someone who is familiar with the condition both from family members and colleagues’ (2). Bishop considers criticism of the PACE trial amounted to an ‘orchestrated and well-funded harassment campaign’. Why was she allowed to give a reaction to the SMILE trial?

Alastair Sutcliffe is Professor of General Paediatrics at UCL, where paediatrician Crawley, lead investigator on SMILE, did her PhD. Is there any link between the two which may create a possible conflict of interest?

9. How does the SMC ensure the ‘expert reaction’ is balanced?

10. Who determines and on what basis whether this balanced ‘expert reaction’ has been achieved?

11. What steps are taken to ensure that research supported by SMC funders is not treated more favourably?

An organization set up ‘to promote more informed science’ doesn’t seem to understand the nature of bias or the concept of a biased sample.

As every GCSE student knows, everyone is biased and the easiest person to fool is oneself. If the SMC were serious about avoiding bias, then it would have proper procedures in place to guard against its own. Fox should not need to resort to a few stories to make her case, but should be able to point to established safeguards.

The problem for the SMC, though, is that as an institution it is inherently biased. It exists to be biased. A handful of people have set themselves up as judges of what does and does not constitute ‘good science’ and who does and does not qualify as an ‘expert’. The SMC is predicated on an idea that there is one true science and that a group of ‘wise people’ can find other ‘wise people’ to make sure the media get the true picture.

It’s not necessary to be a Gove-ian sceptic of experts or a devout believer in the Kuhn cycle to know that this view of science is deeply flawed. With new methods, new technology and new approaches, science, like all knowledge, is constantly evolving. As so often happens, by concentrating on one problem (false equivalence), the SMC magnifies the risk of others (e.g. false consensus). The SMC is a brake on science’s self-correction.

That Fox either does not understand or cannot see these problems is deeply concerning. What is more, her blog post reveals something else. The gushing, schoolgirlish tone (3) is not just a question of personal style. Perhaps because of her background, Fox does not seek to argue her point, but to charm. We are the good guys, she is saying. Join us, be in our gang and on the side of true science. If you don’t, you’ll be one of them. You’ll be on the dark side, with the ME activists, the ‘vocal critics’, the climate-change deniers, the animal-rights extremists. It’s politics, and not good politics at that; rhetoric and not substance.

The SMC was set up in 2000. Perhaps it did then have a purpose, but it is hard to see why it exists today. Any science journalist worth their pay has no need for it. In the time it would take to go to the SMC for a briefing, they could read the study, contact half a dozen ‘experts’ whose views they trust, and send an email to the investigator with any questions. Everyone now is online and most researchers are on social media. Access to scientists is easy; there is no need for a go-between.

The existence of the SMC reveals a collective failure of nerve by science journalists in the UK. They have contracted out their job to a self-appointed group led by someone with no scientific qualifications. Journalists who are churning out SMC briefings are effectively saying they can’t do their own jobs. They need Fiona Fox to decide which research is important and which ‘experts’ are trustworthy.

The SMC has nothing to offer on the current issues in science. With major concerns such as the ‘reproducibility crisis’, the SMC is more of a hindrance than a help. It is entrenching the status quo; it is delaying the funerals. Fox’s failure to comprehend these issues reveals she is unsuited to her role, but in any case the Science Media Centre is no longer, if it ever has been, part of the solution. It’s become part of the problem.

 

 

1. Criticism of the trial here:
https://www.coyneoftherealm.com/blogs/mind-the-brain/embargo-broken-bristol-university-professor-to-discuss-trial-of-quack-chronic-fatigue-syndrome-treatment

And here (note, I was involved in a small way with this post):
https://www.coyneoftherealm.com/blogs/mind-the-brain/the-smile-trial-lightning-process-for-children-with-cfs-results-too-good-to-be-true

And here (pdf):

MEA-Review-The-SMILE-Trial-12.10.17.pdf

2. Quotation from email to me of 15/03/2015.

3. Reading Fox’s blog, this 1999 article from the Guardian came to mind (HT & #FF Charles Turner)

In particular these paragraphs:

‘And all around the room, pouring wine for the panellists, offering tiny pastries, and gently inquiring about everyone’s careers and interests, or simply posing, very upright, against the shiny white walls, were the correct young staff of Living Marxism.

The men wore suits, or close-fitting shirts with pressed trousers. They had disciplined hair: shaven, cropped or gelled back. Their shoes were gleaming as tap dancers’. As they stood in twos and threes, clicking their heels, coughing into their palms and clasping their hands behind their backs, something else about them became apparent. They were mostly wearing black: black shirts and black ties, black socks and black polo necks, everything spotless.

The women were similar. They wore suits and tied-back hair, or short skirts and tight tops. Few of them seemed older than 30. And, like their male colleagues, who slightly outnumbered them, they asked lots of questions. They always made eye contact. They smiled a lot, and stood very close, and tried fleeting, flirty touches. Near the end of the reception, at about midnight, a well-dressed couple in their thirties walked across. They had, as the man put it, “a driving situation”. Could I drive? Would I drive them home? He did not say where they lived. Their eyes shone pleadingly, but they seemed quite sober. We had known each other for all of a minute.’


THE SMILE TRIAL (part 3)

The only one to benefit is Parker.

The last of three blogs on the SMILE trial. Part one is here and part two here.

 

The SMILE trial was deeply flawed: criteria used were too broad, participants self-selected, it was not properly controlled and it relied upon subjective measures when participants were unblinded. Like the Lightning Process (LP) itself, it is worthless.

It was completed almost four years ago, but the results have still not been published. The concern is that this poorly conducted trial, based on criteria wide enough to include patients with generic ‘chronic fatigue’, did indeed find that some patients reported subjective improvement. If so, SMILE did nothing more than provide false scientific justification for quackery.

The trial has already been of considerable benefit to Phil Parker, a man with no professional qualifications who has designed an intervention with no scientific basis: he is said to receive a fee from each LP provider for every course participant. Since the trial was in the Bristol and Bath area, he may well himself have been one of the LP providers as he ‘leads experienced teams’ in the region.

He has also used the mere fact of the trial as a form of endorsement on his site:
NHS and LP
The Lightning Process has been working with the University of Bristol and the NHS on a feasibility study; full information can be found here.  Two papers have been published and you can find a link to them both here:
1. The feasibility and acceptability of conducting a trial of specialist medical care and the Lightning Process in children with chronic fatigue syndrome: feasibility randomized controlled trial (SMILE study)
2. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial)

The trial acts as an advertisement not just to patients but to potential trainers. The only way to ‘qualify’ as an LP provider is via one of Parker’s own courses which cost £2100 (including VAT). Parker insists anyone who wants to continue as a certified provider must pay him an annual licence of £495 (incl VAT) in the UK or £750 internationally. These practitioners then go out to find more potential patients to generate more money for Parker.

This misadventure was funded by the Linbury Trust and the Ashden Trust for £172,200.

The actual cost of the trial is unknown as staff and NHS costs, so-called non-core costs, cannot be calculated. They were paid by public funds and must be added to that figure for the total trial expenditure.

I made a Freedom Of Information Act request to the University of Bristol to discover how much they paid for these courses and to find out if there was some kind of arrangement with Parker. At first they refused to tell me how much the courses cost, but an ICO decision rejected their claims the information was exempt from the Act.

The mean cost of a course for trial participants was £567, less than the figure (£620) given in the paper for the then current approximate cost. I asked the university if Parker offered a cut of some kind, but they replied: ‘There is no information held relating to any discount or special deal that was arranged with the providers of ‘Lightning Process’ courses.’ If there was some sort of discount, then Parker not only benefited from the trial and had a financial interest in the outcome but actually subsidized it and so effectively part-funded it.

Twenty-five of those assigned to the intervention went through with the course: 25 × £567 = £14,175.

Over £14,000 wasted on a junk intervention.

If there is a ‘full study’ as Crawley concludes there should be, there’ll be a lot more spent on this quackery. Any slight evidence of effect will be used not just to promote this nonsense further but also, no doubt, to claim his mumbo-jumbo should be funded by the NHS.

The only person to benefit from this costly disgrace is Parker.

 

See also:

http://blogs.plos.org/mindthebrain/2016/09/23/before-you-enroll-your-child-in-the-magenta-chronic-fatigue-syndrome-study-issues-to-be-considered/

https://www.coyneoftherealm.com/search?q=smile+trial

http://www.skepdic.com/lightningprocess.html

 

 

THE SMILE TRIAL (part 2)

A trial so flawed as to be worthless.

The second of three blogs on the SMILE trial. The first is here and part three here.

 

The SMILE trial was ‘a pilot randomized trial with children (aged 12 to 18 years) comparing specialist medical care with specialist medical care plus the Lightning Process‘.

A report was published in December 2013. The trial has been over for almost four years but the results have yet to be published. A paper was submitted to The Lancet Psychiatry in August 2016 but was unsuccessful. A paper was resubmitted (to an unnamed journal, which may or may not be The Lancet Psychiatry) on 11th May 2017 (revealed in ‘Freedom of Information Request: Reference FOI17 193 – Information from the SMILE study’; decision currently under review).

The report itself, though, reveals a number of flaws in the study:

First, the choice of trial participants meant there was no real randomization.

The trial population was drawn from the Bath and Bristol NHS specialist paediatric CFS or ME service (43 patients were excluded for being too far away). It is debatable how representative such a relatively affluent area is, particularly as a course of the LP normally costs over £600. The charity, the clinic and Phil Parker, the inventor of the LP, are all located there. It’s likely that publicity and word-of-mouth have increased awareness and demand for the course in Bristol and Bath, which in turn may have attracted people with ‘chronic fatigue’ and also made those patients more susceptible to belief in the intervention.

Whether the area is representative or not, trial participants were selected from a clinic led by a paediatrician known for a particular view of the illness and a particular approach to it.

The trial was for ‘mildly to moderately affected’ patients, a group which is more likely to include those with ‘chronic fatigue’ rather than ME. The criteria used were broad:
‘generalized fatigue, causing disruption of daily life, persisting after routine tests and investigations have failed to identify an obvious underlying “cause”’. ‘National Institute of Health & Clinical Excellence (NICE) guidelines recommend a minimum duration of 3 months of fatigue before making a diagnosis in children… Children were eligible for this study if they were diagnosed with CFS/ME according to NICE diagnostic criteria.’
There is no mention of post-exertional malaise, which is now recognized as an essential part of ME (see, for example, the report to the US Institute of Medicine), nor of impaired cognitive functioning or even disrupted sleep. In fact, there is no more than ‘generalized fatigue’ for 3 months. Different case definitions have been shown to select disparate groups of participants and the use of such broad criteria increases once again the likelihood that patients were included who did not have ME but instead simply ‘chronic fatigue’.

Of the 157 eligible children, 28 declined to participate at the clinical assessment. The majority were ‘not interested’ (15) or said it was ‘too much’ (7).
Patients were only included ‘if the child and his or her family were willing to find out more about the study’.
Only patients who are prepared to join the study can, of course, be included, but such a test immediately filters the participants. Many patients with ME would know the LP to be worthless and so would not want to find out any more about the study.

It’s also true that for any trial involving children parents would need to approve, but again the same sort of bias presents itself. Participants were only chosen if both the parent and the child believed a trial of LP to be valuable. Since they held this belief, they would be invested in the trial and in making it a success, and so more likely to report improvement on self-measurement. Similarly, children may have felt encouraged or even under pressure to take part in the trial and then say they ‘felt better’ afterwards.

Even after this filtering of patients, more self-selection occurs:
59 did not return consent forms.
In other words, over half those eligible (81/157) explicitly (by declining to participate) or implicitly (by not returning the consent forms) showed they did not want to take part in the trial. And then of the remaining 69 who were contacted, another 13 declined. Of the 157 contacted and invited to participate, 94 chose not to.

These numbers are devastating and turn the trial into a farce:
Despite claims patients wanted more information about the LP, most clearly are not interested. The justification for carrying out this misadventure isn’t supported by the evidence.
The suspicion is that patients with ME, who knew the intervention to be worthless, refused to take part, while those with ‘chronic fatigue’ enrolled.
So many patients excluded themselves, leaving only a small number prepared to go through with the trial, that any claims of randomization are baseless. The participants had self-selected.

Even after the trial started another three allocated to the LP group dropped out. A mere 50 out of a possible 157 (32%) completed the trial.

Three in the SMC group left to have the LP outside the trial, which again suggests that most of the patients who took part did so only because they wanted the LP, considered it effective and saw the trial as a way of getting it at someone else’s expense.

In effect, the researchers asked for volunteers from the clinic for a free course of the LP, gave it to some and not others, then asked everyone if they were happy. The likelihood is that those who received it would be grateful and say they were content and those who did not were not. SMILE was not a randomized trial.

Second, the Specialist Medical Care (SMC) group were not properly controlled.

‘Other interventions, such as cognitive behavioural therapy or graded exercise therapy (GET), were offered to children if needed (usually if there were comorbid mood problems or the primary goal was sport-related, respectively).’
In other words, those receiving SMC were also offered other interventions and were not just receiving SMC.
While, of course, patients must at all times receive treatments deemed necessary by their care providers, the other interventions mean no proper evaluation can be made of the SMC group relative to the LP group. It is not an SMC group but an SMC-and-sometimes-other-things group.

These interventions were provided ‘if needed’, which again seems reasonable: if patients are deemed to need an intervention they should receive it. But it also undermines the notion of any control. Patients are being assessed during the trial. The intervention of the assessor, one who may offer other possible treatments, is enough to mean the patients are not simply getting SMC.

There was no attempt to replicate with the SMC group the non-specific conditions of the SMC + LP group. All the participants knew full well whether or not they were receiving a much publicized intervention which they had been told was effective. Indeed, three patients allocated to the SMC group quit the trial and went off to get the LP for themselves. This lack of blinding would have worked both ways: influencing those who were getting the intervention to ‘feel better’ and making those in the SMC group feel they were missing out and so report negatively.

It has to be acknowledged that some of these flaws were difficult to avoid: no one can be forced to take part in a trial, parents must give their consent, patients are going to know whether they are receiving the intervention or not. But the researchers didn’t take steps to mitigate them, in particular:

Third, the trial was unblinded yet used subjective outcome measures:
‘The following inventories were completed by children just before their clinical assessment (baseline) and follow-up (6 weeks and 3, 6 and 12 months): 11-item Chalder Fatigue Scale; visual analogue pain rating scale; the SF-36; the Spence Children’s Anxiety Scale; the Hospital Anxiety and Depression Scale (HADS), a single-item inventory on school attendance and the EQ-5D five-item quality-of-life questionnaire.’

The only objective measure was school attendance, which is obviously open to confounds. So obviously, in fact, that part way through the study the measure was dropped.
‘During the study, parents and participants commented that the school attendance primary outcome did not accurately reflect what they were able to do, particularly if they were recruited during, or had transitioned to, A levels during the study. This is because it was not clear what ‘100% of expected attendance’ was. In addition, we were aware of some participants who had chosen not to increase school attendance despite increased activity.’
In a commentary on another trial, Jonathan Edwards, emeritus professor of connective tissue medicine at University College London, is clear:
‘The trial has a central flaw that can be lost sight of: it is an unblinded trial with subjective outcome measures. That makes it a nonstarter in the eyes of any physician or clinical pharmacologist familiar with problems of systematic bias in trial execution.’

Some problems in the trial design may have been difficult to overcome, but this failure to use objective outcome measures could easily have been avoided. There was no reason why they could not have used, for example, actimeters. The choice to use subjective measures renders the whole exercise worthless.

Edwards made his comment referring to another trial, but it applies equally to SMILE. The trial was deeply, fatally flawed. It was a nonstarter.

The report brushes over the conflicts between SMC and the LP where patients are told to use different approaches. It ignores the possibility of group-think in the LP courses. It disregards the failure to recruit sufficient numbers and the self-selection of those who did participate. It ignores the obvious flaws. And it then concludes with a recommendation for a full study.

That would be an even bigger error. Only one person has benefited from SMILE and that person would gain even more from a full study: Phil Parker. How much he has benefited will be shown in part 3.

THE SMILE TRIAL (part 1)

Why the trial should never have been allowed in the first place.

The first of three blogs on the SMILE trial. Part two is here.

 

There is no evidence the Lightning Process (LP), a mish-mash of elements of cognitive behavioural therapy, neurolinguistic programming, hypnotherapy, life coaching and osteopathy, is anything other than quackery. For decades Phil Parker has made claims for its efficacy, including as a treatment for myalgic encephalomyelitis (ME), but no proper trial has ever supported these claims.

The Advertising Standards Authority (ASA) guidance is clear:
To date, neither the ASA nor CAP has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.

There are people who claim to have been helped, of course, but such claims are made for all bogus therapies. It seems that some people are simply susceptible to these interventions. In addition, perhaps there are those who have become stuck in a rut, experiencing a generic chronic fatigue, believing themselves to have ME, and who are helped to kickstart their lives again by the LP. Since there is no biomarker for ME, diagnosis of the illness can be difficult: 40% of patients in an ME clinic may not actually have ME.

There is currently no treatment for ME, so it is understandable that some patients would be easy prey for and would seek more information about interventions hawked about with exaggerated claims.

Parents of children with ME were apparently contacting the charity Association of Young People with ME (AYME) (1) and asking whether it was worth trying the LP. Bewilderingly, Esther Crawley, a Bristol paediatrician and then medical adviser to AYME, instead of telling patients and parents that the LP had no scientific basis and was not worth the considerable amount of money it costs, decided to do a trial. Just as bewilderingly, the SMILE trial received funding and ethical clearance.

First, this trial should never have been allowed. Good science is not just about evidence, but about plausibility, so any such trial immediately gives a spurious credibility to the LP. Asking a question, even sceptically, can offer an implicit endorsement of its premises.

Second, it was the first study of any kind to use the Lightning Process, and it was doing so with children. There had been no opportunity to measure harms: there have been reports of patients who do not respond to the LP who then blame themselves and in desperation contemplate killing themselves. Exposing vulnerable adolescents to such a potential risk would seem particularly irresponsible.

Third, LP patients are made to accept a number of onerous conditions (such as taking responsibility for their illness) before taking the course. It is ethically questionable to ask trial participants to agree to such conditions in order to take part in a trial of a possible treatment for their illness. Making these demands of children would seem even more ethically dubious.

Fourth, patients are told to ignore their symptoms and to resume normal activity (from the SMILE study):
‘It has been a bit confusing, I have to say, because obviously we have got the [Lightning Process practitioners] approach, where, “Right, finally, done this, now you don’t need to do the pacing; you can just go back to school full time.” I think, the physical side of things, YP9 has had to build herself up more rather than just suddenly go back and do that’.

Research, backed up by patient surveys, shows the harms caused by exertion in patients with ME (see Kindlon). The recent report to the US Institute of Medicine found post-exertional malaise to be so central to the illness that it suggested a new name: systemic exertion intolerance disease or SEID. Even in disputed clinical trials such as PACE, which used graded exercise therapy, patients are monitored by physiotherapists and nurses and plan a gradual increase in activity. Here service providers with no professional qualifications simply tell child patients that after three sessions in three days they should return to normal activity. It is deeply irresponsible.

Fifth, to anyone with genuine ME, that is ME as defined by the International Consensus Criteria, the Lightning Process is a form of torture. It is a physical torture simply to complete the course, again from the SMILE study:
In addition to specialist medical care, children and their parents in this arm were asked to read information about the Lightning Process on the internet. They then followed the usual LP procedure (reading the introductory LP book or listening to it in CD form) and completing an assessment form to identify goals and describe what was learnt from the book. On receiving completed forms, an LP practitioner telephoned the children to check whether they were ready to attend an LP course. The courses were run with two to four children over three sessions (each 3 hours 45 minutes) on three consecutive days.

That is a very heavy burden. The homework is taxing enough, but then to undergo three sessions of almost four hours each on three consecutive days is immense. The effort, the intensity and the busyness would be punishment to anyone hypersensitized by the illness.

It is also a form of emotional torture, as fundamental to the process is the demand that patients take responsibility for their health, their illness and their recovery (from here, here, here and here):
LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

the Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

This responsibility is an enduring one: patients must continue to apply the training to their lives after their course and accept that improvement in their health lies entirely within themselves.

To take chronically ill patients, who want only to get better, and spend three days attempting to brainwash them into believing their illness and recovery lie within their control is deeply unethical. Adult patients, blaming themselves for lack of improvement in the days after enduring this nonsense, have been left in such depths of despair as to want to take their own lives. To expose chronically ill adolescents to such a danger was extraordinarily irresponsible.

Of course, with the broad criteria and the self-selection involved in determining who took part in the trial, it may well be that not a single participant actually had ME but had instead simply ‘chronic fatigue’. That would be even worse, though: the results may show that the LP has some effect with ‘chronic fatigue’ but would be used to claim effectiveness for patients with ME. Many children who genuinely do have ME could be gulled into paying for this nonsense only, potentially, to do themselves considerable harm.

This trial was unnecessary, gave spurious credibility to quackery and was unethical. It was also very poorly conducted, as will be shown in part 2.

 

1. AYME has now ceased trading and its role has effectively been taken over by Action for ME https://www.actionforme.org.uk/children-and-young-people/introduction/

 

 

A Response to the blog by Puebla and Heber of PLOS ONE

PLOS ONE has issued an expression of concern regarding a cost-effectiveness analysis of results from the PACE trial. It did so because the authors will not provide data requested by Professor Coyne. The PLOS ONE regulations in force at the time stated:

‘Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.’

Queen Mary University of London (QMUL) is the responsible authority for the PACE trial, but for the purposes of this particular paper the role is played jointly by King’s College London (KCL) and QMUL. KCL and QMUL continue to refuse to release the data.

Iratxe Puebla, Managing Editor for PLOS ONE, and Joerg Heber, Editor-in-Chief of PLOS ONE, have written a blog giving their view of the arguments involved and an insight into some of their thinking in issuing the expression of concern.

One line gives cause for concern and I have written a response. I did post this response as a comment underneath their blog, but after more than 24 hours the comment still remains ‘awaiting moderation’. I have therefore decided to publish it here.

It is worth reading also the dismissive, arrogant, ignorant letter to Professor Coyne from KCL refusing his request for the data.
https://dl.dropboxusercontent.com/u/23608059/PACE%20F325-15%20-%20Prof.%20James%20Coyne%20-%20Response-2.pdf

 

(Note: the comment has now been approved on the PLOS ONE site.)

 

My response:

“Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.”
This line is misleading and reveals either ignorance or misunderstanding of the decision in Matthees.

The Information Tribunal (IT) is not a committee. It is part of the courts system of England and Wales.

First, the IT’s decisions may be appealed to a higher court. Since QMUL chose not to exercise this right, opting instead to accept the decision, it clearly considered there were no grounds for appeal. The decision stands in its entirety and applies without condition or caveat.

Second, court decisions are not applied differently according to how those decisions are reached: they are full and final. Majority verdicts have no less standing. We are all familiar with the work of the UK and US Supreme Courts. Roe v Wade carries no less force because it was a majority decision. May could not fudge the need for parliamentary approval of Brexit because the UKSC was not unanimous.

Third and above all, it is misleading to suggest there was a lack of consensus in the Tribunal.
The court had two decisions to make:
First, could and should trial data be released, and if so, what test should apply to determine whether particular data should be made public? Second, when that test is applied to this particular set of data, do the data meet it?

The unanimous decision on the first question was very clear: there is no legal or ethical consideration which prevents release; release is permitted by the consent forms; there is a strong public interest in the release; making data available advances legitimate scientific debate; and the data should be released.

The test set by this unanimous decision was simple: whether data can be anonymized. Furthermore, again unanimously, the Tribunal stated that the test for anonymization is not absolute. It is whether the risk of identification is reasonably likely, not whether it is remote, and whether patients can be identified without prior knowledge, specialist knowledge or equipment, or resort to criminality.

It was on applying this test to the data requested, on whether they could be properly anonymized, that the IT reached a majority decision.

On the principles, on how these decisions should be made, on the test which should be applied and on the nature of that test, the court was unanimous.

It should also be noted that to share data which have not been anonymized would be in breach of the Data Protection Act. QMUL has shared these data with other researchers. QMUL should either report itself to the Information Commissioner’s Office or accept that the data can be anonymized. In which case, the unanimous decision of the IT is very clear: the data should be shared.

PLOS ONE should apply the IT decision and its own regulations and demand the data be shared or the paper retracted.

A Response to Esther Crawley

This is in response to the comment piece in New Scientist by Esther Crawley.

 

Myalgic Encephalomyelitis (ME) has been studied for decades from Ramsay through Behan to Hornig and Newton. As a result, many would say we know rather a lot about the illness. We know that patients show neurological, immunological and endocrinological changes. We know that ME is not depression; that patients do not respond more to placebo; that patients do not fear exercise; that ME is not caused by physiological deconditioning. We also know that patients do not harass researchers and that the primary symptom is post-exertional malaise.

It is not true that patients believe clinicians secretly think the illness is psychological. There is no secret: the major proponents of the Cognitive Behavioural Therapy-Graded Exercise Therapy (CBT-GET) model say ME is a self-perpetuating cycle of exercise avoidance. They have all stated quite clearly that ME has no ongoing underlying biological cause: it is neurasthenia, ‘simply a belief’, a functional somatic syndrome (Simon Wessely); it is perpetuated by beliefs (Peter White); ME is a pseudo-disease, a somatoform disorder, perpetuated by misinterpretation of bodily sensations, abnormalities of mood and unhelpful coping behaviour (Michael Sharpe). If others, such as Dr Crawley, disagree and do not mean to imply that ME itself has a psychological element, then perhaps they could say so unambiguously.

While it is of course true that anything which brings about changes is ‘biological’, implicit in CBT is the notion that responsibility for recovery lies with the patient. If only patients think differently, then they will no longer be ill. No one disputes psychotherapy can help with a broad spectrum of illnesses, but there is no evidence it can reverse organic damage (injury, infection, inflammation). There is no evidence CBT can address the changes in ME patients found by Lipkin, Montoya and Naviaux.

It is true that ME is likely to prove to be more than one illness, or an illness with more than one sub-type. It is also true, though, that many people diagnosed with ME do not in fact have it. This difficulty in diagnosis, due to the absence of a biomarker, is one which causes problems for clinical trials. Many patients think the criteria used by Dr Crawley are too broad and are likely to include patients who have a generic ‘chronic fatigue’. It is a view shared by the US Institute of Medicine and the US Agency for Healthcare Research and Quality, which has recently stopped recommending CBT for ME because any claims for its efficacy come from trials which used the discredited Oxford criteria.

The other challenge for trials of interventions for ME is to distinguish between placebo, improved coping with the effects of the illness and a genuine treatment of the underlying illness. Since the severely ill do not respond at all to CBT and the small, subjective, self-reported benefit in the moderately ill is only temporary, many patients think that claims for effectiveness of CBT are unsafe. They are not convinced FITNET-NHS contains sufficient safeguards against this confound.

Whichever measurements of recovery and improvement are used, whether the ones initially chosen by the investigators or the ones to which they switched part-way through the trial when some data had already been collected, we do know from PACE that the vast majority of patients do not benefit at all from CBT-GET. Since NHS clinics are based on this approach, many patients do not bother attending them. Any trial, then, gathering subjects from these clinics would not only exclude the most severely ill, who are unable to travel, but also the large numbers of patients who gain no advantage from the interventions and so do not waste time and effort in going to the clinics. On the other hand, since these clinics use broad criteria to select for ME, patients fear that people with illnesses other than ME would be included.

Patients agree that it is important we all work together. We would ask, though, that any research uses strict criteria to exclude chronic fatigue; takes into account the physiological changes found in people with ME; is based on plausible theory; is not based implicitly or explicitly on the notion the illness is one of false beliefs; includes meaningful patient involvement, from the broader patient community not just the established charities; benefits all patients, including the most severely affected; has, where applicable, proper controls against confounds; and unconditionally shares all anonymized data with anyone who wants to see it.

 

Thanks to (& #FF) Samei Huda for advice, though his help should not be seen as any kind of endorsement or agreement.

Sense About Science, the PACE trial and ME.

(The email exchange discussed in this blog can be viewed here: https://justpaste.it/xnq8 )

Sense About Science (SaS) exist to challenge misrepresentation of science and evidence. They advocate openness and honesty about research findings. They encourage people to #askforevidence. They agree that: we need all available information to make informed decisions about health care; hiding half the data is how magicians do coin tricks and shell games; with incomplete data we can only get an incomplete picture; outcome switching is like choosing lottery numbers after watching the draw. They ask people to contact them when there is something wrong so they can make a fuss.

When patients were trying to obtain the PACE trial data from Queen Mary University of London, many of us asked SaS if they would help us. They refused. They said that the trial results were available and that the investigators had complied with CONSORT.

In March, Rebecca Goldin posted a scathing criticism of PACE on the website of stats.org, concluding that ‘the flaws in this design were enough to doom its results from the start’. In an accompanying editorial, Trevor Butterworth of Sense About Science USA, equally critical, said that ‘the way PACE was designed and redesigned means it cannot provide reliable answers to the questions it asked’.

Sense About Science USA is described as the sister organization of SaS. It runs the stats.org website in collaboration with the American Statistical Association.

After such criticism of PACE by Sense About Science USA and stats.org, would SaS, the UK organization, now support patients? They gave no signal they would, so in June I emailed SaS. I got an automatic response acknowledging receipt of my email, but no reply. I waited a few weeks and tried again. The same thing happened. In July, I wrote a letter to Tracey Brown. She didn’t even have the good manners to reply. In September I emailed Professor Paul Hardaker, chair of the trustees, asking if he could help me get an answer to my questions. He replied almost immediately, apologized and passed on my email. Soon after, Julia Wilson sent me an email.

I asked five questions of SaS:

Do they accept Goldin’s analysis of PACE and Butterworth’s criticism as valid?

Why, despite our requests, had they not made a fuss about something said by stats.org to be wrong? Would they?

Could they say where the data were available, the ‘results’ of the PACE trial?

Did they support the attempt by the PACE investigators to extend the Data Protection Act to prevent sharing of trial data?

Would they allow us a right of reply to Michael Sharpe’s interpretation of his own study which they had been carrying on their site since last October?

By the time I received a reply the Tribunal decision had been made, ordering QMUL to release the data. My third question was redundant.

SaS conceded Sharpe’s piece contravened their editorial policy and added a rider to that effect on their website. They claimed that they had never supported the extension of the DPA, but had not had enough resources to help in the case. Not enough, it seems, to post a tweet or send a single email.

They did not say whether they accepted the analysis and criticism on the stats.org website as valid.

There then began an exchange in which Wilson singularly failed to answer simple questions and continued to use language like a politician trying to avoid an issue. They did, though, remove the part of Sharpe’s article in which he made claims for his own study.

Eventually Wilson stopped responding to my emails, so I copied in Hardaker again. This time she did reply and she did finally state that SaS accepted Goldin’s analysis as valid. According to SaS the PACE trial is flawed. Wilson then ended the exchange.

They still refuse to help in any way. They have not welcomed the Tribunal decision. Even though they agree PACE is flawed, they are not prepared to do anything about it. They do not say whether they accept Butterworth’s criticism as valid.

A number of questions remain for SaS:

Why did they ignore my emails and only answer when I contacted the chair of the trustees?

Why did they allow Sharpe to promote his own study, contrary to their own editorial policy?

Why did they not push for the release of PACE trial data?

Why did they not support patients in their case against QMUL when QMUL were attempting to extend the DPA, which would have had a stifling effect on trial transparency generally?

Why have they never welcomed the Tribunal decision?

Why did they take so long and why were they so reluctant to say they accept Goldin’s analysis as valid?

Why will they not say they accept Butterworth’s criticism as valid?

Why do they still refuse to make a fuss about PACE?

Why is it that for SaS different rules apply when it comes to ME?