The only one to benefit is Parker.

The last of three blogs on the SMILE trial. Part one is here and part two here.


The SMILE trial was deeply flawed: the criteria used were too broad, the participants self-selected, it was not properly controlled and it relied on subjective measures while participants were unblinded. Like the Lightning Process (LP) itself, it is worthless.

It was completed almost four years ago, but the results have still not been published. The concern is that this poorly conducted trial, based on criteria wide enough to include patients with generic ‘chronic fatigue’, did indeed find that some patients reported subjective improvement. If so, SMILE did nothing more than provide false scientific justification for quackery.

The trial has already been of considerable benefit to Phil Parker, a man with no professional qualifications who has designed an intervention with no scientific basis: he is said to receive a fee from each LP provider for every course participant. Since the trial was in the Bristol and Bath area, he may well himself have been one of the LP providers as he ‘leads experienced teams’ in the region.

He has also used the mere fact of the trial as a form of endorsement on his site:
NHS and LP
The Lightning Process has been working with the University of Bristol and the NHS on a feasibility study; full information can be found here.  Two papers have been published and you can find a link to them both here:
1. The feasibility and acceptability of conducting a trial of specialist medical care and the Lightning Process in children with chronic fatigue syndrome: feasibility randomized controlled trial (SMILE study)
2. Comparing specialist medical care with specialist medical care plus the Lightning Process® for chronic fatigue syndrome or myalgic encephalomyelitis (CFS/ME): study protocol for a randomised controlled trial (SMILE Trial)

The trial acts as an advertisement not just to patients but to potential trainers. The only way to ‘qualify’ as an LP provider is via one of Parker’s own courses, which cost £2,100 (including VAT). Parker insists that anyone who wants to continue as a certified provider must pay him an annual licence fee of £495 (including VAT) in the UK or £750 internationally. These practitioners then go out to find more potential patients, generating more money for Parker.

This misadventure was funded by the Linbury Trust and the Ashden Trust for £172,200.

The actual cost of the trial is unknown as staff and NHS costs, so-called non-core costs, cannot be calculated. They were paid by public funds and must be added to that figure for the total trial expenditure.

I made a Freedom of Information Act request to the University of Bristol to discover how much they paid for these courses and to find out if there was some kind of arrangement with Parker. At first they refused to tell me how much the courses cost, but an ICO decision rejected their claims that the information was exempt from the Act.

The mean cost of a course for trial participants was £567, less than the figure (£620) given in the paper for the then current approximate cost. I asked the university if Parker offered a cut of some kind, but they replied: ‘There is no information held relating to any discount or special deal that was arranged with the providers of ‘Lightning Process’ courses.’ If there was some sort of discount, then Parker not only benefited from the trial and had a financial interest in the outcome but actually subsidized it and so effectively part-funded it.

Twenty-five of the group assigned to the intervention went through with the course: 25 × £567 = £14,175.
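As a quick sanity check of the arithmetic, a sketch using only the figures quoted above (the £567 mean course cost from the FOI response and the 25 course completers):

```python
# Rough check of the trial's Lightning Process outlay, using the
# figures quoted in the text above.
mean_course_cost = 567   # pounds per participant (mean, per FOI response)
completers = 25          # participants who went through with the course

total = mean_course_cost * completers
print(f"£{total:,}")     # £14,175
```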

Over £14,000 wasted on a junk intervention.

If there is a ‘full study’, as Crawley concludes there should be, there’ll be a lot more spent on this quackery. Any slight evidence of effect will be used not just to promote this nonsense further but also, no doubt, to claim Parker’s mumbo-jumbo should be funded by the NHS.

The only person to benefit from this costly disgrace is Parker.


See also:




A trial so flawed as to be worthless.

The second of three blogs on the SMILE trial. The first is here.


The SMILE trial was ‘a pilot randomized trial with children (aged 12 to 18 years) comparing specialist medical care with specialist medical care plus the Lightning Process’.

A report was published in December 2013. The trial has been over for almost four years but the results have yet to be published. A paper was submitted to The Lancet Psychiatry in August 2016 but was unsuccessful. A paper was resubmitted (to an unnamed journal, which may or may not be The Lancet Psychiatry) on 11th May 2017 (revealed in ‘Freedom of Information Request: Reference FOI17 193 – Information from the SMILE study’; decision currently under review).

The report itself, though, reveals a number of flaws in the study:

First, the choice of trial participants meant there was no real randomization.

The trial population was drawn from the Bath and Bristol NHS specialist paediatric CFS or ME service (43 patients were excluded for being too far away). It is debatable how representative such a relatively affluent area is, particularly as a course of the LP normally costs over £600. The charity, the clinic and Phil Parker, the inventor of the LP, are all located there. It’s likely that publicity and word-of-mouth have increased awareness and demand for the course in Bristol and Bath, which in turn may have attracted people with ‘chronic fatigue’ and also made those patients more susceptible to belief in the intervention.

Whether the area is representative or not, trial participants were selected from a clinic led by a paediatrician known for a particular view of the illness and a particular approach to it.

The trial was for ‘mildly to moderately affected’ patients, a group which is more likely to include those with ‘chronic fatigue’ rather than ME. The criteria used were broad:
‘generalized fatigue, causing disruption of daily life, persisting after routine tests and investigations have failed to identify an obvious underlying “cause”’. National Institute for Health and Clinical Excellence (NICE) guidelines recommend a minimum duration of 3 months of fatigue before making a diagnosis in children… Children were eligible for this study if they were diagnosed with CFS/ME according to NICE diagnostic criteria.’
There is no mention of post-exertional malaise, which is now recognized as an essential part of ME (see, for example, the report by the US Institute of Medicine), nor of impaired cognitive functioning or even disrupted sleep. In fact, there is no more than ‘generalized fatigue’ for 3 months. Different case definitions have been shown to select disparate groups of participants, and the use of such broad criteria increases once again the likelihood that patients were included who did not have ME but simply ‘chronic fatigue’.

Of the 157 eligible children, 28 declined to participate at the clinical assessment. The majority were ‘not interested’ (15) or said it was ‘too much’ (7).
Patients were only included ‘if the child and his or her family were willing to find out more about the study’.
Only patients who are prepared to join the study can, of course, be included, but such a test immediately filters the participants. Many patients with ME would know the LP to be worthless and so would not want to find out any more about the study.

It’s also true that for any trial involving children parents would need to approve, but again the same sort of bias presents itself. Participants were only chosen if both the parent and the child believed a trial of LP to be valuable. Since they held this belief, they would be invested in the trial and in making it a success, and so more likely to report improvement on self-measurement. Similarly, children may have felt encouraged or even under pressure to take part in the trial and then say they ‘felt better’ afterwards.

Even after this filtering of patients, more self-selection occurs:
59 did not return consent forms.
In other words, over half those eligible (81/157) explicitly (by declining to participate) or implicitly (by not returning the consent forms) showed they did not want to take part in the trial. And then of the remaining 69 who were contacted, another 13 declined. Of the 157 contacted and invited to participate, 94 chose not to.

These numbers are devastating and turn the trial into a farce:
Despite claims that patients wanted more information about the LP, most clearly were not interested. The justification for carrying out this misadventure isn’t supported by the evidence.
The suspicion is that patients with ME, who knew the intervention to be worthless, refused to take part, while those with ‘chronic fatigue’ enrolled.
So many patients excluded themselves, leaving only a small number prepared to go through with the trial, that any claims of randomization are baseless. The participants had self-selected.

Even after the trial started, another three participants allocated to the LP group dropped out. A mere 50 out of a possible 157 (32%) completed the trial.
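The completion rate can be checked from the figures above alone (157 eligible children, 50 completers):

```python
# Completion rate implied by the recruitment figures quoted above.
eligible = 157   # children eligible for the trial
completed = 50   # children who completed it

print(f"{completed / eligible:.0%}")  # 32%
```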

Three in the SMC group left to have the LP outside the trial. This again suggests that most of the patients who took part did so only because they wanted the LP, considered it effective and saw the trial as a way of getting it at someone else’s expense.

In effect, the researchers asked for volunteers from the clinic for a free course of the LP, gave it to some and not others, then asked everyone if they were happy. The likelihood is that those who received it would be grateful and say they were content and those who did not were not. SMILE was not a randomized trial.

Second, the Specialist Medical Care (SMC) group were not properly controlled.

‘Other interventions, such as cognitive behavioural therapy or graded exercise therapy (GET), were offered to children if needed (usually if there were comorbid mood problems or the primary goal was sport-related, respectively).’
In other words those receiving SMC were also offered other interventions and were not just receiving SMC.
While, of course, patients must at all times receive treatments deemed necessary by their care providers, the other interventions mean no proper evaluation can be made of the SMC group relative to the LP group. It is not an SMC group but an SMC-and-sometimes-other-things group.

These interventions were provided ‘if needed’, which again seems reasonable: if patients are deemed to need an intervention they should receive it. But it also undermines the notion of any control. Patients are being assessed during the trial. The intervention of the assessor, one who may offer other possible treatments, is enough to mean the patients are not simply getting SMC.

There was no attempt to replicate with the SMC group the non-specific conditions of the SMC + LP group. All the participants knew full well whether they were receiving a much publicized intervention which they had been told was effective, or not. Indeed, three patients allocated to the SMC group quit the trial and went off to get LP for themselves. This lack of equipoise would have worked both ways: to influence those who were getting the intervention to ‘feel better’ and to make those in the SMC group feel they were missing out and so report negatively.

It has to be acknowledged that some of these flaws were difficult to avoid: no one can be forced to take part in a trial, parents must give their consent, patients are going to know whether they are receiving the intervention or not. But the researchers didn’t take steps to mitigate them, in particular:

Third, the trial was unblinded yet used subjective outcome measures:
‘The following inventories were completed by children just before their clinical assessment (baseline) and follow-up (6 weeks and 3, 6 and 12 months): 11-item Chalder Fatigue Scale; visual analogue pain rating scale; the SF-36; the Spence Children’s Anxiety Scale; the Hospital Anxiety and Depression Scale (HADS), a single-item inventory on school attendance and the EQ-5D five-item quality-of-life questionnaire.’

The only objective measure was school attendance, which is obviously open to confounds. So obviously, in fact, that part way through the study the measure was dropped.
‘During the study, parents and participants commented that the school attendance primary outcome did not accurately reflect what they were able to do, particularly if they were recruited during, or had transitioned to, A levels during the study. This is because it was not clear what ‘100% of expected attendance’ was. In addition, we were aware of some participants who had chosen not to increase school attendance despite increased activity.’
In a commentary on another trial, Jonathan Edwards, emeritus professor of connective tissue medicine at University College London, is clear:
‘The trial has a central flaw that can be lost sight of: it is an unblinded trial with subjective outcome measures. That makes it a nonstarter in the eyes of any physician or clinical pharmacologist familiar with problems of systematic bias in trial execution.’

Some problems in the trial design may have been difficult to overcome, but this failure to use objective outcome measures could easily have been avoided. There was no reason why they could not have used, for example, actimeters. The choice to use subjective measures renders the whole exercise worthless.

Edwards made his comment referring to another trial, but it applies equally to SMILE. The trial was deeply, fatally flawed. It was a nonstarter.

The report brushes over the conflicts between SMC and the LP where patients are told to use different approaches. It ignores the possibility of group-think in the LP courses. It disregards the failure to recruit sufficient numbers and the self-selection of those who did participate. It ignores the obvious flaws. And it then concludes with a recommendation for a full study.

That would be an even bigger error. Only one person has benefited from SMILE and that person would gain even more from a full study: Phil Parker. How much he has benefited will be shown in part 3.


Why the trial should never have been allowed in the first place.

The first of three blogs on the SMILE trial.


There is no evidence the Lightning Process (LP), a mish-mash of elements of cognitive behavioural therapy, neurolinguistic programming, hypnotherapy, life coaching and osteopathy, is anything other than quackery. For decades Phil Parker has made claims for its efficacy, including as a treatment for myalgic encephalomyelitis (ME), but no proper trial has ever supported these claims.

The Advertising Standards Authority (ASA) guidance is clear:
To date, neither the ASA nor CAP has seen robust evidence for the health benefits of LP. Advertisers should take care not to make implied claims about the health benefits of the three-day course and must not refer to conditions for which medical supervision should be sought.

There are people who claim to have been helped, of course, but such claims are made for all bogus therapies. It seems that some people are simply amenable to these interventions. In addition, perhaps there are those who have become stuck in a rut, experiencing a generic chronic fatigue, believing themselves to have ME, and who are helped to kickstart their lives again by the LP. Since there is no biomarker for ME, diagnosis of the illness can be difficult: 40% of patients in an ME clinic may not actually have ME.

There is currently no treatment for ME, so it is understandable that some patients would be easy prey for and would seek more information about interventions hawked about with exaggerated claims.

Parents of children with ME were apparently contacting the charity Association of Young People with ME (AYME) (1) and asking whether it was worth trying the LP. Bewilderingly, Esther Crawley, a Bristol paediatrician and then medical adviser to AYME, instead of telling patients and parents that the LP had no scientific basis and was not worth the considerable amount of money it costs, decided to do a trial. Just as bewilderingly, the SMILE trial received funding and ethical clearance.

First, this trial should never have been allowed. Good science is not just about evidence, but about plausibility, so any such trial immediately gives a spurious credibility to the LP. Asking a question, even sceptically, can offer an implicit endorsement of its premises.

Second, it was the first study of any kind to use the Lightning Process, and it was doing so with children. There had been no opportunity to measure harms: there have been reports of patients who do not respond to the LP who then blame themselves and in desperation contemplate killing themselves. Exposing vulnerable adolescents to such a potential risk would seem particularly irresponsible.

Third, LP patients are made to accept a number of onerous conditions (such as taking responsibility for their illness) before taking the course. It is ethically questionable to ask trial participants to agree to such conditions in order to take part in a trial of a possible treatment for their illness. Making these demands of children would seem even more ethically dubious.

Fourth, patients are told to ignore their symptoms and to resume normal activity (from SMILE study):
‘It has been a bit confusing, I have to say, because obviously we have got the [Lightning Process practitioners] approach, where, “Right, finally, done this, now you don’t need to do the pacing; you can just go back to school full time.” I think, the physical side of things, YP9 has had to build herself up more rather than just suddenly go back and do that’.

Research, backed up by patient surveys, shows the harms caused by exertion in patients with ME (see Kindlon). The recent report by the US Institute of Medicine found post-exertional malaise to be so central to the illness that it suggested a new name: systemic exertion intolerance disease or SEID. Even in disputed clinical trials such as PACE, which used graded exercise therapy, patients were monitored by physiotherapists and nurses and planned a gradual increase in activity. Here, service providers with no professional qualifications simply tell child patients that after three sessions in three days they should return to normal activity. It is deeply irresponsible.

Fifth, to anyone with genuine ME, that is ME as defined by the International Consensus Criteria, the Lightning Process is a form of torture. It is a physical torture simply to complete the course, again from the SMILE study:
In addition to specialist medical care, children and their parents in this arm were asked to read information about the Lightning Process on the internet. They then followed the usual LP procedure (reading the introductory LP book or listening to it in CD form) and completing an assessment form to identify goals and describe what was learnt from the book. On receiving completed forms, an LP practitioner telephoned the children to check whether they were ready to attend an LP course. The courses were run with two to four children over three sessions (each 3 hours 45 minutes) on three consecutive days.

That is a very heavy burden. The homework is taxing enough, but then to undergo three sessions of almost four hours each on three consecutive days is immense. The effort, the intensity and the busyness would be punishment to anyone hypersensitized by the illness.

It is also a form of emotional torture, as fundamental to the process is that patients take responsibility for their health, their illness and their recovery, from here, here, here and here:
LP trains individuals to recognize when they are stimulating or triggering unhelpful physiological responses and to avoid these, using a set of standardized questions, new language patterns and physical movements with the aim of improving a more appropriate response to situations.

* Learn about the detailed science and research behind the Lightning Process and how it can help you resolve your issues

* Start your training in recognising when you’re using your body, nervous system and specific language patterns in a damaging way

What if you could learn to reset your body’s health systems back to normal by using the well researched connection that exists between the brain and body?

the Lightning Process does this by teaching you how to spot when the PER is happening and how you can calm this response down, allowing your body to re-balance itself.

The Lightning Process will teach you how to use Neuroplasticity to break out of any destructive unconscious patterns that are keeping you stuck, and learn to use new, life and health enhancing ones instead.

The Lightning Process is a training programme which has had huge success with people who want to improve their health and wellbeing.

This responsibility is an enduring one: patients must continue to apply the training to their lives after their course and accept that improvement in their health lies entirely within themselves.

To take chronically ill patients, who want only to get better, and spend three days attempting to brainwash them into believing their illness and recovery lie within their control is deeply unethical. Adult patients in the days after enduring this nonsense, blaming themselves for lack of improvement, have been left in such depths of despair as to want to take their own life. To expose chronically ill adolescents to such a danger was extraordinarily irresponsible.

Of course, with the broad criteria and the self-selection involved in determining who took part in the trial, it may well be that not a single participant actually had ME but had instead simply ‘chronic fatigue’. That would be even worse, though: the results may show that the LP has some effect with ‘chronic fatigue’ but would be used to claim effectiveness for patients with ME. Many children who genuinely do have ME could be gulled into paying for this nonsense only, potentially, to do themselves considerable harm.

This trial was unnecessary, gave spurious credibility to quackery and was unethical. It was also very poorly conducted, as will be shown in part 2.
1. AYME has now ceased trading and its role has effectively been taken over by Action for ME.



A Response to the blog by Puebla and Heber of PLOS ONE

PLOS ONE has issued an expression of concern regarding a cost-effectiveness analysis of results from the PACE trial. It was issued because the authors will not provide data requested by Professor Coyne. The PLOS ONE regulations in force at the time stated:

‘Publication is conditional upon the agreement of the authors to make freely available any materials and information described in their publication that may be reasonably requested by others for the purpose of academic, non-commercial research.’

Queen Mary University of London (QMUL) is the responsible authority for the PACE trial, but for the purposes of this particular paper the role is played jointly by King’s College London (KCL) and QMUL. KCL and QMUL continue to refuse to release the data.

Iratxe Puebla, Managing Editor for PLOS ONE, and Joerg Heber, Editor-in-Chief of PLOS ONE, have written a blog giving their view of the arguments involved and an insight into some of their thinking in issuing the expression of concern.

One line gives cause for concern and I have written a response. I did post this response as a comment underneath their blog, but after more than 24 hours the comment still remains ‘awaiting moderation’. I have therefore decided to publish it here.

It is worth reading also the dismissive, arrogant, ignorant letter to Professor Coyne from KCL refusing his request for the data.


(Note: the comment has now been approved on the PLOS ONE site.)


My response:

“Interestingly, the ruling of the FOI Tribunal also indicated that the vote did not reflect a consensus among all committee members.”
This line is misleading and reveals either ignorance or misunderstanding of the decision in Matthees.

The Information Tribunal (IT) is not a committee. It is part of the courts system of England and Wales.

First, the IT’s decisions may be appealed to a higher court. As QMUL chose not to exercise this right but instead accepted the decision, it clearly considered there were no grounds for appeal. The decision stands in its entirety and applies without condition or caveat.

Second, court decisions are not applied differently according to how those decisions are reached: they are full and final. Majority verdicts have no less standing. We are all familiar with the work of the UK & US Supreme Courts. Roe v Wade is not mitigated because it was a majority decision. May could not fudge the need for parliamentary approval of Brexit because the UKSC was not unanimous.

Third and above all, it is misleading to suggest there was a lack of consensus in the Tribunal.
The court had two decisions to make:
First, could and should trial data be released and if so what test should apply to determine whether particular data should be made public? Second, when that test is applied to this particular set of data, do they meet that test?

The unanimous decision on the first question was very clear: there is no legal or ethical consideration which prevents release; release is permitted by the consent forms; there is a strong public interest in the release; making data available advances legitimate scientific debate; and the data should be released.

The test set by this unanimous decision was simple: whether data can be anonymized. Furthermore, again unanimously, the Tribunal stated that the test for anonymization is not absolute. It is whether the risk of identification is reasonably likely, not whether it is remote, and whether patients can be identified without prior knowledge, specialist knowledge or equipment, or resort to criminality.

It was on applying this test to the data requested, on whether they could be properly anonymized, that the IT reached a majority decision.

On the principles, on how these decisions should be made, on the test which should be applied and on the nature of that test, the court was unanimous.

It should also be noted that to share data which have not been anonymized would be in breach of the Data Protection Act. QMUL has shared these data with other researchers. QMUL should either report itself to the Information Commissioner’s Office or accept that the data can be anonymized. In which case, the unanimous decision of the IT is very clear: the data should be shared.

PLOS ONE should apply the IT decision and its own regulations and demand the data be shared or the paper retracted.

A Response to Esther Crawley

This is in response to the comment piece in New Scientist by Esther Crawley.


Myalgic Encephalomyelitis (ME) has been studied for decades from Ramsay through Behan to Hornig and Newton. As a result, many would say we know rather a lot about the illness. We know that patients show neurological, immunological and endocrinological changes. We know that ME is not depression; that patients do not respond more to placebo; that patients do not fear exercise; that ME is not caused by physiological deconditioning. We also know that patients do not harass researchers and that the primary symptom is post-exertional malaise.

It is not true that patients believe clinicians secretly think the illness is psychological. There is no secret: the major proponents of the Cognitive Behavioural Therapy-Graded Exercise Therapy (CBT-GET) model say ME is a self-perpetuating cycle of exercise avoidance. They have all stated quite clearly that ME has no ongoing underlying biological cause: it is neurasthenia, ‘simply a belief’, a functional somatic syndrome (Simon Wessely); it is perpetuated by beliefs (Peter White); ME is a pseudo-disease, a somatoform disorder, perpetuated by misinterpretation of bodily sensations, abnormalities of mood and unhelpful coping behaviour (Michael Sharpe). If others, such as Dr Crawley, disagree and do not mean to imply that ME itself has a psychological element, then perhaps they could say so unambiguously.

While it is of course true that anything which brings about changes is ‘biological’, implicit in CBT is the notion that responsibility for recovery lies with the patient. If only patients think differently, then they will no longer be ill. No one disputes psychotherapy can help with a broad spectrum of illnesses, but there is no evidence it can reverse organic damage (injury, infection, inflammation). There is no evidence CBT can address the changes in ME patients found by Lipkin, Montoya and Naviaux.

It is true that ME is likely to prove to be more than one illness, or an illness with more than one sub-type. It is also true, though, that many people diagnosed with ME do not in fact have it. This difficulty in diagnosis, due to the absence of a biomarker, causes problems for clinical trials. Many patients think the criteria used by Dr Crawley are too broad and are likely to include patients who have a generic ‘chronic fatigue’. It is a view shared by the US Institute of Medicine and the US Agency for Healthcare Research and Quality, which has recently stopped recommending CBT for ME because any claims for its efficacy come from trials which used the discredited Oxford criteria.

The other challenge for trials of interventions for ME is to distinguish between placebo, improved coping with the effects of the illness and a genuine treatment of the underlying illness. Since the severely ill do not respond at all to CBT and the small, subjective, self-reported benefit in the moderately ill is only temporary, many patients think that claims for effectiveness of CBT are unsafe. They are not convinced FITNET-NHS contains sufficient safeguards against this confound.

Whichever measurements of recovery and improvement are used, whether the ones initially chosen by the investigators or the ones to which they switched part-way through the trial when some data had already been collected, we do know from PACE that the vast majority of patients do not benefit at all from CBT-GET. Since NHS clinics are based on this approach, many patients do not bother attending them. Any trial, then, gathering subjects from these clinics would not only exclude the most severely ill, who are unable to travel, but also the large numbers of patients who gain no advantage from the interventions and so do not waste time and effort in going to the clinics. On the other hand, since these clinics use broad criteria to select for ME, patients fear that people with illnesses other than ME would be included.

Patients agree that it is important we all work together. We would ask, though, that any research uses strict criteria to exclude chronic fatigue; takes into account the physiological changes found in people with ME; is based on plausible theory; is not based implicitly or explicitly on the notion the illness is one of false beliefs; includes meaningful patient involvement, from the broader patient community not just the established charities; benefits all patients, including the most severely affected; has, where applicable, proper controls against confounds; and unconditionally shares all anonymized data with anyone who wants to see it.


Thanks to (& #FF) Samei Huda for advice, though his help should not be seen as any kind of endorsement or agreement.

Sense About Science, the PACE trial and ME.

(The email exchange discussed in this blog can be viewed here: )

Sense About Science (SaS) exist to challenge misrepresentation of science and evidence. They advocate openness and honesty about research findings. They encourage people to #askforevidence. They agree that: we need all available information to make informed decisions about health care; hiding half the data is how magicians do coin tricks and shell games; with incomplete data we can only get an incomplete picture; outcome switching is like choosing lottery numbers after watching the draw. They ask people to contact them when there is something wrong so they can make a fuss.

When patients were trying to obtain the PACE trial data from Queen Mary University of London, many of us asked SaS if they would help us. They refused. They said that the trial results were available and that the investigators had complied with CONSORT.

In March, Rebecca Goldin posted a scathing criticism of PACE on the website run by Sense About Science USA, concluding that ‘the flaws in this design were enough to doom its results from the start’. In an accompanying editorial, Trevor Butterworth of Sense About Science USA, equally critical, said that ‘the way PACE was designed and redesigned means it cannot provide reliable answers to the questions it asked’.

Sense About Science USA is described as the sister organization of SaS. It runs the website in collaboration with the American Statistical Association.

After such criticism of PACE by Sense About Science USA, would SaS, the UK organization, now support patients? They gave no signal they would, so in June I emailed SaS. I got an automatic response acknowledging receipt of my email, but no reply. I waited a few weeks and tried again. The same thing happened. In July, I wrote a letter to Tracey Brown. She didn’t even have the good manners to reply. In September I emailed Professor Paul Hardaker, chair of the trustees, asking if he could help me get an answer to my questions. He replied almost immediately, apologized and passed on my email. Soon after, Julia Wilson sent me an email.

I asked five questions of SaS:

Do they accept Goldin’s analysis of PACE and Butterworth’s criticism as valid?

Why, despite our requests, had they not made a fuss about something said to be wrong? Would they?

Could they say where the data, the ‘results’ of the PACE trial, were available?

Did they support the attempt by the PACE investigators to extend the Data Protection Act to prevent sharing of trial data?

Would they allow us a right of reply to Michael Sharpe’s interpretation of his own study which they had been carrying on their site since last October?

By the time I received a reply the Tribunal decision had been made, ordering QMUL to release the data. My third question was redundant.

SaS conceded Sharpe’s piece contravened their editorial policy and added a rider to that effect on their website. They claimed that they had never supported the extension of the DPA but had not had enough resources to help in the case. Not enough, it seems, to post a tweet or send a single email.

They did not say whether they accepted the analysis and criticism on the website as valid.

There then began an exchange in which Wilson singularly failed to answer simple questions and continued to use language like a politician trying to avoid an issue. They did, though, remove the part of Sharpe’s article in which he made claims for his own study.

Eventually Wilson stopped responding to my emails, so I copied in Hardaker again. This time she did reply and she did finally state that SaS accepted Goldin’s analysis as valid. According to SaS the PACE trial is flawed. Wilson then ended the exchange.

They still refuse to help in any way. They have not welcomed the Tribunal decision. Even though they agree PACE is flawed, they are not prepared to do anything about it. They do not say whether they accept Butterworth’s criticism as valid.

A number of questions remain for SaS:

Why did they ignore my emails and only answer when I contacted the chair of the trustees?

Why did they allow Sharpe to promote his own study, contrary to their own editorial policy?

Why did they not push for the release of PACE trial data?

Why did they not support patients in their case against QMUL when QMUL were attempting to extend the DPA, which would have had a stifling effect on trial transparency generally?

Why have they never welcomed the Tribunal decision?

Why did they take so long and why were they so reluctant to say they accept Goldin’s analysis as valid?

Why will they not say they accept Butterworth’s criticism as valid?

Why do they still refuse to make a fuss about PACE?

Why is it that for SaS different rules apply when it comes to ME?

Using public money to keep publicly funded data from the public

Update. Some have questioned whether QMUL had to pay VAT, or whether the VAT could be reclaimed by QMUL. I have made a further FOI request to clarify and received a response on 14/09/16:

‘VAT at 20% was paid on these amounts.’


After publication of the PACE trial comparing different interventions (Cognitive Behavioural Therapy [CBT], Graded Exercise Therapy [GET], Adaptive Pacing Therapy and Specialist Medical Care) for ME/CFS, patients questioned the claims made for the effectiveness of CBT and GET. These criticisms have been reported by David Tuller and James Coyne (a series on his blog here), and supported by Rebecca Goldin in her own examination of the trial for the website jointly run by the American Statistical Association and Sense About Science USA.

A number of Freedom of Information requests were made for the data in order to test the conclusions drawn by the Principal Investigators. Many of the patients’ requests were rejected as vexatious by the responsible research centre, Queen Mary University of London (QMUL). In one instance, however, Alem Matthees successfully complained to the Information Commissioner (IC), and QMUL were ordered to release the data Matthees had requested.

QMUL appealed the IC’s decision and a hearing of the First-Tier Tribunal (Information Rights) was held in April this year.

Valerie Eliot Smith, a qualified barrister, has done a series of blogs on the hearing and I am grateful to her for information used here. On her website a number of downloads are available, including one which lists the attendees (bottom of page here).

Those at the Tribunal for QMUL included: a QC, Timothy Pitt-Payne; a solicitor, Edward Hadcock; two assistant solicitors, Alison Williams Mills and Gary Attle; four witnesses, Peter White (QMUL), Steve Thornton (QMUL), Trudie Chalder (KCL) and Ross Anderson (Cambridge); and two observers, Jane Pallant (Deputy Academic Registrar) and Paul Smallcombe (FOI Officer).

Hiring a QC for three days does not come cheap. I made an FOI request to QMUL to discover exactly how much the hearing had cost and have now received a reply.

It is claimed that the attendance of all those witnesses and any preparation involved for the Tribunal cost the University nothing. Presumably Anderson covered his own travel costs and the attendance at and preparation for the hearing by White and the others were considered part of normal work duties.

QMUL have said how much they paid in legal fees:

Mills & Reeve LLP: £149,482.30 ex VAT
Timothy Pitt-Payne QC: £48,320.00 ex VAT
Disbursements/expenses (Mills & Reeve LLP): £6,985.43 ex VAT

VAT is charged at 20% on legal fees. The fees and disbursements above come to £204,787.73 ex VAT, so I make the total amount of public money QMUL has so far spent to keep data secret £245,745.28 including VAT.
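For anyone who wants to check the arithmetic, here is a quick sketch using the figures from the FOI response above (treating the VAT as a single 20% charge on the combined ex-VAT total, which is my simplifying assumption about how it was invoiced):

```python
# Legal fees paid by QMUL, ex VAT, as listed in the FOI response.
fees_ex_vat = {
    "Mills & Reeve LLP": 149_482.30,
    "Timothy Pitt-Payne QC": 48_320.00,
    "Disbursements/expenses (Mills & Reeve LLP)": 6_985.43,
}

VAT_RATE = 0.20  # UK standard rate of VAT

subtotal = sum(fees_ex_vat.values())      # ex-VAT total
vat = round(subtotal * VAT_RATE, 2)       # VAT at 20%
total_inc_vat = round(subtotal + vat, 2)  # total including VAT

print(f"Subtotal ex VAT: £{subtotal:,.2f}")
print(f"VAT at 20%:      £{vat:,.2f}")
print(f"Total inc VAT:   £{total_inc_vat:,.2f}")
```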



Screenshot of email from QMUL: