Marianne Promberger: Predicting treatment response: are we close to a good model and will anyone use it?

Here at the Child Outcomes Research Consortium (CORC), we’re very aware that no single mental health treatment works for everyone. The same is true of medication: people respond differently, for example, to the same antidepressant. This raises the question: could we predict who will respond well to a specific treatment? Those who access mental health services could then be directed to the treatment that helps them most. Limited funding often means more intensive treatment cannot be offered to everyone, so beyond asking who needs it most, might we be able to predict who would respond well to it? Might some service users even fare better with lighter-touch treatment?

Early this summer, several members of CORC staff were lucky enough to attend the second Treatment Selection Idea Lab (TSIL) conference, held at the Royal College of Psychiatrists and hosted by Steven Pilling and UCL. We heard a fascinating array of talks on the ethical and methodological issues involved.

There were two topics I found particularly interesting and highly relevant to research done here at CORC. The first was Aaron Fisher’s position. His talk, ‘Scalable methods for implementing person specific precision interventions’, argued that individuals respond so differently to interventions that group-level research is essentially meaningless and should be replaced by person-specific research on person-specific interventions. While I would not go that far (and I don’t think many at CORC would), Miranda Wolpert is rightly a big champion for scrutinising how many individuals are really being helped by treatment in a meaningful way. Group-level findings can obscure important individual differences, and reporting reliable change for individual service users is a first step towards revealing the individual level.
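One common way of quantifying reliable change at the individual level is the Jacobson–Truax Reliable Change Index (RCI). As a rough sketch of the idea (the function name and the example scores below are purely illustrative, not CORC figures):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax Reliable Change Index.

    RCI = (post - pre) / SE_diff, where
    SE_diff = sd_pre * sqrt(2) * sqrt(1 - reliability).
    An |RCI| greater than 1.96 suggests the change exceeds what
    measurement error alone would produce (at roughly p < .05).
    """
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_diff = math.sqrt(2) * se_measurement
    return (post - pre) / se_diff

# Illustrative numbers only: a symptom score dropping from 30 to 18
# on a measure with SD = 7.5 and test-retest reliability 0.80.
rci = reliable_change_index(pre=30, post=18, sd_pre=7.5, reliability=0.80)
print(round(rci, 2), abs(rci) > 1.96)  # → -2.53 True
```

Reporting whether each service user’s change clears this kind of threshold, rather than only a group mean, is what makes the individual level visible.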

The other key topic was the interface between humans and algorithms. Even if we come up with a great algorithm that predicts treatment response and can direct service users to the right path, will clinicians actually want to use it? Research from my original field of decision-making psychology, going back to Paul Meehl and Robyn Dawes, suggests they probably won’t: clinicians tend to think they can outdo the algorithm, and they are mostly wrong in thinking so. The conference involved a prediction contest, to which Miranda Wolpert and her collaborator Marjolein Fokkema contributed a model based on decision trees. Decision trees may not be the most accurate models, but they can be easily understood by clinicians, who may then be more willing to use them. The best model or algorithm won’t help anyone if it just sits unused in a file drawer. This is exactly the balance of rigour and realism that we value at CORC.
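What makes a decision tree clinician-readable is that, once learned, it reduces to a short chain of yes/no questions. The toy sketch below is entirely invented (the features, thresholds, and pathway labels are mine, not the contest model, which a real system would learn from outcome data rather than hand-specify); it simply shows the form such a tree takes:

```python
def triage(baseline_severity, prior_episodes, engagement_score):
    """A hand-written toy decision tree for treatment selection.

    All features and cut-points here are hypothetical. A learned
    tree (e.g. from scikit-learn) would have the same if/else
    structure, which is why clinicians can read it directly.
    """
    if baseline_severity < 15:                     # mild presentation
        return "low-intensity (guided self-help)"
    if prior_episodes >= 2:                        # recurrent history
        return "high-intensity (individual therapy)"
    if engagement_score >= 0.5:                    # likely to engage
        return "low-intensity (guided self-help)"
    return "high-intensity (individual therapy)"

print(triage(baseline_severity=12, prior_episodes=0, engagement_score=0.7))
# → low-intensity (guided self-help)
```

A clinician can trace exactly which question routed a service user down which branch, something a black-box model cannot offer, and that transparency is the trade-off the contest entry was making.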

Altogether, the conference was a great way to hear about different approaches to treatment selection, and it was indeed an ‘idea lab’, presenting some very innovative work. It was also a great opportunity to network, and we hope to join the next TSIL conference next year.