Part 2: Routine Outcome Monitoring in Child and Adolescent Mental Health: What does the latest research tell us?

Dr Jenna Jacob

Last month, we took a look at what the latest research tells us about the use of outcome measures, and what the evidence says works. In a follow-up to that blog, here we review different types of measure, how we can analyse the information they give us, and what the available outcomes data shows.

Considering types of measure available to use

When working with young people to support their mental health and wellbeing, there is a balance to achieve in choosing what outcome measures to use. Standardised outcome measurement questionnaires focus on different constructs (e.g. symptoms of anxiety, depression, trauma, wellbeing, functioning, resilience or coping) and are suited to different ages and abilities. Where there is research behind them, there is a lot we can learn from standardised outcome measures: they give us the ability to look across services, to compare data or benchmark. There is also value in using idiographic measures to track progress: these are measures using individualised items, generated by the young person, for example, goal tracking. The very nature of us as humans means that we are complex and individual, with our own unique experiences and difficulties: young people are truly at the centre of outcome measurement if progress in areas of importance to them is prioritised.

However, when thinking about the effectiveness of services, what does it mean to combine goals from a number of young people, all with different meanings? We don’t yet know the answer to this, although some recent work is emerging which discusses the considerations involved in this approach (e.g. (1)). Increasing attention is also being given to the psychometric properties of idiographic outcome measures, including the statistical approaches that can be used to assess them, such as multilevel modelling (see (2,3)). While we explore how best to use and analyse idiographic outcome measurement for service evaluation, we continue to advise the use of a combination of measures. In fact, research tells us that standardised and idiographic measures tap into different things and serve slightly different purposes (4,5), so it makes sense to use both to gain an overall picture of progress. There is guidance on choosing and using outcome measures on our website.
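As a rough illustration of the multilevel modelling mentioned above, the sketch below fits a simple growth model to repeated goal-tracking scores nested within young people. This is a minimal sketch only: the simulated data, the column names (young_person_id, session, goal_score) and the linear-trend specification are illustrative assumptions, not a prescribed CORC analysis.

```python
# Minimal sketch of a multilevel (mixed-effects) model for idiographic
# goal-tracking data: repeated goal scores nested within young people.
# All data and column names below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical long-format data: 30 young people rate goal progress (0-10) at 5 sessions.
n_people, n_sessions = 30, 5
data = pd.DataFrame({
    "young_person_id": np.repeat(np.arange(n_people), n_sessions),
    "session": np.tile(np.arange(n_sessions), n_people),
})

# Simulated scores: individual starting points plus a shared upward trend and noise.
intercepts = rng.normal(3.0, 1.0, n_people)
data["goal_score"] = (intercepts[data["young_person_id"]]
                      + 0.8 * data["session"]
                      + rng.normal(0, 0.7, len(data))).clip(0, 10)

# Random intercept per young person; the fixed effect of session estimates the
# average rate of movement towards goals across the sample.
model = smf.mixedlm("goal_score ~ session", data, groups=data["young_person_id"])
result = model.fit()
print(result.summary())
```

The random intercept lets each young person start from their own baseline, while the fixed effect of session summarises average movement towards goals across the sample; richer specifications (random slopes, nesting of multiple goals within a young person) follow the same pattern.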

While we can compare outcomes from different standardised outcome measures by looking at the number of people who reliably change on each one (see the section below), there are still challenges when different services use different measures, making it difficult to compare findings based on instruments that measure slightly different things (6). For years, CORC members have used a base set of measures with which to track similar outcome areas, laid out in the CORC data specification – an approach that was paralleled in CYP IAPT and now in the Mental Health Services Dataset (although we have always encouraged the use of additional measures as needed for specific contexts or purposes). In a similar way, the Common Measures Board, which was formed by two large funders of mental health science (the National Institute of Mental Health (NIMH) and Wellcome), recently announced a set of core metrics for use in all their funded research focused on people who experience anxiety and depression (7). While this will bring some consistency to the data collected across research participants, there are also concerns with this approach, including the narrowing of the scope of enquiry, the lack of transferability across different settings or services (8), and a reduced ability to choose a measure that best suits different service-user groups. This development is one that we will continue to watch with interest.

What does the data say?

In terms of evidencing effectiveness and benchmarking services, we have moved to calculating change using the reliable change index (see (9)). In doing so, we can evaluate (for a particular measure) whether young people have reported reliable improvement, deterioration, or no change, over and above what we might expect to see from chance alone. This approach is also being used by NHS England as part of the outcomes metric. The most recent publicly available statistics from the CORC data sets show that, based on a wide range of measures, 53% of young people attending UK mental health settings reported a reliable improvement in their outcomes, 37% reported no reliable change, and 10% reported reliable deterioration (1). Differences between outcome domains have also been found: for example, in these data more young people see meaningful progress in achieving their goals than see reliable improvements in their internalising symptoms and functioning (4). A recent international systematic review and meta-analysis focused on outcomes experienced by young people who experience anxiety and depression: pooling results across a number of published research studies, it reported 38% reliable improvement, 44% no reliable change, and 6% reliable deterioration based on self-reported symptom measures (10).
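For readers who want to see the mechanics behind these figures, the sketch below shows how a reliable change index can be computed for a single young person on a single standardised measure, following the Jacobson and Truax approach cited above (9). The cut-off of 1.96 corresponds to 95% confidence that the observed change is larger than measurement error; the baseline standard deviation, reliability coefficient and scoring direction used here are illustrative assumptions, not values used in CORC or NHS England reporting.

```python
# Reliable Change Index (RCI) after Jacobson & Truax (1991).
# A minimal sketch: the 1.96 cut-off is standard, but the baseline SD,
# test-retest reliability and scoring direction below are assumed values.
import math

def reliable_change_index(score_t1: float, score_t2: float,
                          sd_baseline: float, reliability: float) -> float:
    """Return the RCI for one young person on one standardised measure."""
    # Standard error of measurement for the instrument.
    sem = sd_baseline * math.sqrt(1 - reliability)
    # Standard error of the difference between two administrations.
    s_diff = math.sqrt(2 * sem ** 2)
    return (score_t2 - score_t1) / s_diff

def classify(rci: float, higher_is_worse: bool = True) -> str:
    """Label change as reliable improvement, deterioration, or no reliable change."""
    if abs(rci) < 1.96:  # within what chance alone could produce
        return "no reliable change"
    improved = rci < 0 if higher_is_worse else rci > 0
    return "reliable improvement" if improved else "reliable deterioration"

# Illustrative example: a symptom measure where higher scores mean more difficulties,
# with an assumed baseline SD of 7.0 and test-retest reliability of 0.85.
rci = reliable_change_index(score_t1=24, score_t2=14, sd_baseline=7.0, reliability=0.85)
print(round(rci, 2), classify(rci))  # -2.61 reliable improvement
```

Service-level percentages like those quoted above are then, in essence, the proportions of young people falling into each of the three categories for a given measure.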

Bias in measurement

Having shared these large-scale findings about the outcomes young people see from engaging with help, it is important to note that we also know there are systematic biases in the data. There are differences between young people who complete outcome measures at two time points (and so are included in the analyses) and those who have missing data points: in analysis of the national Children and Young People’s Improving Access to Psychological Therapies (CYP IAPT) data set, we found that young people who completed two or more measures were older and less ethnically diverse than those who did not (11). This may be partly due to how measures have been developed: for example, the majority of outcome measures we routinely use have been developed and validated in the Global North, with White, English-speaking young people. There are very few studies which have explored the relationships between outcome measures and ethnicity or culture (see this blog for more on this). These discrepancies between groups need further exploration, particularly in terms of what they mean in practice.

What’s next?

While there are developments in our field, and more progress is being made in exploring ROM (what it means, how to consider the data and how best to implement it), there is, first and foremost, an ethical imperative to discuss outcome information with young people. It is important for us to continue to be curious about outcome measures, being mindful of what we are collecting, what we do with it, and what the data might tell us. In particular, with the move to central reporting in England, and the growing use by NHS services of the outcomes metric based on the principles of the reliable change index, more research is needed into what these results mean: what do they tell us, what do they mean in a practical sense, and, of key importance, what does it mean to group outcomes together in a way where the nuances of change are hidden?

There are many things that still need to be considered, but if we continue to be curious in this way, to explore, and to remain thoughtful in our use of measures, we can keep the child or young person at the heart of what we do.

 

  1. Jacob J, Edbrooke-Childs J, Costa da Silva L, Law D. Notes from the youth mental health field: Using movement towards goals as a potential indicator of service change and quality improvement. J Clin Psychol. 2021;1–14.
  2. Sales C, Ashworth M, Ayis S, Barkham M, Edbrooke-Childs J, Faisca J, Jacob J, Xu D, Cooper M. Idiographic patient reported outcome measures (I-PROMs) for routine outcome monitoring in psychological therapies: A position paper. J Clin Psychol [Internet]. 2022; Available from: https://doi.org/10.1002/jclp.23319
  3. Cooper M, Xu D. The goals form: Reliability, validity, and clinical utility of an idiographic goal-focused measure for routine outcome monitoring in psychotherapy. J Clin Psychol. 2021;
  4. Krause KR, Edbrooke-Childs J, Singleton R, Wolpert M. Are We Comparing Apples with Oranges? Assessing Improvement Across Symptoms, Functioning, and Goal Progress for Adolescent Anxiety and Depression. Child Psychiatry Hum Dev [Internet]. 2021; Available from: https://doi.org/10.1007/s10578-021-01149-y
  5. Jacob J, Edbrooke-Childs J, Law D, Wolpert M. Measuring what matters to patients: Using goal content to inform measure choice and development. Clin Child Psychol Psychiatry. 2015;22(2).
  6. Krause KR, Bear HA, Edbrooke-Childs J, Wolpert M. Review: What Outcomes Count? A Review of Outcomes Measured for Adolescent Depression Between 2007 and 2017. J Am Acad Child Adolesc Psychiatry [Internet]. 2019;58(1):61–71. Available from: https://doi.org/10.1016/j.jaac.2018.07.893
  7. Wolpert M. Funders agree first common metrics for mental health science [Internet]. 2020. Available from: https://www.linkedin.com/pulse/funders-agree-first-common-metrics-mental-health-science-wolpert
  8. Patalay P, Fried EI. Editorial Perspective: Prescribing measures: unintended negative consequences of mandating standardized mental health measurement. J Child Psychol Psychiatry. 2021;62(8):1032–6.
  9. Jacobson NS, Truax P. Clinical Significance: A Statistical Approach to Defining Meaningful Change in Psychotherapy Research. J Consult Clin Psychol. 1991;59(1):12–9.
  10. Bear HA, Edbrooke-Childs J, Norton S, Krause KR, Wolpert M. Systematic review and meta-analysis: Outcomes of routine specialist mental health care for young people with depression and/or anxiety. J Am Acad Child Adolesc Psychiatry. 2019;59(7):810–841.
  11. Wolpert M, Jacob J, Napoleone E, Whale A, Calderon A, Edbrooke-Childs J. Child- and Parent-Reported Outcomes and Experience from Child and Young People’s Mental Health Services 2011–2015. 2016.
