The Case for Measurement-Based Care
If a physician prescribed medication, never checked whether it was working, and kept the patient on the same dose for years regardless of response, you would call that malpractice.
In therapy, it's called standard practice.
Fewer than 20% of therapists routinely use validated outcome measures with their clients. The rest rely on clinical judgment — which, as decades of research demonstrate, is systematically unreliable when it comes to detecting client deterioration and estimating treatment progress.
Measurement-based care changes that. And the evidence for its effectiveness is not ambiguous.
What measurement-based care is
MBC means routinely administering brief, validated questionnaires to clients and using the results to inform clinical decisions. A PHQ-9 for depression. A GAD-7 for anxiety. A PCL-5 for PTSD. Administered at each session or at regular intervals, scored immediately, reviewed before or during the session, and tracked over time.
It's not a treatment. It's a practice — one that sits on top of whatever therapeutic approach you use. CBT with MBC. Psychodynamic therapy with MBC. DBT with MBC. The framework is modality-agnostic.
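The "scored immediately" step is genuinely trivial. As an illustration, here is a minimal sketch of PHQ-9 scoring — nine items rated 0 to 3, summed to a 0–27 total, mapped to the standard published severity bands (the function name and structure are illustrative, not from any particular product):

```python
# Standard PHQ-9 severity bands (Kroenke, Spitzer, & Williams, 2001).
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(item_responses: list[int]) -> tuple[int, str]:
    """Sum the nine item responses (each 0-3) and return (total, severity label)."""
    if len(item_responses) != 9 or any(not 0 <= r <= 3 for r in item_responses):
        raise ValueError("PHQ-9 expects nine responses, each rated 0-3")
    total = sum(item_responses)
    for low, high, label in SEVERITY_BANDS:
        if low <= total <= high:
            return total, label

print(score_phq9([2, 2, 1, 1, 2, 1, 0, 1, 1]))  # (11, 'moderate')
```

The GAD-7 works the same way with seven items and a 0–21 range. The point is that the measurement overhead is a sum and a lookup — the barrier is workflow, not computation.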
What the research says
The evidence base for MBC spans three decades and multiple large-scale meta-analyses.
Lambert and colleagues conducted the foundational research at Brigham Young University. In a series of randomized trials involving thousands of clients, therapists who received client progress feedback — versus those who did not — produced significantly better outcomes. The effect was most pronounced for clients who were not responding to treatment. Clients flagged as "not on track" who received MBC-informed care were roughly twice as likely to show reliable improvement compared to treatment-as-usual.
Shimokawa, Lambert, and Smart (2010) found that feedback-informed treatment cut deterioration rates in half. In the no-feedback condition, about 20% of at-risk clients deteriorated. With feedback, that dropped to roughly 9%.
A 2020 meta-analysis by Gondek and colleagues, examining 58 studies, confirmed a small but reliable effect of routine outcome monitoring on treatment outcomes, with the effect concentrated in clients who were not improving — precisely the group that matters most.
The mechanism isn't mysterious. When therapists receive objective data showing that a client isn't improving, they adjust. They change interventions, increase session frequency, address ruptures in the alliance, consult with colleagues, or revisit the case conceptualization. Without data, they don't — because they typically don't know there's a problem.
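To make the mechanism concrete, here is a deliberately simplified sketch of a "not on track" flag. Real feedback systems (such as the algorithms built on Lambert's OQ research) compare each client against normative expected-recovery trajectories; this toy rule, with made-up threshold parameters, just checks whether a symptom score has failed to drop after several sessions:

```python
# Hedged toy heuristic, not a clinical algorithm: flag a client whose
# score series shows essentially no improvement after enough sessions.
def not_on_track(scores: list[int], min_sessions: int = 4, min_drop: int = 2) -> bool:
    """Return True if, after at least min_sessions scores, the latest
    score has not dropped by min_drop points from the intake score."""
    if len(scores) < min_sessions:
        return False  # too early to judge
    return scores[0] - scores[-1] < min_drop

print(not_on_track([18, 18, 19, 17]))  # True: only a 1-point drop over 4 sessions
print(not_on_track([18, 15, 12, 9]))   # False: clear improvement
```

Even a rule this crude surfaces the case that clinical judgment tends to miss: the client who is quietly staying flat while everyone assumes the work is working.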
Why therapists don't do it
If the evidence is this strong, why do fewer than one in five therapists use it?
Training gaps are part of the answer. Most graduate programs in psychology and counseling don't teach MBC as a core competency. Therapists graduate without ever having integrated routine measurement into their clinical workflow.
Practical barriers matter too. If outcome measurement means shuffling paper forms, hand-scoring them, and manually tracking results, it's easy to see why it gets dropped. The overhead has to be nearly zero for adoption to stick.
But the deepest resistance is psychological. Outcome measurement makes clinical reality visible in a way that can be uncomfortable. If you believe you're helping every client and the data shows otherwise, that creates cognitive dissonance. It's easier to not measure than to confront the possibility that some of what you're doing isn't working.
This isn't a character flaw. It's a human response to a system that provides no external feedback loop. In every other field that takes outcomes seriously — medicine, aviation, engineering — the feedback loop is built into the infrastructure. In therapy, the therapist has to opt in. Most don't.
What MBC catches that clinical judgment misses
The research on clinical judgment in therapy is sobering.
Hannan and colleagues (2005) asked therapists to predict which of their clients would deteriorate. Therapists predicted a 1% deterioration rate; the actual rate was 8%. Even if every one of those predictions had been correct, they would still have missed nearly nine out of ten clients who got worse.
Walfish, McAlister, O'Donnell, and Lambert (2012) surveyed over 600 therapists. The average therapist rated themselves at the 80th percentile of effectiveness. Twenty-five percent rated themselves in the 90th percentile. None rated themselves below average. This is statistically impossible.
The issue isn't that therapists are dishonest. It's that without objective measurement, they lack the information needed to accurately assess their own performance. Every profession has this problem. Therapy is just one of the few that hasn't systematically addressed it.
The practical case
Beyond the research, the practical benefits are straightforward.
Engagement increases. Clients who see their own data — a trend line showing improvement — are more engaged in treatment. The data makes their progress visible in a way that session conversation often doesn't.
Efficiency improves. When you can see that a client has responded to treatment and is in the normal range, you can have an informed conversation about tapering sessions or termination rather than continuing indefinitely by default.
Supervision gets better. Supervisees who bring outcome data to supervision have more productive conversations. The data points supervision toward the cases that need attention rather than whichever case the supervisee happens to mention.
Risk management improves. If a client files a complaint or a malpractice claim, having a record of systematic outcome monitoring is powerful evidence of clinical rigor.
The bottom line
Not measuring outcomes in 2026 is not a neutral choice. It's a decision to practice without the information you need to know whether you're helping. The evidence says measurement works, clinical judgment alone doesn't, and the clients who benefit most are the ones who need it most — the ones who aren't getting better.
The barrier isn't the evidence. It's the implementation. The easier you make measurement, the more likely it is to happen.
If you're a therapist reading this and wondering whether to start, the answer is straightforward: pick two measures (PHQ-9 and GAD-7 cover most presentations), assign them to five clients, and review the data before each session for four weeks. By week four, you'll have caught something you would have missed without the data. That experience — seeing the value firsthand — is more persuasive than any meta-analysis.
The evidence for MBC is settled. The question is no longer whether to measure. It's how quickly you can make measurement a natural part of how you practice.
Theracharts tracks client outcomes with 120+ validated assessments, trend charts, and clinical alerts — so you always know whether the work is working. Get started free.