Methodology, design, and evaluation in psychotherapy research - a summary of chapter 2 of Bergin and Garfield’s Handbook of psychotherapy and behavior change

M. J. Lambert (Ed.), Bergin and Garfield’s Handbook of psychotherapy and behavior change, 6th edition
Chapter 2
Methodology, design, and evaluation in psychotherapy research

Therapy researchers should make consistent use of designs in which patients, therapists, and type of treatment are independent variables and the dependent variables are examined over time.

Guiding principles

The scientist-practitioner

Treatment outcome research methods within psychology developed largely from the fundamental commitment of clinical psychologists to a scientist-practitioner model for training and professional practice.
Arguably, the scientist-practitioner model provides the framework (the adaptation and refinement of the methods and guidelines of science) for continuously improving the clinical services offered to clients across the globe.
Empirical evaluation of the efficacy and effectiveness of therapy is typically considered necessary before widespread utilization can be sanctioned.

The role is intended to foster service providers who evaluate their interventions scientifically and researchers who study applied questions and interpret their findings with an understanding of the richness and complexity of human experience.

For treatment outcome studies to be meaningful, they must reflect both a fit within the guidelines of science and an understanding of the subtleties of human experience and behaviour change.

Empirically supported treatment(s)

The field has developed a set of criteria to be used when reviewing the cumulative literature on the outcomes of therapy.
These criteria help determine whether or not a treatment can be considered ‘empirically supported’.
Empirically supported treatments: treatments found to be efficacious when evaluated in randomized clinical trials (RCTs) with specified populations of patients.

The operational definition of empirically supported treatments focuses on the accumulated data on the efficacy of a psychological therapy.
These demonstrations of treatment efficacy often involve an RCT in which an intervention is applied to cases that meet criteria for a specific disorder and analysed against a comparison condition to determine the degree or relative degree of beneficial change associated with treatments.
The accumulated evidence comes from multiple studies whose aims were to examine the presence or absence of a treatment effect.
By accumulating evaluated outcomes, one can summarize the research and suggest that the beneficial effects of a given treatment have been supported empirically.

Even if a treatment has been supported empirically, the transport of the treatment from one setting (research clinic) to another (service clinic) represents a separate and important issue.
A researcher who addresses this issue considers the effectiveness of treatment.
This has to do with the

  • Generalizability
  • Feasibility
  • Cost-effectiveness of the therapeutic procedures

The investigation of treatment effectiveness necessarily grows out of studies on treatment efficacy.

For a research study to be valid, it must include real patients, evaluate outcomes on more than narrow measures of improvement, and must not be limited to brief therapy without follow-up.

When considering the efficacy and effectiveness of treatments, the evaluator needs to make informed decisions regarding both the internal and external validity of the study in question.  

In the future, treatment outcome research is likely to prosper in part as a means of distilling the preferred interventions from the larger number of practiced treatments.

Treatment outcome research can also offer valuable feedback to clinicians and health care providers.
It can provide information about the progress of treatment gains and suggest viable alternative treatment plans.

Design considerations

Clinical researchers use control procedures derived from experimental science to adequately assess the causal impact of a therapeutic intervention.
The objective is to distinguish intervention effects from any changes that result from other factors.
To have confidence that an intervention is responsible for observed changes, these extraneous factors must be controlled.

Random assignment

Random assignment is essential to achieving baseline comparability between study conditions by ensuring that every participant has an equal chance of being assigned to the active treatment condition or the control condition(s).
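
As an illustration, here is a minimal sketch (hypothetical participant IDs, not from the chapter) of assigning participants to a treatment and a control condition at random:

```python
import random

def randomly_assign(participant_ids, conditions=("treatment", "control"), seed=None):
    """Shuffle the participants, then cycle through the conditions so every
    participant has an equal chance of ending up in either condition and the
    group sizes stay (nearly) balanced."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)   # copy so the original order is untouched
    rng.shuffle(shuffled)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(shuffled)}

# Hypothetical example: eight participants, two conditions
print(randomly_assign(["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"], seed=42))
```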

Selecting control conditions

Comparisons of participants randomly assigned to different treatment conditions are essential to control for factors other than treatment.
Control conditions

  • No-treatment
  • Wait-list condition
  • Attention-placebo
  • Local standard treatment (treatment-as-usual)

Evaluating treatment response across time

For proper evaluation of treatment effects, it is essential to first evaluate participant functioning on the dependent variables prior to the initiation of treatment. This is the baseline.
Posttreatment assessments of clients are essential to examine the comparative efficacy of treatment versus control conditions.
It is highly recommended that treatment outcome studies include a follow-up assessment.
Repeated measures across time allow the researcher to evaluate treatment responses that may be nonlinear and the impact of various treatment components.

Multiple treatment comparisons

Researchers use between-groups designs with more than one active treatment condition to determine comparative (or relative) efficacy of therapeutic interventions.
Analysis can be performed in one of two ways (sketched after this list)

  • Comparing posttreatment scores across the conditions after controlling for pretreatment scores
  • Comparing change scores across the conditions
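
As a small illustration, the sketch below (hypothetical scores, not from the chapter) carries out both options: an ANCOVA-style regression of posttreatment scores on condition while controlling for pretreatment scores, and an independent-samples t test on change scores.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical pre- and posttreatment scores for a treatment and a control group
df = pd.DataFrame({
    "condition": ["treatment"] * 5 + ["control"] * 5,
    "pre":  [24, 27, 22, 30, 26, 25, 28, 23, 29, 27],
    "post": [15, 18, 14, 20, 16, 22, 26, 21, 27, 24],
})

# Option 1: posttreatment scores compared across conditions,
# controlling for pretreatment scores (ANCOVA-style regression)
ancova = smf.ols("post ~ pre + C(condition)", data=df).fit()
print(ancova.params)

# Option 2: independent-samples t test on pre-to-post change scores
df["change"] = df["post"] - df["pre"]
treated = df.loc[df["condition"] == "treatment", "change"]
control = df.loc[df["condition"] == "control", "change"]
print(stats.ttest_ind(treated, control))
```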

Clinical researchers may aim for therapist equivalence across

  • Training
  • Experience
  • Intervention expertise
  • Treatment allegiance
  • Expectation that the intervention will be effective

Stratified blocking offers a viable option for ensuring that each intervention is conducted by several comparable therapists.

For proper evaluation, intervention procedures across treatments must be equated for key variables such as

  • Duration
  • Length, intensity and frequency of contacts with clients
  • Credibility of the treatment rationale
  • Treatment setting
  • Degree of involvement of persons significant to the client

Measures should cover the range of functioning targeted for change, tap the costs and potential negative side effects, and be unbiased with respect to the alternative interventions.
They should not be differentially sensitive to one treatment over another.

Procedural considerations

Defining the independent variable: manual-based treatments

When evaluating a treatment, the treatment must be described in sufficient detail that the evaluation can be replicated and that others can be shown and taught how to conduct the treatment.
The use of treatment manuals is needed to achieve the required detail and description of the treatment.

The use of manual-based treatments must be preceded by adequate training.
Interactive training, flexible application, and ongoing supervision are essential to ensure proper conduct of manual-based therapy.

Several modern treatment manuals allow the therapist to attend to each client’s specific circumstances, needs, concerns, and comorbid conditions without deviating from the core treatment strategies.

Checking the integrity of the independent variable: treatment fidelity checks

Rigorous experimental research requires careful checking of the manipulated variable.

To help ensure that the treatments are indeed implemented as intended, it is wise to require that a treatment plan be followed, that therapists are carefully trained, and that sufficient supervision is available throughout.
An independent check on the manipulation should be conducted.

Evaluating the quality of treatment provided is also of interest.
When a treatment fails to demonstrate expected gains, one can examine the quality with which the treatment was implemented.
It is also of interest to investigate potential variations in treatment outcome that may be associated with differences in the quality of the treatment provided.
Expert judges are needed to make determinations of differential quality prior to the examination of differential outcomes for high- versus low-quality therapy implementation.

Sample selection

Careful deliberations are required when choosing a sample to best represent the clinical population of interest.

  • A community subthreshold sample
    A non-treatment-seeking sample of participants who may benefit from treatment but who may otherwise only approximate clinically disordered individuals.
  • Genuine clinical sample
    Treatment-seeking clients diagnosed with the disorder
  • Analogue sample
    Participants who self-report a disorder (not necessarily a diagnosis) similar to the disorder of interest
  • Highly select sample

Client diversity must be considered when deciding which samples to study.
The research sample should reflect the population to which the study results will be generalized.

Study setting

It is not sufficient to demonstrate treatment efficacy within a highly selective setting.
The question of whether the treatment can be transported to other settings requires independent evaluation.

Measurement considerations

Assessing the dependent variable(s)

There is no single measure that can serve as the sole indicator of clients’ treatment-related gains.
A variety of methods, measures, data sources, and sampling domains are used to assess outcomes.

A contemporary and rigorous evaluation of therapy effects will consider

  • Assessments of client self-report
  • Client test/task performance
  • Therapist judgments and ratings
  • Archival or documentary records
  • Observations by trained, unbiased, blinded observers
  • Ratings by significant people in the person’s life
  • Independent judgments by professionals

With the multi-informant strategy, data on variables of interest are collected from multiple reporters.
A multimodal strategy relies on multiple modes of assessment to evaluate an underlying construct of interest.

It is optimal and preferred that multiple targets be assessed in treatment evaluations.

Data analysis

Data analysis is an active process through which we extract useful information from the data we have collected in ways that allow us to make statistical inferences about the larger population that a given sample was selected to represent.

Addressing missing data and attrition

Attrition: a loss of research participants.

Researchers can conduct and report two sets of analyses

  • Analyses of outcomes for treatment completers
  • Analyses of outcomes for all participants who were included at the time of randomization (intent-to-treat sample)

Researchers address missing endpoint data in one of several ways (a brief sketch of the first option follows this list)

  • Last observation carried forward (LOCF)
    Assumes that participants who attrit remain constant on the outcome variable from the last assessed point through the post-treatment evaluation
  • Substituting pretreatment scores for post-treatment scores
  • Multiple imputation methods
    Impute a range of values for the missing data, generating a number of non-identical datasets.
    After the researcher conducts analyses on the non-identical datasets, the results are pooled and the resulting variability addresses the uncertainty of the true value of the missing data.
  • Mixed-effects models
    Rely on linear and/or logistic regression to address missing data in the context of random and fixed effects.
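
As a small illustration of the first option, the sketch below (hypothetical repeated-measures data, not from the chapter) applies last observation carried forward to a wide-format table; multiple imputation and mixed-effects models are generally preferred in current practice, but are beyond this sketch.

```python
import pandas as pd

# Hypothetical wide-format data: one row per participant, one column per assessment
scores = pd.DataFrame(
    {
        "baseline":      [30.0, 28.0, 33.0],
        "midtreatment":  [24.0, None, 29.0],   # P02 missed this assessment
        "posttreatment": [18.0, None, None],   # P02 and P03 dropped out
    },
    index=["P01", "P02", "P03"],
)

# LOCF: carry each participant's last observed value forward across assessments.
# This assumes (often implausibly) that dropouts remain constant after leaving.
locf = scores.ffill(axis=1)
print(locf)
```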

Clinical significance

The data produced by research projects designed to evaluate the efficacy of therapy are submitted to statistical tests of significance.
Statistical tests alone do not provide evidence on clinical significance.
Sole reliance on statistical significance can lead to perceiving differences as potent when in fact they may be clinically insignificant.

Clinical significance: the meaningfulness or persuasiveness of the magnitude of change.

Two approaches to measuring clinical significance

  • Normative sample comparison
  • Reliable change index

Normative comparisons

Normative comparisons can be conducted in several steps

  • The researcher selects a normative group for posttreatment comparison
    Given that several well-established measures provide normative data, investigators may choose to rely on these pre-existing normative samples
    When normative data does not exist, or when the treatment sample is qualitatively different on key factors, it may be necessary to collect one’s own normative data
  • The equivalency testing method examines whether the difference between the treatment and the normative groups falls within some predetermined range (sketched below).
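
A minimal sketch of such an equivalence test, using hypothetical numbers (not from the chapter) and two one-sided tests against a predetermined range:

```python
import numpy as np
from scipy import stats

# Hypothetical posttreatment scores for the treated sample and a normative sample
treated   = np.array([12.0, 10.0, 14.0, 11.0, 13.0, 9.0, 12.0, 10.0])
normative = np.array([11.0, 13.0, 10.0, 12.0, 9.0, 11.0, 14.0, 10.0, 12.0, 11.0])
delta = 3.0    # predetermined range within which the groups count as equivalent
alpha = 0.05

diff = treated.mean() - normative.mean()
n1, n2 = len(treated), len(normative)
# Pooled standard error for two independent samples
sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * normative.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
dof = n1 + n2 - 2

# Two one-sided tests against the lower (-delta) and upper (+delta) bounds
p_lower = stats.t.sf((diff + delta) / se, dof)   # H1: difference > -delta
p_upper = stats.t.cdf((diff - delta) / se, dof)  # H1: difference < +delta
print("equivalent within the predetermined range:", max(p_lower, p_upper) < alpha)
```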

Reliable change index

Steps

  • Calculating the number of participants moving from the dysfunctional to the normative range, based on a normative-dysfunctional cutoff score.
  • Evaluating whether each individual’s change was reliable
    The proposed RCI is a difference score divided by the standard error of the difference, which is derived from the standard error of measurement (sketched below)
  • Patients are classified as either recovered, unchanged, improved, or deteriorated
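
A minimal sketch of the RCI calculation, using hypothetical numbers and the standard Jacobson–Truax formulation (difference score divided by the standard error of the difference, which is derived from the measure's standard error of measurement):

```python
import math

def reliable_change_index(pre_score, post_score, sd, reliability):
    """RCI = (post - pre) / standard error of the difference score."""
    se_measurement = sd * math.sqrt(1 - reliability)      # standard error of measurement
    se_difference = math.sqrt(2 * se_measurement ** 2)    # standard error of the difference
    return (post_score - pre_score) / se_difference

# Hypothetical example: a score drops from 28 to 14 on a measure with SD = 7.5 and
# test-retest reliability = .88; |RCI| > 1.96 is conventionally taken as reliable change.
rci = reliable_change_index(pre_score=28, post_score=14, sd=7.5, reliability=0.88)
print(round(rci, 2), "reliable change" if abs(rci) > 1.96 else "no reliable change")
```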

Evaluating mechanisms of change: mediators and moderators of treatment response

It is often of interest to identify

  • The conditions that dictate when a treatment is more or less effective
  • The processes through which a treatment produces change

Moderator: a variable that delineates the conditions under which a given treatment is related to an outcome.
Moderators identify ‘for whom’ and ‘under what circumstances’ treatments have different effects.
A moderator is a variable that influences either the direction or the strength of a relationship between an independent variable and a dependent variable.

Mediator: a variable that serves to explain the process by which a treatment impacts on an outcome.
They identify how and why treatments have effects.
The mediator elucidates the mechanism by which the independent variable is related to outcome.
Mediational models are inherently causal models.

A moderation effect is inherently an interaction effect.

If the proposed moderator predicts treatment response across all conditions, without an interaction with treatment assignment, the proposed moderator is simply a predictor.
Only when this predictive relationship differs across treatments is the term moderator applied.
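
As an illustration, the sketch below (simulated data, not from the chapter) tests moderation as a treatment-by-moderator interaction term in a regression model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)      # 0 = control, 1 = treatment
severity = rng.normal(0, 1, n)         # hypothetical baseline severity (candidate moderator)
# Simulate an outcome in which the treatment effect grows with baseline severity
outcome = 2.0 * treatment + 1.0 * severity + 1.5 * treatment * severity + rng.normal(0, 1, n)

df = pd.DataFrame({"treatment": treatment, "severity": severity, "outcome": outcome})
model = smf.ols("outcome ~ treatment * severity", data=df).fit()

# A significant treatment:severity coefficient indicates moderation; a severity effect
# without the interaction would make severity merely a predictor of treatment response.
print(model.params)
print(model.pvalues)
```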

To test for mediation, one examines whether the following are significant

  • The association between the predictor and the outcome
  • The association between the predictor and the mediator
  • The association between the mediator and the outcome, after controlling for the effect of the predictor.

If all these conditions are met, the researcher then examines whether the predictor-to-outcome effect is reduced after controlling for the mediator.

If the treatment and outcome are not significantly associated, there is no significant effect to mediate.

It is possible that significant mediation has not occurred even when the test of the treatment-to-outcome effect drops from significance to non-significance after taking the mediator into account.
It is also possible that significant mediation has occurred even when the statistical test of the treatment-to-outcome effect continues to be significant.

One approach to mediation is to evaluate the indirect effect, which is mathematically equivalent to a test of whether the drop in total effect is significant upon inclusion of the mediator in the model.
Statistical analyses in the social sciences typically examine whether partial mediation is significant or non-significant.
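
A minimal sketch of a regression-based mediation analysis (simulated data, not from the chapter): the treatment-to-mediator path (a), the mediator-to-outcome path controlling for treatment (b), and a percentile-bootstrap confidence interval for the indirect effect (a × b):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
treatment = rng.integers(0, 2, n)
# Simulate a mediator driven by treatment and an outcome driven mostly by the mediator
mediator = 1.2 * treatment + rng.normal(0, 1, n)
outcome = 0.8 * mediator + 0.2 * treatment + rng.normal(0, 1, n)
df = pd.DataFrame({"t": treatment, "m": mediator, "y": outcome})

a = smf.ols("m ~ t", data=df).fit().params["t"]      # path a: treatment -> mediator
b = smf.ols("y ~ m + t", data=df).fit().params["m"]  # path b: mediator -> outcome, controlling for treatment
print("indirect effect (a * b):", round(a * b, 3))

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(1000):
    sample = df.sample(len(df), replace=True)
    a_s = smf.ols("m ~ t", data=sample).fit().params["t"]
    b_s = smf.ols("y ~ m + t", data=sample).fit().params["m"]
    boot.append(a_s * b_s)
low, high = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI: [{low:.3f}, {high:.3f}]")  # a CI excluding 0 suggests mediation
```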

Cumulative outcome analysis

Several major cumulative analyses have undertaken the challenging task of reviewing and reaching conclusions with regard to the effects of psychological therapy.
Meta-analysis: multidimensional analysis of the impact of potential causal factors on therapy outcome.
Meta-analysis procedures provide a quantitative, replicable, accepted, and respected approach to the synthesis of a body of empirical literature, and are themselves to be considered empirical reports.

By summarizing the magnitude of overall relationships found across studies, determining factors associated with variations in the magnitude of such relationships, and establishing relationships by aggregate analysis, meta-analytic procedures provide more objective, exhaustive, systematic, and representative conclusions than do qualitative reviews.

Meta-analytic techniques quantitatively synthesize findings across multiple studies by converting the results of each data report into a common metric.
The outcomes of different types of treatment can then be compared with respect to the aggregate magnitude of change reflected in such statistics across studies.

Steps

  1. Literature search for studies
  2. Coding the results of specific studies
  3. Computing effect sizes: the effect size metric to be used should be specified, as well as whether effect sizes will be weighted
  4. Computing an overall weighted mean effect size and confidence interval based on the inverse variance weights associated with each effect size (a brief sketch of this step follows the list)
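
A minimal sketch of step 4, using hypothetical effect sizes (not from the chapter) and a fixed-effect, inverse-variance weighting scheme:

```python
import numpy as np

# Hypothetical standardized mean differences (d) and their variances from five studies
effect_sizes = np.array([0.45, 0.62, 0.30, 0.75, 0.50])
variances    = np.array([0.040, 0.055, 0.030, 0.080, 0.045])

weights = 1.0 / variances                      # inverse-variance weights
mean_es = np.sum(weights * effect_sizes) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))            # standard error of the weighted mean
ci_low, ci_high = mean_es - 1.96 * se, mean_es + 1.96 * se

print(f"weighted mean effect size = {mean_es:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```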

Caution concerning the misuse of pilot data for the purposes of power calculations

Power: the probability of correctly rejecting a false null hypothesis.
Designing an adequately powered RCT entails recruiting a sample large enough to adequately and reliably test different treatment responses across conditions.

To determine the minimum sample size required for an RCT, conventional calculations consider an expected effect size in the context of an acceptably low α level and an acceptably high level of power.
Broad conventions do not stipulate an expected magnitude of effect size to include because this will vary widely across varied clinical populations and across diverse treatments.
To estimate an expected effect size for the design, the researcher must rely on theory, as well as the magnitude of effects found in related research.
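
As an illustration, a conventional a priori sample-size calculation for a two-arm RCT might look like the sketch below; the medium expected effect size (d = 0.5) is purely illustrative, not a value from the chapter:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size given the expected effect size, α, and power
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required participants per condition: {n_per_group:.0f}")   # roughly 64 per group
```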

Effect sizes drawn from underpowered studies (such as small pilot studies) are unstable estimates, because a limited sample size can yield outsized variability in observed effects.

Reporting

The final stage of conducting a treatment evaluation entails communicating study findings to the scientific community.
A well-constructed and quality report will discuss outcomes in the context of previous related work, as well as consider limitations and shortcomings that can direct future theory and empirical efforts in the area.
To prepare a quality report, the researcher must provide all of the relevant information for the reader to critically appraise, interpret, and/or replicate study findings.

We recommend that researchers submit reports of their findings only to peer-reviewed journals.

 
