Chambless & Hollon (1998). "Defining empirically supported therapies." - Article summary

Empirically supported treatments refer to clearly specified psychological treatments shown to be efficacious in controlled research with a delineated population. Treatment efficacy must be demonstrated in controlled research in which it is reasonable to conclude that benefits observed are due to the effects of the treatment and not to chance or confounds. This is best demonstrated in randomized clinical trials. Replication is critical in this case.

A treatment that has been found efficacious in at least two studies by independent research teams is considered an efficacious treatment. A treatment for which only one study supports its efficacy, or for which all the research has been conducted by a single team, is considered a possibly efficacious treatment.

The efficacy research has to be done with proper methods. There are different types of group designs for efficacy research:

  1. Comparison with no treatment
    This comparison demonstrates whether a treatment actually works.
  2. Comparisons with other treatments or with placebo
    This comparison demonstrates whether a treatment effect is due to the treatment or due to another effect (e.g. placebo; receiving attention).
  3. Comparisons with other rival interventions
    This comparison demonstrates whether a treatment effect is due to competing mechanisms (i.e. the mechanism of another treatment) or due to a unique mechanism. It can also compare relative benefits of competing interventions.

The more stringent the comparison is, the more value researchers should attach to the results. Treatments that are efficacious and specific are treatments which control for other factors that could explain the treatment effect (e.g. therapeutic alliance). This increases confidence in the theoretical explanatory model of the treatment.

Problems with comparisons against other rival interventions are interpretation (i.e. a treatment may be interpreted as efficacious while it is a downgraded version of a well-established treatment) (1) and low statistical power (2).

In order to demonstrate that a treatment is equivalent to a well-established treatment, it should meet the following criteria:

  1. Investigators have a sample size of 25-30 per condition.
  2. The unproven treatment is not significantly inferior to the established efficacious treatments on tests of significance.
  3. The pattern of data indicates no trends for the established efficacious treatment to be superior.

A psychological treatment plus placebo condition that outperforms a placebo-only condition may not do so due to the psychological intervention alone but due to the interaction between the placebo and the psychological treatment.

Efficacy needs to be described for a specific population or problem rather than in general. Factors that may determine the population (e.g. socioeconomic status) need to be described. The tools used in efficacy research need to have properly demonstrated reliability and validity. It is best to use multiple methods of assessment. Measures of quality of life and functioning, rather than just symptoms, need to be included as well to properly evaluate a treatment.

It is important to know whether a treatment has an enduring effect and whether treatments differ on this. Return to treatment in the absence of documentable symptomatic events is not necessarily an index of underlying risk, as there are many reasons for a person to resume treatment.

Follow-up designs often do not assess treatment and symptom status in an ongoing fashion, and they are susceptible to bias resulting from differential retention (e.g. patients must complete treatment and show improvement to be retained in the sample). Because of such attrition, the sample analysed at follow-up can differ substantially from the sample at the start of treatment. It is not necessarily a good idea to retain all participants, but neither is leaving some out of the analysis. The length of the follow-up required depends on the natural course of the disorder and the strength and stability of the treatment effect.

One method of assessing clinical significance is in terms of meaningful change (i.e. crossing the intersection between the functional and dysfunctional populations) and reliable change (i.e. change exceeding the error of measurement). People who meet both criteria can be said to have changed in a clinically significant way. The norm for clinical significance depends on the seriousness of the disorder.
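As a concrete illustration, here is a minimal Python sketch of how these two criteria are often operationalized, in the spirit of the Jacobson and Truax reliable change index; the reliability, means and SDs are all hypothetical numbers, and the 1.96 threshold treats the RCI as a z-score:

```python
import math

# Hypothetical values (all numbers made up for illustration).
rxx = 0.85           # test-retest reliability of the outcome measure
mean_dys, sd_dys = 40.0, 10.0   # dysfunctional (patient) population
mean_fun, sd_fun = 20.0, 8.0    # functional (healthy) population

# Reliable change: is the pre-post difference larger than measurement error?
se_measurement = sd_dys * math.sqrt(1 - rxx)
se_difference = math.sqrt(2) * se_measurement

def reliable_change_index(pre: float, post: float) -> float:
    return (post - pre) / se_difference

# Meaningful change: cutoff c at the intersection of the two distributions.
cutoff = (sd_fun * mean_dys + sd_dys * mean_fun) / (sd_fun + sd_dys)

pre, post = 42.0, 18.0
rci = reliable_change_index(pre, post)
# Lower scores are healthier on this hypothetical measure.
clinically_significant = abs(rci) > 1.96 and post < cutoff
print(f"RCI = {rci:.2f}, cutoff c = {cutoff:.1f}, "
      f"clinically significant: {clinically_significant}")
```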

Treatment manuals are extensive descriptions of the treatment approach therapists are to follow. These manuals usually need to be supplemented by additional training and supervision. Training is essential for therapists but it is difficult to assess therapist competence. Outcome variability of research into treatments may be associated with the preferences and expertise of the research teams involved (i.e. a therapy does better when it is given by people who are experts in it). Inferences regarding treatment efficacy can only be framed in the context of how the therapy was delivered and by whom.

There are several typical errors in data analysis:

  1. Type I error.
  2. Uncontrolled pre-test post-test comparisons are made (i.e. this does not necessarily indicate a trend in the data).
  3. Differential attrition is not taken into account.
  4. No test for therapist or site effects.

There are several issues specific to single-case experiments:

  1. Establishing a stable baseline.
  2. Typical acceptable designs.
  3. Defining efficacy.
  4. Interpretation of results.

The ABAB design is a design in which there is a baseline period, a treatment period, another period of non-treatment and a period of treatment again. The multiple baseline design is a design in which there are multiple baselines across behaviours and settings.

In the case of conflicting results, the quality of the conflicting research needs to be examined (1), the direction of the majority of the research needs to be considered (2) and meta-analyses should be consulted (3). Meta-analyses could, however, obscure qualitative differences in treatment execution.

Efficacy studies are often difficult to generalize (e.g. research only uses one ethnic group) and the interaction between personality characteristics and treatment characteristics is often not studied and known.

Treatment effectiveness refers to whether treatment actually works in clinical practice. Treatment efficacy refers to whether treatment actually works in controlled settings.

It is relevant to consider the extent to which evidence from efficacy trials is relevant to the kinds of patients actually seen in clinical practice. The generalizability across therapists and settings also needs to be taken into account, as therapists in RCTs often have more training and supervision than the average therapist in a clinical setting, and these therapists can focus on a single problem (e.g. depression) using a single intervention (e.g. CBT), while this is not possible in daily clinical practice.

There is no consensus regarding whether the controlled nature of RCTs is beneficial or detrimental to treatment outcomes. The controlled nature does not allow clinicians to use their judgement to tailor the treatment to the patient. However, following protocol may yield better results.

A treatment needs to be feasible. This means that patients need to be able to adhere to the treatment. Furthermore, in order for a treatment to be feasible in clinical practice, sufficient therapists need to be able to provide that treatment to patients. The cost-effectiveness of a treatment also needs to be evaluated. The costs and benefits of a treatment need to be evaluated both in the short term and in the long term.

Dennis et al. (2009). "Why IQ is not a covariate in cognitive studies of neurodevelopmental disorders." - Article summary

Neurodevelopmental disorders are different from adult acquired disorders involving traumatic brain injury because they involve no period of normal development. Any IQ score in a neurodevelopmental disorder is confounded with the condition. The IQ score can never be separated from the effects of the condition.

Early on in intelligence testing, IQ was seen as a latent variable. Intelligence, as measured by 'g', was believed to have causal power. It was believed to be independent of test conditions and other methodological issues. However, 'g', or intelligence, varies across time and place.

ANCOVA was devised to minimize pre-existing group differences (e.g. differences in SES). However, it is not possible to treat IQ as a covariate in neurocognitive research, since it is an attribute of the disorders (e.g. ADHD). A covariate should be used when assignment to the independent variable is done randomly (1), the covariate is related to the outcome measure (2) and the covariate is unrelated to the independent variable (3). It can be used if the researcher is trying to find out the direct effect of the independent variable on the outcome variable and the covariate is spuriously related to either variable, or when it mediates the relationship between the independent and the outcome variable.

The ANCOVA has several assumptions:

  1. Homogeneity of regression
    The within-group regressions of the dependent variable on the covariate do not differ across groups.
  2. Normally distributed residuals
    The residuals should be normally distributed.
  3. Equal variance
    The variance in each group should be equal.

The presence of a relationship between the covariate and the dependent variable does not imply that the covariate mediates or moderates the relationship between the group measure and the dependent variable.
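To make the homogeneity-of-regression assumption concrete, here is a minimal sketch using Python's statsmodels on simulated data; the variable names, group labels and effect sizes are all hypothetical. The assumption is checked by testing the group × covariate interaction before interpreting the plain ANCOVA:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated data (all values hypothetical): two groups, one covariate,
# and a cognitive outcome.
n = 60
group = np.repeat(["clinical", "control"], n)
covariate = rng.normal(100, 15, 2 * n)
outcome = (0.3 * covariate
           + np.where(group == "clinical", -5.0, 0.0)
           + rng.normal(0, 5, 2 * n))
df = pd.DataFrame({"group": group, "covariate": covariate, "outcome": outcome})

# Homogeneity of regression: the group x covariate interaction should be
# negligible before the plain ANCOVA (group + covariate) is interpreted.
interaction_model = smf.ols("outcome ~ group * covariate", data=df).fit()
print(interaction_model.pvalues)        # inspect the group:covariate term

ancova = smf.ols("outcome ~ group + covariate", data=df).fit()
print(ancova.params)                    # covariate-adjusted group effect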

There is no consensus on the definition of IQ. IQ can be used as a covariate for acquired brain damage if the preinjury IQ scores are available or when preinjury IQ proxies are available.

Kazdin (2008). "New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care." - Article summary

It is unclear which treatment is the most effective in particular cases, and it is not clear which treatments have sufficient evidence to be used, as there are no guidelines for this. Evidence-based treatment (EBT) refers to interventions or techniques that have produced therapeutic change in controlled trials. Evidence-based practice (EBP) refers to clinical practice informed by evidence about interventions, clinical expertise and patient needs, values and preferences. Evidence-based practice is not limited to what researchers have studied and might therefore lack direct research evidence.

One critique of evidence-based treatment is that key conditions and characteristics of treatment research (e.g. context) are different from those of clinical practice. This makes the results difficult to generalize. Another critique is that the research tends to focus on the symptoms rather than the patient as a whole. There are also three points of criticism regarding the methods of treatment research:

  1. Criteria
    There are different criteria for determining whether a treatment is evidence-based or empirically supported. The choice implies statistical significance and not necessarily practical significance.
  2. Arbitrary rating scales
    The changes in rating scales in research are difficult to translate to changes in everyday life. The metrics are arbitrary. Clinical indices also do not necessarily reflect changes in everyday life of a patient.
  3. Mixed results
    There are often mixed results between studies. The measures that show change or do not show change in an individual study are not necessarily the same measures that show these effects between studies.

There are also several concerns about evidence-based clinical practice. One concern is clinical judgement as a way of integrating information, given its questionable reliability and validity; clinical decision making is criticized because there is no replicable method of doing it. Many critical clinical issues and concerns are not heavily researched. Furthermore, it is not clear how results in clinical practice can be generalized, because it is not always possible to generalize clinical results from one patient to the next. Another concern is determining which variables make a difference in treatment, as this is always probabilistic (i.e. clinical decision making is analogous to multiple regression). A final concern is the way in which clinical progress is evaluated: this is often done based on clinician impressions rather than systematic observations. There is a plethora of treatments available to clinicians, and not all have evidence to support them.

The goals of research are to optimally develop the knowledge base (1), provide the best information to improve patient care (2) and materially reduce the divide between research and practice (3). This can be better achieved by shifting the emphasis to give greater priority to the study of the mechanisms of therapy (1), studying the moderators of change (2) and conducting more qualitative research (3).

It is imperative to determine the mechanisms of change in order to assess treatments

Kraemer et al. (2003). "Measures of clinical significance." - Article summary

There are three important questions when assessing the relationship between variables:

  1. Statistical significance
  2. Effect size
  3. Practical significance

The practical significance (i.e. clinical significance) can be assessed by considering factors such as clinical benefit (1), cost (2) and side effects (3). In order to assess the practical significance, the strength of the association (i.e. ‘r’) (1), the magnitude of the difference between treatment and comparison (i.e. ‘d’) (2) and measures of risk potency (3) can be used. Risk potency can be assessed using the odds ratio (1), risk ratio (2), relative risk reduction (3), risk difference (4) and number needed to treat (5).

The p-value refers to the probability of finding the observed outcome, or a more extreme one, given that the null hypothesis is true. It is possible to obtain statistical significance by chance, and outcomes with lower p-values are sometimes wrongly interpreted as reflecting stronger effect sizes. A non-significant result also does not tell us anything about the truth of the null hypothesis.

There are several different effect size measures:

  1. The ‘r’ family
    This is expressing effect size in terms of strength of association (e.g. correlation).
  2. The ‘d’ family
    This is an effect size that can be used when the independent variable is binary and the dependent variable is ordered. It can be computed by subtracting the mean of the second group from the mean of the first group and dividing by the pooled standard deviation of both groups. The effect size ranges from minus to plus infinity.
  3. Measures of risk potency
    This is an effect size that can be used when both the dependent and independent variables are binary. The odds ratio and risk ratio vary from 0 to infinity, and 1 indicates no effect. Relative risk reduction and risk difference vary between -1 and 1, with 0 indicating no effect. Number needed to treat (NNT) ranges from 1 to plus infinity, with very large values indicating no treatment effect. These measures are computed in the sketch after this list.
  4. AUC
    This is an effect size that can be used when the independent variable is binary but the dependent variable is either binary or ordered. It ranges from 0% to 100% with 50% indicating no effect.
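A minimal Python sketch computing the measures above from hypothetical summary data; all counts, means and SDs are made up, and the d-to-AUC conversion assumes normally distributed outcomes:

```python
import math
from statistics import NormalDist

# Hypothetical 2x2 table (all numbers made up): relapse within one year.
treat_bad, treat_good = 15, 35    # treatment group: relapsed / not relapsed
ctrl_bad, ctrl_good = 30, 20      # control group: relapsed / not relapsed

p_t = treat_bad / (treat_bad + treat_good)   # relapse risk, treatment (0.30)
p_c = ctrl_bad / (ctrl_bad + ctrl_good)      # relapse risk, control (0.60)

odds_ratio = (treat_bad / treat_good) / (ctrl_bad / ctrl_good)   # ~0.29
risk_ratio = p_t / p_c                                           # 0.50
relative_risk_reduction = (p_c - p_t) / p_c                      # 0.50
risk_difference = p_c - p_t                                      # 0.30
nnt = 1 / risk_difference    # treat ~3.3 patients to spare one relapse

# 'd' family: standardized mean difference on a continuous outcome.
mean_t, sd_t, n_t = 12.0, 5.0, 50        # hypothetical symptom scores
mean_c, sd_c, n_c = 15.0, 6.0, 50
pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                      / (n_t + n_c - 2))
d = (mean_t - mean_c) / pooled_sd

# 'r' family: point-biserial correlation derived from d (equal-n case).
r = d / math.sqrt(d**2 + 4)

# AUC: chance that a random treated case beats a random control; under
# normality it follows directly from d.
auc = NormalDist().cdf(abs(d) / math.sqrt(2))

print(f"OR={odds_ratio:.2f} RR={risk_ratio:.2f} "
      f"RRR={relative_risk_reduction:.2f} RD={risk_difference:.2f} "
      f"NNT={nnt:.1f} d={d:.2f} r={r:.2f} AUC={auc:.0%}")
```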

Conventional standards for effect sizes are relative, and it should be noted that ‘larger than typical’ should be used rather than ‘large’. However, it might be best to find information about typical effect sizes in the context of a particular research field.

The disadvantages to the ‘d’ and the ‘r’ as measures of clinical significance are that they are relatively abstract (1), they were not intended as measures of clinical significance (2) and they are not readily interpretable in terms of how much individuals are affected by treatment (3).

There is no consensus on the externally provided standards for the clinical significance of treatments. Clinical significance could be defined as a change to normal

Funder et al. (2014). "Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice." - Article summary

There appears to be a high rate of false positives in science. The p-value refers to the conditional probability that the present data, or more extreme data, will be observed in a random sample given that the null hypothesis is true. Critiques of using the p-value include that it is often not interpreted properly (1) and that it varies with a varying sample size (2).

The effect size refers to the magnitude of the found statistical relationship. Unstandardized effect sizes are preferred when there is a clear consensus that the measurement unit is meaningful at the interval level (e.g. seconds; blood pressure). When this is not the case (e.g. in psychology), standardized effect sizes should be used. The effect size can be easily interpreted if the sample sizes are equal and the total sample size is moderate to large. However, interpretation becomes more complex if the sample sizes differ greatly.

Power refers to the probability that a true effect of a precisely specified size in the population will be detected using significance testing. It is the probability of finding an effect given that an effect of the specified size exists. The statistical power is one minus the type II error rate. The type II error rate refers to the probability that a true effect will not be detected using significance testing, i.e. that the null hypothesis is wrongly retained even though the alternative hypothesis is true.

Power should be maximized in a study. The power, however, is affected by the sample size (1), the measurement error (2) and the homogeneity of the participants (3). According to Cohen, research should use power of at least 0.8.
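A quick sketch of what Cohen's 0.8 recommendation implies in practice, using statsmodels' power module; the target effect size of d = 0.5 is an assumed, hypothetical choice:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group n needed to detect d = 0.5 with 80% power at alpha = .05.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05,
                                   alternative="two-sided")
print(round(n_per_group))  # about 64 per group

# Conversely: the power a study with 25 participants per group actually has.
power = analysis.power(effect_size=0.5, nobs1=25, alpha=0.05)
print(round(power, 2))     # roughly 0.41, i.e. badly underpowered
```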

It is problematic to focus solely on the observed p-value, because findings with equivalent p-values can have very different implications. In cases with large sample sizes, very small effect sizes can be significant, and in cases with small sample sizes, a significant effect may produce an implausibly large effect size. Therefore, there is a big difference between practical and statistical significance. A lot of research with small sample sizes is underpowered.

There are several recommendations to improve research:

  1. Describe and address choice of N and consequent issues of statistical power
    The researchers should design studies with sufficient power to detect the key effects of interest. The sample size should be justified based on the smallest effect of interest. More studies with adequate power mean that more statistical effects of interest achieve significance.
  2. Report effect sizes and 95% CIs for reported findings
    The p-values should be supplemented by effect sizes that provide information on the magnitude of the finding.
  3. Avoid questionable research practices
    Research practices that tweak the results afterwards and undermine the researcher’s ability to draw a valid conclusion should be avoided. Tweaked data leads to a larger effect size than the true effect size.
  4. Include in an appendix the verbatim wording of all independent and
Halpern, Karlawish, & Berlin (2002). "The continuing unethical conduct of underpowered clinical trials." - Article summary

Randomized controlled trials without sufficient statistical power may not adequately test the underlying hypotheses. This makes it useless and, thus, unethical. However, underpowered research continues to be conducted.

Arguments in favour of using underpowered research are that meta-analyses may save the small studies by providing a means of combining the results with those of similar studies (1) and that confidence intervals could be used to estimate treatment effects (2).

However, underpowered trials can only be properly used for interventions for rare diseases (1) and in the early phases of the development of drugs and devices (2). Trials of interventions for rare diseases need to specify that they are planning to use their results in a meta-analysis of similar interventions. In both cases, the participants must be informed that their participation may only indirectly contribute to future health care benefits.

The number of participants (1), the expected variability (2) and the chosen probability of a type-I error (3) are used to calculate statistical power. There is no consensus regarding how small an effect size can be and still be clinically significant. In such cases, the expected effect size can be used to calculate the number of participants needed for a given study. Empirical definitions of clinically meaningful effects (1) or data from earlier trials (2) should be used if they exist; if they do not, the moderate effect sizes described by Cohen (3) should be used in the sample size calculation.

Studies that contain too few participants to detect a positive effect via hypothesis testing will also yield unacceptably wide confidence intervals, which cannot properly be used to estimate the effect size of the treatment. Furthermore, problems with synthesizing the results may prevent the calculation of valid treatment effects in a meta-analysis. The ideal conditions for meta-analysis (e.g. comparable research methods among the primary trials) are often not met, which leads different meta-analyses to have different results. Lastly, underpowered trials are more likely to produce negative results and are thus less likely to be published, which reduces the probability of their being used in a meta-analysis.
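To illustrate why small trials cannot rescue themselves with confidence intervals, here is a minimal sketch of how the 95% CI for a mean difference widens as the per-group n shrinks; the numbers are illustrative only, assuming equal group sizes and unit standard deviation:

```python
import math
from scipy import stats

def ci_halfwidth(n_per_group: int, sd: float = 1.0, alpha: float = 0.05) -> float:
    """Half-width of the 95% CI for a mean difference (equal n, equal SD)."""
    se = sd * math.sqrt(2 / n_per_group)
    t_crit = stats.t.ppf(1 - alpha / 2, df=2 * n_per_group - 2)
    return t_crit * se

# In SD units: with 10 per group, the CI spans nearly a full standard
# deviation either way; with 100 per group, it is far narrower.
for n in (10, 25, 100):
    print(n, round(ci_halfwidth(n), 2))
# 10 0.94 / 25 0.57 / 100 0.28  (approximate)
```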

It is necessary, for ethical reasons, to inform the participants of an underpowered study of its limited value. However, this information is often not given to participants because investigators do not do an a priori power analysis (1), investigators do not enrol enough participants in a timely fashion (2) or investigators fear it will reduce enrolment (3).

Only prospectively designed meta-analyses can justify the risks to participants in individually underpowered trials because they provide sufficient assurance that a study’s results will eventually contribute to valuable or important knowledge. Plans for large, comparative trials of experimental intervention can justify the conduct of small studies in earlier phases of drug or device development.

 

Moore (2016). "Pre-register if you want to." - Article summary

Selective reporting refers to not publishing or reporting all research results, and it is a problem for psychological research. Many scientists do not preregister their studies because they fear it would constrain their creativity (1), it brings added scrutiny to their research reporting (2) and it is another requirement in a field which already has a lot of requirements (3).

Preregistering makes testing a theory more difficult because sometimes scientists think of the best way to test a theory after data collection. However, exploratory studies can be used for this. Preregistering is not as laborious as researchers might think.

Preregistering clarifies the distinction between confirmatory and exploratory tests. Conducting more tests and reporting fewer inflates the type-I error rate. Preregistration helps to obtain the truth. Preregistration contributes to the confidence in a published result. It makes p-values useful for their intended purposes. The risk of a false positive is higher than the reported p-value without preregistration but it is difficult to tell how much higher.
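A small simulation makes the inflation concrete. This is a minimal sketch in which the numbers of tests are chosen arbitrarily; it models a researcher who runs several independent tests on null data and reports only the best one:

```python
import numpy as np

rng = np.random.default_rng(42)

def false_positive_rate(k_tests: int, n_sims: int = 10_000) -> float:
    """Chance of at least one p < .05 among k independent null tests."""
    hits = 0
    for _ in range(n_sims):
        p_values = rng.uniform(0, 1, k_tests)  # under H0, p is uniform
        hits += p_values.min() < 0.05
    return hits / n_sims

for k in (1, 5, 20):
    print(k, false_positive_rate(k))
# roughly 0.05, 0.23, 0.64: analytically 1 - 0.95**k
```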

Gelman & Geurts (2017). "The statistical crisis in science: how is it relevant to clinical neuropsychology?" - Article summary

The statistical crisis (i.e. replication crisis) refers to the fact that statistical significance does not necessarily provide a strong signal in favour of scientific claims.

One major challenge for researchers is accepting that one's claims can be spurious. Mistakes in statistical analyses and interpretation should be accepted and learned from rather than resisted. Criticism and replication are essential steps in the scientific process, and one should not accept scientific claims at face value nor believe they are 100% true. Once an idea is integrated in the literature, it is very difficult to disprove it, even when evidence supports the rebuttal attempts. Researchers should remain very critical of their own work and, if possible, replicate their own studies.

When data analysis is selected after the data have been collected, p-values cannot be taken at face value. Published results should be examined in the context of their data, methods and theoretical support. Assessing the strength of evidence remains difficult for most researchers. Because of researcher degrees of freedom (e.g. choosing among statistical analyses), researchers have little difficulty finding statistically significant results that can be construed as part of the general constellation of findings consistent with a theory.

Statistical significance is less meaningful than originally thought because of the researcher degrees of freedom (1) and because statistically significant comparisons systematically overestimate effect sizes (2). The type M error refers to overestimating the effect size as a result of a statistically significant result.
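A small simulation illustrates the type M error. This is a minimal sketch in which the true effect, sample size and number of simulated studies are all arbitrary choices: many underpowered studies of a small true effect are simulated, only the significant ones are kept, and their average estimate is compared with the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_d, n_per_group, n_sims = 0.2, 25, 20_000
significant_estimates = []
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)          # control group
    b = rng.normal(true_d, 1.0, n_per_group)       # treatment group
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:                                   # publication filter
        significant_estimates.append(b.mean() - a.mean())

# The exaggeration ratio: how much the significant estimates overstate
# the true effect on average (here roughly threefold).
exaggeration = np.mean(np.abs(significant_estimates)) / true_d
print(f"mean |significant estimate| is {exaggeration:.1f}x the true effect")
```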

There have been several ideas to resolve the replication crisis:

  1. Science communication
    This entails not restricting publication to statistically significant results but also publication of replication attempts. Furthermore, there should be communication between disagreeing researchers and a detailed method section in order to replicate a study.
  2. Design and data collection
    This entails focusing on preregistration (1), design analysis using prior estimates of effect sizes (2), more attention to accurate measurement (3) and replication plans included in the original designs (4).
  3. Data analysis
    This includes making use of Bayesian inference (1), hierarchical modelling of outcomes (2), meta-analysis (3) and control of error rates (4).

The replication crisis appears to be the result of a flawed scientific paradigm rather than the result of a set of individual errors.

The problem of multiple comparisons should be taken into account, and an a priori power analysis should be conducted. However, when the power analysis is based on previous effect size estimates, caution is needed, as published effect sizes are often overestimated.

The issue of knowledge translation is a general issue in science, as it is not clear whether case studies can be generalized to the population and whether the observed effect in group studies is sufficiently large to have clinical implications for each individual of a specific group.

There are five major challenges in assessing experimental evidence within clinical neuropsychology:

Kazdin & Weisz (1998). "Identifying and developing empirically supported child and adolescent treatments." - Article summary

There are several characteristics of therapy with children and adolescents:

  1. Dysfunction is difficult to assess
    The problematic behaviour may represent short-lived problems or perturbations in development rather than signs of lasting clinical impairment.
  2. Identifying cases is problematic
    Youth rarely refer themselves to treatment, which means that externalizing problems are overrepresented in treatment.
  3. Dependence on adults
    The dependence of children on adults makes them vulnerable to multiple influences over which they have little control (e.g. living circumstances; parental mental health). Therefore, the family context needs to be addressed in treatment as well.
  4. Social environment and treatment
    The social environment plays a very important role for children, which means that receiving treatment alone (i.e. without a peer, parent or sibling) is often not possible.
  5. Methodological challenges
    It is not clear whether self-report is an appropriate measure for young children, and other methods may be flawed when used with youth (e.g. standardized assessment methods).
  6. Heterogeneity of samples
    The studies that are conducted typically have very heterogeneous samples, which makes interpretation of the results difficult.

Many emotional and behavioural problems that are treated in therapy are often evident in less extreme forms in early development. Treatments have shown beneficial effects in the treatment of children.

Most therapy studies focus on non-referred cases (1), provide relatively brief treatments conducted in group format (2), evaluate treatment in relation to symptom reduction and neglect impairment or adaptive functioning (3), do not evaluate the clinical significance of symptom changes (4) and do not conduct a follow-up (5).

Diverse differences among age groups (e.g. language skills) indicate that treatments with similar general features must differ in numerous specific details when applied in different developmental periods. This leads to a classification dilemma (i.e. what cut-off to use).

A study needs to meet the following criteria to be a good study:

  1. Replicable treatment processes.
  2. Uniform therapist training and therapists adhering to the planned procedures.
  3. Random assignment.
  4. Use of clinical samples.
  5. Multimethod outcome assessment.
  6. Tests of clinical significance.
  7. Tests of treatment effects on real-world, functional outcomes.
  8. Assessment of long-term outcomes.

It is likely that dysfunctional anxiety becomes a self-perpetuating cycle of elevated biological response to stress, debilitating cognitions and avoidance of stressful circumstances. CBT appears to be effective for child anxiety.

Depressed children are seen as subject to schemas and cognitive distortions that cast everyday experience in an unduly negative light and as lacking important skills needed to generate supportive social relationships and regulate emotion through daily activity.

Coping skills training (CST) appears to be effective in the treatment of depression for children. It includes structured homework assignments as well as peer or therapist modelling. The mediators and differential effectiveness relative to alternative, simpler treatments still need to be tested.

Cognitive processes refer to a broad class of constructs that

Kahn (2011). "Multilevel modelling: Overview and applications to research in counselling psychology." - Article summary

Multilevel modelling (MLM) is becoming the standard for analysing nested data.

The unit of analysis problem refers to situations in which a moderator variable exists at a different level than the independent and dependent variables (e.g. the moderator variable is at the level of the course itself while the independent and dependent variables are at the individual level). Multilevel modelling (MLM), or hierarchical linear modelling, is designed to analyse data that exist at different levels.

When individuals exist in natural groups such as schools, there is a hierarchical or nested structure. Ignoring the nested structure of the data can have adverse consequences. Person-level analyses with nested data ignore the fact that individuals sharing a common environment are more similar to each other than they are to individuals from another environment. Nested data may thus violate the assumption of independence of observations. This leads to an increase in type-I errors due to underestimated standard errors.

In a typical regression analysis, the slope and the intercept are fixed, as the same parameters apply to each case in the sample. However, these parameters may vary as a function of group membership in the case of nested data, creating the need for a different analysis.

The formula for the level one multilevel model is the following:

Yij = β0j + β1j(Xij − X̄j) + rij

The letter i refers to the level 1 unit (e.g. individuals). The letter j refers to the level 2 unit (e.g. group). Yij is the level 1 dependent variable (e.g. an individual score in a given group). Xij is the score on the level 1 independent variable. This formula makes use of a form of centring, as the group mean X̄j is subtracted from the score on the level 1 independent variable. β0j is the intercept for a given group (e.g. the predicted level of a score in a particular group). This can be interpreted as the mean level of the dependent variable for all people in a particular group. β1j is the slope for a given group. It is the predicted increase in the dependent variable per 1-unit increase in the independent variable within a given group. rij is the residual at the individual level.

The level two model addresses what the average intercept and slope across groups are (i.e. fixed effects) (1), how much intercepts and slopes vary across groups (i.e. random effects) (2) and how useful group-level variables are for predicting group intercepts and slopes (3). The level two model uses the standard formulas, where Wj is a group-level predictor:

β0j = γ00 + γ01Wj + u0j
β1j = γ10 + γ11Wj + u1j

The level two model describes differences between groups and not between individuals within the groups. The first formula represents the level two model for the group intercepts.
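As an illustration, here is a minimal sketch of fitting such a two-level model in Python with statsmodels; the data are simulated, all variable names and values are hypothetical, and γ00 and γ10 appear as the fixed intercept and slope:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

# Simulated nested data (hypothetical): 20 groups of 15 individuals, with
# group-varying intercepts (beta_0j) and slopes (beta_1j).
groups = np.repeat(np.arange(20), 15)
x = rng.normal(0, 1, groups.size)
intercepts = rng.normal(10, 2, 20)[groups]   # beta_0j
slopes = rng.normal(0.5, 0.3, 20)[groups]    # beta_1j
y = intercepts + slopes * x + rng.normal(0, 1, groups.size)

df = pd.DataFrame({"y": y, "x": x, "group": groups})
# Group-mean centring of the level 1 predictor, as in the formula above.
df["x_c"] = df["x"] - df.groupby("group")["x"].transform("mean")

# Random intercept and random slope for the centred predictor.
model = smf.mixedlm("y ~ x_c", data=df, groups=df["group"], re_formula="~x_c")
result = model.fit()
print(result.summary())  # fixed effects (gammas) and random-effect variances
```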

Robinaugh, Hoekstra, Toner, & Borsboom (2020). "The network approach to psychopathology: A review of the literature 2008-2018 and an agenda for future research." - Article summary

The network model of psychopathology states that mental disorders can be conceptualized and studied as causal systems of mutually reinforcing symptoms. This holds that symptoms are not passive indicators of a latent common cause of the disorder but agents in a causal system.

The causality hypothesis states that when causal relations among symptoms are strong, the onset of one symptom will lead to the onset of others. The connectivity hypothesis states that a strongly interconnected symptom network is vulnerable to a contagion effect of spreading activation through the network. Widespread symptom activation as a result of an external stressor can lead to symptoms persisting when the initiating stressor is removed. The centrality hypothesis states that highly central symptoms have greater potential to spread symptom activation throughout the network than do symptoms on the periphery. The comorbidity hypothesis states that symptoms can occur in multiple disorders and that some symptoms can thus bridge different disorders.

A mental disorder is characterized by both the state and the structure of the network (i.e. a mental disorder is characterized by a state of harmful equilibrium). The boundary between health and disorder will vary as a function of network structure. In weakly connected networks, activation varies dimensionally. However, in strongly connected networks, activation within the system rapidly leads to a state of psychopathology (i.e. more discrete rather than continuous).

The momentary perspective states that symptoms are aggregates of moment-to-moment experiences. According to this perspective, these experiences constitute the true building blocks of psychopathology. This highlights the importance of understanding the chronometry of experiences, symptoms and disorders.

The assumptions of the network model currently do not always align with how disorders are believed to operate.

There is a conditional positive manifold for most disorders. This means that symptoms of a disorder tend to be positively interconnected, even after controlling for shared variance among symptoms. This suggests meaningful clustering of symptoms in the syndromes we call mental disorders. Connectivity tends to be consistent across time and demographic groups. However, differences have been observed across countries.

Greater connectivity (i.e. network density) may confer risk for psychopathology. This is based on the fact that there appears to be greater connectivity between symptoms in people with more severe mental disorders. It is also possible that greater connectivity leads to disorder persistence. However, there is no consensus regarding these topics. There is some evidence that the connectivity of negative mood state networks is associated with psychopathology but minimal evidence that broader networks of momentary experience exhibit such associations.
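Connectivity (density) and node strength (defined in the next paragraph as the summed absolute weight of a node's direct links) are straightforward to compute from a network's weighted adjacency matrix. A toy numpy sketch, with entirely made-up edge weights between four symptoms:

```python
import numpy as np

# Toy weighted symptom network (hypothetical weights), e.g. from a
# partial-correlation network: symmetric matrix with zero diagonal.
W = np.array([[0.0, 0.3, 0.1, 0.0],
              [0.3, 0.0, 0.4, 0.2],
              [0.1, 0.4, 0.0, 0.0],
              [0.0, 0.2, 0.0, 0.0]])

# Node strength: summed absolute weight of each node's direct links.
strength = np.abs(W).sum(axis=1)
print(strength)  # the second symptom is the most central here

# Global connectivity: mean absolute weight over distinct node pairs.
n = W.shape[0]
connectivity = np.abs(W[np.triu_indices(n, k=1)]).mean()
print(connectivity)
```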

Node strength refers to the summed absolute strength of a node's direct links. Non-DSM symptoms often exhibit elevated centrality (e.g. feeling disliked in depression), and some DSM nodes are weakly connected to the network. It is not clear whether the symptoms which the DSM identifies as especially important are more central than less important DSM symptoms. The DSM most likely has not captured all symptoms of a disorder and has not

De Vent et al. (2016). "Advanced neuropsychological diagnostics infrastructure (ANDI): A normative database created from control datasets." - Article summary

In the advanced neuropsychological diagnostics infrastructure (ANDI), datasets of several research groups are combined into a single database. It contains scores on neuropsychological tests from healthy participants. This allows for accurate neuropsychological assessment, as the quantity and the range of the data surpass most traditional normative data. It facilitates normative comparison methods (e.g. those in which entire profiles of scores are evaluated).

An important element of neuropsychological practice is to determine whether a patient who presents with cognitive complaints has abnormal scores on neuropsychological tests. In the diagnostic process, a number of neuropsychological tests are administered and the test results of a patient are compared to a normative sample.

Scores of patients are typically compared to normative data published in the manuals of an instrument. However, this data may be outdated (1), it may lack norms for very old populations (2), some tests do not have any norms (3), normative scores are often only corrected for age but not for other demographic variables (4) and normative data are often collected for one test at a time (5). This results in univariate but not multivariate data being available, while multivariate normative comparison methods are more sensitive to deviating profiles of test scores.
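To show what a multivariate normative comparison adds, here is a minimal sketch that compares a whole profile of test scores to a normative sample via the Mahalanobis distance. The data are simulated, and the chi-square reference is a simplification that ignores the uncertainty in the estimated norms (a Hotelling-type test would account for it):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical normative data: 500 healthy controls on 4 correlated tests.
norms = rng.multivariate_normal(mean=[50, 50, 50, 50],
                                cov=np.eye(4) * 100 + 40, size=500)
mean = norms.mean(axis=0)
cov = np.cov(norms, rowvar=False)

# A patient's profile of 4 test scores (made up).
patient = np.array([38, 41, 55, 36])

# Squared Mahalanobis distance of the profile from the normative centroid,
# compared against chi-square with k degrees of freedom (k = number of tests).
diff = patient - mean
d2 = diff @ np.linalg.inv(cov) @ diff
p = stats.chi2.sf(d2, df=len(patient))
print(f"D^2 = {d2:.1f}, p = {p:.3f}")  # small p: profile deviates from norms
```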

There are several benefits of the ANDI database:

  1. More appropriate norms
    The ANDI database may provide more appropriate norms because the data have been collected over a long period of time and are easily updated (i.e. an internet-based database) (1), there is a lot of data on older participants (2), the data come from representative participants in different countries (3), the scores are corrected for demographic variables (e.g. sex) (4), age is treated as a continuous rather than an arbitrarily discretized variable (i.e. age groups) (5) and the norms are based on a large sample (6).
  2. Multivariate data
    The ANDI database consists of multivariate data, as many participants completed multiple tests. This allows for multivariate comparisons, which have increased sensitivity to detect cognitive impairment.
  3. Exportable infrastructure
    The software of the ANDI database is freely available for researchers to do their own research. This allows them to add to the existing ANDI database or create their own.

Limitations of the ANDI database are that it is not necessarily based on a random sample (1) and that some samples had lenient inclusion criteria, so participants were not necessarily free of pathology (2).

     

Maric et al. (2015). "Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: An SPSS method to analyze univariate data." - Article summary

Single-case experimental designs (SCED) are useful methods in clinical research to investigate individual client progress. They can also be used to determine whether an intervention works. In single-case experimental designs, a single participant is repeatedly assessed on one or multiple symptoms during various phases (e.g. baseline). The statistical analysis of this data is difficult.

Advantages of single-case experimental designs are that they can be used to test novel interventions before RCTs are conducted (1), they may be the only way to investigate treatment outcomes in heterogeneous groups (2) and they offer the possibility to systematically document the knowledge of researchers and clinicians, preventing loss of information (3).

The most common single-case experimental design is the AB design. It consists of two phases (i.e. baseline and treatment). It is similar to an interrupted time series. In order to obtain an adequate analysis of the differences, the overall pattern in the time series should be modelled adequately (1) and there should be adequate modelling of potential correlations between residuals (2).

A common assumption of adequate models is that there is a linear function for both the baseline and the treatment phase. Both of these linear functions should be described by an intercept and a slope. Modelling of potential correlations between residuals means that there should be adequate modelling of the residuals after the overall pattern has been accounted for. The correlation between residuals of the observations is the autocorrelation. It implies that the residuals are not independent. If residuals are correlated, the correlations are likely to decrease with increasing separation between timepoints.

The tests on the intercepts and the slopes of the linear functions will be unreliable if the correlations between the residuals are not modelled adequately (e.g. incorrectly assuming that the residuals are uncorrelated). If positively correlated residuals are assumed to be uncorrelated, the chances of finding significant results will be too high.

Modelling the overall pattern by the intercept and the slope for each phase (i.e. each timepoint) does not yield a direct test of intercept and slope differences. Therefore, it is useful to re-parameterize the model in terms of an intercept and slope for the baseline phase and baseline-treatment differences in the intercepts and slopes. This takes the following formula:

Y(i) = b0 + b1·Phase(i) + b2·Time_in_phase(i) + b3·Phase(i)·Time_in_phase(i) + E(i)

Y(i) denotes the outcome variable score at time point i. Phase(i) denotes the phase in which time point i is contained. Time_in_phase denotes time points within each phase. E(i) denotes the residual at time point i.

The parameter b0 is interpreted as the baseline intercept. b1 is interpreted as the treatment-baseline difference in intercepts. b2 is interpreted as the baseline slope, and b3 is interpreted as the treatment-baseline difference in slopes. These parameters can also be interpreted as effect sizes.

The b0 and b1 refer to symptom scores when time_in_phase is zero. Therefore, when coding the variables, time_in_phase zero should denote the start of each phase. However, when time_in_phase zero
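A minimal Python sketch of this re-parameterized AB model on made-up scores; the article itself uses SPSS, so this is only an analogous illustration using plain OLS, with a comment noting where autocorrelation-aware estimation would differ:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical AB data: 10 baseline and 10 treatment observations.
scores = [7, 8, 7, 9, 8, 8, 9, 8, 8, 9,      # baseline (A)
          8, 7, 6, 6, 5, 5, 4, 4, 3, 3]      # treatment (B)
phase = [0] * 10 + [1] * 10                  # 0 = baseline, 1 = treatment
time_in_phase = list(range(10)) * 2          # restarts at 0 in each phase

df = pd.DataFrame({"y": scores, "phase": phase, "t": time_in_phase})

# y = b0 + b1*phase + b2*t + b3*phase*t + e, matching the model above:
# b1 = intercept difference, b3 = slope difference.
# Plain OLS assumes uncorrelated residuals; with autocorrelation, a GLS
# model with AR(1) errors (e.g. statsmodels' GLSAR) would be preferable.
ols = smf.ols("y ~ phase * t", data=df).fit()
print(ols.params)   # b0 (Intercept), b1 (phase), b2 (t), b3 (phase:t)
```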
