Evidence-based Clinical Practice – Full course summary (UNIVERSITY OF AMSTERDAM)
When a large number of tests is performed, the probability of a false positive on each individual test does not change. The tests taken together, however, have an inflated probability of at least one false positive (i.e. an inflated type-I error rate). This is called the multiple comparison problem.
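To see how quickly this error rate inflates, here is a minimal Python sketch, assuming m independent tests each conducted at alpha = .05:

```python
# Probability of at least one false positive among m independent
# tests, each conducted at alpha = .05.
alpha = 0.05
for m in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:>3} tests: P(>= 1 false positive) = {fwer:.2f}")
# 1 test: 0.05, 5 tests: 0.23, 20 tests: 0.64, 100 tests: 0.99.
# A Bonferroni correction would test each hypothesis at alpha / m.
```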
A study needs to have enough power. Moreover, statistical significance says nothing about practical significance: almost any result becomes significant when the sample size is sufficiently large. A very large sample will almost always yield significant results, and this tells us nothing about practical relevance. This is partly due to the arbitrary cut-off for the p-value (p < .05), which is not adjusted for the sample size. The p-value is therefore not indicative of practical relevance.
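A small simulation can illustrate this point (the sample sizes and the group difference below are made up; the underlying difference corresponds to a trivial d of 0.05):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two populations that differ by a trivial amount (d = 0.05).
for n in (50, 500, 50_000):
    a = rng.normal(0.00, 1, n)
    b = rng.normal(0.05, 1, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>6} per group: p = {p:.4f}")
# With n = 50_000 per group the trivial difference comes out
# "significant", even though d = 0.05 has no practical relevance.
```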
Practical significance can be assessed by considering factors such as clinical benefit (1), cost (2) and side-effects (3). This requires the effect size and the risk potency.
The absence of evidence does not imply evidence of absence: not finding an effect does not mean there is no effect. The study may simply be underpowered. Furthermore, small, meaningless differences can be significant while large, meaningful differences are not.
Power is the probability of finding an effect when there actually is one (i.e. the probability of correctly rejecting a false null hypothesis). It is mostly considered when the null hypothesis is not rejected, but it should also be considered when the null hypothesis is rejected.
Power also affects the capacity to interpret the p-value. Unless statistical power is very high, the p-value shows wide sample-to-sample variability, so evaluating a study by its p-value alone is misleading. A study with 50% power (roughly the standard in psychology) has less than a 50% chance of replicating its significant result, which contributes to the replication crisis. In short, the p-value does not convey the strength of evidence against the null hypothesis.
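As a sketch of how power, sample size and effect size relate, one can use the power routines in statsmodels (the effect size, alpha and sample size below are example values, not values from the course):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.5 at alpha = .05
# with 80% power:
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n:.0f}")

# Conversely, the power of a small study (n = 20 per group) for the
# same effect size:
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=20)
print(f"power with n = 20 per group: {power:.2f}")
```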
P-values are typically only meaningful with large samples. Therefore, it is useful to look at the effect size, which refers to how strong the effect of an intervention is. It corresponds to the degree of non-overlap between the sample distributions (1) and the probability that one could guess which group a person came from based only on their test score (2).
Typical methods of denoting the effect size are Cohen’s d (1), Hedges’ g (2) and Pearson’s r (3).
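A minimal sketch of how these measures are computed, using made-up scores (the d-to-r conversion at the end assumes equal group sizes):

```python
import numpy as np

treat   = np.array([5.1, 6.0, 5.5, 7.2, 6.3])   # hypothetical scores
control = np.array([4.0, 5.2, 4.8, 5.1, 4.4])

n1, n2 = len(treat), len(control)
# Pooled standard deviation of the two groups
sp = np.sqrt(((n1 - 1) * treat.var(ddof=1) +
              (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))

d = (treat.mean() - control.mean()) / sp     # Cohen's d
g = d * (1 - 3 / (4 * (n1 + n2) - 9))        # Hedges' g: small-sample correction
r = d / np.sqrt(d**2 + 4)                    # d -> r conversion (equal group sizes)

print(f"d = {d:.2f}, g = {g:.2f}, r = {r:.2f}")
```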
An intervention compared to a placebo requires an effect size of about 0.8, whereas an intervention compared to another intervention requires about 0.5. Effect size estimates are heavily inflated in small samples, which is why large samples are required.
The less the two distributions overlap, the larger the effect size. However, the effect size of an intervention does not tell us how many people recovered after the intervention.
Effect sizes for discrete outcomes (e.g. recovered or not) should be interpreted within clinical norms for health. They make use of the odds ratio (1), the number needed to treat (2) and the area under the curve (AUC) (3). These effect sizes are more pertinent to clinical significance. Their disadvantage is that they are very sensitive to the limits set by the researchers (e.g. the cut-off point).
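A sketch with a hypothetical 2x2 outcome table (the counts are invented for illustration):

```python
# Hypothetical 2x2 outcome table (counts invented for illustration):
#                recovered   not recovered
# treatment          40            60
# control            25            75
a, b = 40, 60   # treatment group
c, d = 25, 75   # control group

odds_ratio = (a / b) / (c / d)              # (40/60) / (25/75) = 2.0

p_treat = a / (a + b)                        # recovery rate, treatment
p_ctrl  = c / (c + d)                        # recovery rate, control
arr = p_treat - p_ctrl                       # absolute risk reduction
nnt = 1 / arr                                # number needed to treat

print(f"OR = {odds_ratio:.2f}, ARR = {arr:.2f}, NNT = {nnt:.1f}")
# NNT ~ 6.7: treat about 7 people to get one extra recovery. All of
# these numbers shift if the recovery cut-off is set differently.
```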
The disadvantages of the r and d effect sizes are that they are relatively abstract (1), not intended as measures of clinical significance (2) and not readily interpretable in terms of how much individuals are affected by treatment (3).
Dichotomizing continuous data leads to a loss of information (1), arbitrary effect size indexes (2) and inconsistent effect size indexes (3). This is mostly due to the cut-off used to define failure (e.g. "treatment is not effective").
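A small simulation makes this concrete (the cut-offs and distributions below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
treat = rng.normal(0.5, 1, 10_000)   # continuous outcomes, true d = 0.5
ctrl  = rng.normal(0.0, 1, 10_000)

# Dichotomize "success" at three arbitrary cut-offs and compare the
# resulting relative success rates.
for cut in (-1.0, 0.0, 1.0):
    rr = (treat > cut).mean() / (ctrl > cut).mean()
    print(f"cut-off {cut:+.1f}: relative success rate = {rr:.2f}")
# The same underlying effect (d = 0.5) yields very different
# dichotomized effect sizes depending on where the cut-off is placed.
```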
There are several other types of effect sizes (see the sketch after this list):
- Risk: the probability that the intervention group does worse than the control group.
- Individual improvement: how many individuals improved or deteriorated.
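A minimal sketch of the "risk"-type effect size, estimated by comparing all treatment-control pairs on made-up scores:

```python
import numpy as np

rng = np.random.default_rng(2)
treat = rng.normal(0.5, 1, 200)   # hypothetical outcome scores
ctrl  = rng.normal(0.0, 1, 200)

# Probability that a randomly chosen treated person scores higher
# than a randomly chosen control person; the complement is the
# "risk" that the treated person does worse.
p_superior = (treat[:, None] > ctrl[None, :]).mean()
print(f"P(treatment > control) = {p_superior:.2f}")
print(f"risk of doing worse    = {1 - p_superior:.2f}")
```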
A meta-analysis gives a summary of the literature and assesses which variables explain the differences between studies. It is important for clinical practice, as there is often no time to read all the individual papers. There are two choices when summarizing the literature:
Meta-regression makes use of regression analysis. In a regression analysis, the intercept (i.e. b0) is the value that is expected when all independent variables are 0. If there are no independent variables, the model looks as follows:

y(i) = b0 + e(i)
In this case, y(i) is the effect size of study i, b0 is the overall effect and e(i) is the error. If an independent variable x(i) is added, the model looks as follows:

y(i) = b0 + b1·x(i) + e(i)
In this case, b1 (i.e. the slope) tells us how the independent variable changes the expected effect size (e.g. the overall effect size decreases as the independent variable decreases).
There are several steps in a meta-regression:
There are three methods to do a meta-regression:
- Ordinary (unweighted) regression, which ignores how reliable each study is.
- Fixed-effect meta-regression, which weights studies by their reliability but ignores the variance between studies.
- Random-effects meta-regression, which takes both into account.
If the reliability of the studies is not taken into account in a meta-regression, each study is treated as equally reliable, so unreliable studies get an unduly large influence. If the variance between studies is not taken into account, results too often come out significant, because every study is treated as if random variation between studies did not exist.
The random-effects meta-regression is therefore the best option for a meta-analysis. However, it is important to note that a meta-regression is not an experiment, so causal conclusions cannot be drawn. Meta-regression may highlight moderators of intervention success which have not been investigated directly.
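As a sketch of the difference between these weighting schemes, assuming made-up study-level effect sizes, variances and a moderator (real software estimates the between-study variance tau2 from the data, e.g. via DerSimonian-Laird; here it is simply assumed):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical study-level data: effect size (e.g. Cohen's d) per
# study, its sampling variance, and a moderator (e.g. number of
# treatment sessions). All numbers are invented for illustration.
effect   = np.array([0.55, 0.40, 0.80, 0.30, 0.65])
var      = np.array([0.04, 0.02, 0.09, 0.01, 0.05])
sessions = np.array([8, 12, 6, 16, 10])

X = sm.add_constant(sessions)  # column of ones for b0 plus the moderator

# Fixed-effect meta-regression: weight each study by 1/variance,
# so more precise (reliable) studies count more.
fixed = sm.WLS(effect, X, weights=1.0 / var).fit()

# Random-effects version: add a between-study variance tau2 to each
# study's sampling variance before weighting. tau2 is assumed here;
# real software estimates it from the data.
tau2 = 0.02
random_fx = sm.WLS(effect, X, weights=1.0 / (var + tau2)).fit()

print(fixed.params)      # b0 (overall effect) and b1 (moderator slope)
print(random_fx.params)
```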