Evidence-based Clinical Practice – Full course summary (UNIVERSITY OF AMSTERDAM)
There are several ethical guidelines in science:
An overpowered study exposes many people to a potentially ineffective intervention, because it requires a very large sample. In an educational setting, there should be a culture of getting it right rather than of finding significant results (1), students should be taught transparent data reporting (2), methodological instruction (e.g. on confidence intervals) should be improved (3) and junior researchers who seek to conduct proper research should be encouraged (4).
An a-priori power analysis addresses the question of how many participants are needed in a study. If a study has sufficient power, then non-significant results are also informative. The power of the test affects the capacity to interpret the p-value: the p-value exhibits wide sample-to-sample variability and does not indicate the strength of evidence against the null hypothesis unless the statistical power is high.
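An a-priori power analysis of this kind can be sketched in a few lines. The function below uses the standard normal approximation for a two-sided, two-sample comparison (the exact t-test requires one or two participants more per group); the effect size d, alpha and power values are the conventional illustrative defaults, not values from the course.

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants per group for a two-sided two-sample test,
    using the normal approximation: n = 2 * ((z_alpha + z_beta) / d) ** 2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96 for alpha = .05
    z_beta = nd.inv_cdf(power)           # e.g. 0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect (d = 0.5) -> 63 per group
```

Note how quickly the required sample grows as the expected effect shrinks: halving d roughly quadruples the required n.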
Arguments in favour of underpowered studies are that meta-analyses can still produce useful results by combining them, and that confidence intervals can be used to estimate treatment effects. However, underpowered studies are more likely to produce non-significant results and are thus more likely to disappear into the file drawer. Furthermore, the ideal conditions for meta-analysis are often not met, so a meta-analysis cannot always save smaller, underpowered studies.
Underpowered studies are only justifiable for interventions for rare diseases and in the early phase of the development of drugs and devices. For rare-disease interventions it is important to specify in advance that the results will be included in a meta-analysis. Ethically, it is also necessary to inform participants that a study is underpowered. This does not happen often because there is no a-priori power analysis (1), researchers do not enrol enough participants in time (2) or researchers fear it would reduce enrolment (3).
Only prospectively designed meta-analyses can justify the risks to participants in individually underpowered studies, because they provide enough assurance that a study's results will eventually contribute to valuable or important knowledge.
The p-value is the probability of observing the present data, or data more extreme, in a random sample given that the null hypothesis is true. However, the p-value varies with sample size and is often misinterpreted. Unstandardized effect sizes should be used when there is clear consensus that the measurement unit is meaningful at interval level (e.g. seconds; blood pressure).
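The sample-to-sample variability of the p-value is easy to demonstrate by simulation. The sketch below repeatedly draws samples from a population with a genuine medium effect (d = 0.5, n = 30, a one-sample z-test with known sigma; all values chosen for illustration) and shows how widely the resulting p-values spread:

```python
import random
from statistics import NormalDist, mean

random.seed(1)
nd = NormalDist()

def one_sample_p(n=30, true_d=0.5):
    """Two-sided p-value of a one-sample z-test (sigma known and equal to 1)."""
    xs = [random.gauss(true_d, 1) for _ in range(n)]
    z = mean(xs) * n ** 0.5
    return 2 * (1 - nd.cdf(abs(z)))

ps = sorted(one_sample_p() for _ in range(1000))
# Median p-value versus the 5th and 95th percentiles across identical studies:
print(round(ps[500], 4), round(ps[50], 6), round(ps[950], 4))
```

Even though every simulated study samples the same true effect, the p-values range from vanishingly small to clearly non-significant, which is exactly why a single p-value is a poor measure of evidential strength.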
The probability of replicating a significant result (alpha of 0.05) is only about 50% if the power is 50%. This is one of the major reasons for the replication crisis.
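This roughly 50% figure can be derived directly. If the original result was just significant (z equal to the critical value) and the observed effect is taken as the true effect, an exact same-sized replication has about a 50% chance of reaching significance again. A minimal sketch of that calculation, using a two-sided z-test:

```python
from statistics import NormalDist

nd = NormalDist()
z_crit = nd.inv_cdf(0.975)  # 1.96, two-sided alpha = .05

def replication_power(z_obs):
    """Chance that a same-sized exact replication is significant, taking the
    observed z statistic as the true noncentrality (so true effect = observed)."""
    return (1 - nd.cdf(z_crit - z_obs)) + nd.cdf(-z_crit - z_obs)

print(round(replication_power(z_crit), 3))  # just-significant original -> 0.5
```

Only clearly significant originals (z well above 1.96) give an exact replication a high chance of success, which is why replications are usually planned with larger samples than the original study.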
A replication study might need to use a smaller effect size than the one originally reported (1), set a minimum effect size below which an effect is deemed too small to matter (2) and use a power level higher than 0.80 (3). Without these steps, the probability of finding a significant result drops steadily; taking them enhances the credibility and usefulness of a replication study.
Power is the probability that a true effect of a precisely specified size in the population will be detected using significance testing. It depends on the combination of sample size and the specified effect size, and is affected by sample size (1), measurement error (2) and homogeneity of participants (3). In the usual power diagram, power corresponds to the area under the alternative distribution beyond the critical value; beta is the probability of a type-II error, so power equals 1 minus beta.
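The relation between power and beta can be made concrete with the same normal approximation used for sample-size planning. The function below computes the power of a two-sided two-sample test for a given per-group n and standardized effect d (illustrative values; the exact t-test gives marginally lower power):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def power_two_sample(n, d, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample test
    with n participants per group and standardized effect size d."""
    ncp = d * math.sqrt(n / 2)           # noncentrality of the z statistic
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # Probability that the test statistic falls beyond either critical value:
    return (1 - nd.cdf(z_crit - ncp)) + nd.cdf(-z_crit - ncp)

p = power_two_sample(64, 0.5)
print(round(p, 2), "beta =", round(1 - p, 2))  # 0.81 beta = 0.19
```

With a true null effect (d = 0) the same formula returns alpha, which is a useful sanity check on any power calculation.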
Selective reporting refers to not publishing and reporting all research results. Many researchers do not preregister their studies because they feel it would constrain their creativity (1), add scrutiny to their research reporting (2) and be yet another requirement in a field already full of requirements (3).
Preregistration may make theory testing more difficult, as researchers may only think of the best way to test a theory after data collection. However, exploratory analyses remain possible in preregistered studies, and preregistration clarifies the distinction between confirmatory and exploratory tests. One can place more confidence in the results of preregistered studies (1), preregistration helps obtain the truth (2) and it allows the p-value to be used for its intended purpose (3).
Moderation concerns for whom an intervention works best; mediation concerns which processes are important during an intervention. The degrees of freedom depend on the effect: for a main effect, they are typically the number of groups minus one.
A moderation effect states that a treatment effect depends on variables that are themselves independent of treatment (e.g. sex; intelligence). A simple moderation analysis (i.e. with a categorical moderator) consists of an ANOVA with the potential moderator variables as factors. If the interaction between the independent variable and the moderator variable is significant, then there is a moderation effect. A moderation analysis typically requires centring, which is transforming a variable into deviations around a fixed point (e.g. the grand mean). Centring makes the b-values (regression coefficients) for the lower-order effects (e.g. main effects) interpretable.
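For a categorical moderator, the interaction tested by the ANOVA is just the difference between the treatment effects at each moderator level. A minimal sketch with invented cell data (a 2x2 design: treatment versus control, moderator levels A and B; all numbers are made up for illustration):

```python
from statistics import mean

# Hypothetical scores per cell of a 2x2 design (condition x moderator level).
cells = {
    ("control", "A"):   [4.1, 3.9, 4.0],
    ("treatment", "A"): [6.2, 5.8, 6.0],  # treatment effect about +2 in level A
    ("control", "B"):   [4.0, 4.2, 3.8],
    ("treatment", "B"): [4.6, 4.4, 4.5],  # treatment effect about +0.5 in level B
}

m = {cell: mean(scores) for cell, scores in cells.items()}
effect_A = m[("treatment", "A")] - m[("control", "A")]
effect_B = m[("treatment", "B")] - m[("control", "B")]

# The interaction contrast: a nonzero value means the treatment effect
# differs across moderator levels, i.e. moderation.
interaction = effect_A - effect_B
print(round(effect_A, 2), round(effect_B, 2), round(interaction, 2))
```

In a real analysis the ANOVA additionally tests whether this interaction contrast is larger than expected from sampling error alone.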
A moderation analysis with a continuous moderator requires the moderator to be treated as a covariate. All independent variables must be standardized, and the complete model, including interactions, must be specified in the ANCOVA. A significant interaction indicates a moderation effect. A simple slopes analysis compares the relationship between the predictor and the outcome at low and high levels of the moderator.
Mediation helps to explain which mechanisms underlie intervention effects. Mediators are affected by the treatment, rather than affecting the treatment. Mediation occurs when the relationship between an independent and a dependent variable can (partially) be explained by a third variable (i.e. the mediator). In a mediation analysis, both the direct effect and the indirect effect (i.e. the effect through the mediator) are of interest.
The indirect effect refers to the effect of the independent variable on the dependent variable via the mediator. This is typically tested using the Sobel test. A significant Sobel test indicates that there is a mediation effect.
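The Sobel test divides the indirect effect (the product of the two paths) by its approximate standard error. A minimal sketch, where a is the path from predictor to mediator, b the path from mediator to outcome, and the numeric estimates and standard errors are invented for illustration:

```python
import math
from statistics import NormalDist

def sobel_z(a, se_a, b, se_b):
    """Sobel z statistic for the indirect effect a*b:
    z = a*b / sqrt(b^2 * se_a^2 + a^2 * se_b^2)."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return (a * b) / se_ab

# Hypothetical path estimates and standard errors (illustrative only).
z = sobel_z(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
print(round(z, 2), round(p, 3))
```

A p-value below alpha here would be taken as evidence of mediation, although the Sobel test assumes the product a*b is normally distributed, which is only approximately true in small samples.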
The R-squared is used to assess the fit of the linear model and can also be computed for the indirect effect, where it gives the proportion of variance explained by the indirect effect. A negative R-squared for the indirect effect indicates a suppression effect.