Sampling error:
Also called “chance variability”: variability in findings that is due to chance.
Hypothesis testing:
Reason: Data are ambiguous → the sample means differ, but we do not know why.
Goal: Find out whether the difference is big or small, in other words whether it is statistically significant.
Sampling distributions:
What degree of sample-to-sample variability can we expect in the data?
Tells us what variability we can expect under certain conditions (e.g. if the population means are equal).
Can also be done with other measures of variability (e.g. the range).
Sampling distribution of differences between means:
Describes the distribution of differences between sample means.
Standard error:
The standard deviation we expect for a statistic when it is measured repeatedly across samples.
Theory of Hypothesis Testing
- Reporting statistical significance alone is no longer sufficient (p < .05)
→ Need to inform the reader about power, confidence limits and effect size
- Try to find out whether the observed difference in sample means is likely if the samples were drawn from populations with equal means
Process:
1. Set up the research hypothesis. E.g.: parking takes longer if someone watches
2. Collect random samples under the 2 conditions
3. Set up Ho = null hypothesis = the population means of the 2 samples are equal
4. Calculate the sampling distribution of the difference between the 2 means under the condition that Ho is true
5. Calculate the probability of a mean difference at least as large as the one obtained
6. Reject or fail to reject Ho (rejecting only means we assume Ho is not true – it is not proven false!)
1. Research Hypothesis
2. Collect random sample
3. Set up null hypothesis
4. Sampling distribution under Ho=true
5. Compare sample statistic to distribution
6. Reject or retain Ho
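A minimal sketch of the whole process in code (the parking-time numbers are made up for illustration, not from the book):

```python
# Steps 2-6 of the process with a two-sample t-test.
# Hypothetical parking times (seconds), with and without an observer.
from scipy import stats

watched = [42.1, 39.5, 48.3, 44.0, 51.2, 46.8, 40.9, 45.5]  # step 2: samples
alone   = [36.4, 38.2, 41.0, 33.7, 39.9, 35.1, 37.8, 40.2]

# Steps 3-5: under Ho (equal population means) the t statistic has a known
# sampling distribution; p = probability of data at least this extreme given Ho.
t, p = stats.ttest_ind(watched, alone)

# Step 6: reject or fail to reject Ho at the 5% level.
print(f"t = {t:.2f}, p = {p:.4f}, reject Ho: {p < .05}")
```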
Null hypothesis:
- Usually the opposite of the research hypothesis
→ set up in order to be disproven (because we can never prove a hypothesis, only disprove its opposite).
Statistical conclusions
- Fisher:
- Options are to reject Ho or to suspend judgement about it.
→ If Ho cannot be rejected, judgement about it has to be suspended.
(e.g. the school experiment continues)
- Neyman-Pearson:
- Options are to reject Ho or to accept it as true.
→ If Ho cannot be rejected, Ho is considered true until disproven.
(e.g. the school experiment stops, until the evidence has to be reconsidered)
Conditional Probabilities:
- Confusion between the probability of the hypothesis given the data and the probability of the data given the hypothesis.
→ p = .045 is the probability of the data given that Ho is true → p(D | Ho)
Test Statistics
Sample statistics:
- Descriptives (mean, range, variance, correlation coefficient)
- Describe characteristics of the samples
Test statistics:
- Statistical procedures with their own sampling distributions (t, F, χ²)
Decisions about the Null-Hypothesis
Rejection level / significance level:
- The sample score falls inside the extreme 5% of the assumed distribution → the rejection region
→ If it falls there, the likelihood that such findings are due to chance is at most 5%
→ Therefore the result is statistically significant
Type I and Type II Errors
Type I: (Jackpot Error)
- Rejecting Ho when it is actually true
→ The probability of making this error is expressed as alpha (α)
→ With α = .05 we will make this error 5% of the time (when Ho is true)
Type II:
- Failing to reject Ho when it is actually false
→ The probability of making this error is expressed as beta (β)
→ How often we make this error depends on the size of the rejection region
- Less Type I error = more Type II error (trade-off)
Power:
If β is smaller – e.g. because the distance between the population means is bigger – the probability of correctly rejecting a false Ho increases → more power.
One and Two Tailed Tests
One tailed / directional test:
- tests only one direction (tail) of the distribution, at the 5% level
Two tailed / nondirectional test:
- tests both the negative and the positive tail, at 2.5% per tail
- Reasons: no clue what the data will look like;
researchers cover themselves in case the prediction was wrong;
one-tailed tests are hard to define (if more than two groups)
→ try to keep the risk of falsely claiming significance low.
2 Questions to deal with any new statistic
1. How and with what assumption is the statistic calculated?
2. What does the statistic´s sampling distribution look like under Ho?
→ compare the obtained statistic to that distribution
Alternative view of hypothesis testing
Traditional way:
- Null hypothesis: µ1 = µ2; alternative: µ1 ≠ µ2 (two tailed)
According to Jones, Tukey and Harris
- 3 possible conclusions
1. µ1 < µ2
2. µ1 > µ2
3. µ1 = µ2
- Conclusion 3 is ruled out, because two population means are never exactly equal. So we test both directions at the same time. This allows us to keep 5% levels at both ends of the distribution, because we simply discard whichever direction does not apply.
Basic Concepts of Probability
1.2 Basic Terminology and Rules
2.0 Discrete vs Continuous Variables
Analytic view: The common definition of probability. An event can occur in A ways and fail to occur in B ways.
→ all possible ways are assumed equally likely (definite probability, e.g. 50%)
Probability of occurrence: A/(A+B) → p(blue)
Probability of failure to occur: B/(A+B) → p(green)
Frequentist view: Probability is the limit of the relative frequency of occurrence
→ a die will land on any given side approx. 1/6 of the time over many throws (proportions)
Subjective probability: An individual's subjective estimate (opposite of the frequentist view)
→ basis for the use of Bayes' theorem
→ usually disagrees with the general hypothesis-testing orientation
1.2 Basic Terminology and Rules
Event: The occurrence of “something”
Independent events: A set of events that have no effect on each other's occurrence
Mutually exclusive events: The occurrence of one event precludes the occurrence of the alternative event.
Exhaustive events: All possible outcomes (e.g. of a die) are considered.
Theorem: Rule
(Sampling with replacement: Before drawing a new sweet (occurrence), the old draw is replaced.)
Additive law of probability: (events must be mutually exclusive)
The probability that one of several mutually exclusive events occurs is equal to the sum of their separate probabilities.
p(blue or green) = p(blue) + p(green) = .24 + .16 = .40
→ one outcome (occurrence)
Multiplicative rule: (events must be independent)
The probability of their joint (successive / co-) occurrence is the product of the individual probabilities.
p(blue, blue) = p(blue) * p(blue) = .24 * .24 = .0576
→ minimum 2 outcomes (occurrences)
Joint probability: Probability of the co-occurrence of two or more events
- If the events are independent, p can be calculated with the multiplicative rule
- If they are not independent, the procedure is more complicated (not given in the book)
Denoted as: p(A, B) → p(blue, green)
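A quick check of the two laws with the notes' numbers (p(blue) = .24, p(green) = .16):

```python
# The two probability laws applied to the candy example above.
p_blue, p_green = 0.24, 0.16

# Additive law (mutually exclusive events): p(blue or green)
print(p_blue + p_green)    # 0.40

# Multiplicative rule (independent events, sampling with replacement):
# joint probability of two successive blue draws, p(blue, blue)
print(p_blue * p_blue)     # 0.0576
```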
Conditional probability: The probability that an event occurs given that another event has occurred.
→ hypothesis testing: if Ho is true, the probability of this result is …
→ conditional can be read as: if … is true, then …
Denoted as: p(A | B) → p(Aids | drug user)
2.0 Discrete vs Continuous Variables
Discrete variable: Can take on only specific values → 1, 2, 3, 4, 5
→ Probability distribution:
proportions translate directly into probabilities
→ can be read off the ordinate (Y-axis) – relative frequency
Continuous variable: Can take on infinitely many values → 1.234422, 2.234, 4 …
→ A variable in an experiment can be treated as continuous if it is measured on at least an ordinal scale (e.g. IQ)
Density: the height of the curve at point X
→ Probability distribution:
The likelihood of one specific score is not useful, because p(X = exactly 2) is essentially zero for a continuous variable; an observation is far more likely to be something like 2.1233
→ Measure an interval instead: e.g. 1.5 – 2.5
→ The area under the curve over a defined interval, a to b, = our probability → use distribution tables (later chapters)
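A small sketch of “probability = area over an interval”, assuming for illustration a normal variable with mean 2 and sd 1:

```python
# p(a <= X <= b) as the area under the density between a and b.
from scipy.stats import norm

dist = norm(loc=2, scale=1)                 # illustrative distribution
p = dist.cdf(2.5) - dist.cdf(1.5)           # area over the interval 1.5-2.5
print(f"p(1.5 <= X <= 2.5) = {p:.3f}")
```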
Contents
6.0 Basics for Chi-Square Tests
6.1 Chi-Square Distribution
6.2 Chi-Square Goodness-of-Fit Test – One-Way Classification
6.2.1 Tabled Chi-Square Distribution
6.3 Two Classification Variables: Contingency Table Analysis
6.3.2 Correcting for Continuity (for 2 x 2 tables when expected frequencies are small)
6.3.3 Fisher's Exact Test (another test, besides the chi-square test)
6.12 Kappa – Measure of Agreement
6.13 How to Write Down Findings – see book!
6.0 Basics for Chi-Square tests
Measurement data (also quantitative data): each observation represents a score on a continuum (e.g. mean and st. dev. apply)
Categorical data (also frequency data): data consist of the frequencies of observations that fall into 2 or more categories → remember frequency tables
Chi-square χ²: 2 different meanings: 1. a mathematical distribution that stands on its own, or 2. (Pearson's chi-square) a statistical test whose result is distributed in approximately the same way as χ²
Assumptions of the chi-square test: observations need to be independent of each other
+ the aim is to test the independence of variables (significance of findings)
6.1 Chi-Square Distribution
Chi-square distribution:
f(χ²) = [1 / (2^(k/2) · Γ(k/2))] · χ²^[(k/2) − 1] · e^(−χ²/2)
Explanation:
Gamma function: a generalised factorial.
When the argument of gamma (k/2) is an integer, Γ(k/2) = [(k/2) − 1]!
→ gamma functions are needed because the argument is not always an integer
- Chi-square has only one parameter, k (unlike two-parameter distributions with µ and σ)
- Everything else is either the constant e or another value of χ²
(- χ²₃ is read as “chi-square with 3 degrees of freedom” = df (explained later))
6.2 Chi-Square Goodness of Fit Test – One-way Classification
Chi-square test: - based on the χ² distribution
- can be used for one-dimensional tables and two-dimensional (contingency) tables
- test statistic: χ² = Σ (O − E)² / E, where O = observed and E = expected frequencies
!!!! Beware: We need large expected frequencies: the χ² distribution is continuous and cannot provide a good approximation if the expected frequencies, which are discrete, are too small.
→ Minimum should be: expected frequency ≥ 5, otherwise the test has low power to reject Ho.
(e.g. flipping a coin only 3 times cannot sensibly be compared with the χ² distribution, because the frequencies are just too small)
Nonoccurrences: have to be included in the table. We cannot analyse variables for which only the occurrences are recorded.
Goodness-of-fit test: tests whether the deviations of observed scores from expected scores are big enough to question whether they arose by chance – a significance / independence test.
Observed frequency: the actual data collected
Expected frequency: the frequency expected if Ho were true.
6.2.1 Tabled Chi-Square Distribution
We have obtained a value for χ² and now have to compare it to the χ² distribution to get a probability, so we can decide whether our χ² is significant (reject Ho) or not.
For this we use the tabled distribution of χ²:
it depends on df = degrees of freedom → df = k − 1 (number of categories − 1)
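A minimal goodness-of-fit sketch with scipy (the die-roll counts are hypothetical):

```python
# Goodness-of-fit on made-up die rolls: are all six faces equally likely?
from scipy.stats import chisquare

observed = [18, 22, 16, 25, 14, 25]   # 120 rolls (hypothetical)
# With no f_exp given, scipy assumes equal expected frequencies (120/6 = 20),
# which is exactly the Ho of a fair die; df = k - 1 = 5.
chi2, p = chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```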
6.3 Two Classification Variables: Contingency Table Analysis
We want to know whether one variable is contingent or conditional on a second variable.
We do this by using a contingency table.
Expected cell frequency: E = (row total × column total) / N (the row and column totals are the marginal totals)
See also: the formula for the joint occurrence of independent events (chapter 5)
Now continue with the calculation of χ² to determine the significance of the findings.
To assess whether our χ² is significant, we first calculate the degrees of freedom, df = (rows − 1)(columns − 1), to know where to look in the χ² distribution table.
6.3.2 Correcting for Continuity (for 2 x 2 tables + expected frequency is small)
Yates's correction for continuity: reduce the absolute value of each numerator term (O − E) by 0.5 before squaring
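A contingency-table sketch with scipy (hypothetical 2 x 2 counts; correction=True applies Yates's correction, which scipy uses for 2 x 2 tables by default):

```python
# 2 x 2 contingency table analysis with Yates's continuity correction.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],    # hypothetical counts
                  [20, 40]])
chi2, p, df, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.4f}")
print("expected frequencies:\n", expected)  # (row total * column total) / N
```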
6.3.3 Fisher's Exact Test (another test, besides the chi-square test)
Fisher's Exact Test: Is mentioned, but I think not exam material. If it is, I will update the summary.
6.12 Kappa - Measure of Agreement
Kappa (κ): a statistic that measures interjudge agreement using contingency tables (not based on chi-square) → a measure of reliability
→ corrects for chance agreement
1. First calculate the expected frequencies for the diagonal cells (the cells in which the judges agree = relevant)
2. Apply the formula: κ = (Σ O_diagonal − Σ E_diagonal) / (N − Σ E_diagonal). Result = kappa
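A minimal kappa calculation from a hypothetical agreement table, computed by hand with numpy (here via the equivalent proportion form κ = (p_observed − p_expected) / (1 − p_expected)):

```python
# Kappa from a judges-agreement table (hypothetical counts;
# rows = judge 1, columns = judge 2, diagonal = agreement).
import numpy as np

table = np.array([[20,  5],
                  [ 3, 22]])
N = table.sum()
p_observed = np.trace(table) / N                      # proportion of agreement
row, col = table.sum(axis=1), table.sum(axis=0)
p_expected = (row * col).sum() / N**2                 # agreement expected by chance
kappa = (p_observed - p_expected) / (1 - p_expected)  # corrects for chance
print(f"kappa = {kappa:.3f}")                         # 0.680 here
```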
6.13 How to write down findings – see book !!!!
Contents
7.1 Sampling Distribution of the Mean
7.2 Testing Hypotheses about Means – σ (pop. standard deviation) known (usually not the case)
7.3 Testing a Sample Mean vs Pop Mean when σ is unknown – The One-Sample t-Test
7.1 Sampling Distribution of the Mean
Function:
- Used for measurement or quantitative data (instead of categorical data)
- To analyse difference between groups of subjects or relationship between 2+ variables
Sampling distribution of the mean: the sampling distribution with the mean as the statistic of interest, instead of any other statistic
Central Limit Theorem: the basis for setting up the sampling distribution of the mean
- If the population is skewed, sample sizes of n = 30+ are needed to approximate a normal distribution
Uniform (rectangular) distribution: mean = range / 2, standard dev = range / √12
If we take samples from this population, the sampling distribution will approximate a normal distribution better with sample sizes of n = 30 than with n = 5
+ the higher the sample size, the lower the standard deviation of the sampling distribution
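A small simulation of this (uniform population on 0–100; the sample sizes follow the n = 5 vs n = 30 comparison above):

```python
# Simulating the central limit theorem for a uniform (rectangular) population.
import numpy as np

rng = np.random.default_rng(0)
for n in (5, 30):
    means = rng.uniform(0, 100, size=(10_000, n)).mean(axis=1)
    # Theory: sd of the sampling distribution = (range / sqrt(12)) / sqrt(n)
    print(f"n={n:2d}: sd of sample means = {means.std():.2f}, "
          f"theory = {100/np.sqrt(12)/np.sqrt(n):.2f}")
```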
7.2 Testing Hypotheses about Means – σ (pop. standard deviation) known (usually not the case)
- We can do this by using the z score and z table (but not if we do not have the σ of the population)
- Usually we do not know the variance of the population we take samples from
- t-tests are designed for this scenario
- The central limit theorem states:
If we take samples from a population with mean µ (e.g. µ = 50), the sampling distribution of the mean has mean µ, variance = σ²/n and standard dev = σ/√n
Standard error: the standard deviation of the sampling distribution → σ/√n
Applied in practice:
!!!!! To test a sample mean vs a population mean using t-tests, the sampling distribution needs to approximate a normal distribution !!!!!
7.3 Testing a Sample Mean vs Pop Mean when σ is unknown – The One-Sample t-Test
- σ is not known → it has to be estimated using the sample standard deviation (replace σ with s)
- z becomes t → we can no longer use z tables but use Student's t distribution
- If we used z anyway, we would get too many significant results, and thus make more than 5% Type I errors
(rejecting Ho even though it is true)
Sampling distribution of s²:
- the t-test uses s² as an (unbiased) estimate of σ²
- Problem: the sampling distribution of s² is positively skewed (with small samples, values of s² below σ² are more likely than values above it)
- Because s² then tends to underestimate σ², the t value obtained with s is likely to be larger than the z value obtained with σ
t-statistic formula:
t = (X̄ − µ) / (s / √n)
Remember: the t statistic can only be compared to the population mean if the sample size is big enough
→ because the sampling distribution needs to be approximately normal
Student's t distribution:
- Works with degrees of freedom (df): n − 1 (number of observations in the sample − 1)
- Because the deviations in s² = Σ(X − X̄)² / (n − 1) must satisfy Σ(X − X̄) = 0, one value is already determined once the other values are known → hence n − 1 df
- The skewness disappears as the df / sample size increases
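A minimal one-sample t-test sketch (hypothetical scores, Ho: µ = 50):

```python
# One-sample t-test: sample mean vs a hypothesized population mean.
from scipy import stats

sample = [52.1, 48.7, 55.3, 49.8, 53.0, 51.4, 47.9, 54.2, 50.6, 52.8]
t, p = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t:.2f}, df = {len(sample) - 1}, p = {p:.4f}")
```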
7.4 Confidence intervals
- Given to convey meaning of experimental results beyond the hypothesis test
Point estimate: A specific estimator of a parameter. E.g. sample mean is an estimate of pop. mean
Confidence interval: Interval estimates that describe the probability that the true pop. mean is included in them
- we want to know how big or small the pop. mean can be without us rejecting it.
Confidence limits: Borders of the confidence interval
Method: Rearrange the formula for the one-sample t-test and solve it not for t but for µ.
General formula for confidence intervals: CI = X̄ ± t(α/2, n−1) · s / √n
Confidence intervals visualised: see the figure in the book.
How to identify extreme cases (when population estimates are unreliable)
- apply this corrected formula, because the sample size is small and the sampling distribution of the variance is therefore skewed
- Remember: the problem with small samples is that we may otherwise calculate a disproportionately large z score
- So instead of using z scores to decide whether the score is unlikely (as learned in the first course), we use the corrected t formula: the standard deviation term is made bigger, so that the t value will be smaller.
!!! works with degrees of freedom: n − 1 !!!
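A sketch of the 95% CI from the rearranged t formula (same hypothetical scores as the t-test example above):

```python
# 95% CI for mu: mean +/- t(.025, n-1) * s / sqrt(n)
import numpy as np
from scipy import stats

x = np.array([52.1, 48.7, 55.3, 49.8, 53.0, 51.4, 47.9, 54.2, 50.6, 52.8])
n = len(x)
se = x.std(ddof=1) / np.sqrt(n)          # estimated standard error
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-tailed 5% -> .975 quantile
lower, upper = x.mean() - t_crit * se, x.mean() + t_crit * se
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")
```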
7.5 Other
Bootstrapping: Done to estimate the variability of any sample statistic over repeated sampling
- Sampling with replacement from obtained data, instead of from population
Contents
7.4 Hypothesis Tests Applied to Means – Two Matched Samples
7.5 Hypothesis Tests Applied to Means – Two Independent Samples
7.6 Heterogeneity of Variance: The Behrens-Fisher Problem
7.4 Hypothesis Tests applied to Means – Two matched samples
Matched samples: (also: repeated measures, related samples, correlated samples, paired samples or dependent samples.) The same subjects respond on two occasions. One set of scores always tells you something about the other set, because they are matched.
Matched-sample t-test: tests the difference between the means
(the difference scores should be independent of one another → you may plot the points to check this)
- Set up Ho: µ1 = µ2
- Scores may be combined into difference or gain scores: D = X1 − X2 (p. 199, 7.3), and Ho can be formulated as µD = µ1 − µ2 = 0
- Create the t-test for these difference scores: t = D̄ / (sD / √n)
- Calculate df = n − 1 (number of pairs − 1)
Missing data: 2 ways of dealing with this: 1. Exclude the cases with missing values
2. Create a t-test with only the complete pairs and one with the cases that have a missing score, then combine and compare these with special tables.
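A minimal matched-sample sketch (hypothetical before/after scores):

```python
# Matched-sample t-test: same subjects measured twice; equivalent to a
# one-sample t-test on the difference scores D = X1 - X2 against 0.
from scipy import stats

before = [12, 15, 11, 18, 14, 16, 13, 17]
after  = [10, 14, 12, 15, 11, 14, 12, 15]
t, p = stats.ttest_rel(before, after)   # df = number of pairs - 1
print(f"t = {t:.2f}, p = {p:.4f}")
```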
7.5 Hypothesis Tests applied to Means – Two independent samples
Sampling distribution of differences between means:
- We sample independently from each population
- The sum or difference of two independent normally distributed variables is itself normally distributed
- The variances should be equal: σ²1 = σ²2 = σ²
- (remember, however, that t-tests are robust = more or less unaffected by small departures from the assumptions)
Variance sum law: The variance of a sum or difference of two independent variables is equal to the sum of their variances.
!!! The variances of the 2 samples have to be equal or at least similar !!!
(e.g. before an experiment, we always check that the samples are as similar as possible, so that we may attribute differences to our experiment and not to error variance)
If the sample sizes differ → use pooling (see below)
Formula of the variance sum law, applied to the sampling distribution of mean differences:
σ²(X̄1 − X̄2) = σ²1/n1 + σ²2/n2
- Two independent variables are combined into the sampling distribution of mean differences.
- Standard error of differences between means (its standard deviation): √(σ²1/n1 + σ²2/n2)
- If the population σ is known → use the z score and z table.
- If the population σ is not known → use the t score and t table (with df).
- t-test statistic of the sampling distribution of mean differences (with pooling): under Ho, µ1 − µ2 = 0, so that term may be dropped from the formula.
Pooling of variances (used when sample sizes differ) + (only when the variances are homogeneous)
- Step 1: Weighted average of s²1 and s²2 → weight by degrees of freedom
- Step 2: Pooled variance estimate: s²p = [(n1 − 1)s²1 + (n2 − 1)s²2] / (n1 + n2 − 2)
Don't forget: use n1 + n2 − 2 df in the t table.
Degrees of freedom: because we estimate two variances, we lose 1 df for each, thus subtract 2
- this only applies to independent samples (example calculations: p. 211, p. 216)
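A sketch of the pooled-variance t-test written out step by step (hypothetical data with unequal n):

```python
# Pooled-variance t-test, matching the formulas above.
import numpy as np
from scipy import stats

x1 = np.array([23, 27, 25, 30, 28, 26, 24, 29, 31, 27])
x2 = np.array([21, 24, 22, 25, 23, 26])
n1, n2 = len(x1), len(x2)

# Weighted average of the two sample variances (weights = degrees of freedom):
sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t = (x1.mean() - x2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))
p = 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)   # two-tailed, n1 + n2 - 2 df
print(f"t = {t:.2f}, p = {p:.4f}")
# Same result as: stats.ttest_ind(x1, x2, equal_var=True)
```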
7.6 Heterogeneity of Variance: The Behrens-Fisher Problem
Heterogeneous variances: use t′ → not necessarily distributed on n1 + n2 − 2 df in the t table
- Behrens-Fisher problem: they tried to create a table for this distribution, but could not calculate the t for high degrees of freedom
- Welch-Satterthwaite solution: df′ (the df are estimated and rounded to the nearest integer) → df′ is bounded as: min(n1 − 1, n2 − 1) ≤ df′ ≤ (n1 + n2 − 2)
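In scipy the Welch-Satterthwaite version is just a flag (same hypothetical data as above):

```python
# Welch's t-test for heterogeneous variances: equal_var=False makes scipy
# use the Welch-Satterthwaite df' instead of n1 + n2 - 2.
from scipy import stats

x1 = [23, 27, 25, 30, 28, 26, 24, 29, 31, 27]
x2 = [21, 24, 22, 25, 23, 26]
t, p = stats.ttest_ind(x1, x2, equal_var=False)
print(f"t' = {t:.2f}, p = {p:.4f}")
```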
Testing for heterogeneity: test the difference between the variances of our samples, s²1 and s²2
- by replacing each value of X with its absolute deviation from its group mean:
d_ij = |X_ij − X̄_j|
- or by the squared deviation:
d_ij = (X_ij − X̄_j)²
- Then run a normal two-sample t-test on the d_ij values
- If t turns out to be significant, we may conclude that the 2 samples differ in their variances
Testing for homogeneity: run a test for homogeneity (not yet covered?)
- If the variances are homogeneous, the variance estimates may be pooled.
7.0 Confidence Intervals
One sample case
Solve t formula for µ instead of t
Two sample case
- Solve t formula for µ instead of t (like in the one sample case)
- Use difference between the means and standard error of differences between means instead of mean or SE of mean
7.1 Effect size
- Used when we examine differences between 2 related measures.
- Confidence limits on effect size based on previous research are biased (narrower confidence limits than the true ones)
- because only significant findings are published
Effect size d reports the difference in standard-deviation units.
One sample case
Estimate of d (as in the example from the book, p. 204): d̂ = (X̄ − µ) / s
Two sample case: d̂ = (X̄1 − X̄2) / s (with s the pooled standard deviation)
8.0 Power
Power: the probability of correctly rejecting a false Ho. More power = higher probability of rejecting a false Ho
Power = 1 − β
(See Figure 8.4 in the book.)
Factors affecting power
- Alpha (α). The larger α, the more power
- Distance between means. The larger the difference under H1, the bigger the power
- Sample size (n). If n increases, the standard error decreases → the overlap between the sampling distributions decreases, thus higher power
- Variance (σ²). If σ² decreases → the overlap between the sampling distributions decreases, thus higher power
→ the variance of the sampling distribution is tied to the sample size, because σ²X̄ = σ² / n
Calculation of power
Because the overlap is the determinant of power, we may use Cohen's d to assess how far the means differ, and thus infer power from the size of d.
- 3 methods to estimate d: 1. Prior research findings
2. A personal assessment of what difference would be important
3. Use of Cohen's table
Combine effect size (d) with sample size (n) → find delta (δ) and look up the power (see the sketch below)
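A sketch of the power calculation via δ, using the normal approximation (d, n and α are made-up values; δ = d·√n holds for the one-sample case, δ = d·√(n/2) for two equal groups):

```python
# Power from effect size d and sample size n via the noncentrality
# parameter delta (two-tailed alpha = .05, normal approximation).
import numpy as np
from scipy.stats import norm

d, n, alpha = 0.5, 25, 0.05            # hypothetical values
delta = d * np.sqrt(n)                 # one-sample case
z_crit = norm.ppf(1 - alpha / 2)
power = 1 - norm.cdf(z_crit - delta)   # ignoring the tiny lower tail
print(f"delta = {delta:.2f}, power = {power:.2f}")   # ~0.71 here
```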
Estimating the required sample size
8.1 Noncentrality parameter δ
Summary:
- If Ho is true, t is distributed around zero
- If Ho is not true, t is distributed around δ (the noncentrality parameter) → δ expresses the degree to which Ho is wrong
8.2 Retrospective Power
A priori power: power calculated before an experiment, based on estimated population parameters (means, variances, correlations, proportions)
Retrospective (or post-hoc) power: calculated after the experiment, e.g. with the G*Power tool (p. 244)
Purpose: helps to design future research and to evaluate studies in the literature (meta-analysis)
18.0 Recap
Parametric tests
t-test: - uses the sample variance as the estimate of the population variance → assumption that the population from which the sample is drawn is normal.
Non-parametric tests / Distribution free tests
- Fall under the resampling tests (base conclusions on drawing a large number of samples under the assumption that Ho is true) → then they compare the obtained sample result with the resampled results
- Some resampling procedures deal with raw scores, rather than with ranks
→ Bootstrapping + randomization tests
→ Used when we are uncertain about assumptions (e.g. normal distribution of the population)
→ Used also when we do not have good parametric tests (e.g. a confidence interval on a median)
Advantages | Disadvantages
- Require only general assumptions | - Lower power
- Are sensitive to medians rather than means | - Less specific
- Unaffected by outliers |
→ Ho is usually that the 2 populations are symmetric or have a similar shape
Bootstrapping: used when we are interested in e.g. the median, whose sampling distribution and SE cannot be derived analytically
→ procedure is with replacement
Permutation tests: → procedure without replacement
Rank-randomization tests: Wilcoxon's test and permutation tests (draw every possible permutation only once)
18.1 Bootstrapping
Use: - The population distribution is not normal or is unknown
- To estimate population parameters rather than test hypotheses
- When we want a confidence interval for a statistic other than the mean (e.g. the median)
- sampling is done with replacement
18.2 Bootstrapping with one sample
Finding a 95% confidence interval (example, p. 661)
- Assumption: the population distribution looks like the sample distribution
- Draw a large number of samples of n = 20 under this assumption
- Determine which values encompass 95% → sort the medians and cut off the lowest and highest 2.5%
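A minimal bootstrap sketch of exactly this procedure (the 20 data points are made up):

```python
# Bootstrap 95% CI for the median (sampling with replacement from the data).
import numpy as np

rng = np.random.default_rng(0)
data = np.array([3, 7, 8, 5, 12, 14, 21, 13, 18, 9,
                 4, 6, 11, 16, 10, 7, 8, 15, 12, 5])   # hypothetical, n = 20

# Draw many resamples of size n from the data itself (with replacement!),
# take each resample's median, then cut off the lowest/highest 2.5%.
medians = np.median(rng.choice(data, size=(10_000, len(data))), axis=1)
lower, upper = np.percentile(medians, [2.5, 97.5])
print(f"bootstrap 95% CI for the median: [{lower}, {upper}]")
```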
18.6 Wilcoxon´s Rank-Sum Test
Use: analogue to the t-test, but it tests a broader Ho.
→ Ho = the 2 samples are drawn at random from identical populations (not just populations with the same mean)
→ if Ho is rejected, this suggests that the 2 populations differ in central tendency
How it works:
- Assign ranks to the observations of the 2 independent samples (ranked together)
- Sum the ranks for each sample → W test statistic
- Check against the W table whether it is significant or not
Now compare the Ws of the smaller sample!!! to the W table, which shows the smallest value that can be expected by chance if Ho is true.
- The scores of the small sample can also be large, which means that if Ho is false, the sum of the ranks would be larger than the chance expectation instead of smaller.
- For that case, calculate Ws′ = 2W̄ − Ws, where 2W̄ = n1(n1 + n2 + 1)
- Use Ws′ or Ws (whichever is smaller) to compare against the table.
- Two-tailed test: double the value of α
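scipy does not expose Ws directly, but its Mann-Whitney U test is equivalent (U and W differ only by a constant; see “Mann-Whitney U Statistic” below). A sketch with hypothetical scores:

```python
# Rank-based comparison of two independent samples via Mann-Whitney U,
# which is equivalent to Wilcoxon's rank-sum test.
from scipy.stats import mannwhitneyu

group1 = [12, 15, 9, 20, 17, 14]
group2 = [8, 11, 13, 7, 10, 6, 9]
u, p = mannwhitneyu(group1, group2, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```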
The normal approximation
The Ws distribution approaches normal as the sample sizes increase.
Parameters of the Ws distribution: mean = n1(n1 + n2 + 1) / 2, variance = n1·n2(n1 + n2 + 1) / 12
We can use z, because Ws is then approximately normally distributed:
z = [Ws − n1(n1 + n2 + 1)/2] / √[n1·n2(n1 + n2 + 1)/12]
- Use z to calculate the true probability of obtaining a Ws as low as the one we got.
Example (p.672)
Treatment of Ties
- When the data contain tied scores, a test that relies on ranks is distorted
- Assign the ranks in such a way that Ho becomes hard to reject (the conservative choice)
Mann-Whitney U Statistic
- A competitor of Wilcoxon's test
- U and W differ only by a constant
- U and W can therefore be converted into each other, and the W table can be used
18.7 Wilcoxon´s Matched-Pairs Signed-Ranks Test
- Used when the sample scores do not appear to reflect a normally distributed population
- The nonparametric analogue of the t-test for matched samples
- Tests the Ho that the distribution of difference scores (in the population) is symmetric about zero
How to use:
- Calculate the difference scores
- Rank all differences without regard to sign
- Sum the positive and the negative ranks separately
- The smaller sum (ignoring sign) is the test statistic T
- Evaluate it against the T table
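A minimal sketch with scipy (hypothetical paired scores; for the two-sided test, scipy's statistic is the smaller of the two rank sums, i.e. T):

```python
# Wilcoxon matched-pairs signed-ranks test on hypothetical paired scores.
from scipy.stats import wilcoxon

before = [12, 15, 11, 18, 14, 16, 13, 17]
after  = [10, 14, 12, 15, 11, 14, 12, 15]
t_stat, p = wilcoxon(before, after)   # ranks the differences internally
print(f"T = {t_stat}, p = {p:.4f}")
```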
Ties
- If a difference score is 0, eliminate that participant from consideration
- Assign tied ranks
The normal Approximation
- With large sample sizes, T is approximately normally distributed
18.8 The Sign Test
- Gains even more freedom from assumptions than the Wilcoxon test
- but loses power
How to
- Give each difference score a + or − sign
- Count the signs and calculate the probability of an outcome at least that extreme with binomial distribution tables, e.g. p(13 or more + signs out of 16)
- Alternatively, use a χ² (chi-square) test (p. 678)
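A sketch of the binomial step, using the notes' example of 13 pluses out of 16 (binomtest needs scipy ≥ 1.7; older versions used stats.binom_test):

```python
# Sign test via the binomial distribution: under Ho, + and - differences
# are equally likely (p = .5).
from scipy.stats import binomtest

result = binomtest(k=13, n=16, p=0.5, alternative="two-sided")
print(f"p = {result.pvalue:.4f}")
```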