How do we measure the quality of diagnostics? - Chapter 2

Psychodiagnostics consists of three parts:

  1. diagnostic theories or frames of reference;
  2. the description of these theories in models from test theory and statistics;
  3. tests.

There are three types of frames of reference, each based on a different idea of how (problematic) behavior is best understood and explained:

  • Individual differences. In this context, differences between individuals are considered. This is the best frame of reference for the diagnostician to use.
  • Development. In this context, behavior is considered in terms of its development over time.
  • Context. According to this framework, behavior can only be understood and explained through the causes that change or maintain it: stimuli, events and interventions. This is also called the social context. This frame of reference is also good to use, but here the diagnostician must determine which behavior is to be measured.

What is the quality of the frames of reference?

Determining the quality of frames of reference is done on the basis of four criteria:

  • Have the elements and relationships from the theory been tested and what is the result?
  • Has the theory been written down in such a way that testing is possible?
  • Has the theory become a source of inspiration for empirical research?
  • Has research been conducted into practical applications of the theory and what is the result of this research?

According to Van der Werff, the trait approach, biopsychology and orthodox social learning theories are the best frames of reference to use in diagnostics. The trait approach in particular is useful for the diagnostician; it has also yielded many intelligence and personality tests. The biopsychological approach and the social learning theories belong to the context approach. Van der Werff places parts of psychoanalysis, such as ego psychology, in the middle group. The underperformers are psychoanalytic, humanistic and existential psychology: the ideas in these frames of reference often cannot be tested.

How is the quality of the models measured?

There are several models from test theory and statistics that are used to describe the central parts of theories and constructs. There is classical test theory and modern test theory (item response theory, IRT). IRT relates observable behavior to a latent trait (such as emotional stability or mathematical ability). It describes the probability that a person with a certain value on the latent trait will answer "yes" to an item or will solve a task correctly.
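
To make this concrete, below is a minimal sketch (not taken from the book) of the one-parameter logistic (Rasch) model, one of the simplest IRT models: the probability of a correct or "yes" response depends on the difference between the person's position on the latent trait and the difficulty of the item. The trait and difficulty values are made up for the example.

```python
import numpy as np

def rasch_probability(theta, difficulty):
    """Probability of a correct/'yes' response under the Rasch (1PL) model.

    theta      -- person's position on the latent trait (e.g. math ability)
    difficulty -- item difficulty on the same scale
    """
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# A person slightly above average (theta = 0.5) answering items of
# increasing difficulty: the response probability drops accordingly.
for b in (-1.0, 0.0, 0.5, 1.5):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_probability(0.5, b):.2f}")
```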

Factor analyses are used for the first frame of reference. Thurstone proposed a theory of multiple intelligences and eventually chose to measure them as independent factors. This was in contrast to Spearman, who explained intelligence with a single general factor (g). Factor analysis can also be used to construct a profile of a person. In addition, it turns out that, after performing a factor analysis, the five factors of the Big Five (Extraversion, Agreeableness, Conscientiousness, Emotional Stability and Intellect) do not appear to be the independent factors that the theory claims they are.
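
As an illustration of the idea only, the sketch below runs an exploratory factor analysis in Python on invented data; the number of items, the five-factor solution and the data themselves are purely hypothetical and merely show how factor scores and their intercorrelations could be inspected.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 25))       # hypothetical: 500 people, 25 questionnaire items

fa = FactorAnalysis(n_components=5)  # extract five factors, as in a Big Five analysis
scores = fa.fit_transform(X)         # factor scores per person -> an individual profile

# Correlations between the factor scores: values clearly different from 0
# would indicate that the factors are not independent, as discussed above.
print(np.round(np.corrcoef(scores, rowvar=False), 2))
```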

The second frame of reference, development, can be described with, for example, a linear, negatively accelerated or exponential increase in an (un)desirable behavior. There are also stage models, in which it is assumed that behaviors from different stages differ from each other.

For the third frame of reference, the context, the focus is mainly on the effectiveness of an intervention. This can be tested with an analysis of variance, in which the dependent variables are understood as the result of manipulated factors, their interactions and covariates (such as sex and SES).
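
A minimal sketch of how the effectiveness of an intervention could be tested with a one-way analysis of variance; the scores of the intervention and control groups are invented for the example.

```python
from scipy import stats

# Hypothetical outcome scores after an intervention vs. a control condition.
intervention = [14, 16, 15, 18, 17, 16, 19]
control      = [12, 13, 11, 14, 12, 15, 13]

f_value, p_value = stats.f_oneway(intervention, control)
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
# A small p-value suggests that the manipulated factor (the intervention)
# explains part of the variance in the dependent variable.
```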

How is the quality of the research instruments determined?

There are clear quality requirements for tests. The Standards for Educational and Psychological Testing were issued by the APA in 1999. In the Netherlands, a test assessment scheme based on these standards has been developed, which contains seven criteria. Each criterion can be rated as "good", "sufficient" or "insufficient". The criteria are:

  • The principles of the test construction: a statement must be made about the purpose of the test;
  • The quality of the test material: the standardization of the items, the scoring system and the instruction. Attention is also paid to the materials used (test booklet, scoring keys, length of the test) and the content of the items, in order to assess whether the items are, for example, not harmful to certain population groups.

How do we measure the quality of the manual?

The quality of the norms: it is examined whether the norm group used corresponds with the purpose of the test. There are two types of interpretation: a norm-referenced interpretation (comparison with a relevant norm group) and a criterion-referenced interpretation (comparison with an absolute value or standard). Absolute standards must be based on scientific research.

How do we determine the quality of the reliability data?

This concerns results of studies with parallel tests, internal consistency, test-retest and inter-rater reliability.

What is construct validity?

This is about how well a construct fits into a nomological network with a clear internal and external structure.

What is the criterion validity?

This concerns how strongly a test is related to a criterion (for example, the outcome of a treatment).

How high should a reliability coefficient be?

Nunnally and Bernstein have established values for deciding when reliability coefficients can be rated as insufficient, satisfactory or good. For important decisions, they believe the internal consistency and stability coefficients must have a reliability of rxx > .90. Based on this, criteria have been drawn up for the assessment of the coefficients:

  • For tests used for important decisions at the individual level (e.g. referral to special education): rxx < .80 is insufficient, .80 ≤ rxx < .90 is sufficient and rxx ≥ .90 is good.
  • For minor decisions, such as a progress check: rxx < .70 is insufficient, .70 ≤ rxx < .80 is sufficient and rxx ≥ .80 is good.
  • For tests used in research at group level: rxx < .60 is insufficient, .60 ≤ rxx < .70 is sufficient and rxx ≥ .70 is good.
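
The cut-offs above can be summarized in a simple decision rule; the sketch below merely encodes the three sets of criteria listed here and is not part of the assessment scheme itself.

```python
def rate_reliability(rxx, decision_level):
    """Rate a reliability coefficient using the cut-offs listed above.

    decision_level: 'important' (high-stakes individual decisions),
                    'minor' (e.g. a progress check) or 'group' (group-level research).
    """
    cutoffs = {"important": (0.80, 0.90), "minor": (0.70, 0.80), "group": (0.60, 0.70)}
    low, high = cutoffs[decision_level]
    if rxx < low:
        return "insufficient"
    elif rxx < high:
        return "sufficient"
    return "good"

print(rate_reliability(0.85, "important"))  # sufficient
print(rate_reliability(0.85, "minor"))      # good
```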

Reliability can best be expressed with the standard error of measurement. In tests and questionnaires, some random measurement error is always expected. The standard deviation of this random measurement error is called the standard error of measurement. A high standard error of measurement indicates low reliability.
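
A common way to compute the standard error of measurement from a test's standard deviation and reliability is SEM = SD · √(1 − rxx). The sketch below uses made-up values to show that a higher reliability gives a smaller SEM and thus a narrower uncertainty band around an observed score.

```python
import math

def standard_error_of_measurement(sd, rxx):
    """SEM = SD * sqrt(1 - rxx): spread of random measurement error around the true score."""
    return sd * math.sqrt(1.0 - rxx)

sd = 15.0                      # hypothetical: an IQ-like scale with SD = 15
for rxx in (0.70, 0.80, 0.90):
    sem = standard_error_of_measurement(sd, rxx)
    # Roughly 95% of observed scores fall within about 2 SEM of the true score.
    print(f"rxx = {rxx:.2f}: SEM = {sem:.1f}, 95% band ≈ ±{1.96 * sem:.1f} points")
```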

How high should a criterion validity coefficient be?

There are no fixed requirements for how high a validity coefficient must be. Cohen did, however, give three rules of thumb for the desired level of a validity coefficient:

  • A true correlation of r = .10 is small;
  • A true correlation of r = .30 is medium;
  • A true correlation of r = .50 is large.

How is the quality of the diagnostic process determined?

In addition to decisions about the theory, the model and the measuring instruments, the diagnostician must also decide how to integrate all the information. This information integration by diagnosticians and assessors has often been studied. The central question is whether information should be integrated by the diagnostician himself or by means of an empirical model. The diagnostician is often described as a mediocre intuitive statistician and information processor.

What is the difference between a clinical and a statistically oriented diagnostician?

A diagnostician who is clinically oriented often has conversations with his client in which he tries to learn about the client's specific personality and social context. He makes a diagnosis and prediction on the basis of his own experience in clinical practice and with the help of theories of behavior, cognition and emotion.

A statistically oriented diagnostician integrates information using (linear) formulas. With these he predicts a criterion reasonably well in terms of probability. Such a probabilistic prediction can never be perfect, however: a value of r = 1.00 is never achieved.
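
As an illustration of such a (linear) formula, the sketch below fits an ordinary least-squares prediction of a criterion from two predictors; the predictors, the criterion and all numbers are invented and only show how a linear combination yields an imperfect (r < 1.00) prediction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictors: a test score and a structured-interview rating.
test_score = rng.normal(100, 15, size=200)
interview  = rng.normal(5, 1, size=200)
# Hypothetical criterion (e.g. treatment outcome), related to both predictors plus noise.
criterion = 0.4 * test_score + 3.0 * interview + rng.normal(0, 10, size=200)

# Fit the linear formula: criterion ≈ b0 + b1*test_score + b2*interview.
X = np.column_stack([np.ones_like(test_score), test_score, interview])
weights, *_ = np.linalg.lstsq(X, criterion, rcond=None)
predicted = X @ weights

r = np.corrcoef(predicted, criterion)[0, 1]
print("regression weights:", np.round(weights, 2))
print(f"validity of the prediction: r = {r:.2f}")   # well below 1.00, as noted above
```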

This has also led to studies on clinical versus mechanical prediction, "clinical" referring to the clinical orientation and "mechanical" to the statistical orientation. The two approaches turn out to perform about equally well, although the statistical orientation is on average slightly better.

What are limitations in the diagnostician's statistical assessment?

A diagnostician's assessment has several limitations:

  • he often categorizes someone on the basis of a single salient feature;
  • he often does not take into account how frequently something occurs: the base rate;
  • he gives extra weight to salient information, that is, things that are clearly visible;
  • he often disregards sample size, while results from a large sample are much more reliable than those from a small sample;
  • he often misinterprets correlations. An example: when someone becomes ill, which happens to everyone from time to time, and goes to the doctor, he gets medication; he takes it and then gets better. However, this does not mean that the recovery is caused by the medication. It is also possible that the person recovers through the natural course of the illness. This is sometimes not sufficiently taken into account in diagnostic practice.
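
To make the base-rate point concrete, here is a small worked example with invented numbers, using Bayes' rule: even a reasonably accurate test produces many false positives when the disorder itself is rare.

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(disorder | positive test) via Bayes' rule."""
    true_pos  = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical: a disorder occurring in 2% of the population,
# detected by a test with 90% sensitivity and 90% specificity.
ppv = positive_predictive_value(base_rate=0.02, sensitivity=0.90, specificity=0.90)
print(f"P(disorder | positive) = {ppv:.2f}")   # only about 0.16
```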

What are limitations in a diagnostician's information processing?

There are a number of heuristics (mental shortcuts that can lead to biases) that can influence the assessment process. There are heuristics for each of the four steps of the information-processing process:

  • Gathering information. Here the availability heuristic plays a role: a diagnostician is more likely to think that something is the case when he is often confronted with it. So here, too, the base rate is not properly taken into account;
  • Processing information. In this phase the diagnostician may underestimate growth processes, use rules of thumb that are not always correct, or make a judgment under time pressure and therefore less carefully;
  • Assessing the outcome of the information. In this phase the way a question is asked can influence the answer; there is sometimes a tendency to "see what we want to see";
  • Dealing with feedback. The diagnostician often only takes seriously information that confirms his views and does not take "chance" into account. It also happens that a diagnostician comes up with ideas himself and believes they are correct, without any scientific basis.

What are means by which the diagnostician can avoid mistakes?

Because diagnosticians make quite a few mistakes in their assessments, it is important to train them in this respect. With the help of training, they can learn to take the base rate into account, to deal with the availability bias and to prevent foreclosure: accepting a conclusion too quickly without looking further.

Another way to avoid errors is by following prescriptions such as the diagnostic cycle (the hypothesis-testing model) and the use of empirically established rules to weigh up and integrate information properly.

Also, feedback on the correctness of the diagnoses they make could help, but this happens very little in practice.

What should be taken into account in diagnostics?

It should be taken into account that tests are often in Dutch. Intelligence tests also contain sections in which people are judged on their understanding of the Dutch language (for example, proverbs). For this reason, such tests are of limited use in the diagnosis of persons who do not have a good command of the Dutch language, such as refugees.

A diagnostician must also take into account his colleagues, the institution where he works and the legislation. Psychologists and educationalists come under the Individual Health Care Professions Act (BIG Act).

What are the ethical rules?

The professional association has rules that its members must adhere to. These are:

  • no discrimination;
  • no abuse of power;
  • only a professional relationship is allowed;
  • do not use methods that harm the client's well-being;
  • confidentiality;
  • keep the file for at least one year and keep it inaccessible to unauthorized persons;
  • the client may always decide about entering into and ending the professional relationship.

There is a committee that deals with complaints. In the Netherlands there are very few complaints.

What is test fairness?

Test fairness means that there is no bias in the use of tests. Keep in mind, for example, that an intelligence test cannot be used well for refugees who do not have an optimal command of the language; they will therefore score lower. Another example is a question about cars in a test of technical insight, which men can answer more easily than women, so that they score higher. With the help of modern test theory (IRT), items can be assessed for this. This is called differential item functioning (DIF). The goal is that people with the same ability do not score differently on an item.
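
As a rough illustration (not the full IRT procedure), the sketch below checks for possible DIF by comparing, within bands of equal total score, the proportion of men and women answering one item correctly; all data, group labels and numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: responses (0/1) of two groups to one item, plus total test scores.
total_score = rng.integers(0, 21, size=400)          # total score on a 20-item test
group       = rng.integers(0, 2, size=400)           # 0 = women, 1 = men (hypothetical)
item_correct = rng.random(400) < (0.3 + 0.02 * total_score + 0.15 * group)  # built-in DIF

# Compare proportion correct per group within bands of equal ability (total score).
for low, high in [(0, 7), (7, 14), (14, 21)]:
    band = (total_score >= low) & (total_score < high)
    p_women = item_correct[band & (group == 0)].mean()
    p_men   = item_correct[band & (group == 1)].mean()
    print(f"total score {low}-{high - 1}: women {p_women:.2f}, men {p_men:.2f}")
# Consistent gaps at the same ability level suggest the item functions differently per group.
```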

Second, it is important for test fairness that people are treated equally in the testing process. The third point is equal opportunity to learn: this is about access to education, which not every group in society has. Finally, the person being tested may himself be dishonest and present himself differently than he is. This can occur in personality tests, in which faking bad (pretending to be worse) and faking good (pretending to be better) can occur.

 
