This summary is based on the academic year 2013-2014.
Chapter J : Grounded Theory Approach
Grounded theory is a relatively new approach to research. The process is described as the discovery of theory in social research. Theory is discovered, developed and verified through systematic collection and analysis of data concerning a phenomenon. Data collection, analysis and theory are therefore interrelated.
It is discovered empirically, through induction. Grounded theory focuses on the contextual values and not the values of the investigator. This may lead to influences of contextual factors, such as time and culture, but nevertheless grounded theory produces generalizable data. Preconceived ideas should not be taken into account; a general understanding of the phenomenon is enough. Evidence is then gathered by the researcher, resulting in an “emerging” theory.
Theoretical sensitivity
Research must pay attention to the “theoretical sensitivity” of the data, i.e. the relevance of the categories as they emerge from the data. These categories should make sense in comparison to already existing theories. Theoretical sensitivity involves repetition in data collection and analysis and refuses any focus on a theoretical perspective in advance of the concepts generated by the evidence alone. In the end, the discovered concepts and hypotheses can be combined with existing literature.
Process of Grounded Theory Research
The research is holistic, naturalistic and inductive. Several assumptions about the research are widely shared. First, the aim of the research is to generate a theory. Next, it focuses on how individuals interact in relation to the phenomenon. Furthermore, theory is derived from data through fieldwork, interviews, observations and documents. Moreover, data analysis proceeds by identifying categories and connecting them, and moves from open coding, to axial coding, to selective coding. Next, theoretical ideas have to be set aside, so a substantive theory can emerge. Theory also asserts a plausible relation between concepts and sets of concepts. The data analysis is systematic, and further data collection (sampling) is based on emerging concepts. Finally, the resulting theory can be reported in a framework or as a set of propositions.
Process of grounded theory research:
- Initiating Research
This first step involves the selection of an area of interest by the researcher and of a suitable site for the study. It is important, as mentioned before, that the researcher avoids preconceptions about the subject. The researcher should focus on relaying initial observations and maintaining theoretical sensitivity.
- Data Selection
This step involves the location and identification of potential data sources. First, a broad, unstructured approach to selecting a sample is used. Subsequent samples depend on the emergence of categories and the theory; they are chosen based on their potential to offer important variation in comparisons.
- Initiation and Data Collection
Interviews are most often used, but some argue that a combination of methods, including observation and documentary sources, should be used.
Data collection is combined with data analysis until saturation has been reached. The direction of data collection gains focus over time, as the theory emerges. Throughout the collection phase, the methods of collecting data also become more specific (interviews shorten).
- Data Analysis
Data analysis in grounded theory research consists of a constant comparative method for generating and analysing data. It is interwoven with data collection.
See figure 1 on page 239. Data analysis has nine sub-steps, which can be divided into two categories. The first category, consisting of steps a, b and c, follows the interview, when the coding starts. Coding involves the processes of naming, comparing and memoing.
Naming: attempts to conceptualize and develop abstract meaning for the observations in data
Comparing: development of a common category for multiple observations
Memoing: act of taking notes for elaboration. Has two forms: 1) notes that capture insights gained in the field and 2) recording of ideas generated later in the research process.
The second part of the data analysis starts with searching for the emergence of categories. If they emerge, they are organized into sets. Naming of categories and their properties follows. Then the level of elaboration that is needed is defined, based on the clarity of the categories. As the data collection becomes more focused, clarification of the concepts that are already found becomes the priority. Then time is spent clarifying the analytical rationale for the research process.
Property = part of a category. Properties may vary in degree of abstractness. Categories are not representatives of the data, but are indicated by the data.
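The naming–comparing–memoing cycle can be pictured as a small data structure. The following is a minimal sketch, not from the source, of how open codes might be grouped into emerging categories by constant comparison; all observations, codes, category names and memos are invented for illustration.

```python
# Illustrative sketch of the constant comparative cycle: observations are
# "named" with open codes, compared, and grouped into emerging categories.
# All codes, categories and memos below are hypothetical examples.

observations = {
    "obs1": "Employee asks colleague how to use the new reporting tool",
    "obs2": "Employee watches an online tutorial after work",
    "obs3": "Team lead schedules a weekly Q&A session",
}

# Naming: attach an abstract concept (open code) to each observation.
open_codes = {
    "obs1": "peer help-seeking",
    "obs2": "self-directed learning",
    "obs3": "structured support",
}

# Comparing: group codes that seem to indicate the same underlying category.
categories = {
    "informal learning": ["peer help-seeking", "self-directed learning"],
    "organisational support": ["structured support"],
}

# Memoing: notes capturing insights in the field or ideas generated later.
memos = [
    "Informal learning seems to precede any formal support (field note).",
    "Possible property of 'informal learning': degree of self-initiation (later idea).",
]

for category, codes in categories.items():
    print(category, "<-", ", ".join(codes))
```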
- Concluding the Research
Grounded theory research is concluded when the point of saturation has been reached and sufficient theory has emerged from the data. Data saturation = data collection no longer contributes to the elaboration of the phenomenon being studied.
Once the saturation is evident, a structural framework is developed through the clarification of associations between central categories and the supporting categories and properties. The framework is likely to contain relationships, which can lead to propositions.
Using Grounded Theory Research for Theory Building
Figure 2 on page 246 shows the potential roles that grounded theory research can play in the context of the general method of theory-building research.
Evaluating Grounded Theory
There are four key areas for consideration when evaluating grounded theory research efforts. These are
- Judgments about the validity, reliability and credibility of the data
- Judgments about the theory itself
- Decisions regarding the adequacy of the research process
- Conclusions about the empirical grounding of the research
There are seven criteria for evaluating the research process. First, the rationale for the selection of the original sample. Second, the elaboration of the major categories that emerged. Third, the events pointing to the major categories identified.
Fourth, an explanation of how theoretical formulations influenced or guided the data collection. Next, the elaboration regarding the hypotheses and the justifications for the establishment of relationships. The accounting for discrepancies in the data and resulting theoretical modifications is a criterion as well. Last, the rationale for the selection of the core or central category.
Seven other criteria hold for assessing the grounding of a study. First, the systematic relationships between the concepts. Next, the quality of the concepts generated. Third, the clarity and density of conceptual linkages. Fourth, the inclusion of variation in the theory. Next, a clear description of the conditions under which variations can be found. Furthermore, an account of the research process. Finally, the significance of the theoretical findings.
It has been argued that a theory built from the grounded theory approach will prove its value in practical applications. From this perspective, a theory is viewed as adequate when it is a good guide to understanding and directing action, but this de-emphasizes the importance of a theory’s truth or accuracy.
Grounded Theory Research in HRD
Grounded theory is important to HRD because of its potential contribution to the overall research agenda being established. The most salient link is its connection between theory and practice. HRD can leverage the strengths of grounded theory research to inform practice and the on-going theory-building process. Grounded theory can also be used by positivists and naturalists, making it a trans-disciplinary approach. Its trans-disciplinary nature, together with its aim to capture tacit knowledge, is very important in considering the use of the grounded theory approach in HRD.
Challenges and Limitations
There is a lot of controversy about grounded theory research. Benoliel suggested that only a small percentage of research articles that claim to have used this approach have truly used it. Often these articles did not account for social structural influences on respondents. Also, the grounded theory approach is often confused with other research methods, such as phenomenology.
An additional criticism is that grounded theory research is underdetermined and not viable, because the raw data that are used are actually facts taken from within the framework of some other theory or theory-in-use, not understood by the researcher.
Chapter K : Doing Qualitative Analysis
In this chapter, the details of the various activities that are carried out in qualitative analysis are considered in more depth. These processes are mainly illustrated with practical examples throughout the chapter.
Analysis is a continuous and iterative process, as was described in chapter 8, but two key stages characterise its course. The first requires managing the data, and the second involves making sense of the evidence through descriptive or explanatory accounts. The main sections of this chapter work through these two stages step by step, although it is difficult to separate the two stages clearly.
Making sense of the data relies partly on the method used to order and categorise the data. Mainly, however, it depends on the analyst and his or her conceptual thinking.
The analysis method ‘Framework’ takes its name from the thematic framework, which is the central component of the method applied here. The thematic framework is used to classify and organise data according to key themes, concepts and emergent categories. As such, each study has a distinct thematic framework comprising a series of main themes, subdivided by a succession of related subtopics. These evolve and are refined through familiarisation with the raw data and cross-sectional labelling. Once it is judged to be comprehensive, each main theme is displayed in its own matrix, where every respondent is allocated a row and each column denotes a separate subtopic. Just like we did with the assignments in the tutorials.
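A thematic framework matrix of this kind is easy to picture as a table with one row per respondent and one column per subtopic. Below is a minimal sketch, with an invented theme, subtopics, respondents and cell summaries; the method itself does not prescribe any particular software.

```python
# One matrix per main theme: rows are respondents, columns are subtopics.
# Theme, subtopics, respondents and cell summaries are invented examples.

theme = "Experiences with workplace training"
subtopics = ["motivation to attend", "barriers", "perceived benefits"]

framework_matrix = {
    "Respondent 01": {
        "motivation to attend": "wants promotion; encouraged by manager",
        "barriers": "workload, no backfill during course days",
        "perceived benefits": "more confidence in client meetings",
    },
    "Respondent 02": {
        "motivation to attend": "curiosity about new software",
        "barriers": "none mentioned",
        "perceived benefits": "faster reporting",
    },
}

# Print the matrix row by row, as you would read a framework chart.
print(theme)
for respondent, row in framework_matrix.items():
    for subtopic in subtopics:
        print(f"{respondent} | {subtopic}: {row.get(subtopic, '')}")
```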
The raw qualitative data is likely to be full of detail but unwieldy and intertwined in content. In most analytical approaches, data management initially involves deciding upon the themes or concepts under which the data will be labelled, sorted and compared. In order to construct the thematic framework, the analyst must first gain an overview of the data coverage and become familiar with the data. Do not underestimate familiarisation: it is a hugely important step in the process of analysing data. While becoming familiar with the data, keep the research objective in mind and compare the data to this objective.
Re-examining the sampling strategy and the profile of the achieved sample is also worthwhile, as it will highlight any potential gaps or overemphasis in the data set, as well as the diversity of participants and circumstances. Try to collect data that is as diverse as possible, so that the sample resembles the population.
When reviewing the chosen material, the task is to identify recurring themes or ideas. Once these recurring themes have been noted, the next step is to devise a conceptual framework, drawing both upon the recurrent themes and upon issues introduced into the interviews through the topic guide. Themes are then sorted under categories and placed within the framework. Once an initial conceptual framework is constructed, it has to be applied to the raw data. The process of applying the conceptual framework to the raw data is called ‘indexing’.
With textual data, indexing involves reading each phrase, sentence and paragraph in fine detail and deciding ‘what is it about?’ in order to determine which parts of the index apply. Alternatively, this can be done electronically. An important feature is that two or three index references often apply to the same passage.
This is usually a sign of some interconnection between themes or issues that should be noted for the later associative analyses. Another key feature is to keep all indexes in mind, since material relevant to a certain index can appear in responses to questions that were not based on that index. Also, take into account that it may well be necessary to revise the initial indexes.
The following step is to order the data in some way so that material with similar content or properties is located together. There are different ways of sorting data, but this is described in chapter 8. Always remember that it is important to keep the option open to assign data to multiple locations. There are two reasons for this. First, it may be that a single passage will have relevance to two conceptually different subjects and carving it up would destroy both its meaning and its coherence. Second, the juxtaposition of two apparently unrelated matters may give the very first clues to some later insight or explanation.
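Indexing and sorting can be sketched as attaching one or more index labels to each passage and then retrieving all passages per label; a passage may legitimately end up under several labels. A minimal, invented example follows (the index, passages and identifiers are not from the source).

```python
# Each passage gets one or more index references; sorting then groups
# passages by index. Passages and the index itself are invented examples.

index = {
    "1.1": "reasons for choosing a school",
    "1.2": "role of travel distance",
    "2.1": "contact with teachers",
}

passages = [
    {"id": "int03_p12",
     "text": "We picked it because it is close by and the teachers seemed friendly.",
     "indexes": ["1.1", "1.2", "2.1"]},   # interconnection worth noting for associative analysis
    {"id": "int07_p04",
     "text": "Distance did not matter to us at all.",
     "indexes": ["1.2"]},
]

# Sorting: locate material with similar content together, allowing a
# passage to appear under every index that applies to it.
sorted_data = {code: [] for code in index}
for passage in passages:
    for code in passage["indexes"]:
        sorted_data[code].append(passage["id"])

for code, ids in sorted_data.items():
    print(code, index[code], "->", ids)
```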
The final stage of data management involves summarising or synthesising the original data. This not only serves to reduce the amount of material to a more manageable level but also begins the process of distilling the essence of the evidence for later representation. It also ensures the analyst inspects every word of the original material to consider its meaning and relevance to the subject under enquiry. Three key requirements are essential if the essence of the original data is to be retained. First, key terms, phrases or expressions should be retained as much as possible from the participant’s own language. Second, interpretations should be kept to a minimum at this stage, so that there is always an opportunity to revisit the original ‘expression’ as the more refined levels of analysis occur. Third, material should not be dismissed as irrelevant just because its inclusion is not immediately clear. It may well be that issues that make little sense at this early stage of analysis become vital clues in the later interpretative stages of analysis. The steps involved in data management may take place in a different order depending on the analytical tool being used.
An initial stage in descriptive analysis refers to unpacking the content and nature of a particular phenomenon or theme. The main task is to display data in a way that is conceptually pure, makes distinctions that are meaningful and provides content that is illuminating. There are three key steps involved:
- Detection – in which the substantive content and dimensions of a phenomenon are identified
- Categorisation – in which categories are refined and descriptive data assigned to them
- Classification – in which groups of categories are assigned to classes usually at a higher level of abstraction
The process of moving from synthesized or original text to descriptive categories is explained and illustrated in boxes 9.7 and 9.8. Check these boxes for an understanding of this process.
Typologies have two important characteristics. First, they are usually, although not inevitably, multidimensional or multifactorial classifications. That is, they combine two or more different dimensions so that a more refined or complex portrayal of a position or characteristic can be identified. Second, they offer a classification in which categories are discrete and independent of each other.
There are a number of steps to be taken in the detection of a typology. The first task is to identify the relevant dimensions of a typology.
For this, it is important that the analyst has a strong familiarity with the data set and that the tasks further down the analytical hierarchy, such as identifying the elements of a phenomenon and refining categories, have been completed. Once this initial construction is developed, the analyst needs to ensure that all the cases can be assigned to each of the dimensions being used in the typology. Unless the sample fits into each of the dimensions, and fits uniquely, the dimensions will not operate effectively within the typology. Once the dimensions of the typology have been checked in this way, their cross-fertilisation into typological categories can be made. Once this has been done, the whole process of testing needs to start again to ensure that all cases can now be allocated to one, and only one, of the typological categories.
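The requirement that every case fits one, and only one, typological category can be checked mechanically once the dimensions are decided. Below is a minimal sketch with two invented dimensions and invented category labels; it is only an illustration of the cross-classification idea, not a method from the source.

```python
# Cross-classify two invented dimensions into typological categories and
# check that every case is assigned to exactly one category.

# Dimension values per case (invented illustrative data).
cases = {
    "case_A": {"involvement": "high", "attitude": "supportive"},
    "case_B": {"involvement": "low",  "attitude": "supportive"},
    "case_C": {"involvement": "low",  "attitude": "critical"},
}

# Typology: the cross-fertilisation of the two dimensions.
typology = {
    ("high", "supportive"): "active advocate",
    ("high", "critical"):   "engaged sceptic",
    ("low",  "supportive"): "passive supporter",
    ("low",  "critical"):   "detached critic",
}

for name, dims in cases.items():
    key = (dims["involvement"], dims["attitude"])
    assert key in typology, f"{name} does not fit the typology; dimensions need rework"
    print(name, "->", typology[key])
```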
Associative analysis is a lucrative form of qualitative data investigation, as it almost invariably brings a deeper understanding of the subject under review. Such analyses involve finding links or connections between two or more phenomena. It is common in qualitative analysis to find that linkages repeatedly occur between sets of phenomena. We have termed these matched set linkages.
It is often important in a qualitative study to investigate whether there are any patterns occurring in the data within particular subgroups of the study population. Typologies and other group classifications are extremely useful in displaying associations in qualitative data by showing how particular views or experiences may attach to particular groups or sectors of the population.
Having found what appear to be linkages and associations in the data, it is necessary to explore why they exist. This is because the relationship itself – that is, that there is a connection between X and Y – is not verifiable within the small, purposively selected samples used in qualitative research. The steps taken to verify associations are the same for each of the types of associative analysis described above. A first step is to check exactly how the level of matching between the phenomena is distributed across the whole data set. This is one of the few occasions when numerical distributions are used in qualitative research – but as a means, not an end, to gaining understanding. A second step is to interrogate the patterns of association. Unlike large-scale quantitative surveys, where a correlation may be presented as an output in its own right, in qualitative research a pattern of association is used as a pointer toward further stages of analysis.
The search for explanations is a hard one to describe because it involves a mix of reading through synthesised data, following leads as they are discovered, studying patterns, sometimes re-reading full transcripts, and generally thinking around the data.
Short overview of ‘developing explanations’:
- Using explicit reasons and accounts
- Inferring an underlying logic
- Using common sense to search for explanations
- Developing explanatory concepts
- Drawing from other empirical studies
- Using theoretical frameworks
Chapter L : Generalization
One issue in qualitative research is generalisation. This concerns whether the findings from a study based on a sample can be said to be of relevance beyond the sample and context of the research itself. There are two types of generalisation, namely ‘empirical’ and ‘theoretical’. Empirical generalisation concerns the application of findings from qualitative research studies to populations or settings beyond the particular sample of the study (also called ‘transferability’ or ‘external validity’). Theoretical generalisation involves the generation of theoretical concepts or propositions which are deemed to be of wider, or even universal, application.
Generalisation further consists of three concepts:
1. Representational generalisation: Can what is found in a research sample be generalised to the parent population from which the sample is drawn?
- Two key issues: first, whether the phenomena found in the research sample would similarly be found in the parent population. Second, whether other additional phenomena would be found in the parent population which are not present in the study sample.
2. Inferential generalisation: Can the findings of a particular study be generalised to other settings or contexts beyond the sampled one?
- The main issue of inferential generalisation is that it requires congruence between the ‘sending’ and the ‘receiving’ contexts, and therefore the researcher has to know both contexts.
3. Theoretical generalisation: draws theoretical propositions, principles, or statements from the findings of a study for more general application.
Approaches to Generalisation
Theoretical generalisation implies ‘nomic’ generalisation, which means that “generalisation must be truly universal, unrestricted as to time and space. It must formulate what is always and everywhere the case.” (Kaplan) A new or refined theory is relevant if it can be used in further empirical inquiry.
Inferential generalisation can also be referred to as ‘naturalistic’ generalisation. This means that knowledge is created by recognizing the similarities of objects and issues in and out of context and by sensing the natural co-variations of happenings. It also has to be understood that transferability depends on the degree of congruence between the ‘sending context’ within which the research is conducted, and the ‘receiving context’ to which it is to be applied. Therefore, the researcher has to provide a ‘thick description’ of the research context, to enable others to judge if the findings are transferable to their context.
In representational generalisation, it is at the level of categories, concepts, and explanation that generalisation can take place. Two broad issues are the accuracy with which the phenomena have been captured and interpreted in the study sample, and how representative the sample is of the parent population. Representation is a matter of inclusivity: whether the sample provides ‘symbolic representation’ by containing the diversity of dimensions and constituencies that are central to explanation. This leads to the concepts of reliability and validity.
Reliability
The question asked concerning this concept is: in repeated measurement, will I find the same values? Reliability can also be referred to as ‘confirmability’, ‘consistency’, and ‘dependability’. It is important to have some certainty (before the study) that the internal elements, dimensions and factors would recur outside of the study population. Additionally, it has to be ensured that the constructions placed on the data by the researcher have been consistently and rigorously derived. In this context, a distinction has to be made between internal and external reliability. External reliability concerns the level of replication that can be expected if similar studies are undertaken. Internal reliability (also called inter-rater reliability) relates to the extent to which the same conclusions or judgements are found or replicated by different researchers or judges.
Reliability in qualitative research also concerns the replicability of research findings and whether or not they would be repeated in another study. There are two levels on which replicability should be ensured. First, there is the need to ensure that the research is as robust as it can be by carrying out internal checks on the quality of the data and its interpretation. Second, there is the need to assure the reader of the research by providing information about the research process. Questions to be asked in this context: Was the sample design/selection without bias? Was the fieldwork carried out consistently? Was the analysis carried out systematically and comprehensively? Is the interpretation well supported by evidence? Did the design allow equal opportunity for all perspectives to be identified?
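The chapter does not prescribe a statistic, but internal (inter-rater) reliability is often checked by having two analysts code the same passages and comparing their judgements, for example via simple percentage agreement or Cohen’s kappa. A minimal sketch with invented coders and codes:

```python
from collections import Counter

# Two analysts independently assign index codes to the same passages
# (invented data); agreement indicates internal (inter-rater) reliability.
coder_1 = ["1.1", "1.2", "1.1", "2.1", "1.1", "2.1"]
coder_2 = ["1.1", "1.2", "2.1", "2.1", "1.1", "2.1"]

n = len(coder_1)
observed = sum(a == b for a, b in zip(coder_1, coder_2)) / n

# Expected agreement by chance, based on each coder's marginal proportions.
c1, c2 = Counter(coder_1), Counter(coder_2)
expected = sum((c1[code] / n) * (c2[code] / n) for code in set(c1) | set(c2))

kappa = (observed - expected) / (1 - expected)
print(f"percentage agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```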
Validity
Validity refers to the ‘correctness’, ‘credibility’, or ‘precision’ of a research reading. A distinction can be made between internal validity (are you investigating what you claim to be investigating?) and external validity (are the constructs generated applicable to other groups within the population?). There are several types of validity: content, face, construct, predictive, concurrent, and instrument validity (not explained in detail in the book).
The main question concerning validity should be: Are we accurately reflecting the phenomena under study as perceived by the study population? To ensure this requirement, the following checks can be conducted: sample coverage (bias in sample frame?), capture of the phenomena (fully explored the topic?), identification or labelling (do the given names reflect the meanings assigned by study participants?), display (findings portrayed in a way that remains true to the original data?). Two concepts that increase validity are validation and the right documentation.
Internal validation – methods
1. Constant comparative method or checking accuracy of fit: derive hypotheses from one part of the data and test on another part.
2. Deviant case analysis: do not ignore outliers but use them as a resource for understanding or research development.
External validation – methods
1. Triangulation: the use of different sources of information will help both to confirm and to improve the clarity, or precision, of a research finding.
- Methods triangulation: compare data from different methods (e.g. qualitative and quantitative)
- Triangulation of sources: comparing data from different qualitative methods
- Triangulation through multiple analysis: use different observers, interviewers, analysts and compare findings
- Theory triangulation: look at data from different theoretical perspectives
2. Member or respondent validation: take research evidence back to research participants and ask for their opinion
However, we can never know with certainty that an account is true because we have no independent and completely reliable access to ‘reality’ (Hammersley).
Provide transparency or ‘thick description’, in order for the readers to verify for themselves that conclusions reached by the researcher hold ‘validity’ and to allow readers to consider their transferability to other settings.
Four important principles for generalising
1. Full and appropriate use of the evidential base:
- This can be ensured by using the original data (which requires a well-collected data set), and by encompassing diversity (identify and display the range and diversity of the research). This makes the research easier to understand for readers. The ‘nature not number’ concept is also important. This means that the inferences that can be drawn from qualitative data concern the nature of the phenomenon being studied, not its prevalence or statistical distribution.
2. Display of analytical routes and interpretation:
- The level of classification assigned to a phenomenon will affect the extent to which generalisation can occur. Higher levels of aggregation of categories are more likely to be transferable in representational terms than more specific or individualised items. The process of assigning meaning and interpretation influences the generalizability. The more a researcher places his/her own meaning or interpretation on a finding as a basis for generalisation, the more open it will be to questioning and review by others.
3. Research Design and Conduct
- Checks on research design and conduct are important to prevent them from limiting the nature or power of the inference drawn. The display of research methods is important in order to allow others to assess the research methods. It is also important to note limitations, because it will help the reader to understand the boundaries of the research.
4. Validation
- Checks against other evidence and corroboration from other sources are highly desirable.
Chapter M : Representation
Reporting is not only recording the outcomes of the analysis but also the active construction of the form and nature of the explored phenomena. It provides an opportunity for further thoughts when assembling the data into a coherent structure. It is a vital way to think about one’s data and to develop analytical ideas.
The aim of the report is to present findings in an accessible form that will satisfy the research objectives and enable the audience to understand them. One major challenge here is portraying the different forms of descriptive and explanatory analyses:
1. Explaining the boundaries of the qualitative research (QR):
Explain what QR can and cannot do to an audience that is unfamiliar with it, and include a discussion of the kinds of inferences that can be drawn.
2. Documentation of the methods
Displays the ‘credibility’ of the evidence. In other words, describe not only how the research was conducted but also why particular methods and approaches were used to meet the research aim.
3. Displaying the integrity of the findings
Explain where the conclusions presented are generated from, and grounded in, the data.
4. Being coherent
The depth & richness of QD presents a challenge. Therefore, present the data in a way that effectively guides the reader through the key findings.
5. Displaying diversity
Focusing only on the dominant message might be misleading. Mention diversity (a main advantage of QR).
6. Judicious use of verbatim passages
Only use original passages. Note that overciting can make the report hard to read.
Findings are not always reported in report form. Other possible forms are: oral presentations, interim reports or papers, conference papers, journal articles, books, media debates or programmes, or conferences to explore findings. QD also varies in form: it can be verbal data, but also photography and video. Note that the chosen output should be in accordance with these data forms.
4 types of research output:
1. Comprehensive outputs:
To provide comprehensive review of research findings, research methods & wider implications
Usually written reports, used on completion of the research.
2. Summary:
To provide condensed information about key findings. The information is explained in a less comprehensive fashion. An executive summary offers the possibility to assess the main findings. Summary outputs are mainly book chapters or journal articles.
3. Developmental:
To provide early indications of emergent findings or to offer theories or ideas for debate. Examples are an oral presentation, emergent findings, interim written report, journal articles or a conference or seminar paper.
4. Selective:
To focus on selected areas of the research findings for a specific audience, e.g. to give funders or committees the opportunity to express a particular interest.
Or to let academics or colleagues contribute to the interpretation. This can be done by an oral presentation with a specific focus, a conference or seminar paper for a selected audience, a journal article, or a media article or report.
Four factors influence the form of output. First, the origin, purpose and strategy of the research determine the length. Second, the condition that contractual obligations should be met. Third, the target audience identified. Last, the resources available; these can limit the research output through financial, time and access constraints.
The chapter describes aspects of writing a qualitative report. First, get organised. Researchers will be deeply involved in their studies and the research story will thus be fuzzy. Therefore it is important to take a mental break and only note down important steps and findings. Moreover, all material that is needed for writing should be assembled. Finally, space in the working time should be made free for writing.
Secondly, it is recommended to start as early as possible with writing, as writing is a way of thinking. It is important to consider the story to be told and how the story can be best conveyed in an organised and interesting way. Decisions about the shape and style are influenced by the research objective, the requirements of the commissioning body and the audience(s) to be targeted.
Third, the structure and content. The paper should at least contain the key ingredients, which are the title page, acknowledgements, abstract, table of contents, executive summary, introduction, literature review, research findings and evidence, conclusion and the technical appendices.
The main body is made up of the findings and research evidence. It is important to consider the order in which the evidence will be presented. 3 models:
1. puzzle: the reader figures the evidence out alongside the researcher
2. summary of main findings and conclusion, followed by evidence supporting them
3. analytic presentation: organise findings according to areas of existing theory
Telling the story concerns how the evidence will be related. For instance, if the researcher has developed a strong typology, this should be mentioned in the beginning. The coverage of different populations should also be explained. It is for instance important to display the importance of constant comparison between groups and the extent to which there are very distinct issues for the different groups.
The longitudinal character of the research should be mentioned when it influenced the findings. Finally, there are usually natural building blocks given by the phenomenon regarded.
Reporting style and language are determined by the individual style, the requirements of the funders and the target audience(s). It should be considered whether to report in the ‘realist’ style (what you found out) or the ‘confessional’ style (how you did it). Research jargon or technical terminology should be avoided.
Describing the research context refers to providing background information about the study. It should include the origins of the research, the aims of the study, the theoretical or policy context in which the research is set, the design and conduct of the study and the nature of the evidence collected, as well as some account of the author’s personal perspectives on the subject matter or the aims of the enquiry.
It should be only enough to place the research evidence in its appropriate setting and to give the reader a base to judge the credibility of the research. The included ‘audit trail’ should allow readers to look into the research process and follow its main stages. It should address the sample design, method of selection, achieved sample composition (e.g. socio-demographic factors), any known limitations, tools and approaches used to analyse and the epistemological orientation of research team. It should also be supported by appending examples, namely the topic guide, recruitment documents and an analytical framework.
The length of the report depends on the number and density of areas that are to be included. Moreover, integrating qualitative and quantitative findings is an important part of the report. It should be made clear that qualitative and quantitative evidence offer different ways of ‘knowing’ the world. It should therefore be made clear which kind of evidence will tell the main ‘story’. When quantitative data is the main focus, the full capacity of the qualitative data should nevertheless be used (not only for supporting quotes from the participants). Moreover, a different ‘reading’ is given by the different types of data obtained; this should be figured out by the reporter and not the reader. Finally, summaries (executive summaries) should be included to give a short, standalone account of the key findings and main messages derived from the research, together with a brief description of the methods used.
Most important in displaying qualitative evidence is that the subtlety, richness and detail of the original material are displayed.
Descriptive accounts:
- Defining elements, categories and classifications
Show nature of all kinds of phenomena, covering attitudes, beliefs, behaviours, factors, features, events, procedures and processes
It is helpful for the reader to show:
- Examples of the original material on which descriptions and classification is based
- The range and diversity of different elements, concepts or constructs that have been formed
- A comprehensive map of all categories
- The basis of any subsequent classification and how the different elements and categories have been assigned.
How best to display the original material is related to the categories and classes of data found. (See p. 338 following for examples.)
When categories and classes are displayed in lists, charts or text-based descriptions, the appropriate order has to be considered.
- Typologies
Provide descriptions of different sectors or segments in the study population or of different manifestations of phenomena. Here again, it is important to display the features that have led to the construction of the typology. For instance, it could be useful to give a case illustration of each of the typology groups. Examples of typology groups could be ‘soft negotiator’ and ‘non-interventionist’. Sometimes, a typology might also relate to a sector within the population. In these cases it is useful to describe the distribution of the typology across the study sample. Nevertheless, these distributions hold no statistical significance.
Explanatory accounts:
- Associations and Linkages
During the data analysis, patterns within the data become obvious through linkages. In the reporting stage it is thus vital to give evidence that allows the reader some understanding of why two or more sets of phenomena may be linked and why some phenomena might be attached to certain subgroups.
Firstly, you should describe the evidence available that supports these linkages. This evidence might be explicitly or implicitly conveyed in the text. It may also be inferred through further analysis or might simply be an explanatory hypothesis. Secondly, you should describe the circumstances in which the connection may change or become modified. And finally, exceptions should be noted.
- Displaying the explanatory base of evidence
The source of explanation can be hard to pin down, depending as it does on fitting several pieces of data together through iterative analysis. There can be several sources of explanation:
- Explicit reasons and accounts: display all the reasons that have been given by participants for a particular phenomena
- Presenting underlying logic or ‘common sense’: display implicit connections within the data. Make clear that it was the researcher, not the participant, who was the architect.
- Relaying explanatory concepts
Development of an important concept which is helpful in explaining the origins of different phenomena or sets of phenomena. Give background about definition of the concept.
- Drawing on other theoretical or empirical evidence
These theories and evidences might help to explain own findings. Give background to how the concepts or theory they are using is developed.
- Wider applications
Inferences drawn by researcher
Displaying and explaining recurrence
Frequent or dominant phenomena usually occur during the research process. The extent to which these should be reported is up to the researcher. It is vital to remember here that statistical inferences about the wider population are usually biased and not applicable because of the purposeful selection of samples. How many people said something might be important, but these numbers are interpreted differently in qualitative analysis than in quantitative analysis. The issues that are talked about should stand in the focus of the analysis rather than the count of people. The ‘array’ that people talk about can, however, be presented in a more classified form (e.g. there are five main types of features parents find important when deciding about the school their children should attend). It should also be noted which issues differ between groups. If phenomena occur in large numbers, then appropriate descriptions would be ‘dominant’, ‘recurrent’, ‘consistent’, ‘widespread’, or ‘commonly held’. Numerical distribution of the sample is only used in the description of typologies within a sample.
The use of illustrative material
Qualitative research easily invites you to use a variety of verbatim passages (quotations) within your report. These quotations or other types of primary data can effectively be used:
- To demonstrate the type of language, terms or concepts that people use to discuss a particular subject
- To illustrate the meanings that people attach to social phenomena
- To illustrate people’s expressions of their views or thoughts about a particular subject and the different factors influencing it
- To illustrate different positions in relation to a model, process or typology
- To demonstrate features of presentation about phenomena, such as strength, ambivalence, hesitance, confusion or even contradictory views
- To amplify the way in which complex phenomena are described and understood
- To portray the general richness of individual or group accounts
These quotations, though, should never be used without interpretative commentary. Moreover, verbatim passages should not compromise the confidentiality and anonymity of the participants, and the report should not be overloaded with them.
The use of diagrammatic and visual representation
In general, these make complex processes or relationships more accessible to the reader. They can help to:
- Display the range and diversity of phenomena or typologies
- Display relationships between different factors
- Explain complex processes and different levels and dimensions involved
- Provide effective means for summarising
- Break up a text-based format
Content of oral presentation
It should only present the top-line findings, and methodological issues cannot be addressed in too much detail. Oral presentations are most effective when tailored to one specific audience and limited in length by the time the audience is able to listen actively.
The structure of the presentation should be coherent like the report. The presentation style concerns use of hand-outs, use of visual material, the language and the presenting stance or voice.
Chapter N : Case Studies
The development of a research design for case study research is difficult, as there is no comprehensive catalogue of research designs available yet (as there is for other methods). One pitfall to be avoided is to consider case study design a subset or variant of the research designs used for other methods. Case study research is a separate research method that has its own research design.
Definition of research design: Different definitions exist. Firstly, every type of empirical research has an implicit, if not explicit research design. In the most elementary sense, it is a logical plan for getting from here to there, where here may be defined as the initial set of questions to be answered, and there is some set of conclusions about the questions. Another textbook argues that research design is a plan that 'guides the investigator in the process of collecting, analysing and interpreting observations. It is a logical model of proof that allows the researcher to draw inferences'. However, it is more than just a plan because its main purpose is to help to avoid the situation in which the evidence does not address the initial research question.
Components of research design:
- a study's questions: the form of the question provides an important clue regarding which method is to be used. 'How' and 'why' questions are most appropriate for case study research.
- its propositions, if any: only if you have propositions will you move in the right direction, because 'how' and 'why' questions are too broad. Propositions also indicate where to look for relevant evidence. However, some studies have a legitimate reason for not having propositions, e.g. because they are of an exploratory nature.
- its unit(s) of analysis: this is a fundamental problem. The case may be an individual, but also an event or entity. However, sometimes cases are not easily defined in terms of their beginning or end points. Selection of your unit of analysis will start to occur when you specify your research questions; however, you may also change the case as a result of discoveries during data collection.
- the logic linking the data to propositions: This will require you to link the collected data to your initial study proposition. It can be done through e.g. pattern matching, explanation building, time-series analysis, logic models, and cross-case synthesis. The text refers to a detailed explanation in Chapter 5, which is Chapter 17 in our custom book.
- the criteria for interpreting the findings: Because case study research does not rely on statistics, we have to find other ways to interpret our findings. Again, Chapter 17 of our book explains this topic in detail.
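As an illustrative aid (not from the book), the five components above can be collected into a simple checklist structure for a design document; every concrete value below is an invented example.

```python
from dataclasses import dataclass

# The five components of a case study research design, captured as a
# checklist; all field values below are invented examples.
@dataclass
class CaseStudyDesign:
    questions: list            # mainly 'how' and 'why' questions
    propositions: list         # may be empty for exploratory studies
    units_of_analysis: list    # individual, event, entity, ...
    linking_logic: str         # e.g. pattern matching, explanation building
    interpretation_criteria: str

design = CaseStudyDesign(
    questions=["How do SMEs adopt e-learning?", "Why do some adoptions fail?"],
    propositions=["Management support drives adoption"],
    units_of_analysis=["the adoption process in a single firm"],
    linking_logic="pattern matching",
    interpretation_criteria="rival explanations addressed",
)
print(design)
```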
The role of theory in design work: Covering the five components of research design will effectively force us to begin constructing a preliminary theory that is related to our topic. This part is essential, whether the ensuing case study's purpose is to develop or test a theory. The theory will cover the questions, propositions, units of analysis, logic connecting data to propositions and criteria for interpreting the findings, hence it embodies the five research design components. Theory development takes time and can be difficult though.
Sometimes we have too complex a theoretical foundation, while in other cases we have only very limited information available on our topic. For theory development, you might want to review literature concerning your topic. However, there is a full range of theories available:
- Individual theories: e.g. theories about individual development, cognitive behaviour, learning, perception.
- Group theories: e.g. theories about family functioning, informal groups, teams.
- Organizational theories: e.g. theories about bureaucracies, org. structure and functioning, organizational performance
- Societal theories: e.g. theories about urban development, international behaviour
However, there are also theories, like decision-making theory, that cut across these and hence involve more than one type. Theory development does not only facilitate data collection, it may also facilitate generalization from the case study to theory. There are two types of generalization. The first is analytic generalization, in which previously developed theory is used as a template with which to compare the empirical results of the case study. It can be used with single or multiple case studies. One should aim at analytical generalization, because it makes inferences at level two (see Fig. 2.2 on page 373). The second type is statistical generalization, which is less relevant for doing case studies; it concerns making inferences about a population on the basis of empirical data from a sample. However, cases should not be seen as sampling units. This type of generalization makes inferences at level one only (see Fig. 2.2).
Criteria for judging the quality of research design
Four tests exist to measure the quality of any empirical research (including case studies). Definitions are given below, while tactics that should be used to assure a high quality of the research are summarized in Figure 2.3.
Construct validity: identifying correct operational measures for the concepts being studied. This test is especially challenging in case study research because it can be very subjective sometimes.
Internal validity: seeking to establish a causal relationship, whereby certain conditions are believed to lead to other conditions, as distinguished from spurious relationships (only applicable to explanatory or causal studies, not for exploratory or descriptive studies). Internal validity is mainly a concern for explanatory case studies, when an investigator wants to explain how and why x led to event y. Furthermore, the concern over internal validity, for case study research, extends to a broader problem of making inferences.
External validity: defining the domain to which a study's findings can be generalized.
Reliability: demonstrating that the operations of a study - such as the data collection procedures - can be repeated, with the same results. Hence, if a later investigator followed the same procedures as described by an earlier investigator, he or she should arrive at the same findings and conclusions. The emphasis is on doing the same case study again, not on replicating it with another case.
The two major dimensions along which case studies can be divided are single- and multiple-case analyses. After dividing the case study into one of these two categories, it can further be divided in terms of the unit of analysis being either holistic or embedded. This categorization leads to a 2x2 matrix of four basic design types.
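The resulting matrix can be written out as four basic design types; the Type 1-4 labels follow Yin's usual presentation and are an assumption here, since the matrix itself is not reproduced in this summary.

```python
# The four basic case study design types from crossing the two dimensions
# (single vs. multiple cases, holistic vs. embedded unit of analysis).
design_types = {
    ("single",   "holistic"): "Type 1: single-case, holistic design",
    ("single",   "embedded"): "Type 2: single-case, embedded design",
    ("multiple", "holistic"): "Type 3: multiple-case, holistic design",
    ("multiple", "embedded"): "Type 4: multiple-case, embedded design",
}

# Example lookup: a study of three firms, each analysed only at firm level.
print(design_types[("multiple", "holistic")])
```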
Rationales when to use which case study design:
Single case study:
1) The case represents the critical case in testing a well-formulated theory
2) The case represents a unique or extreme case that is worth analysing.
3) The case is a representative or typical case that is used to show for example a typical project among many different projects
4) The case is revelatory, which means that it addresses an issue that has not yet been treated by other researchers
5) The case is longitudinal, which means that the same case is studied at two or more different points in time, to show how results changed over time.
A weakness of using single case studies is the possibility of the results being completely different from what was expected beforehand, since such a study is mostly exploratory and every case can provide different results.
The unit of analysis also has to be determined by the researcher. If he/she wants to examine subcategories within the case, the embedded design is used (see the 2x2 design matrix). If the case does not analyse subcategories and sticks with a general analysis, the level of analysis is called holistic. A weakness of the holistic approach is the possibility that the case study stays superficial and/or abstract. A weakness of the embedded approach is the possibility that the analysis focuses too much on the subcategories of analysis and does not return to the global level of analysis.
Multiple case analysis: Multiple case analysis should have a replication design. Every case study is independent in its methodology. For example, the researcher does one experiment and receives a result; he then conducts a second and third experiment to verify this result. This is an example of a replication design. Therefore, each case must be selected so that it either predicts similar results (literal replication) or predicts contrasting results that can be explained and anticipated beforehand (theoretical replication). Multiple case analysis in general should not follow a sampling logic, which aims to make inferences about whole populations through the use of sampled groups, but should always make use of replication logic. Each individual case in the replication approach consists of a full-scale independent study. The more replications are done (i.e. the more case studies are conducted), the more reliable the gathered results become.
When using multiple case analysis, there is still the categorization into holistic or embedded (see the 2x2 design matrix). Thus, a multiple case study of 3 cases can be embedded, which means that each of the 3 cases has subcategories that are examined.
Modest advice in selecting case study design
Single or multiple design? In general, multiple case analysis leads to better and more meaningful results and is generally preferred over single case analysis, as it also reduces skepticism towards the study by making it more reliable; however, it takes more time and resources to conduct.
Closed or flexible design? Sometimes it is necessary to change the design of your research when you discover results that point in a new direction or when the case study design proves inefficient. In this case you have to evaluate the nature of the changes you make, so that you avoid changes that are so fundamental that they would compromise the results of the study. In general, adaptation of the design is allowed, but pay attention that it remains consistent with your research objectives.
Mixing case studies with other methods? Mixing case studies and quantitative research can often lead to more meaningful results, but integrating the approaches is complicated, so it depends on the skill of the researcher and the resources available whether a mixed methods approach is used.
Chapter O: Case Study Preparation
This chapter deals with the needed preparation for a case study. A good preparation includes desired skills of the investigator, training for a specific case study, developing a protocol, screening candidates and conducting a pilot case study.
Desired skills
The demands on your intellect, ego and emotions are greater than those of other research methods, because data collection is not routinized.
Commonly required skills are:
- Able to ask good questions and interpret the answers
Case studies require an inquiring mind during data collection, not just before or after it. Because the specific information needed is not always readily predictable, it might be necessary to search for additional evidence. Research is about questions and not necessarily about answers.
- Be a good listener
Listening is also about observation. A good listener hears the exact words, captures the mood and understands the context and worries whether there is a message between the lines.
- Be adaptive and flexible; new situations are opportunities, not threats.
It might be necessary to make some changes, ranging from the need to pursue an unexpected lead (potentially minor) to the need to identify a new case for study (potentially major). The original purpose has to be remembered, but a researcher must be willing to adapt procedures.
- Have a firm grasp of the issues being studied; this reduces the relevant events and information to be sought to manageable proportions.
Data collection is not only a matter of recording data; you must also be able to interpret the information and make inferences about what actually transpired.
- Be unbiased by preconceived notions
You have to be open to contrary findings.
Training for a specific case study
- Human subjects protection
Most case studies are about contemporary human affairs, which require specific care.
This care includes:
- Gaining informed consent from all participants
- Protecting participants from harm, including avoiding use of any deception.
- Protecting privacy and confidentiality
- Taking special precautions that might be needed to protect especially vulnerable groups (for instance, children)
Every institution now has an Institutional Review Board (IRB), which reviews and approves research involving human subjects. An important step before proceeding with your case study is to obtain the IRB’s approval.
Case study training as a seminar experience
Every case study investigator must be able to operate as a ‘senior’ investigator. Training begins with the definition of the questions and development of the study design. A case study often must rely on a case study team for the following reasons:
- Often intensive data collection at the same time is needed, requiring a team of investigators
- Often multiple cases are involved, so different persons are needed to cover each site or to rotate among sites
- A combination of the first two conditions.
The goal of training is to have all team members understand why the study is done, what evidence is being sought, what variations can be anticipated and what would constitute supportive or contrary evidence for any given proposition.
Problems to be addressed
Training also has the purpose of uncovering problems. The most obvious problem is that training may reveal flaws in the case study design, so it might be necessary to make some revisions. The training might also reveal incompatibilities within the investigating team. A way of dealing with this is to suggest to the team that contrary evidence will be respected if it is collected and verifiable. The training can also reveal impractical time deadlines or expectations. On the other hand, training may uncover some positive features, such as complementary skills.
The case study protocol
A protocol is more than a questionnaire or instrument. It also contains the general rules to be followed and is essential when doing a multiple-case study, although it is always desirable. A protocol increases the reliability of the case study and guides the investigator in carrying out data collection. It also keeps you targeted on the topic of the case and prepares you to anticipate several problems. It might be convenient to identify the audience before you conduct the study. Figure 3.2 (page 414) shows a table of contents from a protocol. The following sections are always included in a case study protocol:
- Overview of the case study project
It should cover the background information, which is a statement of the project you can present to anyone. It should also include the substantive issues being investigated and the relevant reading about the issues.
- Field procedures
Case studies are conducted within their real-life context. You have to learn to integrate real-world events with the needs of the data collection plan. Within a laboratory experiment, by contrast, the respondent can often not deviate from the agenda set, and the behaviour is constrained by the ground rules of the investigator.
The field procedures of the protocol need to emphasize major tasks in collecting data:
- Gain access to key organizations or interviewees
- Have sufficient resources while in the field (computer, writing instruments etc.)
- Develop a procedure for calling for assistance when needed.
- Make a clear schedule of the data collection expected within a specified time.
- Provide for unanticipated events, including changes in availability.
The protocol should also carefully describe procedures for the protection of human subjects.
- Case study questions
These questions reflect the general orientation of the inquiry. The questions in a protocol are posed to you, the investigator, not to an interviewee. This keeps the investigator on track. Each question should be accompanied by a list of likely sources of evidence.
There are different types of questions, divided in levels.
Level 1: Asked of specific interviewees
Level 2: Asked of the individual case
Level 3: Asked of the pattern of findings across multiple cases
Level 4: Questions asked of an entire study
Level 5: Normative questions about policy recommendations and conclusions
The difference between level 1 and level 2 questions is that the verbal line of inquiry (what you ask an interviewee) differs from the mental line of inquiry (what you ask yourself about the case). The level 3 questions cannot be addressed until the data from all single cases have been examined; only a multiple-case analysis can cover level 3.
The protocol is for the data collection from a single case (even when it is part of a multiple-case study) and is not intended to serve the entire project.
There can be differences between the data collection source and the unit of analysis. A matrix of this can be found in Figure 3.5 on page 423.
A guide for the case study report
Some planning about format, outline and audience is needed, although researchers often forget this. Most reports of experiments follow a similar outline: posing the research questions and hypotheses; a description of the research design and apparatus; the data collection procedures; a presentation of the data collected; the analysis of the data; and a discussion of findings and conclusions. Keep in mind, however, that case study plans can change, so you still have to be flexible.
Screening candidate 'cases' for your case study
Sometimes selection is straightforward, but sometimes there are many qualified candidates and a choice has to be made. Prior to collecting screening data, define a set of operational criteria. When doing multiple-case studies, select cases that best fit your replication design. When there are too many candidates, a two-stage screening procedure is warranted. First, collect relevant quantitative data about the pool. Second, define relevant criteria for either stratifying or reducing the number of candidates, bringing the number down to about 20.
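To make the two-stage procedure concrete, here is a minimal Python sketch (illustration only, not from the book); the candidate names, the 'pupils' attribute and the cut-off value are purely hypothetical.

```python
# A minimal sketch of the two-stage screening described above: first collect one
# quantitative attribute for the whole pool, then apply a criterion to reduce it.
# Names and numbers are hypothetical.

candidate_pool = {
    "School A": {"pupils": 1200}, "School B": {"pupils": 300},
    "School C": {"pupils": 950},  "School D": {"pupils": 150},
}

# Stage 1: the relevant quantitative data are already gathered in candidate_pool.
# Stage 2: keep only candidates meeting the stratifying criterion.
shortlist = {name: info for name, info in candidate_pool.items()
             if info["pupils"] >= 500}
print(sorted(shortlist))  # ['School A', 'School C']
```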
The pilot case study
Pilot cases can be conducted to help select the final case study design, but a pilot case can also represent the most complicated case compared to the real cases, so that nearly all relevant data collection issues will be encountered in the pilot case.
A pilot test is not a pretest, because a pilot is more formative and assisting. The pilot case study can be so important that more resources may be devoted to this phase than to the collection of data from an actual case. Therefore, the following subtopics are worth discussing: the selection of pilot cases, the scope of the inquiry for pilot cases, and the nature of the reports from pilot cases.
Selection of pilot cases:
Convenience, access and geographic proximity can be the main criteria. It is possible to try different pilot cases for different types of technology. After a pilot case, you have to provide some feedback; to do so, a protocol also has to be developed for the pilot case. A pilot case should never be the occasion for an overly informal or highly personalized inquiry.
Scope of the pilot inquiry:
The scope of the inquiry for a pilot case can be much broader than that of the ultimate data collection. It can cover both substantive and methodological issues. Pilot data can provide considerable insight into the basic issues being studied. When this runs parallel with some relevant literature, it helps to ensure that the actual study reflects significant theoretical or policy issues.
Reports from the pilot cases
The pilot reports have great value and need to be written clearly. The difference with the actual study report is that pilot reports should be explicit about the lessons learned for both the research design and the field procedures.
Chapter P: Evidence Collection
Case study evidence can come from many sources. This chapter discusses six of them: documentation, archival records, interviews, direct observation, participant-observation, and physical artefacts.
At an earlier time, guidance on collecting data relevant for case studies was available under three rubrics: fieldwork, field research, and social science methods.
Six sources of evidence
The sources of evidence discussed here are the ones most commonly used in doing case studies. There are, however, many more sources. Figure 4.1 provides the list of sources with their strengths and weaknesses. Note: no single source has a complete advantage over the others; the various sources are highly complementary.
1. Documentation. Documentary information is likely to be relevant to every case study topic. This type of information can take many forms and should be the object of explicit data collection plans.
For case studies, the most important use of documents is to corroborate and augment evidence from other sources. First, documents are helpful in verifying the correct spellings and titles or names of organizations that might have been mentioned in interviews. Second, documents can provide other specific details to corroborate information from other sources. Third, you can make inferences from documents.
Documents play an explicit role in any data collection in doing case studies, and systematic searches for relevant documents are important in any data collection plan. However, you have to be critical, since not every document tells the truth. It is important to remember, in reviewing any document, that it was written for some specific purpose and audience other than those of the case study being done. Another problem is the abundance of materials available through internet searches: you may get lost in reviewing materials and waste time on them.
2. Archival Records. Archival records often take the form of computer files and records. This type of data can be used in conjunction with other sources of information in producing a case study. However the importance of archival records varies a lot. When archival evidence has been deemed relevant, an investigator must be careful to ascertain the conditions under which it was produced as well as its accuracy. Most archival records are also produced for a specific purpose and a specific audience other than the case study investigation, and these conditions must be fully appreciated in interpreting the usefulness and accuracy of the records.
3. Interviews. Interviews are one of the most important sources of case study information. Interviews will be guided conversations rather than structured queries. Throughout the interview process, you have two jobs: (a) to follow your own line of inquiry, as reflected in your case study protocol, and (b) to ask your actual questions in an unbiased manner that also serves the needs of your line of inquiry.
One type of case study interview is the in-depth interview. You can ask respondents about the facts of a matter as well as their opinions about events. Sometimes you can also ask the interviewee to propose her/his own insights into certain occurrences and use such propositions as the basis for further inquiry.
For this reason, the interview can take place over an extended period of time, and the interviewee can suggest other persons to interview, as well as other sources of evidence.
The more an interviewee assists, the more he/she becomes an informant. Key informants are often critical to the success of a case study. However, you have to be careful not to become overly dependent on a key informant. You have to rely on other sources of evidence and search for contrary evidence as carefully as possible.
A second type of case study interview is the focused interview, in which a person is interviewed for a short period of time. In such cases, the interviews may still remain open-ended and assume a conversational manner, but you are more likely to be following a certain set of questions derived from the case study protocol. When performing such interviews, your questions have to be carefully worded, so that you appear genuinely naive about the topic and allow the interviewee to provide fresh commentary about it.
A third type of case study interview entails more structured questions, along the lines of a formal survey. Such a survey could be designed as part of an embedded case study and produce quantitative data as part of the case study evidence.
Overall, interviews are an essential source of case study evidence, because most case studies are about human affairs or behavioural events. Well-informed interviewees can provide important insights into such affairs or events, and can also provide shortcuts to the prior history of such situations, helping you to identify other relevant sources of evidence. Interviews should nevertheless always be considered verbal reports only.
A common question about doing interviews is whether to record them. This is a personal preference, and recording does provide a more accurate rendition. However, a recording device should not be used when (a) an interviewee refuses permission or appears uncomfortable in its presence, (b) there is no specific plan for transcribing or systematically listening to the contents of the recordings, (c) the investigator is clumsy enough with mechanical devices that the recording creates distractions during the interview itself, or (d) the investigator thinks that the recording device is a substitute for listening closely throughout the course of an interview.
4. Direct Observation. Because a case study should take place in the natural setting of the case, you are creating the opportunity for direct observations. These observations serve as yet another source of evidence in a case study. The observations can range from formal to casual data collection activities. Observational evidence is often useful in providing additional information about the topic being studied. A common procedure to increase the reliability of observational evidence is to have more than a single observer making an observation.
5. Participant-Observation. Participant-observation is a special mode of observation in which you are not merely a passive observer. Instead, you may assume a variety of roles within a case study situation and may actually participate in the events being studied. This type of study has been most frequently used in anthropological studies of different cultural or social groups. The technique can be used in more everyday settings, such as a large organization or informal small groups.
Participant-observation provides certain unusual opportunities for collecting study data, but it also involves major problems. For some topics there may be no way of collecting evidence other than through participant-observation.
Another distinctive opportunity is the ability to perceive reality from the viewpoint of someone inside the case study rather than external to it. Finally, other opportunities arise because you may have the ability to manipulate minor events.
The major problems related to participant-observation have to do with the potential biases produced. First, the investigator has less ability to work as an external observer and may, at times, have to assume positions or advocacy roles contrary to the interests of good social science practice. Second, the participant-observer is likely to follow a commonly known phenomenon and become a supporter of the group or organization being studied, if such support did not already exist. Third, the participant role may simply require too much attention relative to the observer role, leaving too little time to take notes. Fourth, if the organization or social group being studied is physically dispersed, the participant-observer may find it difficult to be at the right place at the right time, either to participate in or to observe important events. These trade-offs between the opportunities and the problems have to be considered seriously in undertaking any participant-observation study.
6. Physical artefacts. A final source of evidence is a physical or cultural artefact: a technological device, a tool or instrument, a work of art, or some other physical evidence. Such artefacts may be collected or observed as part of a case study and have been used extensively in anthropological research. Physical artefacts have less potential relevance in the most typical kinds of case studies. However, when relevant, the artefacts can be an important component of the overall case.
Three principles of data collection
The benefits from these six sources of evidence can be maximized if you follow three principles. These principles are extremely important for doing high-quality case studies, are relevant for all six types of sources, and should be followed whenever possible, because they help in dealing with the problems of establishing the construct validity and reliability of the case study evidence.
Principle 1: Use multiple sources of evidence
Triangulation: rationale for using multiple sources of evidence. Relying on only a single source of evidence is not recommended for conducting case studies; a major strength of case study data collection is precisely the opportunity to use many different sources.
The use of multiple sources of evidence in case studies allows an investigator to address a broader range of historical and behavioural issues. However, the most important advantage presented by using multiple sources of evidence is the development of converging lines of inquiry, a process of triangulation and corroboration. Any case study finding or conclusion is likely to be more convincing and accurate if it is based on several different sources of information, following a corroboratory mode.
Patton (2002) discusses four types of triangulation in doing evaluations: the triangulation
- of data sources (data triangulation)
- among different evaluators (investigator triangulation)
- of perspectives to the same data set (theory triangulation), and
- of methods (methodological triangulation)
We mainly focus on the first type. Figure 4.2 distinguishes between two conditions: when you have really triangulated the data (upper portion) and when you have multiple sources as part of the same study but nevertheless addressing different facts (lower portion). When you have triangulated the data, the events or facts of the case study have been supported by more than a single source of evidence; when you have used multiple sources but not actually triangulated the data, you typically have analysed each source of evidence separately and have only compared the conclusions of the different analyses.
With data triangulation, the potential problem of construct validity also can be addressed, because multiple sources of evidence essentially provide multiple measures of the same phenomenon.
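As an illustration only (not from the book), the following minimal Python sketch shows what it means for a finding to be triangulated: it counts how many distinct sources support each finding. The findings and source names are hypothetical examples.

```python
# A minimal sketch: checking which case study findings are actually triangulated,
# i.e. supported by more than one source of evidence. All labels are hypothetical.

from collections import defaultdict

# Each entry links one finding to one piece of evidence and the source it came from.
evidence_log = [
    {"finding": "policy change announced in 2012", "source": "documentation"},
    {"finding": "policy change announced in 2012", "source": "interview"},
    {"finding": "staff resisted the new procedure", "source": "interview"},
]

sources_per_finding = defaultdict(set)
for item in evidence_log:
    sources_per_finding[item["finding"]].add(item["source"])

for finding, sources in sources_per_finding.items():
    status = "triangulated" if len(sources) > 1 else "single source only"
    print(f"{finding}: {status} ({', '.join(sorted(sources))})")
```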
Prerequisites for using multiple sources of evidence. The use of multiple sources of evidence imposes a greater burden. The first reason is that the collection of data from multiple sources is more expensive than when data are collected from only a single source. The second and more important reason is that each investigator needs to know how to carry out the full variety of data collection techniques.
Principle 2: Create a Case Study Database
This principle has to do with the way of organizing and documenting the data collected for case studies. The documentation commonly consists of two separate conditions:
- the data or evidentiary base and
- the report of the investigator, whether in article, report, or book form.
A case study database markedly increases the reliability of the entire case study. The lack of a formal database is a major shortcoming and needs to be corrected. There are numerous ways to accomplish the task, as long as you and other investigators are aware of the need and are willing to commit the additional effort required to build the database. Every report should contain enough data so that the reader of the report can draw independent conclusions about the case study.
The development of the database can be described in terms of four components (a minimal sketch of such a database follows the list):
- Case study notes. These notes must be stored in such a way that other people can also retrieve them at some later date, so they need to be organized, categorized, complete, and available for later access.
- Case study documents. Main objective is to make the documents readily retrievable for later inspection. This can be on paper or electronically (PDF files).
- Tabular materials. The database can consist of tabular materials, either collected from the site being studied or created by the research team. Such material also needs to be organized and stored to allow for later retrieval.
- Narratives. Narratives also can be considered as a formal part of the database. The narrative reflects a special practice that should be used more frequently: to have case study investigators compose open-ended answers to the questions in the case study protocol. The main purpose is to document the connection between specific pieces of evidence and various issues in the case study, generously using footnotes and citations.
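The sketch below is one possible way, assumed for illustration only and not prescribed by the book, to organize the four components so that every item remains retrievable by an identifier; all names and identifiers are hypothetical.

```python
# A minimal sketch of a case study database holding the four components: notes,
# documents, tabular materials and narratives, each retrievable by an identifier.

case_study_database = {
    "notes": {
        "N001": {"date": "2014-03-02", "topic": "site visit", "text": "..."},
    },
    "documents": {
        "D001": {"title": "Annual report 2013", "file": "annual_report_2013.pdf"},
    },
    "tabular_materials": {
        "T001": {"description": "Survey responses", "file": "survey.csv"},
    },
    "narratives": {
        "Q1": {"protocol_question": "How was the change implemented?",
               "answer": "...", "citations": ["D001", "N001"]},
    },
}

def retrieve(component, item_id):
    """Return a stored item so that other investigators can inspect it later."""
    return case_study_database[component][item_id]

print(retrieve("narratives", "Q1")["citations"])  # ['D001', 'N001']
```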
Principle 3: Maintain a Chain of Evidence
This principle increases the reliability of the information in a case study. The principle is to allow an external observer (the reader of the case study) to follow the derivation of any evidence from the initial research questions to the ultimate case study conclusions. The external observer should be able to trace the steps in either direction. It is the same process as used in forensic investigations: no evidence should have been lost.
Chapter Q: Data Analysis
The analysis of case study evidence is one of the least developed and most difficult aspects of doing case studies. Too many times, investigators start case studies without having the foggiest notion of how the evidence is to be analysed. Investigators keep searching for formulas, recipes, or tools, hoping that familiarity with these devices will produce the needed analytic result. There are, for example, many computer-assisted routines and pre-packaged software that assist with qualitative data analysis. These tools can help you code and categorize large amounts of narrative text. Key to understanding the value of these packages are two words: assisted and tools. The software will not do any analysis for you, but it may serve as an able assistant and reliable tool. So software can be a really helpful tool, but you must keep in mind that the software is the assistant and you remain the analyst.
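As a purely illustrative sketch, assuming a simple keyword-based approach rather than any specific software package, the code below shows how a tool can suggest codes for text segments while the analyst keeps the final say. The codes, keywords and segments are hypothetical.

```python
# A minimal sketch of software "assisting" qualitative coding: it suggests
# category labels for text segments based on keywords, but the analyst decides.

code_keywords = {
    "resistance": ["refused", "opposed", "reluctant"],
    "leadership": ["manager", "director", "decision"],
}

segments = [
    "The director took the decision alone.",
    "Several staff members refused to use the new system.",
]

for segment in segments:
    suggested = [code for code, words in code_keywords.items()
                 if any(word in segment.lower() for word in words)]
    print(segment, "->", suggested or ["(no suggestion: analyst codes manually)"])
```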
All empirical research studies, including case studies, have a 'story' to tell. The story differs from a fictional account because it embraces your data, but it remains a story because it must have a beginning, a middle, and an end. The needed analytic strategy is your guide to crafting this story, and only rarely will your data do the crafting for you. The strategy will help you to treat the evidence fairly, produce compelling analytic conclusions, and rule out alternative interpretations. The strategy will also help you to use tools and make manipulations more effectively and efficiently. The book describes four strategies:
Relying on theoretical propositions: The first and most preferred strategy is to follow the theoretical propositions that led to your case study. The original objectives and design of the case study presumably were based on such propositions, which in turn reflected a set of research questions, reviews of the literature and new hypotheses or propositions. These propositions help you to focus attention on certain data and to ignore other data.
Developing a case description: A second general analytic strategy is to develop a descriptive framework for organizing the case study. For instance, you may actually (but undesirably) have collected a lot of data without having settled on an initial set of research questions. Studies started this way inevitably encounter challenges in their analytic phase. The ideas for your framework should have come from your initial review of the literature, which may have revealed gaps or topics of interest to you, spurring your interest in doing a case study.
Another suggestion is to note the structure of existing case studies, and at least to observe their tables of contents as an implicit clue to different descriptive approaches.
Using both qualitative and quantitative data: This third strategy may be more attractive to advanced students and scholars and can yield appreciable benefits. Certain case studies can include a substantial amount of quantitative data. If these data are subjected to statistical analyses while the qualitative data nevertheless remain central to the entire case study, you will have successfully followed a strong analytic strategy. The quantitative data may be relevant to your case study for at least two reasons:
1. The data may cover the behaviour or events that your case study is trying to explain.
2. The data may be related to an embedded unit of analysis within your broader case study.
Examining rival explanations: A fourth general analytic strategy, trying to define and test rival explanations, generally works with all of the previous three: initial theoretical propositions (the first strategy) might have included rival hypotheses; the contrasting perspectives of participants and stakeholders may produce rival descriptive frameworks (the second strategy); and data from comparison groups may cover rival conditions to be examined as part of using both quantitative and qualitative data (the third strategy).
Figure 5.1 classifies and lists many types of rivals. There are three types of craft rivals, rivals that underlie all of our social science research; textbooks have given much attention to these craft rivals. There are also six real-life rivals, which have received virtually no attention from other textbooks. These real-life rivals are the ones that you should carefully identify prior to your data collection.
1. Pattern Matching
For case study analysis, one of the most desirable techniques is to use a pattern-matching logic. Such logic compares an empirically based pattern with a predicted one. If the patterns coincide, the results can help a case study to strengthen its internal validity (see Chapter 2). If the case study is an explanatory one, the patterns may be related to the dependent or the independent variables of the study. If the case study is a descriptive one, pattern matching is still relevant, as long as the predicted pattern of specific variables is defined prior to data collection.
Non-equivalent dependent variables as a pattern
The dependent-variables pattern may be derived from one of the more potent quasi-experimental research designs, labelled a 'non-equivalent, dependent variables design'. According to this design, an experiment or quasi-experiment may have multiple dependent variables, that is, a variety of relevant outcomes. The pattern matching occurs in the following manner: if, for each outcome, the initially predicted values have been found, and at the same time alternative 'patterns' of predicted values have not been found, strong causal inferences can be made.
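A minimal sketch, assuming hypothetical outcome variables and predicted directions, of how the comparison between a predicted pattern, a rival pattern and the observed outcomes could look; in practice the comparison is usually judgemental rather than computational.

```python
# A minimal sketch of the non-equivalent dependent variables logic: the predicted
# direction of each outcome is compared with what was observed. Names are hypothetical.

predicted = {"staff_morale": "up", "processing_time": "down", "error_rate": "down"}
observed  = {"staff_morale": "up", "processing_time": "down", "error_rate": "down"}

rival_prediction = {"staff_morale": "down", "processing_time": "down", "error_rate": "up"}

def matches(pattern, data):
    # True only if every predicted direction is found in the observed data.
    return all(data.get(k) == v for k, v in pattern.items())

if matches(predicted, observed) and not matches(rival_prediction, observed):
    print("Predicted pattern found and rival pattern absent: inference strengthened.")
else:
    print("Pattern not matched: reconsider the proposition or the rival.")
```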
Rival explanations as patterns.
The use of rival explanations provides a good example of pattern matching for independent variables. In such a situation, several cases may be known to have a certain type of outcome, and your investigation has focused on how and why this outcome occurred in each case. This analysis requires the development of rival theoretical propositions.
Simpler patterns
This same logic can be applied to simpler patterns, having a minimal variety of either dependent or independent variables. In the simplest case, where there may be only two different dependent (or independent) variables, pattern matching is possible as long as a different pattern has been stipulated for these two variables.
Precision of pattern matching
At this point in the state of the art, the actual pattern-matching procedure involves no precise comparisons. Whether one is predicting a pattern of non-equivalent dependent variables, a pattern based on rival explanations, or a simple pattern, the fundamental comparison between the predicted and the actual pattern may involve no quantitative or statistical criteria.
2. Explanation Building
A second analytic technique is in fact a special type of pattern matching, but the procedure is more difficult and therefore deserves separate attention. Here, the goal is to analyse the case study data by building an explanation about the case.
Elements of explanations
To 'explain' a phenomenon is to stipulate a presumed set of causal links about it, or 'how' or 'why' something happened.
The causal links may be complex and difficult to measure in any precise manner. In most existing case studies, explanation building has occurred in narrative form. Because such narratives cannot be precise, the better case studies are the ones in which the explanations have reflected some theoretically significant propositions.
Iterative nature of explanation building
The explanation-building process, for explanatory case studies, has not been well documented in operational terms. However, the eventual explanation is likely to be a result of a series of iterations (repetitions):
- Making an initial theoretical statement or an initial proposition about policy or social behaviour.
- Comparing the findings of an initial case against such a statement or proposition
- Revising the statement or proposition
- Comparing other details of the case against the revision
- Comparing the revision to the facts of a second, third, or more cases
- Repeating this process as many times as needed.
In this sense, the final explanation may not have been fully stipulated (determined) at the beginning of a study and therefore differs from the pattern-matching approaches previously described. Rather, the case study evidence is examined, theoretical positions are revised, and the evidence is examined once again from a new perspective in this iterative mode.
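The iterative cycle can be summarized in a short sketch; this is only an illustration, and the compare and revise steps stand in for the analyst's own judgement rather than an actual algorithm.

```python
# A minimal sketch of the iterative explanation-building loop described above.
# 'compare' and 'revise' are placeholders supplied by the analyst.

def build_explanation(initial_proposition, cases, compare, revise, max_rounds=10):
    explanation = initial_proposition
    for _ in range(max_rounds):
        revised = explanation
        for case in cases:
            discrepancies = compare(revised, case)        # analyst's comparison
            if discrepancies:
                revised = revise(revised, discrepancies)  # analyst's revision
        if revised == explanation:   # no further changes: the explanation has settled
            break
        explanation = revised
    return explanation
```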
Potential problems in explanation building
You should be forewarned that this approach to case study is fraught with dangers. Much analytic insight is demanded of the explanation builder. As the iterative process progresses, for instance, an investigator may slowly begin to drift away from the original topic of interest. Constant reference to the original purpose of the inquiry and the possible alternative explanations may help to reduce this potential problem.
3. Time-Series Analysis
A third analytic technique is to conduct a time-series analysis, directly analogous to the time-series analysis conducted in experiments and quasi-experiments. Such analysis can follow many intricate patterns, which have been the subject of several major textbooks in experimental and clinical psychology with single subjects; the interested reader is referred to such works for further detailed guidance. The more precise the pattern, the more that the time-series analysis also will lay a firm foundation for the conclusions of the case study.
Simple time series.
Compared to the more general pattern-matching analysis, a time-series design can be much simpler in one sense: in time series, there may only be a single dependent or independent variable. In these circumstances, when a large number of data points are relevant and available, statistical tests can even be used to analyse the data. However, the pattern can be more complicated in another sense because the appropriate starting or ending points for this single variable may not be clear.
Complex time series
The time-series designs can be more complex when the trends within a given case are postulated (presumed) to be more complex. One can postulate, for instance, not merely rising or declining trends but some rise followed by some decline within the same case. This type of mixed pattern, across time, would be the beginning of a more complex time series. In general, although a more complex time series creates greater problems for data collection, it also leads to a more elaborate trend that can strengthen an analysis.
Chronologies
The compiling of chronological events is a frequent technique in case studies and may be considered a special form of time-series analysis. The chronological sequence again draws on a major strength of case studies cited earlier: that case studies allow you to trace events over time. The procedure can have an important analytic purpose, namely to investigate presumed causal events, because the basic sequence of a cause and its effect cannot be temporally inverted. The analytic goal is to compare the chronology with that predicted by some explanatory theory, in which the theory has specified one or more of the following kinds of conditions:
- Some events must always occur before other events, with the reverse sequence being impossible.
- Some events must always be followed by other events, on a contingency basis.
- Some events can only follow other events after a pre-specified interval of time.
- Certain time periods in a case study may be marked by classes of events that differ substantially from those of other time periods.
Summary conditions for time-series analysis
Whatever the stipulated nature of the time series, the important case study objective is to examine some relevant 'how' and 'why' questions about the relationship of events over time, not merely to observe the time trends alone. An interruption in a time series will be the occasion for postulating potential causal relationships; similarly, a chronological sequence should contain causal postulates.
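A minimal sketch, with hypothetical events and dates, of one chronology check that follows from the first condition above: a presumed cause must precede its presumed effect in the observed record.

```python
# A minimal sketch of a chronology check: verify that each presumed cause
# precedes its presumed effect in the observed record. Events and dates are hypothetical.

from datetime import date

observed_events = {
    "new_policy_adopted": date(2012, 3, 1),
    "training_started":   date(2012, 6, 15),
    "error_rate_dropped": date(2013, 1, 10),
}

# Theory: adoption must precede training, and training must precede the drop.
predicted_order = [("new_policy_adopted", "training_started"),
                   ("training_started", "error_rate_dropped")]

for earlier, later in predicted_order:
    ok = observed_events[earlier] < observed_events[later]
    print(f"{earlier} before {later}: {'consistent' if ok else 'contradicts theory'}")
```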
4. Logic Models
This fourth technique has become increasingly useful in recent years, especially in doing case study evaluations. A logic model deliberately stipulates a complex chain of events over an extended period of time. The events are staged in repeated cause-effect-cause-effect patterns, whereby a dependent variable (event) at an earlier stage becomes the independent variable (causal event) for the next stage.
This process can help a group define more clearly its vision and goals, as well as how the sequence of programmatic actions will accomplish those goals. As an analytic technique, the use of logic models consists of matching empirically observed events to theoretically predicted events.
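As an illustration only, the sketch below represents a logic model as an ordered chain of predicted stages and checks which of them have been observed so far; the stage names are hypothetical.

```python
# A minimal sketch of a logic model as a staged chain of events, where each
# stage's outcome is the presumed cause of the next stage. Stage names are hypothetical.

predicted_chain = [
    "funding received",
    "community planning group formed",
    "prevention programme launched",
    "risk behaviour reduced",
]

observed_stages = {"funding received", "community planning group formed",
                   "prevention programme launched"}

for i, stage in enumerate(predicted_chain, start=1):
    status = "observed" if stage in observed_stages else "not (yet) observed"
    print(f"Stage {i}: {stage} -> {status}")
```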
Individual-level logic model
The first type assumes that your case study is about an individual person (Figure 5.2). The events flow across a series of boxes and arrows reading from left to right in the figure.
Firm or organizational-level logic model
A second type of logic model traces events taking place in an individual organization, such as a manufacturing firm. Figure 5.3 shows how changes in a firm are claimed to lead to improved manufacturing and eventually to improved business performance.
An alternative configuration for an organizational-level logic model
Graphically, nearly all logic models follow a linear sequence. In real life, however, events can be more dynamic and do not necessarily progress linearly. One such set of events might occur in relation to the 'reforming' or 'transformation' of an organization. For instance, business firms may undergo many significant operational changes, and the business's mission and culture may also change. Figure 5.4 presents an alternatively configured, third type of logic model, reflecting these conditions.
Program-level logic model
Returning to the more conventional linear model, Figure 5.5 contains a fourth and final type of logic model. Here, the model depicts the rationale underlying a major federal program, aimed at reducing the incidence of HIV/AIDS by supporting community planning and prevention initiatives.
5. Cross-Case Synthesis
A fifth technique applies specifically to the analysis of multiple cases. The technique is especially relevant if a case study consists of at least two cases. The analysis is then likely to be easier and the findings more robust than with only a single case. Cross-case syntheses can be performed whether the individual case studies have previously been conducted as independent research studies or as a predesigned part of the same study.
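A minimal sketch, with hypothetical cases and categories, of the kind of 'word table' often used in cross-case synthesis: each row is a case and each column a category, so patterns can be compared across the cases.

```python
# A minimal sketch of a simple cross-case "word table". All entries are hypothetical.

word_table = {
    "Case A": {"leadership support": "strong", "outcome": "improved"},
    "Case B": {"leadership support": "weak",   "outcome": "unchanged"},
    "Case C": {"leadership support": "strong", "outcome": "improved"},
}

categories = ["leadership support", "outcome"]
print(f"{'case':9} | " + " | ".join(categories))
for case, values in word_table.items():
    print(f"{case:9} | " + " | ".join(values[c] for c in categories))
```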
Pressing for a high quality analysis
No matter what specific analytic strategy or techniques have been chosen, you must do everything to make sure that your analysis is of the highest quality. First, your analysis should show that you attended to all the evidence. Second, your analysis should address, if possible, all major rival interpretations. Third, your analysis should address the most significant aspect of your case study. Fourth, you should use your own prior, expert knowledge in your case study.