List of the Important Terms of Business Research Methods (4th ed.)

List of the Important Terms of Business Research Methods by Blumberg & Cooper, 2014 edition, donated to WorldSupporter

 

A population element = the unit of study - the individual participant or object on which the measurement is taken.

A population = the total collection of elements about which some conclusion is to be drawn.

A census = a count of all the elements in a population. The listing of all population elements from which the sample will be drawn is called the sample frame.

Accuracy = the degree to which bias is absent from the sample. When the sample is drawn properly, the measure of behaviour, attitudes or knowledge of some sample elements will be less than the measure of those same variables drawn from the population. Also, the measure of the behaviour, attitudes, or knowledge of other sample elements will be more than the population values. Variations in these sample values offset each other, resulting in a sample value that is close to the population value.

Systematic variance = “the variation in measures due to some known or unknown influences that ‘cause’ the scores to lean in one direction more than another.” Systematic variance may be reduced by, for example, increasing the sample size.

Precision: precision of estimate is the second criterion of a good sample design. In order to interpret the findings of research, a measurement of how closely the sample represents the population is needed.

Sampling error = The numerical descriptors that describe samples may be expected to differ from those that describe populations because of random fluctuations natural to the sampling process.

Representation = the members of a sample are selected using probability or non-probability procedures.

Probability sampling is based on the concept of random selection – a controlled procedure which ensures that each population element is given a known non-zero chance of selection.

Non-probability sampling is arbitrary and subjective; when elements are chosen subjectively, there is usually some pattern or scheme used. Thus, each member of the population does not have a known chance of being included.

Element selection - samples may also be classified by how elements are selected: individually and directly from the population (viewed as a single pool) or with additional controls imposed.

Probability sampling is based on the concept of random selection – a controlled procedure that assures that each population element is given a known non-zero chance of selection. Only probability samples provide estimates of precision and offer the opportunity to generalize the findings from the sample to the population of interest.

Population parameters = summary descriptors (e.g., incidence proportion, mean, variance) of variables of interest in the population.

Sample statistics = used as estimators of population parameters. The sample statistics are the basis of conclusions about the population. Depending on how measurement questions are phrased, each may collect a different level of data. Each different level of data also generates different sample statistics.

The population proportion of incidence “is equal to the number of elements in the population belonging to the category of interest, divided by the total number of elements in the population”.
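As a hedged worked example (the numbers are hypothetical, not from the text): if 150 of the 1,000 stores in a population stock the product category of interest, then

proportion of incidence = elements in the category / total elements = 150 / 1,000 = 0.15 (15 per cent)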

The sampling frame = the list of elements from which the sample is actually drawn; it is closely related to the population. Ideally, it is a complete and correct list of population members only.

Stratified random sampling = the process by which the sample is constrained to include elements from each of the segments (strata) of the population.

Cluster sampling = this is where the population is divided into groups of elements with some groups randomly selected for study.

Area sampling = the most important form of cluster sampling. It can be used when the research involves populations that can be identified with some geographic area. This method overcomes the problems of both high sampling cost and the unavailability of a practical sampling frame for individual elements.

The theory of clustering = the means of sample clusters are unbiased estimates of the population mean. This is more often true when clusters are naturally equal in size, such as households in city blocks. While one can deal with clusters of unequal size, it may be desirable to reduce or counteract the effects of unequal size.

Double sampling = It may be more convenient or economical to collect some information by sample and then use this information as the basis for selecting a subsample for further study. This procedure is called double sampling, sequential sampling, or multiphase sampling. It is usually found with stratified and/or cluster designs.

Convenience = non-probability samples that are unrestricted are called convenience samples. They are the least reliable design but normally the cheapest and easiest to conduct. Researchers or field workers have the freedom to choose whomever they find.

Purposive sampling = A non-probability sample that conforms to certain criteria is called purposive sampling. There are two major types – judgment sampling and quota sampling:

Judgment sampling occurs when a researcher selects sample members to conform to some criterion. When used in the early stages of an exploratory study, a judgment sample is appropriate. When one wishes to select a biased group for screening purposes, this sampling method is also a good choice.

Quota sampling is the second type of purposive sampling. It is used to improve representativeness. The logic behind quota sampling is that certain relevant characteristics describe the dimensions of the population. If a sample has the same distribution on these characteristics, then it is likely to be representative of the population regarding other variables over which the researcher has no control. In most quota samples, researchers specify more than one control dimension. Each should meet two tests: (1) it should have a distribution in the population that can be estimated, and (2) it should be pertinent to the topic studied.

Snowball = In the initial stage of snowball sampling, individuals are discovered and may or may not be selected through probability methods. This group is then used to refer the researcher to others who possess similar characteristics and who, in turn, identify others.

The observation approach: involves observing conditions, behaviour, events, people or processes.

The communication approach: involves surveying/interviewing people and recording their response for analysis. Communicating with people about various topics, including participants, attitudes, motivations, intentions and expectations.

Survey: a measurement process used to collect information during a highly structured interview – sometimes with a human interviewer and other times without.

Participant receptiveness = the participant’s willingness to cooperate.

Dealing with non-response errors - By failing to respond or refusing to respond, participants create a non-representative sample for the study overall or for a particular item or question in the study.

In surveys, non-response error occurs when the responses of participants differ in some systematic way from the responses of nonparticipants.

Response errors: occur during the interview (created by either the interviewer or participant) or during the preparation of data for analysis.

Participant-initiated error: when the participant fails to answer fully and accurately – either by choice or because of inaccurate or incomplete knowledge

Interviewer error: response bias caused by the interviewer.

Response bias = Participants also cause error by responding in such a way as to unconsciously or consciously misrepresent their actual behaviour, attitudes, preferences, motivations, or intentions.

Social desirability bias = Participants create response bias when they modify their responses to be socially acceptable or to save face or reputation with the interviewer

Acquiescence = the tendency to be agreeable.

Noncontact rate = ratio of potential but unreached contacts (no answer, busy, answering machine, and disconnects but not refusals).

The refusal rate refers to the ratio of contacted participants who decline the interview to all potential contacts.

Random dialling: requires choosing telephone exchanges or exchange blocks and then generating random numbers within these blocks for calling.
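A minimal sketch of this procedure in Python, assuming hypothetical exchange prefixes and a four-digit block of subscriber numbers (none of these values come from the text):

```python
import random

# Hypothetical telephone exchange prefixes chosen for the study area.
exchanges = ["217", "348", "572"]

def random_dial_numbers(n, seed=42):
    """Generate n random telephone numbers within the chosen exchange blocks."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        exchange = rng.choice(exchanges)       # pick one of the exchange blocks
        subscriber = rng.randint(0, 9999)      # random number within that block
        numbers.append(f"{exchange}-{subscriber:04d}")
    return numbers

print(random_dial_numbers(5))
```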

A survey via personal interview is a two-way conversation between a trained interviewer and a participant.

Computer-assisted personal interviewing (CAPI): special scoring devices and visual materials are used.

Intercept interview: targets participants in centralised locations, such as shoppers in retail malls. Intercept interviews reduce the costs associated with travel.

Outsourcing survey services offers special advantages to managers: a professionally trained research staff, centralized-location interviewing, focus group facilities and computer-assisted facilities are among them.

Causal methods are research methods which answer questions such as “Why do events occur under some conditions and not under others?”

Ex post facto research designs, in which a researcher interviews respondents or observes what is or what has been, have the potential for discovering causality. The distinction is that with an ex post facto design the researcher must accept the world as it is found, whereas an experiment allows the researcher to systematically alter the variables of interest and observe what changes follow.

Experiments are studies which involve intervention by the researcher beyond what is required for measurement.

Replication = repeating an experiment with different subject groups and conditions

Field experiments =a study of the dependent variable in actual environmental conditions

Hypothesis = a relational statement as it describes a relationship between two or more variables

In an experiment, participants experience a manipulation of the independent variable, called the experimental treatment.

The treatment levels of the independent variable are the arbitrary or natural groups the researcher makes within the independent variable of an experiment. The levels assigned to an independent variable should be based on simplicity and common sense.

A control group could provide a base level for comparison. The control group is composed of subjects who are not exposed to the independent variable(s), in contrast to those who receive the experimental treatment. When subjects do not know if they are receiving the experimental treatment, they are said to be blind.

When the experimenters do not know if they are giving the treatment to the experimental group or to the control group, the experiment is said to be double blind.

Random assignment to the groups is required to make the groups as comparable as possible with respect to the dependent variable. Randomization does not guarantee that if the groups were pretested they would be pronounced identical; but it is an assurance that those differences remaining are randomly distributed.
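A minimal sketch of random assignment in Python, assuming a simple list of subject IDs (hypothetical helper name, not from the text):

```python
import random

def randomly_assign(subjects, seed=7):
    """Shuffle the subjects and split them into experimental and control groups."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                    # every ordering is equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]      # (experimental group, control group)

experimental, control = randomly_assign(range(1, 21))
print("Experimental:", sorted(experimental))
print("Control:", sorted(control))
```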

Matching may be used when it is not possible to randomly assign subjects to groups. This employs a non-probability quota sampling approach. The object of matching is to have each experimental and control subject matched on every characteristic used in the research.

Validity = whether a measure accomplishes its claims.

Internal validity = do the conclusions drawn about a demonstrated experimental relationship truly imply cause?

External validity – does an observed causal relationship generalize across persons, settings, and times? Each type of validity has specific threats a researcher should guard against.

Statistical regression = this factor operates especially when groups have been selected by their extreme scores. No matter what is done between O1 and O2, there is a strong tendency for the average of the high scores at O1 to decline at O2 and for the low scores at O1 to increase. This tendency results from imperfect measurement that, in effect, records some persons abnormally high and abnormally low at O1. In the second measurement, members of both groups score more closely to their long-run mean scores.
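A small simulation can make regression toward the mean concrete. The sketch below (hypothetical numbers, not from the text) models an observed score as a stable long-run mean plus measurement noise, selects the highest scorers at O1, and shows that the same group's average falls back toward the population mean at O2 even though nothing was done to them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
long_run_mean = rng.normal(50, 10, n)        # each person's stable long-run score
o1 = long_run_mean + rng.normal(0, 10, n)    # first measurement = true score + noise
o2 = long_run_mean + rng.normal(0, 10, n)    # second measurement, independent noise

high = o1 > np.percentile(o1, 90)            # group selected for extreme O1 scores
print("High group mean at O1:", round(o1[high].mean(), 1))   # well above 50
print("Same group mean at O2:", round(o2[high].mean(), 1))   # closer to 50
```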

Experimental mortality – this occurs when the composition of the study groups changes during the test.

Attrition is especially likely in the experimental group and with each dropout the group changes. Because members of the control group are not affected by the testing situation, they are less likely to withdraw.

Diffusion or imitation of treatment = if the control group learns of the treatment (by talking to people in the experimental group) it eliminates the difference between the groups.

Compensatory equalization = where the experimental treatment is much more desirable, there may be an administrative reluctance to withdraw the control group members. Compensatory actions for the control groups may confound the experiment.

Compensatory rivalry = this may occur when members of the control group know they are in the control group. This may generate competitive pressures.

Resentful demoralization of the disadvantaged = when the treatment is desirable and the experiment is obtrusive, control group members may become resentful of their deprivation and lower their cooperation and output.

Reactivity of testing on X – the reactive effect refers to sensitising subjects via a pre-test so that they respond to the experimental stimulus (X) in a different way. This before-measurement effect can be particularly significant in experiments where the IV is a change in attitude.

Interaction of selection and X = the process by which test subjects are selected for an experiment may be a threat to external validity. The population from which one selects subjects may not be the same as the population to which one wishes to generalize results.

Static Group Comparison – the design provides for two groups, one of which receives the experimental stimulus while the other serves as a control.

Pre-test-Post-test Control Group Design – this design consists of adding a control group to the one-group pre-test-post-test design and assigning the subjects to either of the groups by a random procedure (R).

Post-test-Only Control Group Design – The pre-test measurements are omitted in this design. Pre-tests are well established in classical research design but are not really necessary when it is possible to randomize.

Non-equivalent Control Group Design – this is a strong and widely used quasi-experimental design. It differs from the pre-test-post-test control group design - the test and control groups are not randomly assigned.

There are two varieties.
- Intact equivalent design, in which the membership of the experimental and control groups is naturally assembled. Ideally, the two groups are as alike as possible. This design is especially useful when any type of individual selection process would be reactive.
- The self-selected experimental group design is weaker because volunteers are recruited to form the experimental group, while no volunteer subjects are used for the control group. This design is likely when subjects believe it would be in their interest to be a subject in an experiment.

Separate Sample Pre-test-Post-test Design = most applicable when the researcher cannot control when and to whom to introduce the treatment but can decide when and whom to measure. This is a weaker design because several threats to internal validity are not handled adequately.

Measurement in research consists of assigning numbers to empirical events, objects or properties, or activities in compliance with a set of rules.

Mapping rules = a scheme for assigning numbers or symbols to represent aspects of the event being measured

Objects include the concepts of ordinary experience, such as touchable items like furniture. Objects also include things that are not as concrete, such as genes, attitudes and peer-group pressures.

Properties are the characteristics of the object. A person’s physical properties may be stated in terms of weight, height.

Psychological properties: include attitudes and intelligence.

Social properties include leadership ability, class affiliation, and status. In a literal sense, researchers do not measure either objects or properties.

Dispersion: describes how scores cluster or scatter in a distribution. Nominal data are statistically weak, but they can still be useful.

Nominal scales = with these scales, a researcher is collecting information on a variable that naturally (or by design) can be grouped into two or more categories that are mutually exclusive and collectively exhaustive.

Ordinal scales = include the characteristics of the nominal scale plus an indicator of order. Ordinal data require conformity to a logical postulate: If a > b and b > c, then a > c.

Interval scales = have the power of nominal and ordinal data plus one additional strength: they incorporate the concept of equality of interval (the scaled distance between 1 and 2 equals the distance between 2 and 3).

Ratio scales = incorporate all of the powers of the previous scales plus the provision for an absolute zero or origin. Ratio data represent the actual amounts of a variable. Measures of physical dimensions such as weight, height and distance are examples.

Content Validity – of a measuring instrument is the extent to which it provides adequate coverage of the investigative questions guiding the study. If the instrument contains a representative sample of the universe of subject matter of interest, then content validity is good. To evaluate the content validity of an instrument, one must first agree on what elements constitute adequate coverage. A determination of content validity involves judgment.

Criterion-Related Validity – reflects the success of measures used for prediction or estimation. You may want to predict an outcome or estimate the existence of a current behaviour or time perspective.

Construct validity – in attempting to evaluate construct validity, we consider both the theory and the measuring instrument being used. If we were interested in measuring the effect of trust in cross-functional teams, the way in which ‘trust’ was operationally defined would have to correspond to an empirically grounded theory. If a known measure of trust was available, we might correlate the results obtained using this measure with those derived from our new instrument.

Reliability has to do with the accuracy and precision of a measurement procedure. A measure is reliable to the degree that it supplies consistent results. Reliability is a necessary contributor to validity but is not a sufficient condition for validity.

Stability – a measure is said to possess stability if consistent results with repeated measurements of the same person with the same instrument can be secured.

An observation procedure is stable if it gives the same reading on a particular person when repeated one or more times.

Equivalence – a second perspective on reliability considers how much error may be introduced by different investigators (in observation) or different samples of items being studied (in questioning or scales).

Internal Consistency – a third approach to reliability uses only one administration of an instrument or test to assess the internal consistency or homogeneity among the items.

The split-half technique can be used when the measuring tool has many similar questions or statements to which participants can respond.
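A minimal sketch of the split-half idea in Python, assuming a small matrix of hypothetical item scores; the correlation between the two halves is stepped up to full-test length with the Spearman-Brown formula:

```python
import numpy as np

# Rows = participants, columns = similar items on the instrument (hypothetical scores).
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
])

odd_half = scores[:, 0::2].sum(axis=1)     # totals over items 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)    # totals over items 2, 4, 6

r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r_half / (1 + r_half)         # Spearman-Brown step-up to full length
print(f"half-test r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")
```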

Practicality = concerned with a wide range of factors of economy, convenience, and interpretability. The scientific requirements of a project call for the measurement process to be reliable and valid, while the operational requirements call for it to be practical.

Scaling = the ‘procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question.’

Ranking scales constrain the study participant to making comparisons and determining order among two or more properties (or their indicants) or objects.

A choice scale requires that participants choose one alternative over another.

Categorization asks participants to put themselves or property indicants in groups or categories.

Sorting requires that participants sort cards (representing concepts or constructs) into piles using criteria established by the researcher. The cards might contain photos or images or verbal statements of product features.

Nominal scales classify data into categories without indicating order, distance, or unique origin.

Ordinal data show relationships of more than and less than but have no distance or unique origin.

Interval scales have both order and distance but no unique origin.

Ratio scales possess all four features: classification, order, distance and unique origin. The assumptions underlying each level of scale determine how a particular measurement scale’s data will be analysed statistically.

Uni-dimensional scale = one seeks to measure only one attribute of the participant or object.

Multidimensional scale recognizes that an object might be better described with several dimensions than on a uni-dimensional continuum.

Balanced rating scale has an equal number of categories above and below the midpoint. Generally, rating scales should be balanced, with an equal number of favourable and unfavourable response choices.

Unbalanced rating scale has an unequal number of favourable and unfavourable response choices.

Unforced-choice rating scale provides participants with an opportunity to express no opinion when they are unable to make a choice among the alternatives offered.

Forced-choice scale requires that participants select one of the offered alternatives. Researchers often exclude the response choice ‘no opinion’, ‘don’t know’, or ‘neutral’ when they know that most participants have an attitude on the topic.

Halo effect = the systematic bias that the rater introduces by carrying over a generalized impression of the subject from one rating to another. Halo is especially difficult to avoid when the property being studied is not clearly defined, is not easily observed, is not frequently discussed, involves reactions with others, or is a trait of high moral importance.

Simple category scale (also called a dichotomous scale) offers two mutually exclusive response choices. These may be ‘yes’ and ‘no’, ‘important’ and ‘unimportant’.

When there are multiple options for the rater but only one answer is sought, the multiple-choice, single-response scale is appropriate.

Likert scale is the most frequently used variation of the summated rating scale. Summated rating scales consist of statements that express either a favourable or an unfavourable attitude toward the object of interest. The participant is asked to agree or disagree with each statement. Each response is given a numerical score to reflect its degree of attitudinal favourableness, and the scores may be summed to measure the participant’s overall attitude.

Item analysis assesses each item based on how well it discriminates between those persons whose total score is high and those whose total score is low. The mean scores for the high-score and low-score groups are then tested for statistical significance by computing t values. After finding the t values for each statement, they are rank-ordered, and those statements with the highest t values are selected.
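A minimal sketch of this procedure in Python, assuming a small matrix of hypothetical Likert responses: total scores are summed per participant, high and low total-score groups are formed from the top and bottom quartiles, and each statement's discrimination is checked with an independent-samples t test:

```python
import numpy as np
from scipy import stats

# Rows = participants, columns = Likert statements scored 1 (disagree) to 5 (agree).
responses = np.random.default_rng(1).integers(1, 6, size=(40, 8))

totals = responses.sum(axis=1)                             # summated rating per person
high = responses[totals >= np.percentile(totals, 75)]      # high total-score group
low = responses[totals <= np.percentile(totals, 25)]       # low total-score group

t_values = [stats.ttest_ind(high[:, j], low[:, j]).statistic
            for j in range(responses.shape[1])]

# Statements with the highest |t| discriminate best between the two groups.
for j in np.argsort(np.abs(t_values))[::-1]:
    print(f"statement {j + 1}: t = {t_values[j]:.2f}")
```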

The semantic differential (SD) scale measures the psychological meanings of an attitude object using bipolar adjectives. Researchers use this scale for studies such as brand and institutional image.

Numerical/multiple rating list scales have equal intervals that separate their numeric scale points. The verbal anchors serve as the labels for the extreme points.

Stapel scale = used as an alternative to the semantic differential, especially when it is difficult to find bipolar adjectives that match the investigative question.

Constant-sum scales = a scale that helps the researcher discover proportions.

Graphic rating scales – the scale was originally created to enable researchers to discern fine differences. Theoretically, an infinite number of ratings are possible if participants are sophisticated enough to differentiate and record them.

They are instructed to mark their response at any point along a continuum. Usually, the score is a measure of length (millimetres) from either endpoint. The results are treated as interval data. The difficulty is in coding and analysis. This scale requires more time than scales with predetermined categories.

Ranking scales – in ranking scales, the participant directly compares two or more objects and makes choices among them.

Arbitrary scales are designed by collecting several items that are unambiguous and appropriate to a given topic. These scales are not only easy to develop, but also inexpensive and can be designed to be highly specific. Moreover, arbitrary scales provide useful information and are adequate if developed skilfully.

Consensus scaling requires items to be selected by a panel of judges, who then evaluate them on:

  • Their relevance to the topic area

  • Their potential for ambiguity

  • The level of attitude they represent

Scalogram analysis = a procedure for determining whether a set of items forms a uni-dimensional scale.

Factor scales include a variety of techniques that have been developed to address two problems:

  1. How to deal with a universe of content that is multidimensional

  2. How to uncover underlying dimensions that have not been identified by exploratory research

A disguised question = designed to conceal the question’s true purpose.

Administrative questions – identify the participant, interviewer, interview location, and conditions. These questions are rarely asked of the participant but are necessary for studying patterns within the data and identifying possible error sources.

Classification questions – usually cover sociological-demographic variables that allow participants’ answers to be grouped so that patterns are revealed and can be studied.

Target questions (structured or unstructured) – address the investigative questions of a specific study. These are grouped by topic in the survey. Target questions may be structured (they present the participants with a fixed set of choices, often called closed questions) or unstructured (they do not limit responses but do provide a frame of reference for participants’ answers; sometimes referred to as open-ended questions).

Response strategy - a third major area in question design is the degree and form of structure imposed on the participant.

The various response strategies offer options that include unstructured response (or open-ended response, the free choice of words) and structured response (or closed response, specified alternatives provided).

Free-response questions - also known as open-ended questions, ask the participant a question and either the interviewer pauses for the answer (which is unaided) or the participant records his or her ideas in his or her own words in the space provided on a questionnaire.

Dichotomous question - suggest opposing responses (yes/no) and generate nominal data.

Checklist – when multiple responses to a single question are required, the question should be asked in one of three ways: the checklist, rating, or ranking strategy. If relative order is not important, the checklist is the logical choice. Checklists are more efficient than asking for the same information with a series of dichotomous selection questions, one for each individual factor. Checklists generate nominal data.

Summary: Business Research Methods

This summary is based on the 2012-2013 academic year.

Edition: 3rd, 2011

Author: Bryman & Bell

 

 

 

Chapter 1: Business Research Strategies

Introduction

Business research cannot exist by itself. It means that business research always intersects with such sciences as sociology, psychology and economics (including such fields as marketing, accounting and finance).
There are two points which are especially vital in studying methodology. Firstly, methodology is not the same for everyone - different organizations and people have their own vision of how these methods must work. Secondly, it is important to take into consideration the environment in which business research occurs. For example, you can research the business environment in a small company where the situation is stable, in a relatively big one which plays a role in an acquisition or merger, or in a company which went bankrupt and now sells all of its assets. In all three cases, different methods of research should be used in order to get a valuable result.
Quite often business researchers conduct studies based on their own past experience, which are probably of particular interest to them.
There are different opinions on how business research should be conducted. These opinions have led to two different “modes” of knowledge production:
Mode 1: Knowledge production is driven by an academic agenda. Discoveries build on existing knowledge. Knowledge production is a linear process. The audience and customers of this mode are the academic community.
Mode 2: Knowledge production is driven by a process that crosses the boundaries of disciplines. In this mode, discoveries are closely related to the context of the study. The process is less linear than in “mode 1”. The audience consists of academics, policy-makers and practitioners.
Some researchers suggest that “mode 2” of knowledge production is in many ways better than the first one.
Any new researcher must deal with different questions while conducting the research. The most important are what the aim or function of business research is and who the audiences of this research are.
Moreover, it is also difficult to choose a topic of business research. There are four points that must be evaluated:
The influence of the researcher. It is vital to understand how the researcher collects data, analyses them and interprets the results.
Difficulty of understanding past research. Methodological research is, as a rule, described in less detail than, for example, sociological research. For new researchers this may cause some difficulties in interpreting the results.
Kinds of methods. In some cases it is hard to find past examples of research on the topic; in others there are quite a lot of them. This must also be taken into consideration.
Research progress. While conducting business research, one should carefully investigate how former researchers carried out their studies. This will help to improve and develop the ways in which methodological research is performed.
Theory and Research
Theories are divided into two groups: grand theories and theories of the middle range. As a rule, grand theories are more abstract and general than middle-range theories, which operate in a limited domain. Management and business research is driven by theories of the middle range.
Empiricism is a theory of knowledge which asserts that knowledge arises from experience. In other words, every theory must first be tested before it can be considered as knowledge.
Deductive and Inductive Theory
There are two types of relationship between theory and research. It depends on whether we talk about deductive or inductive theory.
In the deductive approach, the researcher starts from existing theory, deduces one or more hypotheses, and then collects data to test them; the findings confirm or reject the hypotheses and may lead to a revision of the theory. The deductive relationship between theory and research can be described by the following sequence:

Theory → hypothesis → data collection → findings → hypotheses confirmed or rejected → revision of theory

Principle of deductivism: the purpose of theory is to generate hypotheses that can be tested, thereby allowing the theory itself to be tested.
As a rule, the deductive method follows the sequence above. Nevertheless, there are cases when not all steps of the sequence are relevant to the research. There are different reasons for this, some of which are:

  • someone else has published theory about his/her findings based on the same data before the researcher develops his own theory
  • the data may become relevant to the theory only after data has been collected
  • the data may not fit with original hypothesis

Some researchers prefer the inductive approach, which reverses the deductive one:

Observations/findings → theory

Principle of inductivism: knowledge is arrived at through gathering data and information, which then provide a basis for theory.
The deductive strategy is typically associated with the quantitative approach and the inductive strategy with the qualitative approach.
Epistemological Considerations
Positivism is a position that promotes the application of the methods of the natural sciences to the study of social reality and beyond. Positivism contains elements of both deductive and inductive strategies, although the deductive element is somewhat more prominent than the inductive one.
Phenomenology is a philosophy that investigates how individuals make sense of the world around them. It also recognizes that researchers perceive the world in their own way; these preconceptions must therefore be bracketed out.
Interpretivism is an alternative to the positivist orthodoxy that respects the differences between people and the objects of the natural sciences. It therefore allows researchers to be more subjective compared with positivism. Interpretivism includes Weber’s notion of Verstehen, the hermeneutic-phenomenological tradition and symbolic interactionism.
Ontological Considerations
Objectivism is an ontological position that states that social phenomena and their meanings exist independently of and separately from human actions.
Constructionism is another ontological position. It suggests that categories such as organization and culture are not pre-given but are continually constructed and revised through social interaction and human action.
Relationship of Epistemology and Ontology to Business Research
A paradigm is “a cluster of beliefs and dictates for scientists in a particular discipline influence what should be studied, how research should be done, [and] how results should be interpreted” (Bryman 1988a: 4). One of the most important features of paradigms is their incommensurability: two paradigms cannot be reconciled with each other.
Each paradigm consists of assumptions which are represented in one of two ways:

  • objectivism - there is an external viewpoint from which it is possible to analyze the organization or culture
  • subjectivism - the organization and culture can be studied only from the point of view of the individuals who are directly involved in their activities

Epistemology and ontology related to business research/Paradigms
The main influence is that ontological commitments influence the ways in which research questions are formulated and how they are executed. A paradigm is a collection of beliefs and dictates which, for scientists in a particular discipline, influence what should be studied, how the research should be conducted and how the results should be interpreted. Paradigms can be divided in two ways:
Objectivist. The organization is viewed from an external viewpoint and is made up of consistent processes and structures.
Subjectivist. The organization is socially constructed, used by individuals to make sense of social experience, so it can be studied from the point of view of the individuals involved.
Each paradigm also constructs assumptions about the function and purpose of the research in business:

  • Regulatory. The purpose is to describe what goes on in organizations, possibly suggesting small changes, but without making judgements.
  • Radical. Judgements are made about how organizations ought to be, and ideas are produced to achieve this.

There are four possible paradigmatic positions:

  • functionalist - studies organizations on the basis of problem-solving, which leads to rational explanation
  • interpretative - believes that understanding of an organization (or culture) must be based on the experience of those who work (or live) there
  • radical humanist - thinks of the organization as a social arrangement from which employees must be emancipated
  • radical structuralist - understands the organization as a product of power relationships, which might result in conflicts within the organization

Research Strategy: Quantitative and Qualitative
There are many scientists and researchers who think that the distinction between quantitative and qualitative research is no longer useful. Nevertheless, there are certain important issues which differentiate the quantitative and qualitative approaches: their principal orientation to theory, and their epistemological and ontological orientations. Quantitative research uses a deductive strategy, a positivist epistemology and an objectivist ontology. The qualitative approach is associated with inductive, interpretivist and constructionist orientations.

 

Nevertheless, it is false to think that it is impossible for research to deviate from these orientations. For example, qualitative research is usually concerned with the generation of theory (inductive method), not with its testing (deductive method). However, there have been cases in which qualitative research was used to test a theory rather than create it.
Influences on the Conduct of Business Research
Business research is influenced by five factors. Three of them were already discussed earlier (theory, epistemology and ontology). The other two are values and practical considerations.
 

Values reflect either the personal beliefs or the feelings of the researcher. The researcher must remain objective, otherwise the research would be biased. Values can intrude in the choice of research area and methods, the implementation of data collection, its analysis and interpretation, and so on.
The last factor influencing business research, practical considerations, should never be underestimated. This term covers choices of research strategy, design and method, the nature of the people and topic being studied, and other issues that change from case to case depending on the nature of the study.

Chapter 2: Research Design

Introduction
A research design provides a guideline for collection, analysis and interpretation of data. A research method is a way for collecting data.
Criteria in Business Research
There are three most important criteria for the evaluation of business and management research. They are: reliability, replication and validity.
Reliability reflects the question of whether the results of a certain study are repeatable. After all, if different researchers carried out the same study with different outcomes, it would be unclear which of them was right and which made a mistake. A reliable study is one whose outcome is the same every time, whoever the researcher is.
Sometimes researchers replicate the findings of others for different reasons. Therefore, studies must be replicable. The reason is straightforward - if a researcher does not describe his or her procedures and results in detail, it will be impossible for other researchers to replicate the study or interpret the results.
Validity is concerned with the integrity of the conclusions of a research study. In other words, it examines whether the research is conducted in the right way. Scientists distinguish four types of validity.
Measurement validity (or construct validity). This type of validity deals mainly with quantitative research. It is concerned with the question of whether a measure really reflects the concept it is supposed to measure. If it does not, then both validity and reliability are questionable.
Internal validity. This type of validity deals with the question of whether a conclusion that incorporates a causal relationship between two variables holds. If we suggest that a certain independent variable causes variation in a dependent variable, we must be sure that this is true and that no other factor is the reason for this variation.
External validity. It concerns the question of whether the results of a study also hold beyond the specific research context.
Ecological validity. This last type of validity checks whether scientific findings also hold in everyday life. For example, if an experiment is carried out in a laboratory or in a special room, there is a considerable probability that the findings will be ecologically invalid.
A variable is an attribute on which cases vary. If an attribute does not vary, we say that it is constant. Constant attributes are of less interest to the researchers than variables. Two most common types of variables are independent and dependent variables.
There are four aspects of trustworthiness that are parallel to some quantitative research criteria described before:

  • Credibility (parallel to internal validity) - the extent to which findings are believable.
  • Transferability (parallel to external validity) - question whether findings in particular research apply to other contexts.
  • Dependability (parallel to reliability) - concerns the question of whether the findings are likely to apply at other times.
  • Confirmability (parallel to objectivity) - deals with the problem of whether the researcher’s values and views have interfered with the research to a high degree.

Naturalism has many different meanings; the most common ones are the following. Naturalism (1) means a commitment to adopting the principles of natural science methods; (2) means being true to the nature of the phenomenon being studied; (3) is a style of research that tries to minimize the intrusion of artificial methods of data collection.
Qualitative research is often concerned with naturalism: it tries to collect data that are valid in natural settings, not only in the laboratory. By and large, qualitative research deals with ecological validity to a greater degree than quantitative research.
Research Designs
In the rest of the chapter, we will discuss five types of research design: experimental, cross-sectional, longitudinal, case study and comparative designs.
Experimental Design
One of the characteristics of experimental design is that it involves significant confidence in the strength and trustworthiness of causal findings - it means that this design is strong in terms of internal validity.
If we conduct an experiment, we have to manipulate an independent variable in order to find out its impact on the dependent one. There is a big disadvantage, though, which is the reason that there are far fewer experiments than there could be: many variables cannot be manipulated, for example gender, age, share prices or interest rates.
It is necessary to distinguish between two types of experiments - laboratory experiment and field experiment.  The former one is conducted in a laboratory, while the latter one takes place in real-life settings.
The classical experimental design looks as follows: there are two groups, one called the experimental group, the other called the control group. When studying the effect of a treatment, the experimental group receives the treatment and the control group does not.
Before getting a treatment, both groups are being observed for a certain measurement (which the researchers are interested in). After treatment, both groups are evaluated again. After this evaluation, researchers are able to find out whether there is a difference between the two groups.
This can be represented by the following schema:

Experimental group: Obsexp,1 (at T1) → treatment → Obsexp,2 (at T2)
Control group: Obscon,1 (at T1) → no treatment → Obscon,2 (at T2)
T1 - before getting a treatment.
T2 - after getting a treatment.
Obsexp,1 and Obsexp,2 - measurements of experimental group before and after treatment, respectively.
Obscon,1 and Obscon,2 - measurements of control group before and after treatment, respectively.
If we talk about research design, a question arises - what is the purpose of a control group? In fact, it eliminates the influence on the experiment of factors other than the independent variable (in our case, the treatment). If the experimental group is affected by any external factor, the control group is affected as well (sometimes to a lesser or greater degree, but still affected). Therefore, in the end, when we compare the two groups, these side effects cancel each other out. Here are some examples of such factors:
Testing. It is possible that the experimental group, knowing that it is being studied, behaves differently than in everyday life. The control group also experiences this effect, and therefore the effect cancels out when comparing the groups.
History. Some events in the past may cause changes in the observations. Without a control group, we could not be sure that such events were not the cause in our experiment.
Maturation. Like everything else, people change, and these changes may have effects on the dependent variable. Since these changes apply to the control group as well, we can discount this effect.
Selection. If both the experimental and control groups have been selected randomly, there is less probability that differences between Obsexp,2 and Obscon,2 are caused by pre-existing differences.
Ambiguity about the direction of causal influence. Sometimes it is difficult to determine which variable causes the other. In some cases, the existence of a control group may help to solve this problem.
There are five main threats to external validity of an experiment: interaction of selection and treatment, interaction of setting and treatment, interaction of history and treatment, interaction effects of pre-testing and reactive effects of experimental arrangements.
In a laboratory experiment, the researcher has greater control over the study than in a field experiment. In the laboratory, there might be an interaction of selection and treatment and an interaction of setting and treatment. On the other hand, there is no possibility of pre-testing effects when no pre-tests are used.
Quasi-experiments have some characteristics of experiments but do not fulfil all internal validity requirements. As a rule, experiments without a control group cannot be considered quasi-experiments.

Summary with Business Research Methods by Blumberg


Chapter 6: Sampling

The unit of analysis depicts the level at which the research is performed and which objects are researched.

The essential application of sampling is that it allows for drawing conclusions about the entire population, by studying some of the elements in a population.
A population element is the unit of study - the individual participant or object on which the measurement is taken. A population is the total collection of elements about which some conclusion is to be drawn.
A census is a count of all the elements in a population. The listing of all population elements from which the sample will be drawn is called the sample frame.

There are several compelling reasons for sampling:

  1. Lower cost: the difference between the sample costs and census costs is substantial.

  2. Greater accuracy of results: some argue that the quality of a study is often better with sampling than with a census.

However, when the population is small, accessible, and highly variable, accuracy is expected to be greater with a census than a sample (Thus, a census study is: feasible when the population is small and necessary when the elements are quite different from each other).

  3. Greater speed of data collection: the time between the recognition of a need for information and the availability of that information is reduced.

  4. Availability of population elements: some situations simply require sampling. This is the case when, for example, the population is infinite and a census study is not feasible.

The advantages of sampling over census studies are less compelling when the population is small and the variability within the population is high. A census study is:

  • Feasible when the population is small

  • Necessary when the elements are quite different from each other

However, when the population is small and variable, any sample we draw may not be representative of the population from which it is drawn. The resulting values we calculate from the sample are incorrect as estimates of the population values.

The ultimate test of a sample design is how well it represents the characteristics of the population it claims to represent. Thus, the sample must be valid.
The validity of a sample depends on two considerations: accuracy and precision.

Accuracy: is the degree to which bias is absent from the sample. When the sample is drawn properly, the measure of behaviour, attitudes or knowledge of some sample elements will be less than the measure of those same variables drawn from the population. Also, the measure of the behaviour, attitudes, or knowledge of other sample elements will be more than the population values. Variations in these sample values offset each other, resulting in a sample value that is close to the population value.

Thus, an accurate (unbiased) sample is one in which the underestimators offset the overestimators.

Systematic variance has been defined as “the variation in measures due to some known or unknown influences that ‘cause’ the scores to lean in one direction more than another.” The systematic variance may be reduced by, for example, increasing the sample size.

Precision: precision of estimate is the second criterion of a good sample design. In order to interpret the findings of research, a measurement of how closely the sample represents the population is needed. The numerical descriptors that describe samples may be expected to differ from those that describe populations because of random fluctuations natural to the sampling process. This is called sampling error (or random sampling error) and reflects the influence of chance in drawing the sample members.

Sampling error is what is left after all known sources of systematic variance have been accounted for. Precision is measured by the standard error of estimate, a type of standard deviation measurement; the smaller the standard error of estimate, the higher is the precision of the sample. The ideal sample design produces a small standard error of estimate.
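For a sample mean, the standard error of estimate is usually computed as the sample standard deviation divided by the square root of the sample size (a hypothetical worked example, not from the text):

standard error of the mean = s / √n, e.g. s = 12 and n = 400 give 12 / √400 = 0.6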

Sample design

Sample designs can be classified along two dimensions: representation (how the members are selected) and element selection. The different decisions a researcher has to make can be found in exhibit 6.2 on page 179 of the book, and the different types of sampling design are described in table 6.3 on page 180.

Representation - The members of a sample are selected using probability or non-probability procedures.
Probability sampling is based on the concept of random selection – a controlled procedure which ensures that each population element is given a known non-zero chance of selection. Non-probability sampling is arbitrary and subjective; when elements are chosen subjectively, there is usually some pattern or scheme used. Thus, each member of the population does not have a known chance of being included.

Element selection: Whether the elements are selected individually and directly from the population (viewed as a single pool) or additional controls are imposed, element selection may also classify samples. If each sample element is drawn individually from the population at large, it is an unrestricted sample. Restricted sampling covers all other forms of sampling.

Probability sampling is based on the concept of random selection, a controlled procedure that assures that each population element is given a known nonzero chance of selection. Only probability samples provide estimates of precision and offer the opportunity to generalize the findings to the population of interest from the sample population. The unrestricted, simple random sample is the simplest form of probability sampling. Since all probability samples must provide a known non-zero chance of selection for each population element, the simple random sample is considered a special case in which each population element has a known and equal chance of selection. In this section, we use the simple random sample to build a foundation for understanding sampling procedures and choosing probability samples.

Steps in sampling design

Steps in sampling design: There are several questions to be answered in securing a sample. Each requires unique information.

  1. What is the target population? Good operational definitions are critical in choosing the relevant population.

  2. What are the parameters of interest? Population parameters are summary descriptors (e.g., incidence proportion, mean, variance) of variables of interest in the population. Sample statistics are descriptors of those same relevant variables computed from sample data. Sample statistics are used as estimators of population parameters. The sample statistics are the basis of conclusions about the population. Depending on how measurement questions are phrased, each may collect a different level of data. Each different level of data also generates different sample statistics. The population proportion of incidence “is equal to the number of elements in the population belonging to the category of interest, divided by the total number of elements in the population.” Proportion measures are necessary for nominal data and are widely used for other measures as well. The most frequent proportion measure is the percentage.

  3. What is the sampling frame? The sampling frame is closely related to the population. It is the list of elements from which the sample is actually drawn. Ideally, it is a complete and correct list of population members only. A too inclusive frame is a frame that includes many elements other than the ones in which the researcher is interested.

  4. What is the appropriate sampling method? A researcher must follow an appropriate method and make sure that interviewers (or others) cannot modify the selections made and only the selected elements from the original sampling are included.

  5. What size sample is needed? Some principles that influence sample size include the following (a formula sketch follows this list):

    • The narrower or smaller the error range, the larger the sample must be.

    • The greater the dispersion or variance within the population, the larger the sample must be to provide estimation precision.

    • The higher the confidence level in the estimate, the larger the sample must be.

    • The greater the desired precision of the estimate, the larger the sample must be.

    • The greater the number of subgroups of interest within a sample, the greater the sample size must be, as each subgroup must meet minimum sample size requirements.

  6. How much will it cost? Also the costs for each and every experiment have to be taken into consideration, since money is often the factor which limits most of the research.
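As a hedged illustration of how the error range, dispersion and confidence level combine when estimating a mean, the sketch below uses the familiar n = (z·s/E)² rule; all numbers are hypothetical and the function name is invented for illustration:

```python
import math

def sample_size_for_mean(s, error_range, z=1.96):
    """n = (z * s / E) ** 2, rounded up: required size for estimating a mean."""
    return math.ceil((z * s / error_range) ** 2)

# Hypothetical case: dispersion s = 15, desired error range +/- 2, 95% confidence.
print(sample_size_for_mean(s=15, error_range=2))   # 217
# Tighter precision or greater dispersion demands a larger sample:
print(sample_size_for_mean(s=15, error_range=1))   # 865
print(sample_size_for_mean(s=30, error_range=2))   # 865
```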

Probability sampling

Simple random sampling

Since all probability samples must provide a known non-zero probability of selection for each population element, the simple random sample is considered a special case in which each population element has a known and equal chance of selection. However, simple random sampling is often impractical: it requires a population list (sampling frame) that is often not available, and it fails to use all the information about a population, thus resulting in a design that may be wasteful. It may also be expensive to implement. Therefore, alternative probability sampling approaches, such as systematic sampling, stratified sampling, cluster sampling and double sampling, will be considered.

Systematic sampling

In this approach, every kth element in the population is sampled, beginning with a random start of an element in the range of 1 to k. The kth element, or skip interval, is determined by dividing the sample size into the population size to obtain the skip pattern applied to the sampling frame.

K = skip interval = total population size / size of the desired sample

The major advantage of systematic sampling is its simplicity and flexibility. A concern with systematic sampling is the possible periodicity in the population that parallels the sampling ratio. Another difficulty may arise when there is a monotonic trend in the population elements. That is, the population list varies from the smallest to the largest element or vice versa.
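A minimal sketch of systematic selection in Python, assuming a simple numbered sampling frame (hypothetical sizes, not from the text):

```python
import random

def systematic_sample(frame, sample_size, seed=3):
    """Select every k-th element after a random start within the first interval."""
    k = len(frame) // sample_size                  # skip interval = population / sample
    start = random.Random(seed).randint(0, k - 1)  # random start in the first interval
    return [frame[i] for i in range(start, len(frame), k)][:sample_size]

frame = list(range(1, 1001))                       # hypothetical frame of 1,000 elements
print(systematic_sample(frame, sample_size=50)[:10])
```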

Stratified sampling

Most populations can be segregated into several mutually exclusive subpopulations, or strata. Stratified random sampling is the process by which the sample is constrained to include elements from each of the segments. After a population is divided into the appropriate strata, a simple random sample can be taken within each stratum. The results from the study can then be weighted (based on the proportion of each stratum in the population) and combined into appropriate population estimates.

A stratified random sample is often chosen in order to:

  • Increase a sample’s statistical efficiency;

  • Provide adequate data for analysing the various subpopulations or strata;

  • Enable different research methods and procedures to be used in different strata.

Stratification is usually more efficient statistically than simple random sampling and at worst is equal to it. With the ideal stratification, each stratum is homogeneous internally and heterogeneous with other strata. Also, the more strata used, the closer a researcher comes to maximizing interstrata differences (differences between strata) and minimizing intrastratum variances (differences within a given stratum). The size of the stratum samples is determined from two pieces of information:

  • how large the total sample should be

  • how the total sample should be allocated among strata.

Proportionate stratified sampling

In proportionate stratified sampling, each stratum is properly represented so that the sample size drawn from the stratum is proportionate to the stratum’s share of the total population. This approach has higher statistical efficiency than a simple random sample and is much easier to carry out than other stratifying methods.

It also provides a self-weighting sample; the population mean or proportion can be estimated simply by calculating the mean or proportion of all sample cases, eliminating the weighting of responses. On the other hand, proportionate stratified samples often gain little in statistical efficiency if the strata measures and their variances are similar for the major variables under study. Any stratification that departs from the proportionate relationship is disproportionate.
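
A minimal sketch of proportionate allocation, assuming the sampling frame is already grouped by stratum (the strata names, sizes and total sample size below are hypothetical):

  import random

  def proportionate_stratified_sample(frame_by_stratum, n_total):
      """Allocate the total sample across strata in proportion to each stratum's
      share of the population, then draw a simple random sample within each stratum."""
      population_size = sum(len(elements) for elements in frame_by_stratum.values())
      sample = {}
      for stratum, elements in frame_by_stratum.items():
          n_stratum = round(n_total * len(elements) / population_size)
          sample[stratum] = random.sample(elements, n_stratum)
      return sample

  random.seed(1)
  # Hypothetical population of 1,000 firms split into three strata.
  frame_by_stratum = {
      "small_firms":  [f"s_{i}" for i in range(600)],   # 60% of the population
      "medium_firms": [f"m_{i}" for i in range(300)],   # 30%
      "large_firms":  [f"l_{i}" for i in range(100)],   # 10%
  }
  sample = proportionate_stratified_sample(frame_by_stratum, n_total=100)
  print({stratum: len(drawn) for stratum, drawn in sample.items()})
  # -> sample sizes of roughly 60, 30 and 10, mirroring each stratum's share

Because each element then has the same overall chance of selection, the resulting sample is self-weighting, which is why the overall mean or proportion can be computed without weighting the responses.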

Cluster sampling

In cluster sampling, the population is divided into groups of elements, with some groups randomly selected for study. Two conditions foster the use of cluster sampling:

  • The need for more economic efficiency than can be provided by simple random sampling;

  • The frequent unavailability of a practical sampling frame for individual elements.

Statistical efficiency for cluster samples is usually lower than for simple random samples, mainly because clusters often don’t meet the need for heterogeneity and, instead, are homogeneous.

Area sampling is the most important form of cluster sampling. It can be used when the research involves populations that can be identified with some geographic area. This method overcomes the problems of both high sampling cost and the unavailability of a practical sampling frame for individual elements. In designing cluster samples, including area samples, the following questions should be answered:

  • How homogeneous are the resulting clusters? When clusters are homogeneous, this contributes to low statistical efficiency. Sometimes one can improve this efficiency by constructing clusters to increase intracluster variance.

  • Shall equal-size or unequal-size clusters be sought? A cluster sample may be composed of clusters of equal or unequal size. The theory of clustering is that the means of sample clusters are unbiased estimates of the population mean. This is more often true when clusters are naturally equal, such as households in city blocks. While one can deal with clusters of unequal size, it may be desirable to reduce or counteract the effects of unequal size.

  • How large a cluster should be taken? Comparing the efficiency of differing cluster sizes requires that the different costs for each size are discovered and that the different variances of the cluster means are estimated.

  • Shall a single-stage or multistage cluster design be used? For most large-scale area sampling, the tendency is to use multistage designs. Several situations justify drawing a sample within a cluster, in preference to creating smaller clusters directly and taking a census of each selected cluster (one-stage cluster sampling).

  • How large a sample is needed? It depends mainly on the specific cluster design.
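
As a rough sketch of one-stage cluster (area) sampling, assume the frame is a set of geographic clusters such as city blocks together with the households they contain (all names and sizes below are hypothetical): whole clusters are selected at random, and every element within the selected clusters is then included.

  import random

  def one_stage_cluster_sample(clusters, n_clusters):
      """One-stage cluster sampling: randomly select whole clusters and then
      take a census of every element within the selected clusters."""
      selected = random.sample(list(clusters), n_clusters)
      return {name: clusters[name] for name in selected}

  random.seed(3)
  # Hypothetical area frame: 50 city blocks (clusters) with 20 households each.
  clusters = {f"block_{b:02d}": [f"block_{b:02d}_hh_{h}" for h in range(20)]
              for b in range(1, 51)}
  sample = one_stage_cluster_sample(clusters, n_clusters=5)
  print(list(sample))                            # the five selected blocks
  print(sum(len(hh) for hh in sample.values()))  # all 100 households in those blocks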

Double sampling

It may be more convenient or economical to collect some information by sample and then use this information as the basis for selecting a subsample for further study. This procedure is called double sampling, sequential sampling, or multiphase sampling. It is usually found with stratified and/or cluster designs.
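
A minimal two-phase sketch, assuming a cheap screening question in the first phase determines who qualifies for the detailed second-phase subsample (the frame, the screening rate and the sample sizes are invented for illustration):

  import random

  random.seed(11)
  # Phase 1: a large, inexpensive sample collects a single screening variable,
  # e.g. whether the respondent used the product in the past year (simulated here).
  phase1_frame = [f"resp_{i:04d}" for i in range(1, 2001)]
  phase1_sample = random.sample(phase1_frame, k=500)
  screened = {r: random.random() < 0.2 for r in phase1_sample}

  # Phase 2: only qualifying respondents are subsampled for the detailed study.
  users = [r for r, is_user in screened.items() if is_user]
  phase2_sample = random.sample(users, k=min(50, len(users)))
  print(len(users), len(phase2_sample))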

Non-probability sampling

With a subjective approach like non-probability sampling, the probability of selecting population elements is unknown. There are a variety of ways to choose persons or cases to include in the sample. There is a greater opportunity for bias to enter the sample selection procedure and to distort the findings of the study. No range within which to expect the population parameter can be estimated. Nevertheless, there are some practical reasons for using these less precise methods.

Methods

1) Convenience: Non-probability samples that are unrestricted are called convenience samples. They are the least reliable design but normally the cheapest and easiest to conduct. Researchers or fieldworkers have the freedom to choose whomever they find.

2) Purposive sampling: A non-probability sample that conforms to certain criteria is called purposive sampling. There are two major types: judgment sampling and quota sampling:

  • Judgment sampling occurs when a researcher selects sample members to conform to some criterion. When used in the early stages of an exploratory study, a judgment sample is appropriate. When one wishes to select a biased group for screening purposes, this sampling method is also a good choice.

  • Quota sampling is the second type of purposive sampling. It is used to improve representativeness. The logic behind quota sampling is that certain relevant characteristics describe the dimensions of the population. If a sample has the same distribution on these characteristics, then it is likely to be representative of the population regarding other variables over which the researcher has no control. In most quota samples, researchers specify more than one control dimension. Each should meet two tests: (1) it should have a distribution in the population that can be estimated, and (2) it should be pertinent to the topic studied. A short sketch of quota filling follows this list of methods.

3) Snowball: In the initial stage of snowball sampling, individuals are discovered and may or may not be selected through probability methods. This group is then used to refer the researcher to others who possess similar characteristics and who, in turn, identify others.
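
The quota logic from the purposive-sampling section can be sketched as follows; the control dimensions, the target counts and the helper name are illustrative assumptions, not the book's procedure.

  # Two hypothetical control dimensions (gender and age group) whose population
  # distribution can be estimated; targets here are for a sample of 100.
  quotas = {
      ("female", "under_40"): 30, ("female", "40_plus"): 25,
      ("male",   "under_40"): 25, ("male",   "40_plus"): 20,
  }
  filled = {cell: [] for cell in quotas}

  def try_include(respondent, gender, age_group):
      """An interviewer adds a conveniently available respondent only if the
      matching quota cell is not yet full."""
      cell = (gender, age_group)
      if cell in filled and len(filled[cell]) < quotas[cell]:
          filled[cell].append(respondent)
          return True
      return False

  try_include("resp_001", "female", "under_40")   # accepted while the cell has room
  print({cell: len(members) for cell, members in filled.items()})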

Finally, sampling on the Internet has increased significantly in the past decades, and almost every firm now uses the Internet to conduct research.

Short overview

The unit of analysis describes the level at which the research is performed and which objects are studied. A population element is the subject on which the measurement is taken. A population is the total collection of elements about which we wish to make some inferences. A census is a count of all the elements in a population.

There are a couple of reasons for sampling:

  • Lower cost

  • Greater accuracy of results

  • Greater speed of data collection

  • Availability of population elements

A census study is feasible when the population is small and necessary when the elements are quite different from each other. In order for a sample to be appropriate, it has to be accurate and precise. With regard to accuracy, there should be no systematic variance within a sample: the “variation in measures due to some known or unknown influences that ‘cause’ the scores to lean in one direction more than another.”

Probability sampling is a controlled procedure that ensures that each population element is given a non-zero chance of selection. Non-probability sampling is arbitrary and subjective. A simple random sample is the easiest probability sample to draw, in which each population element has a known and equal chance of selection.