List of Important Terms in Business Research Methods (4th ed.)

List of important terms from Business Research Methods by Blumberg & Cooper, 2014 edition, donated to WorldSupporter

 

A population element = the unit of study - the individual participant or object on which the measurement is taken.

A population = the total collection of elements about which some conclusion is to be drawn.

A census = a count of all the elements in a population. The listing of all population elements from which the sample will be drawn is called the sample frame.

Accuracy = the degree to which bias is absent from the sample. When the sample is drawn properly, the measure of behaviour, attitudes or knowledge of some sample elements will be less than the measure of those same variables drawn from the population. Also, the measure of the behaviour, attitudes, or knowledge of other sample elements will be more than the population values. Variations in these sample values offset each other, resulting in a sample value that is close to the population value.

Systematic variance = “the variation in measures due to some known or unknown influences that ‘cause’ the scores to lean in one direction more than another.” Systematic variance may be reduced by, for example, increasing the sample size.

Precision: precision of estimate is the second criterion of a good sample design. In order to interpret the findings of research, a measurement of how closely the sample represents the population is needed.

Sampling error = The numerical descriptors that describe samples may be expected to differ from those that describe populations because of random fluctuations natural to the sampling process.

Representation = the members of a sample are selected using probability or non-probability procedures.

Probability sampling is based on the concept of random selection – a controlled procedure which ensures that each population element is given a known non-zero chance of selection. Only probability samples provide estimates of precision and offer the opportunity to generalize the findings from the sample to the population of interest.

Non-probability sampling is arbitrary and subjective; when elements are chosen subjectively, there is usually some pattern or scheme used. Thus, each member of the population does not have a known chance of being included.

Element selection - samples may also be classified by whether the elements are selected individually and directly from the population (viewed as a single pool) or whether additional controls are imposed.

Population parameters = summary descriptors (e.g., incidence proportion, mean, variance) of variables of interest in the population.

Sample statistics = used as estimators of population parameters. The sample statistics are the basis of conclusions about the population. Depending on how measurement questions are phrased, each may collect a different level of data. Each different level of data also generates different sample statistics.

The population proportion of incidence “is equal to the number of elements in the population belonging to the category of interest, divided by the total number of elements in the population.”
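As a minimal illustration (a Python sketch with an invented population of 1,000 elements), the incidence proportion and its sample estimate might be computed like this:

```python
# Toy illustration with hypothetical data: the population proportion of
# incidence is the count of elements in the category of interest divided by
# the total number of elements; the sample statistic estimates that parameter.
import random

population = ["owner"] * 300 + ["non-owner"] * 700   # invented population of 1,000
p_population = population.count("owner") / len(population)

sample = random.sample(population, 100)              # simple random sample of 100
p_sample = sample.count("owner") / len(sample)

print(f"population proportion: {p_population:.2f}")  # 0.30 by construction
print(f"sample estimate:       {p_sample:.2f}")      # near 0.30, off by sampling error
```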

The sampling frame = is closely related to the population. It is the list of elements from which the sample is actually drawn. Ideally, it is a complete and correct list of population members only.

Stratified random sampling = the process by which the sample is constrained to include elements from each of the segments (strata) of the population.
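A rough sketch of proportionate stratified random sampling, assuming an invented sampling frame with three strata:

```python
# Hypothetical sketch: draw a proportionate stratified random sample by
# sampling from each stratum in proportion to its share of the population.
import random

frame = {
    "small firms":  [f"S{i}" for i in range(600)],   # invented strata
    "medium firms": [f"M{i}" for i in range(300)],
    "large firms":  [f"L{i}" for i in range(100)],
}
total = sum(len(members) for members in frame.values())
n = 50  # desired overall sample size

sample = []
for stratum, members in frame.items():
    k = round(n * len(members) / total)              # proportionate allocation
    sample.extend(random.sample(members, k))

print(len(sample), "elements drawn across", len(frame), "strata")
```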

Cluster sampling = the population is divided into groups of elements, with some groups randomly selected for study.
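A minimal sketch of one-stage cluster sampling, using invented clusters (city blocks of households):

```python
# Hypothetical one-stage cluster sampling: randomly select whole clusters
# (e.g. city blocks) and study every element within the selected clusters.
import random

clusters = {f"block_{b}": [f"household_{b}_{h}" for h in range(20)]
            for b in range(50)}                      # invented clusters

selected_blocks = random.sample(list(clusters), 5)   # randomly pick 5 clusters
sample = [hh for block in selected_blocks for hh in clusters[block]]

print(f"{len(selected_blocks)} clusters selected, {len(sample)} elements studied")
```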

Area sampling = the most important form of cluster sampling. It can be used when a study involves populations that can be identified with some geographic area. This method overcomes the problems of both high sampling cost and the unavailability of a practical sampling frame for individual elements.

The theory of clustering = that the means of sample clusters are unbiased estimates of the population mean. This is more often true when clusters are naturally equal in size, such as households in city blocks. While one can deal with clusters of unequal size, it may be desirable to reduce or counteract the effects of unequal size.

Double sampling = It may be more convenient or economical to collect some information by sample and then use this information as the basis for selecting a subsample for further study. This procedure is called double sampling, sequential sampling, or multiphase sampling. It is usually found with stratified and/or cluster designs.

Convenience = Non-probability samples that are unrestricted are called convenience samples. They are the least reliable design but normally the cheapest and easiest to conduct. Researchers or field workers have the freedom to choose whomever they find.

Purposive sampling = A non-probability sample that conforms to certain criteria is called purposive sampling. There are two major types – judgment sampling and quota sampling:

Judgment sampling occurs when a researcher selects sample members to conform to some criterion. When used in the early stages of an exploratory study, a judgment sample is appropriate. When one wishes to select a biased group for screening purposes, this sampling method is also a good choice.

Quota sampling is the second type of purposive sampling. It is used to improve representativeness. The logic behind quota sampling is that certain relevant characteristics describe the dimensions of the population. If a sample has the same distribution on these characteristics, then it is likely to be representative of the population regarding other variables over which the researcher has no control. In most quota samples, researchers specify more than one control dimension. Each should meet two tests: (1) it should have a distribution in the population that can be estimated, and (2) it should be pertinent to the topic studied.
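A rough sketch of how quota cells might be filled on a single control dimension; the category shares and sample size are assumptions for illustration only:

```python
# Hypothetical quota sampling on one control dimension: recruit non-randomly
# until each category's quota (its estimated population share times the
# sample size) is filled.
import random

n = 100
population_shares = {"female": 0.52, "male": 0.48}    # assumed shares
quotas = {cat: round(n * share) for cat, share in population_shares.items()}

filled = {cat: 0 for cat in quotas}
sample = []
while len(sample) < sum(quotas.values()):
    # stand-in for whoever a field worker happens to approach next
    person = random.choice(list(quotas))
    if filled[person] < quotas[person]:
        filled[person] += 1
        sample.append(person)

print(filled)   # matches the quotas, e.g. {'female': 52, 'male': 48}
```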

Snowball = In the initial stage of snowball sampling, individuals are discovered and may or may not be selected through probability methods. This group is then used to refer the researcher to others who possess similar characteristics and who, in turn, identify others.

The observation approach: involves observing conditions, behaviour, events, people or processes.

The communication approach: involves surveying/interviewing people and recording their responses for analysis. It involves communicating with people about various topics, including participants’ attitudes, motivations, intentions and expectations.

Survey: a measurement process used to collect information during a highly structured interview – sometimes with a human interviewer and other times without.

Participant receptiveness = the participant’s willingness to cooperate.

Dealing with non-response errors - By failing to respond or refusing to respond, participants create a non-representative sample for the study overall or for a particular item or question in the study.

In surveys, non-response error occurs when the responses of participants differ in some systematic way from the responses of nonparticipants.

Response errors: occur during the interview (created by either the interviewer or participant) or during the preparation of data for analysis.

Participant-initiated error: when the participant fails to answer fully and accurately – either by choice or because of inaccurate or incomplete knowledge.

Interviewer error: response bias caused by the interviewer.

Response bias = Participants also cause error by responding in such a way as to unconsciously or consciously misrepresent their actual behaviour, attitudes, preferences, motivations, or intentions.

Social desirability bias = Participants create response bias when they modify their responses to be socially acceptable or to save face or reputation with the interviewer.

Acquiescence = the tendency to be agreeable.

Noncontact rate = ratio of potential but unreached contacts (no answer, busy, answering machine, and disconnects, but not refusals) to all potential contacts.

The refusal rate refers to the ratio of contacted participants who decline the interview to all potential contacts.
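With hypothetical call-outcome counts, the two rates could be computed like this:

```python
# Hypothetical call outcomes used to illustrate the two ratios.
outcomes = {"completed": 220, "refused": 80, "no_answer": 150,
            "busy": 30, "answering_machine": 90, "disconnect": 30}

potential_contacts = sum(outcomes.values())
unreached = (outcomes["no_answer"] + outcomes["busy"]
             + outcomes["answering_machine"] + outcomes["disconnect"])

noncontact_rate = unreached / potential_contacts          # excludes refusals
refusal_rate = outcomes["refused"] / potential_contacts   # contacted but declined

print(f"noncontact rate: {noncontact_rate:.2%}")
print(f"refusal rate:    {refusal_rate:.2%}")
```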

Random dialling: requires choosing telephone exchanges or exchange blocks and then generating random numbers within these blocks for calling.
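A simplified sketch of this procedure, using invented exchange blocks:

```python
# Hypothetical random-digit dialling: pick an exchange block, then generate
# a random four-digit suffix within it.
import random

exchanges = ["212-555", "212-556", "718-340"]   # invented exchange blocks

def random_number():
    exchange = random.choice(exchanges)
    suffix = random.randint(0, 9999)
    return f"{exchange}-{suffix:04d}"

sample_numbers = [random_number() for _ in range(5)]
print(sample_numbers)
```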

A survey via personal interview is a two-way conversation between a trained interviewer and a participant.

Computer-assisted personal interviewing (CAPI): special scoring devices and visual materials are used.

Intercept interview: targets participants in centralised locations, such as shoppers in retail malls. It reduces the costs associated with travel.

Outsourcing survey services offers special advantages to managers. A professionally trained research staff, centralized location interviewing, focus group facilities and computer assisted facilities are among them.

Causal methods are research methods which answer questions such as “Why do events occur under some conditions and not under others?”

Ex post facto research designs, in which a researcher interviews respondents or observes what is or what has been, have the potential for discovering causality. The distinction is that with an ex post facto design the researcher is required to accept the world as it is found, whereas an experiment allows the researcher to systematically alter the variables of interest and observe what changes follow.

Experiments are studies which involve intervention by the researcher beyond what is required for measurement.

Replication = repeating an experiment with different subject groups and conditions.

Field experiments = a study of the dependent variable in actual environmental conditions.

Hypothesis = a relational statement, as it describes a relationship between two or more variables.

In an experiment, participants experience a manipulation of the independent variable, called the experimental treatment.

The treatment levels of the independent variable are the arbitrary or natural groups the researcher makes within the independent variable of an experiment. The levels assigned to an independent variable should be based on simplicity and common sense.

A control group could provide a base level for comparison. The control group is composed of subjects who are not exposed to the independent variable(s), in contrast to those who receive the experimental treatment. When subjects do not know if they are receiving the experimental treatment, they are said to be blind.

When the experimenters do not know if they are giving the treatment to the experimental group or to the control group, the experiment is said to be double blind.

Random assignment to the groups is required to make the groups as comparable as possible with respect to the dependent variable. Randomization does not guarantee that if the groups were pretested they would be pronounced identical; but it is an assurance that those differences remaining are randomly distributed.
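A minimal sketch of random assignment of an invented subject pool to experimental and control groups:

```python
# Randomly assign a hypothetical pool of subjects to two groups so that
# pre-existing differences are, on average, spread evenly between them.
import random

subjects = [f"subject_{i}" for i in range(40)]   # invented subject pool
random.shuffle(subjects)

experimental_group = subjects[:20]               # receives the treatment
control_group = subjects[20:]                    # receives no treatment

print(len(experimental_group), len(control_group))
```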

Matching may be used when it is not possible to randomly assign subjects to groups. This employs a non-probability quota sampling approach. The object of matching is to have each experimental and control subject matched on every characteristic used in the research.

Validity = whether a measure accomplishes its claims.

Internal validity = do the conclusions drawn about a demonstrated experimental relationship truly imply cause?

External validity – does an observed causal relationship generalize across persons, settings, and times? Each type of validity has specific threats a researcher should guard against.

Statistical regression = this factor operates especially when groups have been selected by their extreme scores. No matter what is done between O1 and O2, there is a strong tendency for the average of the high scores at O1 to decline at O2 and for the low scores at O1 to increase. This tendency results from imperfect measurement that, in effect, records some persons abnormally high and abnormally low at O1. In the second measurement, members of both groups score more closely to their long-run mean scores.
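This tendency can be illustrated with a small simulation (all numbers invented): each observed score is a stable true score plus random measurement error, so a group selected for extreme O1 scores scores closer to its long-run mean at O2.

```python
# Toy simulation of statistical regression: groups picked for extreme O1
# scores drift back toward their true means at O2 because part of the
# extreme score was measurement error.
import random

random.seed(1)
true_scores = [random.gauss(50, 10) for _ in range(1000)]
o1 = [t + random.gauss(0, 10) for t in true_scores]   # first measurement
o2 = [t + random.gauss(0, 10) for t in true_scores]   # second measurement

# select the 100 highest scorers at O1 and look at the same people at O2
top = sorted(range(1000), key=lambda i: o1[i], reverse=True)[:100]
mean_o1 = sum(o1[i] for i in top) / 100
mean_o2 = sum(o2[i] for i in top) / 100

# the O2 mean falls back toward the overall mean of about 50
print(f"top group at O1: {mean_o1:.1f}, same group at O2: {mean_o2:.1f}")
```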

Experimental mortality – this occurs when the composition of the study groups changes during the test.

Attrition is especially likely in the experimental group and with each dropout the group changes. Because members of the control group are not affected by the testing situation, they are less likely to withdraw.

Diffusion or imitation of treatment = if the control group learns of the treatment (by talking to people in the experimental group) it eliminates the difference between the groups.

Compensatory equalization = where the experimental treatment is much more desirable, there may be an administrative reluctance to withdraw the control group members. Compensatory actions for the control groups may confound the experiment.

Compensatory rivalry = this may occur when members of the control group know they are in the control group. This may generate competitive pressures.

Resentful demoralization of the disadvantaged = when the treatment is desirable and the experiment is obtrusive, control group members may become resentful of their deprivation and lower their cooperation and output.

Reactivity of testing on X – the reactive effect refers to sensitising subjects via a pre-test so that they respond to the experimental stimulus (X) in a different way. This before-measurement effect can be particularly significant in experiments where the IV is a change in attitude.

Interaction of selection and X = the process by which test subjects are selected for an experiment may be a threat to external validity. The population from which one selects subjects may not be the same as the population to which one wishes to generalize results.

Static Group Comparison – the design provides for two groups, one of which receives the experimental stimulus while the other serves as a control.

Pre-test-Post-test Control Group Design – this design consists of adding a control group to the one-group pre-test-post-test design and assigning the subjects to either of the groups by a random procedure (R).

Post-test-Only Control Group Design – The pre-test measurements are omitted in this design. Pre-tests are well established in classical research design but are not really necessary when it is possible to randomize.

Non-equivalent Control Group Design – this is a strong and widely used quasi-experimental design. It differs from the pre-test-post-test control group design in that the test and control groups are not randomly assigned.

There are two varieties.
- Intact equivalent design, in which the membership of the experimental and control groups is naturally assembled. Ideally, the two groups are as alike as possible. This design is especially useful when any type of individual selection process would be reactive.
- The self-selected experimental group design is weaker because volunteers are recruited to form the experimental group, while no volunteers are used for the control group. This design is likely when subjects believe it would be in their interest to be a subject in an experiment.

Separate Sample Pre-test-Post-test Design = most applicable when the researcher cannot control when and to whom to introduce the treatment but can decide when and whom to measure. This is a weaker design because several threats to internal validity are not handled adequately.

Measurement in research consists of assigning numbers to empirical events, objects or properties, or activities in compliance with a set of rules.

Mapping rules = a scheme for assigning numbers or symbols to represent aspects of the event being measured.

Objects include the concepts of ordinary experience, such as touchable items like furniture. Objects also include things that are not as concrete, such as genes, attitudes and peer-group pressures.

Properties are the characteristics of the object. A person’s physical properties may be stated in terms of weight and height.

Psychological properties: include attitudes and intelligence.

Social properties include leadership ability, class affiliation, and status. In a literal sense, researchers do not measure either objects or properties.

Dispersion: describes how scores cluster or scatter in a distribution.
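For example, common dispersion measures for a small invented set of scores:

```python
# Dispersion of a small, invented set of interval scores.
import statistics

scores = [3, 5, 5, 6, 7, 9, 10]
print("range:", max(scores) - min(scores))
print("variance:", statistics.variance(scores))      # sample variance
print("std deviation:", statistics.stdev(scores))    # sample standard deviation
```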

Nominal scales = with these scales, a researcher is collecting information on a variable that naturally (or by design) can be grouped into two or more categories that are mutually exclusive and collectively exhaustive. Nominal data are statistically weak, but they can still be useful.

Ordinal scales = include the characteristics of the nominal scale plus an indicator of order. Ordinal data require conformity to a logical postulate: If a > b and b > c, then a > c.

Interval scales = have the power of nominal and ordinal data plus one additional strength: they incorporate the concept of equality of interval (the scaled distance between 1 and 2 equals the distance between 2 and 3).

Ratio scales = incorporate all of the powers of the previous scales plus the provision for absolute zero or origin. Ratio data represent the actual amounts of a variable; measures of physical dimensions such as weight, height and distance are examples.

Content Validity – of a measuring instrument is the extent to which it provides adequate coverage of the investigative questions guiding the study. If the instrument contains a representative sample of the universe of subject matter of interest, then content validity is good. To evaluate the content validity of an instrument, one must first agree on what elements constitute adequate coverage. A determination of content validity involves judgment.

Criterion-Related Validity – reflects the success of measures used for prediction or estimation. You may want to predict an outcome or estimate the existence of a current behaviour or time perspective.

Construct validity – in attempting to evaluate construct validity, we consider both the theory and the measuring instrument being used. If we were interested in measuring the effect of trust in cross-functional teams, the way in which ‘trust’ was operationally defined would have to correspond to an empirically grounded theory. If a known measure of trust was available, we might correlate the results obtained using this measure with those derived from our new instrument.

Reliability has to do with the accuracy and precision of a measurement procedure. A measure is reliable to the degree that it supplies consistent results. Reliability is a necessary contributor to validity but is not a sufficient condition for validity.

Stability – a measure is said to possess stability if consistent results with repeated measurements of the same person with the same instrument can be secured.

An observation procedure is stable if it gives the same reading on a particular person when repeated one or more times.

Equivalence – a second perspective on reliability considers how much error may be introduced by different investigators (in observation) or different samples of items being studied (in questioning or scales).

Internal Consistency – a third approach to reliability uses only one administration of an instrument or test to assess the internal consistency or homogeneity among the items.

The split-half technique can be used when the measuring tool has many similar questions or statements to which participants can respond.
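A rough sketch of the split-half idea with invented item scores: correlate the two half-scores and, as is commonly done alongside the split-half technique, step the correlation up to full test length with the Spearman-Brown formula.

```python
# Split-half reliability with invented data: split the items into two halves,
# correlate the half scores, then adjust the correlation to full test length
# with the Spearman-Brown formula.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# rows = participants, columns = eight similar items scored 1-5 (hypothetical)
responses = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4, 5, 4, 5],
    [3, 3, 2, 3, 3, 3, 2, 3],
    [1, 2, 1, 1, 2, 1, 2, 1],
]
odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, 8

r_half = pearson(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)                  # Spearman-Brown step-up
print(f"half-test r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")
```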

Practicality = concerned with a wide range of factors of economy, convenience, and interpretability. The scientific requirements of a project call for the measurement process to be reliable and valid, while the operational requirements call for it to be practical.

Scaling = the ‘procedure for the assignment of numbers (or other symbols) to a property of objects in order to impart some of the characteristics of numbers to the properties in question.’

Ranking scales constrain the study participant to making comparisons and determining order among two or more properties (or their indicants) or objects.

A choice scale requires that participants choose one alternative over another.

Categorization asks participants to put themselves or property indicants in groups or categories.

Sorting requires that participants sort cards (representing concepts or constructs) into piles using criteria established by the researcher. The cards might contain photos or images or verbal statements of product features.

Nominal scales classify data into categories without indicating order, distance, or unique origin.

Ordinal data show relationships of more than and less than but have no distance or unique origin.

Interval scales have both order and distance but no unique origin.

Ratio scales possess all four properties: classification, order, distance and unique origin. The assumptions underlying each level of scale determine how a particular measurement scale’s data will be analysed statistically.

With a uni-dimensional scale, one seeks to measure only one attribute of the participant or object.

Multidimensional scale recognizes that an object might be better described with several dimensions than on a uni-dimensional continuum.

Balanced rating scale has an equal number of categories above and below the midpoint. Generally, rating scales should be balanced, with an equal number of favourable and unfavourable response choices.

Unbalanced rating scale has an unequal number of favourable and unfavourable response choices.

Unforced-choice rating scale provides participants with an opportunity to express no opinion when they are unable to make a choice among the alternatives offered.

Forced-choice scale requires that participants select one of the offered alternatives. Researchers often exclude the response choice ‘no opinion’, ‘don’t know’, or ‘neutral’ when they know that most participants have an attitude on the topic.

Halo effect = the systematic bias that the rater introduces by carrying over a generalized impression of the subject from one rating to another. Halo is especially difficult to avoid when the property being studied is not clearly defined, is not easily observed, is not frequently discussed, involves reactions with others, or is a trait of high moral importance.

Simple category scale (also called a dichotomous scale) offers two mutually exclusive response choices. These may be ‘yes’ and ‘no’, ‘important’ and ‘unimportant’.

When there are multiple options for the rater but only one answer is sought, the multiple-choice, single-response scale is appropriate.

Likert scale is the most frequently used variation of the summated rating scale. Summated rating scales consist of statements that express either a favourable or an unfavourable attitude toward the object of interest. The participant is asked to agree or disagree with each statement. Each response is given a numerical score to reflect its degree of attitudinal favourableness, and the scores may be summed to measure the participant’s overall attitude.
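A small sketch of summated (Likert) scoring with invented statements and responses; the unfavourable statement is reverse-scored before summing:

```python
# Summated (Likert) scoring with hypothetical data: 1 = strongly disagree,
# 5 = strongly agree; unfavourable statements are reverse-scored before summing.
favourable_items   = ["The store staff are helpful", "Prices are fair"]   # invented
unfavourable_items = ["Checkout takes too long"]                          # invented

responses = {  # one participant's answers, keyed by statement
    "The store staff are helpful": 4,
    "Prices are fair": 5,
    "Checkout takes too long": 2,
}

score = sum(responses[item] for item in favourable_items)
score += sum(6 - responses[item] for item in unfavourable_items)   # reverse 1-5 scale

print("overall attitude score:", score, "out of", 5 * len(responses))  # 13 out of 15
```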

Item analysis assesses each item based on how well it discriminates between those persons whose total score is high and those whose total score is low. The mean scores of the high-score and low-score groups on each item are tested for statistical significance by computing t values. The statements are then rank-ordered by their t values, and those with the highest t values are selected.
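A compact sketch of the discrimination step with invented scores, comparing the high-total and low-total groups on one item with a two-sample t statistic (the unequal-variance form is an assumption made here for simplicity):

```python
# Item analysis sketch (invented data): compute a t value comparing the mean
# item score of the high-total-score group with that of the low-total-score
# group; items with larger t values discriminate better.
def t_value(high, low):
    n1, n2 = len(high), len(low)
    m1, m2 = sum(high) / n1, sum(low) / n2
    v1 = sum((x - m1) ** 2 for x in high) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in low) / (n2 - 1)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)   # unequal-variance form

# rows = participants, columns = item scores on a 1-5 scale (hypothetical)
scores = [
    [5, 4, 5, 4], [4, 4, 5, 5], [5, 5, 4, 4], [4, 5, 5, 4],   # high scorers
    [2, 1, 2, 3], [1, 2, 2, 2], [2, 2, 1, 3], [3, 1, 2, 2],   # low scorers
]
totals = [sum(row) for row in scores]
order = sorted(range(len(scores)), key=lambda i: totals[i])
low_group, high_group = order[:4], order[-4:]

item = 0   # analyse the first item
t = t_value([scores[i][item] for i in high_group],
            [scores[i][item] for i in low_group])
print(f"item {item + 1}: t = {t:.2f}")
```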

The semantic differential (SD) scale measures the psychological meanings of an attitude object using bipolar adjectives. Researchers use this scale for studies such as brand and institutional image.

Numerical/multiple rating list scales have equal intervals that separate their numeric scale points. The verbal anchors serve as the labels for the extreme points.

Stapel scale = used as an alternative to the semantic differential, especially when it is difficult to find bipolar adjectives that match the investigative question.

Constant-sum scales = a scale that helps the researcher discover proportions.

Graphic rating scales – the scale was originally created to enable researchers to discern fine differences. Theoretically, an infinite number of ratings are possible if participants are sophisticated enough to differentiate and record them.

They are instructed to mark their response at any point along a continuum. Usually, the score is a measure of length (millimetres) from either endpoint. The results are treated as interval data. The difficulty is in coding and analysis. This scale requires more time than scales with predetermined categories.

Ranking scales – in ranking scales, the participant directly compares two or more objects and makes choices among them.

Arbitrary scales are designed by collecting several items that are unambiguous and appropriate to a given topic. These scales are not only easy to develop, but also inexpensive and can be designed to be highly specific. Moreover, arbitrary scales provide useful information and are adequate if developed skilfully.

Consensus scaling requires that items be selected by a panel of judges, who then evaluate them on:

  • Their relevance to the topic area

  • Their potential for ambiguity

  • The level of attitude they represent

Scalogram analysis = a procedure for determining whether a set of items forms a uni-dimensional scale.

Factor scales include a variety of techniques that have been developed to address two problems:

  1. How to deal with a universe of content that is multidimensional

  2. How to uncover underlying dimensions that have not been identified by exploratory research

A disguised question = designed to conceal the question’s true purpose.

Administrative questions – identify the participant, interviewer, interview location, and conditions. These questions are rarely asked of the participant but are necessary for studying patterns within the data and identifying possible error sources.

Classification questions – usually cover sociological-demographic variables that allow participants’ answers to be grouped so that patterns are revealed and can be studied.

Target questions (structured or unstructured) – address the investigative questions of a specific study. These are grouped by topic in the survey. Target questions may be structured (they present the participants with a fixed set of choices, often called closed questions) or unstructured (they do not limit responses but do provide a frame of reference for participants’ answers; sometimes referred to as open-ended questions).

Response strategy - a third major area in question design is the degree and form of structure imposed on the participant.

The various response strategies offer options that include unstructured response (or open-ended response, the free choice of words) and structured response (or closed response, specified alternatives provided).

Free-response questions - also known as open-ended questions, ask the participant a question and either the interviewer pauses for the answer (which is unaided) or the participant records his or her ideas in his or her own words in the space provided on a questionnaire.

Dichotomous question - suggests opposing responses (yes/no) and generates nominal data.

Checklist – when multiple responses to a single question are required, the question should be asked in one of three ways: the checklist, rating, or ranking strategy. If relative order is not important, the checklist is the logical choice. Checklists are more efficient than asking for the same information with a series of dichotomous selection questions, one for each individual factor. Checklists generate nominal data.
