Organizations are all around us (businesses, hospitals, social clubs etc.) and all have their own particular set of objectives. To function effectively, organizations must subdivide their objectives into various jobs which require people of differing aptitudes. This makes the use of human resources essential. This book considers how applied psychology can contribute to a wiser, more humane use of our human resources.
How are organizations pervasive?
We are all confronted by organizations in one form or another in our lives. Children are exposed to school organizations, after leaving school they may choose to join a military, business, or government organization, and will later on probably move through several different organizations. Our everyday lives are intertwined with organizational memberships.
What characteristics unite various activities under the collective label “organization”? Multiple definitions of organizations have been suggested, each reflecting a particular theoretical point of view. But certain fundamental elements recur. In general, an organization is a collection of people working together in a division of labour to achieve a common purpose. Another concept views an organization as a system of inputs (raw materials), throughputs (materials transformed or modified), and outputs (exported or sold back to the environment as finished products). People are the basic ingredients of all organizations.
The focus is on people as members and resources of organizations and on what applied psychology can contribute toward helping organizations make the wisest use of human resources. Personnel psychology concerns individual differences in behaviour and job performance and methods for measuring and predicting such differences. These differences stem both from differences between jobs and from differences between the people who perform them.
A utopian ideal
In an ideal world, the goal would be to assess everyone’s individual aptitudes, abilities, personalities, and interests; profile these characteristics; then place individuals in jobs perfectly suited to them and society. This ideal falls short in practice.
Point of view
It is useful to make explicit underlying assumptions.
- In a free society, every individual has a fundamental and inalienable right to compete for any job for which they are qualified.
- Society can and should do better at making the wisest and most humane use of its human resources.
- Individuals working in human resources and managers responsible for making employment decisions must be as technically competent and well informed as possible.
What is personnel psychology?
Personnel psychology is a subfield within I/O (industrial and organizational) psychology. It is an applied discipline focusing on individual differences in behaviour and job performance and methods of measuring and predicting these differences. Major areas of interest include job analysis and job evaluation, recruitment, screening, selection, training and development, and performance management.
There is also overlap between personnel psychology and HRM, which is concerned with the management of staffing, retention, development, adjustment, and change in order to achieve individual and organizational objectives. Psychologists have already made substantial contributions to the field of human resource management. The last decade has seen changes in markets, technology, organizational designs, and the roles of managers and workers, inspiring a renewed emphasis on and interest in personnel psychology. The following sections consider each of these in more detail.
Changing nature of product and service markets
Globalization refers to commerce without borders and the interdependence of business operations in different locations. In a world where the transfer of capital, goods, and labor happens seamlessly, globalization brings both positive and negative changes.
To facilitate globalization, some firms turn to outsourcing. The contractor sends teams to dissect the workflow of an entire department and then helps build a new IT platform, redesign processes, and administer programs, dispersing the work among a global network of staff. These structural changes have consequences that are beneficial for the global economy but promise more frequent career changes for workers.
It takes trade agreements, technology, capital investment, and infrastructure, as well as the skills and competencies of a well-trained workforce, to deliver world-class products and services. Attracting, developing, and retaining talent in a culture that supports ongoing learning is a challenge for all organizations. Human resource professionals are at the center of this effort.
The psychological contract
These changes have an impact on jobs and on the psychological contract. Jobs are not being lost temporarily because of a recession; they are being wiped out permanently as a result of new technology and new ways of organizing work. The final 20 years of the twentieth century saw many corporate cultures and workforces characterized by downsizing and the loss of the perceived ‘psychological contract’ of lifelong employment with a single employer. The psychological contract refers to an unwritten agreement in which the employee and employer develop expectations about their mutual relationship.
Stability and predictability characterized the old psychological contract. Change and uncertainty are hallmarks of the new one. It is more common nowadays to job hop and hold multiple jobs over the course of a career than it was a few decades ago.
Effects of technology on organizations and people
Millions of workers use the products of the digital age: computers, phones, digital assistants, email, and so on. Anything digital is borderless, and the digital revolution is breaking down departmental barriers, making it easier to share vast amounts of information. To succeed in a world where the only constant is the increasing pace of change, companies need motivated, technically literate workers who are willing to train continually.
As with other new developments, there are negatives as well as positives associated with new technology that need to be acknowledged. Negatives include mass junk email, potential attacks by hackers, and invasion of employees’ privacy. A common assumption is that, because production and service processes have become more sophisticated, high technology can substitute for skill in managing a workforce. In reality, technology works best when it helps workers make decisions, in organizations that encourage them to do so.
Changes in the structure and design of organizations
Many factors are driving change, but none is more important than the rise of Internet technologies. The Web enables everyone in an organization to access an array of information instantaneously from anywhere. Organizations these days are global in orientation and all about speed, with no guarantees to workers or managers. They are becoming leaner, and organizations of the future will come to rely on better-trained, cross-trained multispecialists. The role of managers is changing dramatically.
Changing role of the manager
In the traditional hierarchy, managers ruled by command from the top, used rigid controls to ensure that tasks could be coordinated, and partitioned information into compartments. Information was power, and managers clung to power by hoarding information, aiming for stability, predictability, and efficiency. In today’s hypercompetitive work environment, organizations have to respond quickly to shifting market conditions. Key tasks for managers are to articulate a vision of what their organizations stand for, what they are trying to accomplish, and how they compete for business in the marketplace.
A growing number of organizations now recognize that they need to emphasize workplace democracy in order to achieve the vision, which involves breaking down barriers, sharing information, using a collaborative approach to problem solving, and orienting employees toward continuous learning and improvement. This does not necessarily imply a move toward a universal model of organizational and leadership effectiveness. Today’s networked, interdependent, culturally diverse organizations require transformational leadership, which is effective under unstable or uncertain conditions.
Additionally, much of the work resulting in a product, service, or decision is now done in teams. Such teams have many names – autonomous work groups, process teams, self-managing work teams, and so on. All of this implies a reorientation from the traditional view of a manager’s work. In this environment, workers act more like managers, and managers more like workers. Flattened hierarchies also mean that there are fewer managers in the first place.
The empowered worker
21st-century organizations differ in structure, design, and demographics from those of even a decade ago. Demographically, they are more diverse (more women, more multicultural workers, more older workers, more workers with disabilities, and even robots). There is more pressure to do more with less, and a greater emphasis on empowerment, cross-training, personal flexibility, self-management, and continuous learning.
What are some implications for organizations and their people?
Today, the quality of a nation’s workforce is a crucial determinant of its ability to compete and win in world markets. Human resources can be sources of sustained competitive advantage if they meet three requirements: (1) they add positive economic benefits to the process of producing goods and services, (2) skills of the workforce are distinguishable from those of competitors, (3) such skills are not easily duplicated. A human resource system can enhance or destroy this potential competitive advantage.
As personnel psychology moves forward into the 21st century, the biggest challenge is changing the way we think about organizations and their people. There is more demand for comprehensive training policies that focus training efforts on organizational needs three to five years out. From an employee’s perspective, these programs are valuable because job security (retaining employment with one organization until retirement) has become less important to workers than employment security (having skills that employers are willing to pay for). Demographic changes are making recruitment and staffing top priorities in many organizations. A diverse workforce is no longer something a company should have; it is something all companies do have or soon will have.
Aside from demographic changes, there are also changes in the nature of work and its impact on workers and society. Potential problems include insecurity, uncertainty, stress, and social friction. On the other hand, work could provide compensations such as challenge, creativity, flexibility, control, and interrelatedness.
Taken together, all of this shows that the need for competent HR professionals with broad training in a variety of areas has never been greater.
Comprehensive employment-related legislation, combined with increased motivation from individuals to rectify unfair employment practices, makes the legal aspects of employment one of the more dominant issues in HRM today. All branches of the federal government (in the U.S.) have been actively involved in efforts to guarantee equal employment opportunity as a fundamental individual right, regardless of race, color, age, gender, religion, national origin, or disability. I/O psychologists and HR professionals are being called on to work with attorneys, courts, and federal regulatory agencies. It is therefore important to understand the rights and obligations of individuals and employers under the law and to ensure that these are translated into everyday practice according to legal guidelines by federal regulatory agencies.
How does the legal system in the United States work?
The United States Constitution stands as the supreme law of the land. The Constitution prescribes certain powers and limitations to the federal government; those powers not given to the federal government are reserved for the states. In turn, the states have their own constitutions that are subject to the U.S. Constitution. Certain activities are regulated exclusively by the federal government (e.g., interstate commerce), whereas other areas are subject to concurrent regulation by federal and state governments (e.g., equal employment opportunity).
The legislative branch of government (Congress) enacts laws, which are considered primary authority. Court decisions and guidelines of regulatory agencies are not laws, but interpretations of the law given for situations in which the law is not specific. The judicial power of the U.S. is vested “in one Supreme Court and in such inferior courts as Congress may from time to time ordain and establish” according to the Constitution. The system of ‘inferior’ (lower) courts includes District Courts, the federal trial courts in each state. The state court structure parallels the federal court structure, with state district courts at the lowest level, followed by state appellate (review) courts, and finally by a state supreme court.
Equal Employment Opportunity (EEO) complaints may take any one of several alternative routes. The simplest and least costly alternative is to arrive at an informal, out-of-court settlement with the employer. But often the employer does not have an established mechanism for dealing with such problems. So, the complainant has to choose more formal legal means. But solutions then become time consuming and expensive. Perhaps the wisest course of action an employer can take is to establish a sound internal complaint system to deal with problems before they escalate to formal legal proceedings.
What is unfair discrimination?
No law has ever attempted to define the term discrimination precisely. In the employment context, it can be viewed broadly as the giving of an unfair advantage (or disadvantage) to members of a particular group in comparison with members of other groups. Usually this takes the form of a denial or restriction of employment opportunities or an inequality in the terms and benefits of employment. Discrimination is a subtle and complex phenomenon that may assume two broad forms:
- Unequal (disparate) treatment is based on an intention to discriminate, including an intention to retaliate against a person who opposes discrimination. Three sub-theories of this kind of discrimination are: cases relying on direct evidence of an intention to discriminate; cases proven through circumstantial evidence; and mixed-motive cases, which often rely on both direct evidence and proof that the employer’s stated legitimate basis for its employment decision is actually a pretext for illegal discrimination.
- Adverse impact (unintentional discrimination) occurs when identical standards or procedures are applied to everyone, even though they lead to differences in employment outcomes for members of a particular group and are unrelated to success on the job.
What does the legal framework for civil rights requirements look like?
Employers are subject to the various nondiscrimination laws. Government contractors and subcontractors are subject to executive orders. While it is beyond the scope of this chapter to analyze all the legal requirements pertaining to EEO, HR professionals should understand the major legal principles as articulated in the following laws of broad scope.
The U.S. Constitution (13th and 14th Amendments): the 13th Amendment prohibits slavery and involuntary servitude. The 14th Amendment guarantees equal protection of the laws for all citizens. It is from this source of constitutional power that all subsequent civil rights legislation originates.
The Civil Rights Acts of 1866 and 1871: the 1866 act grants all citizens the right to make and enforce contracts for employment. The 1871 act grants all citizens the right to sue in federal court if they feel they have been deprived of any rights or privileges guaranteed by the Constitution and laws.
The Equal Pay Act of 1963: this act was passed as an amendment to the Fair Labor Standards Act of 1938. The Equal Pay Act requires that men and women working for the same establishment be paid the same rate of pay for work that is substantially equal in skill, effort, responsibility, and working conditions.
Title VII of the Civil Rights Act of 1964 (amended by the Equal Employment Opportunity Act of 1972): The Civil Rights Act of 1964 is divided into several sections or titles, each dealing with a particular facet of discrimination. Title VII has been the principal body of federal legislation in the area of fair employment. It established the EEOC to ensure compliance with the law by employers, employment agencies, and labour organizations. Title VII covers nondiscrimination on the basis of race, colour, religion, sex, or national origin; apprenticeship programs; retaliation; employment advertising; suspension of government contracts; and back-pay awards. Exemptions to Title VII coverage include:
- Bona fide occupational qualifications (BFOQ).
- Seniority systems.
- Preemployment inquiries.
- Testing.
- Preferential treatment.
- Veterans’ preference rights.
- National security.
The Age Discrimination in Employment Act of 1967 (amended in 1986): The Age Discrimination in Employment Act requires employers to provide EEO on the basis of age. It proscribes discrimination on the basis of age against employees aged 40 and over, unless the employer can demonstrate that age is a BFOQ for the job in question.
The Immigration Reform and Control Act of 1986: this law applies to every employer and employee in the United States. It requires that employers do not hire or continue to employ aliens who are not legally authorized to work in the U.S. and that within three days of the hire date, employers verify the identity and work authorization of every new employee. Employers cannot discriminate on the basis of national origin, but when two applicants are equally qualified, they may choose a U.S. citizen over a non-U.S. citizen.
The Americans with Disabilities Act of 1990: it prohibits an employer from discriminating against a qualified individual with a disability who is able to perform the essential functions of a job with or without reasonable accommodation.
The Civil Rights Act of 1991: it amended the Civil Rights Act of 1866 so that workers are protected from intentional discrimination in all aspects of employment, not just hiring and promotion. This Act also overturned six Supreme Court decisions issued in 1989. Key provisions likely to have the greatest impact in the context of employment include: monetary damages and jury trials, adverse impact cases, protection in foreign countries, racial harassment, challenges to consent decrees, mixed-motive cases, seniority systems, race-norming and affirmative action, and extension to the U.S. Senate and appointed officials.
The Family and Medical Leave Act of 1993: this law gives workers up to 12 weeks of unpaid leave each year for the birth, adoption, or foster placement of a child within a year of the child’s arrival; to care for a spouse, parent, or child with a serious health condition; or for the employee’s own serious health condition if it prevents them from working. In 2008 it was amended and expanded to cover military families.
Laws with limited application (nondiscrimination as a basis for eligibility for federal funds):
- Executive Orders 11246, 11375, and 11478: presidential executive orders are aimed specifically at federal agencies, contractors, and subcontractors. EO 11246 prohibits discrimination on the basis of race, colour, religion, or national origin as a condition of employment under contracts of $10,000 or more. EO 11375 prohibits discrimination in employment based on sex. EO 11478 prohibits discrimination in employment based on all of the previous factors, plus political affiliation, marital status, and physical disability.
- The Rehabilitation Act of 1973: this Act requires federal contractors and subcontractors actively to recruit qualified individuals with disabilities and to use their talents to the fullest extent possible. Legal requirements are similar to those of the ADA. Its purpose is to eliminate systemic discrimination.
- Vietnam Era Veterans Readjustment Act of 1974: federal contractors and subcontractors are required to take affirmative action to ensure EEO for Vietnam-era veterans.
- The Uniformed Services Employment and Reemployment Rights Act of 1994: requires both public and private employers promptly to reemploy individuals returning from uniformed service in the position they would have occupied and with the seniority rights they would have enjoyed had they never left.
What regulatory agencies help enforce the laws?
- State fair employment-practices commissions: most states have nondiscrimination laws including provisions that express the public policy of the state, the people to whom the law applies, and the prescribed activities of various administrative bodies. Many states vest statutory enforcement powers in a state fair employment-practices commission.
- Equal Employment Opportunity Commission: the EEOC is an independent regulatory agency whose five commissioners are appointed by the president and confirmed by the Senate for terms of five years. It sets policy and, in individual cases, determines whether there is ‘reasonable cause’ to believe that unlawful discrimination has occurred.
- The complaint process: complaints filed with the EEOC first are deferred to a state or local fair employment-practices commission if there is one with statutory enforcement power. After 60 days the EEOC can start its own investigation. EEOC follows a three-step approach to resolving complaints: investigation, conciliation, and litigation. Its other major function is information gathering.
- Office of Federal Contract Compliance Programs: the OFCCP is part of the U.S. Department of Labor’s Employment Standards Administration. It is responsible for ensuring that employers doing business with the federal government comply with the laws and regulations requiring nondiscrimination.
- Goals and timetables: when job categories include fewer women or minorities “than would be expected by their availability,” the contractor must establish goals and timetables for increasing their representation. Different from quotas, goals are flexible objectives that can be met in a realistic amount of time.
What are the general principles of employment case law?
Legislative and executive branches may write the law and provide for its enforcement, but it is the judicial branch’s responsibility to interpret the law and to determine how it will be enforced. Legal interpretations define what is called case law, which serves as a precedent to guide, but not completely to determine, future legal decisions. This section highlights some significant developments in certain areas:
- Testing: the 1964 Civil Rights Act sanctions the use of ‘professionally developed’ ability tests, but it took several Supreme Court cases to spell out their proper role and use. Employers need to be sure that there is a legitimate, job-related reason for every question raised in an employment or promotional interview.
- Personal history: qualification requirements often involve personal background information or employment history, which may include minimum education or experience requirements, past wage garnishments, or previous arrest/conviction records. If such requirements have the effect of denying or restricting EEO, they may violate Title VII.
- Sex discrimination: judicial interpretation of Title VII indicates that in the United States both sexes must be given equal opportunity to compete for jobs unless it can be demonstrated that sex is a bona fide occupational qualification for the job (e.g., actor, actress). Sexual harassment is a form of illegal sex discrimination prohibited by Title VII.
- Preventative actions by employers: what can an employer do to escape liability for the sexually harassing acts of its managers or workers? An effective policy should include a workable definition of sexual harassment that is publicized, an effective complaint procedure, a clear statement of sanctions for violators and protection for complainants, and training of all managers and supervisors to recognize and respond to complaints (among other things).
- Age discrimination: to discriminate fairly against employees over 40 years old, an employer must be able to demonstrate a ‘business necessity’ for doing so.
- “English only” rules – national origin discrimination: the EEOC and many courts agree that blanket English-only rules that lack business justification amount to unlawful national-origin discrimination.
- Seniority: connotes length of employment. One issue is the impact of established seniority systems on programs designed to ensure EEO.
- Preferential selection: an unfortunate side of affirmative action programs designed to help minorities and women is that they may place qualified white males at a competitive disadvantage. But social policy emphasizes that ‘reverse discrimination’ is just as unacceptable as discrimination by whites against members of protected groups.
We have examined the legal and social environments within which organizations and individuals function. For both to function effectively, competent HRM is essential. Fundamental tools are needed to enable HR professionals to develop both a conceptual framework for viewing employment decisions and methods for assessing the outcomes of such decisions.
Organizations and individuals frequently are confronted with alternative courses of action, and decisions are made when one alternative is chosen in preference to others.
What is utility theory?
How does one arrive at sound decisions that will ultimately spell success for the individual or organization affected? Principles are needed to help managers and individuals make the most profitable or beneficial choices among products, investments, jobs, curricula, etc.
Utility theory is appealing because it insists that the costs and expected consequences of decisions always be taken into account. It stimulates the decision maker to formulate what he or she is after, as well as to anticipate the expected consequences of alternative courses of action. The ultimate goal is to enhance decisions, and the best way to do that is to identify the linkages between employment practices and the ability to achieve the strategic objectives of an organization.
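A minimal sketch of this logic, using entirely hypothetical figures: each alternative course of action is scored by its expected payoff (probability-weighted outcomes) minus its cost, and the alternative with the highest expected utility is preferred.

```python
# Minimal sketch of utility-based decision making (hypothetical figures only).
# Each alternative course of action has a cost and a set of possible
# outcomes, each with a probability and an estimated payoff to the organization.

alternatives = {
    "structured_interviews": {
        "cost": 20_000,
        "outcomes": [(0.7, 90_000), (0.3, 30_000)],  # (probability, payoff)
    },
    "unstructured_interviews": {
        "cost": 5_000,
        "outcomes": [(0.4, 90_000), (0.6, 30_000)],
    },
}

def expected_utility(option):
    """Expected payoff across outcomes, minus the cost of pursuing the option."""
    expected_payoff = sum(p * payoff for p, payoff in option["outcomes"])
    return expected_payoff - option["cost"]

for name, option in alternatives.items():
    print(f"{name}: expected utility = {expected_utility(option):,.0f}")
```

In this toy comparison, the structured-interview option comes out ahead (52,000 vs. 49,000) despite its higher cost, because its expected consequences are more favourable.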
Organizations as Systems
Much attention has recently been devoted to the concept of ‘systems’ and the use of ‘systems thinking’ to frame and solve complex scientific and technological problems. A system is a collection of interrelated parts, unified by design and created to attain one or more objectives. The objective is to be aware of the variables involved in executing managerial functions so that decisions will be made in light of their overall effect on the organization and its objectives. These decisions have to consider not only the organization itself but also the larger systems in which the organization operates. Classical management theories viewed organizations as closed or self-contained systems whose problems could be divided into their component parts and solved. The closed-system approach concentrated primarily on the internal operation of the organization and tended to ignore the outside environment.
This approach was criticized on several grounds. By concentrating solely on conditions inside the firm, management became sluggish in its response to the demands of the marketplace. The modern view of organizations is therefore that of open systems in continual interaction with multiple, dynamic environments, providing for a continuous import of inputs and a transformation of these into outputs, which are then exported back into these various environments to be consumed by clients or customers. So, the environments provide feedback on the overall process.
The hierarchy of systems should be emphasized as well. A system comprises subsystems of a lower order and is also part of a supersystem. But what constitutes a system is purely relative and largely depends on the level of abstraction/complexity on which one is focusing the analysis. There is a need for an inclusive, almost concentric mode of organizing subsystems into larger systems and supersystems to coordinate activities and processes. This provides a macro-view from which to visualize events or actions in one system and their effects on other related systems or on the organization as a whole.
Systems theory has taken us to the edge of a new awareness – that everything is one big system with infinite, interconnected, interdependent subsystems. Managers need to understand systems theory, but they should resist the rational mind’s instinctive desire to use it to predict and control organizational events. Organizational reality will not conform to any logical, systemic thought pattern.
How can the employment process be seen from a systems perspective?
To appreciate fully the relevance of applied psychology to organizational effectiveness, it is useful to view the employment process as a network or system of sequential, interdependent decisions. Each decision is an attempt to discover what should be done with one or more individuals, and these decisions typically form a long chain. It is a sequential strategy, where information gathered at one point in the overall procedure determines what, if any, information will be gathered next.
There are two general features: (1) different recruitment, selection, and training strategies are used for different jobs; and (2) the various phases in the process are highly interdependent, as the feedback loops indicate. Changes in one part of the system have a ‘reverberating’ effect on all other parts of the system.
Each link of the model will be examined below.
Job analysis and job evaluation
Job analysis is the fundamental building block on which all later decisions in the employment process must rest. The process begins with a detailed specification by the organization of the work to be performed, the skills needed, and the training required to perform the job satisfactorily. It supports job evaluation, in which organizations must make value judgments on the relative importance or worth of each job to the organization as a whole – in terms of dollars.
Workforce planning
Workforce planning (WP) is concerned with anticipating future staffing requirements and formulating action plans to ensure that enough qualified individuals are available to meet specific staffing needs at some future time. Four conditions must be met:
- The organization must devise an inventory of the available knowledge, abilities, skills, and experiences of present employees.
- Forecasts of the internal and external HR supply and demand must be undertaken.
- Based on information derived from the talent inventory and HR supply and demand forecasts, various action plans can be formulated to meet predicted staffing needs (programs may include training, transfers, promotion, or recruitment).
- Control and evaluation procedures are necessary to give feedback on the adequacy of the WP effort.
Recruitment
Equipped with the information derived from job analysis, job evaluation, and WP, we can proceed to attracting potentially acceptable candidates to apply for the various jobs. The recruitment machinery is typically set into motion by the HR office’s receipt of a staffing requisition from a particular department. Two basic decisions that the organization must make involve:
- The cost of recruiting.
- The selection ratio.
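For reference (a standard definition, not spelled out above), the selection ratio relates the number of openings to the number of available applicants:

\[
\text{Selection ratio} = \frac{\text{number of openings}}{\text{number of applicants}}
\]

For example, filling 5 openings from 100 applicants gives a selection ratio of 5/100 = 0.05; the lower the ratio, the more selective the organization can afford to be.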
Initial screening
The resulting applications are subjected to an initial screening process that is more or less intensive depending on the screening policy or strategy adopted by the organization. Because each stage in the employment process involves a cost to the organization and because the investment becomes larger and larger with each successive stage, it is important to consider the likely consequence of decision errors at each stage. There are two types of decision errors:
- Erroneous acceptance: an individual who is passed on from a preceding stage but fails at the following stage.
- Erroneous rejection: an individual who is rejected at one stage but who could have succeeded at the following stage if allowed to continue.
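A minimal sketch (hypothetical data) of how these two error types could be tallied when comparing screening decisions against success at the following stage:

```python
# Hypothetical applicants: whether each passed the initial screening and
# whether each actually succeeds (or would have succeeded) at the next stage.
applicants = [
    {"passed_screening": True,  "succeeds_next_stage": True},
    {"passed_screening": True,  "succeeds_next_stage": False},  # erroneous acceptance
    {"passed_screening": False, "succeeds_next_stage": True},   # erroneous rejection
    {"passed_screening": False, "succeeds_next_stage": False},
]

erroneous_acceptances = sum(
    a["passed_screening"] and not a["succeeds_next_stage"] for a in applicants
)
erroneous_rejections = sum(
    not a["passed_screening"] and a["succeeds_next_stage"] for a in applicants
)

print("Erroneous acceptances:", erroneous_acceptances)
print("Erroneous rejections:", erroneous_rejections)
```

The cost of each error type grows with each successive stage, which is why the consequences of decision errors should be weighed at every point in the process.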
Selection
Information is collected judgmentally (e.g., by interviews), mechanically (e.g., by written tests), or in both ways. Gathered data must be combined judgmentally, mechanically, or via some mixture of both methods. The resulting combination is the basis for hiring, rejecting, or placing on a waiting list every applicant who reaches the selection phase.
Training and development
HR professionals can increase the effectiveness of the workers and managers of an organization by employing a wide range of training and development techniques. Payoffs are significant only when training techniques accurately match individual and organizational needs. Most individuals have a need to feel competent. Training programs designed to modify or develop competencies range from basic skills training and development for individuals, to team training, supervisory training, and cross-cultural training for employees who will work in other countries.
Performance management
When selecting and training an individual for a specific job, an organization is essentially taking a risk in the face of uncertainty. It is only after employees have been performing their jobs for a reasonable length of time that we can evaluate their performance and our predictions. When observing and evaluating job behaviour and providing timely feedback, we are evaluating the degree of success of the individual or team in reaching organizational objectives. Promotions, compensation decisions, transfers, and disciplinary actions are dependent on performance management. Performance management focuses on improving performance at the level of the individual or team every day. Performance appraisals on the other hand are done once or twice a year to identify and discuss the job-relevant strengths and weaknesses of individuals or teams.
Organizational exit
Eventually everyone who joins an organization must leave. For some this process is involuntary (e.g. termination or forced layoff), for others it is voluntary, and the employee has control over the timing of their departure. Retirement is also a form of organizational exit but is likely to have fewer adverse effects than involuntary exits. The organizational exit influences and is influenced by prior phases in the employment process.
All of this urges us to consider both costs and anticipated consequences in making decisions. Nowhere is systems thinking more relevant than in the HRM systems of organizations. The very concept of a system implies a design to attain one or more objectives, which involves a consideration of desired outcomes.
Adequate and accurate criterion measurement is a fundamental problem in HRM. Criteria are operational statements of goals or desired outcomes. Although criteria are sometimes used for predictive purposes and sometimes for evaluative purposes, in both cases they represent that which is important or desirable.
In general, applied psychologists are guided by two principal objectives: (1) to demonstrate the utility of their procedures and programs and (2) to enhance their understanding of the determinants of job success.
The development of criteria that are adequate and appropriate is at once a stumbling block and a challenge to the HR specialist. The criterion problem refers to difficulties involved in the process of conceptualizing and measuring performance constructs that are multidimensional, dynamic, and appropriate for different purposes. The effectiveness and future progress of knowledge with respect to most HR interventions depend fundamentally on our ability to resolve this question.
The challenge is to develop theories, concepts, and measurements that will achieve the twin objectives of enhancing the utility of available procedures and programs and deepening our understanding of the psychological and behavioural processes involved in job performance. We should aim to develop a comprehensive theory of the behaviour of men and women at work.
Defining terms
Criteria have been defined from multiple views. From one perspective, criteria are standards that can be used as yardsticks for measuring employees’ degree of success on the job. This definition is useful when prediction is involved, but there are times when we simply want to evaluate without necessarily predicting. If evaluative standards (like written or performance tests) are administered before an employment decision is made, the standards are predictors. If they are administered after an employment decision has been made, the standards are criteria.
A more comprehensive definition is required, regardless of whether we are predicting or evaluating. So, a more general definition is that a criterion represents something important or desirable and is an operational statement of the goals or desired outcomes of the program under study. It is an evaluative standard that can be used to measure a person’s performance, attitude, motivation etc. There are several other requirements of criteria in addition to desirability and importance, but, before examining them, we must first consider the use of job performance as a criterion.
How is job performance a criterion?
Performance may be defined as observable things people do that are relevant to the goals of the organization. Job performance is multidimensional, and the behaviours that constitute performance can be scaled in terms of the level of performance they represent. It is important to distinguish performance from the outcomes or results of performance, which constitute effectiveness.
The ultimate criterion describes the full domain of performance and includes everything that ultimately defines success on the job. It is ultimate because you cannot look beyond it for any further standard by which to judge the outcomes of performance.
What dimensions of criteria exist?
Operational measures of the conceptual criterion may vary along several dimensions. Ghiselli (1956) identified three different types of criterion dimensionality:
- Static dimensionality: this type of multidimensionality refers to two issues: (1) the fact that individuals may be high on one performance facet and simultaneously low on another and (2) the distinction between maximum and typical performance. We can consider two facets of performance: task performance and contextual performance. They do not necessarily go hand in hand: an employee can be highly proficient at her tasks yet be an underperformer with regard to contextual performance. The contextual domain also encompasses workplace deviance and counterproductive behaviours.
- Dynamic/temporal dimensionality: the optimum times to measure criteria vary greatly between situations, and conclusions therefore need to be couched in terms of when the criterion measurements were taken. Different results may occur depending on when measurements are taken, and failure to consider the temporal dimension can lead to misinterpretations. Criterion measurements are not independent of time. Temporal dimensionality is a broad concept, and criteria can be dynamic in three ways: (1) changes over time in average levels of group performance, (2) changes over time in validity coefficients, and (3) changes over time in the rank ordering of scores on the criterion.
- Individual dimensionality: it is possible that individuals performing the same job may be considered equally good but the nature of their contributions to the organization may be quite different. So different criterion dimensions should be used to evaluate them.
What challenges exist in criterion development?
Competent criterion research is a pressing need for personnel psychology today. It has been shown that continuing attention to the development of better performance measures results in better predictions of performance. Below we consider three types of challenges faced in the development of criteria, point out potential pitfalls in criterion research, and sketch a logical scheme for criterion development.
- Challenge 1 (job performance (un)reliability): reliability in this context refers to the consistency or stability of job performance over time. Thorndike (1949) identified two types of unreliability: intrinsic unreliability, due to personal inconsistency in performance, and extrinsic unreliability, due to sources of variability that are external to job demands or individual behaviour. One solution to this problem is to aggregate behaviour over situations or occasions, thereby canceling out the effects of uncontrollable factors (see the sketch after this list). It is also important to pay careful attention to the factors that produce this phenomenon.
- Challenge 2 (job performance observation): this issue is important for prediction because all evaluations of performance depend ultimately on observation of one sort or another, but different methods of observing performance can lead to different conclusions. The study of reliability of performance becomes possible only when the reliability of judging performance is adequate. But even though we know the problem exists, there is no silver bullet that will improve the reliability of judging performance.
- Challenge 3 (dimensionality of job performance): even the most cursory examination of HR research reveals a great variety of predictors in typical use. Conversely, the majority of studies use only a global criterion measure of job performance. Despite the problems associated with global criteria, they seem to work quite well in most personnel selection situations. If there is more than one specific problem, then more than one specific criterion is called for.
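A minimal sketch of the aggregation idea from Challenge 1, using simulated (hypothetical) data: averaging observed performance over more occasions progressively cancels out random day-to-day variation, so the aggregate comes closer to the employee's stable level of performance.

```python
import random

random.seed(42)

TRUE_PERFORMANCE = 50.0   # hypothetical stable performance level for one employee
NOISE_SD = 10.0           # random day-to-day variation (intrinsic/extrinsic unreliability)

def observed_score():
    """One occasion's observed performance = true level + random noise."""
    return random.gauss(TRUE_PERFORMANCE, NOISE_SD)

for n_occasions in (1, 5, 20, 100):
    aggregated = sum(observed_score() for _ in range(n_occasions)) / n_occasions
    print(f"Average over {n_occasions:3d} occasions: {aggregated:.1f}")

# The more occasions we aggregate over, the closer the average tends to be to the
# stable level, which is why aggregation improves the reliability of criterion measures.
```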
How are performance and situational characteristics related?
Most people would agree that individual levels of performance may be affected by conditions surrounding the performance. Yet most research investigations are done without regard for possible effects of variables other than those measured by predictors. Here we mention six possible extraindividual influences on performance. Taken together, these are captured by the concept of in situ performance: “the specification of the broad range of effects – situational, contextual, strategic, and environmental – that may affect individual, team or organizational performance”.
- Environmental and organizational characteristics: these include organization wide factors, interpersonal factors, job-related factors, and personal factors.
- Environmental safety: injuries and loss of time can affect job performance. Positive safety climate, high management commitment, and sound safety communications programs can increase safe behaviour on the job.
- Lifespace variables: measure important conditions that surround the employee on and off the job. They concern the individual employee’s interactions with organizational factors, task demands, supervision, and conditions of the job.
- Job and location: performance depends not only on job demands but also other structural and contextual factors like the policies and practices of particular companies.
- Extraindividual differences and sales performance: Cravens and Woodruff’s (1973) study was noteworthy because a purer estimate of individual job performance was generated by combining the effects of extraindividual influences with two individual difference variables (sales experience and rated sales effort).
- Leadership: variations in job performance are due to characteristics of individuals, groups, and organizations. Until we are able to partition the total variability in job performance into intraindividual and extraindividual components, we cannot expect predictor variables measuring individual differences to correlate appreciably with measures of performance that are influenced by factors not under an individual’s control.
What steps are there in criterion development?
Guion (1961) outlines a five-step procedure for criterion development.
- Analysis of job and/or organizational needs.
- Development of measures of actual behaviour relative to expected behaviour as identified in job and need analysis. These measures should supplement objective measures of organizational outcomes such as turnover, absenteeism, and production.
- Identification of criterion dimensions underlying such measures by factor analysis, cluster analysis, or pattern analysis.
- Development of reliable measures, each with high construct validity, of the elements so identified.
- Determination of the predictive validity of each independent variable (predictor) for each one of the criterion measures, taking them one at a time.
How can we evaluate criteria?
How can we evaluate the usefulness of a given criterion? There are three yardsticks:
- Relevance: the principal requirement of any criterion. A relevant criterion is one that reflects the relative standing of employees with respect to important work behaviour(s) or outcome measure(s).
- Sensitivity/discriminability: in order to be useful, any criterion measure has to be sensitive and capable of discriminating between effective and ineffective employees. The use of a particular criterion measure is warranted only if it serves to reveal discriminable differences in job performance.
- Practicality: it is important that management is informed thoroughly of the real benefits of using carefully developed criteria.
What is criterion deficiency?
Criterion measures differ in the extent to which they cover the criterion domain. Criteria are deficient when they fail to include an important component of the job. The importance of considering criterion deficiency was highlighted by a study examining the economic utility of companywide training programs addressing certain skills. The amount of change observed in an employee’s performance after attending a training program depends on the percentage of job tasks measured by the evaluation criteria. A measure including only a subset of the tasks learned during training will underestimate the value of the program.
What is criterion contamination?
When criterion measures are gathered carelessly with no checks on their worth before use either for research purposes or in the development of HR policies, they are often contaminated. Criterion contamination happens when the operational or actual criterion includes variance that is unrelated to the ultimate criterion. It can be subdivided into two distinct parts.
- Error: random variation that cannot correlate with anything except by chance alone.
- Bias: represents systematic criterion contamination and can correlate with predictor measures.
There are three important and likely sources of bias:
- Bias due to knowledge of predictor information.
- Bias due to group membership.
- Bias in ratings.
What is criterion equivalence?
If two criteria correlate (nearly) perfectly after correcting both for unreliability, then they are equivalent. If two criteria are equivalent, then they contain exactly the same job elements, are measuring the same individual characteristics, and occupy the same portion of the conceptual criterion space. They are equivalent if it makes no difference which one is used.
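The phrase “correcting both for unreliability” refers to the standard correction for attenuation, in which the observed correlation between the two criteria is divided by the square root of the product of their reliabilities:

\[
r_{12}^{\text{corrected}} = \frac{r_{12}}{\sqrt{r_{11}\, r_{22}}}
\]

For example (hypothetical figures), if two criterion measures correlate .60 and each has a reliability of .70, the corrected correlation is .60 / .70 ≈ .86 – substantial, but still short of the near-perfect correlation required for equivalence.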
What is the difference between composite criterion and multiple criteria?
It is agreed that job performance is multidimensional in nature and that adequate measurement of job performance requires multidimensional criteria. But should one combine various criterion measures into a composite score, or should each criterion measure be treated separately?
- Composite criterion: the criterion should provide a yardstick or overall measure of the success or value to the organization of each individual. Even if criterion dimensions are treated separately in validation, they must somehow be combined into a composite when a decision is required. If it is decided to form a composite of several criterion measures, the question becomes whether all the measures should be given the same weight. Forming a composite requires careful consideration of the relative importance of each criterion measure.
- Multiple criteria: advocates of multiple criteria contend that measures of demonstrably different variables should not be combined, because doing so can produce a composite that is ambiguous and psychologically nonsensical.
The two positions differ in terms of (1) the nature of the underlying constructs represented by the respective criterion measures and (2) what they regard as the primary purpose of the validation process itself. Advocates of the composite criterion argue that the criterion should represent an economic rather than a behavioural construct: the criterion should measure the overall contribution of the individual to the organization. Advocates of multiple criteria argue that the criterion should represent a behavioural or psychological construct, one that is behaviourally homogeneous.
There are many possible uses of job performance and program evaluation criteria. They can be used for research purposes, where the emphasis is on the psychological understanding of the relationship between predictors and criterion dimensions. But when they are used for managerial decision-making purposes, they must be combined into a composite representing the overall economic worth of the individual to the organization.
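A minimal sketch of forming such a composite under the composite-criterion view, assuming hypothetical criterion dimensions, standardized scores, and judgmentally assigned weights:

```python
# Hypothetical standardized (z) scores on three criterion dimensions for one employee,
# and judgmentally assigned weights reflecting each dimension's relative importance.
criterion_scores = {"task_performance": 1.2, "contextual_performance": 0.3, "attendance": -0.5}
weights = {"task_performance": 0.5, "contextual_performance": 0.3, "attendance": 0.2}

# Weighted sum of the standardized dimension scores gives the composite criterion.
composite = sum(weights[dim] * score for dim, score in criterion_scores.items())
print(f"Weighted composite criterion score: {composite:.2f}")
# With equal weights, one would simply average the standardized scores instead.
```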
Performance management is a continuous process of identifying, measuring, and developing the performance of individuals and teams and aligning performance with the strategic goals of the organization. Performance is assessed at regular intervals, and feedback is provided so that it can be improved on an ongoing basis. But researchers’ inability to resolve definitively the knotty technical and interpersonal problems of performance appraisal has led it to be called the “Achilles’ heel” of HRM. Performance management systems will not be successful if they are not linked to broader work unit and organizational goals. This chapter will focus on both the measurement and the social/motivational aspects of performance management.
What purposes are served in performance management systems?
- They serve a strategic purpose because they help link employee activities with the organization’s mission and goals.
- They serve an important communication purpose because they allow employees to know how they are doing and what the organizational expectations are regarding their performance.
- They serve as bases for employment decisions – to promote outstanding performers, terminate marginal or low performers, train, transfer, or discipline others, and to award merit increases. Information gathered by the system can serve as predictors.
- They can serve as criteria in HR research.
- They also serve a developmental purpose because they can help establish objectives for training programs.
- They can provide concrete feedback to employees.
- They can facilitate organizational diagnosis, maintenance, and development.
- They allow organizations to keep proper records to document HR decisions and legal requirements.
What are the realities of performance management systems?
Independently of any organizational context, the implementation of a performance management system at work confronts the appraiser with five realities.
- This activity is inevitable in all organizations. Organizations have to know if individuals are performing competently, and appraisals are essential features of an organization’s defence against challenges to adverse employment actions.
- Appraisal is fraught with consequences for individuals and organizations.
- As job complexity increases, it becomes more difficult to assign accurate, merit-based performance ratings.
- There is an ever-present danger of parties being influenced by political consequences of their actions when sitting in judgment on co-workers.
- The implementation of performance management systems takes time and effort, and participants must be convinced the system is useful and fair.
What barriers exist in implementing effective performance management systems?
- Organizational barriers: result when workers are held responsible for errors that may be the result of built-in organizational systems. Common causes are built into the system due to prior decisions, defects in materials, flaws in the design of the system, or other managerial shortcomings. Special causes are attributable to a particular event, particular operator, or subgroup within the system.
- Political barriers: stem from deliberate attempts by raters to enhance or protect their self-interests when conflicting courses of action are possible. Many managers attempt to use the appraisal process to their own advantage rather than give accurate ratings that might create problems for themselves.
- Interpersonal barriers: arise from the actual face-to-face encounter between subordinate and superior. Owing to a lack of communication, employees may think they are being judged according to one set of standards when their superiors actually use different ones.
What are the fundamental requirements of successful performance management systems?
For any performance management system to be used successfully, it has to have the following nine characteristics:
- Congruence with strategy.
- Thoroughness.
- Practicality.
- Meaningfulness.
- Specificity.
- Discriminability.
- Reliability and validity.
- Inclusiveness.
- Fairness and acceptability.
These nine key characteristics indicate that performance appraisal should be embedded in the broader performance management system and that a lack of understanding of the context surrounding the appraisal is likely to result in a failed system.
What behavioural basis is there for performance appraisal?
Performance appraisal has two distinct processes: observation and judgment. Observation processes are more basic and include the detection, perception, and recall or recognition of specific behavioural events. Judgment processes include the categorization, integration, and evaluation of information. In practice, observation and judgment represent the last elements of a three-part sequence:
- Job analysis: identifies the components and requirements of a particular job.
- Performance standards: provide the critical link in the process and translate job requirements into levels of acceptable/unacceptable performance.
- Performance appraisal: the actual process of gathering information about individuals based on requirements and describes the job-relevant strengths and weaknesses of each individual.
Who shall rate?
Who does the rating is important. Raters must have direct experience with, or first-hand knowledge of, the individual to be rated. In many jobs individuals with varying perspectives have such first-hand knowledge.
- Immediate supervisor: responsible for managing the overall appraisal process. The supervisor is probably the person best able to evaluate each subordinate’s performance in light of the organization’s overall objectives. They must be able to tie effective (ineffective) performance to the employment actions taken. But this is not the case for all jobs where the supervisor rarely observes performance of their employees (e.g., teaching, law enforcement, self-managed work etc.).
- Peers: this refers to three of the more basic methods used by members of a well-defined group in judging each other’s job performance: peer nominations, peer ratings, and peer rankings. Reviews of peer assessment reach favourable conclusions regarding the reliability, validity, and freedom from bias of this approach, although some bias (e.g., perceived friendship) still exists.
- Subordinates: they know directly the extent to which a manager does or does not delegate, the extent to which they plan and organize, their leadership style etc. However, subordinate ratings are of significantly better quality when used for developmental purposes rather than administrative purposes.
- Self: the opportunity to self-rate should improve the individual’s motivation and reduce their defensiveness during an appraisal interview. However, comparisons with appraisals from other perspectives suggest that self-appraisals tend to show more leniency, less variability, more bias, and less agreement with the judgments of others. However, there are cultural differences in this aspect (Eastern versus Western cultures, modesty versus leniency).
- Clients served: in jobs that require a lot of interaction with the public or certain individuals, appraisals can sometimes be done by the consumers of the organization’s services.
Appraising performance: individual versus group tasks
So far, it is assumed that ratings are given as an individual exercise. But in practice, appraising performance is not a strictly individual task. Supervisors often use information from outside sources in making performance judgments. It also seems that the presence of indirect information is more likely to change ratings from positive to negative than from negative to positive. If the process of assigning performance ratings is not entirely an individual task, might it pay off to formalize appraisals as a group task? A study found that groups are more effective than individuals at remembering specific behaviours over time, but they also demonstrate greater response bias. Results suggest that groups can be helpful but are not a cure-all for the problems of rating accuracy. Groups are useful under two conditions: (1) the task has a necessarily correct answer, and (2) the magnitude of the performance cue is not too large.
Agreement and equivalence of ratings across sources
To assess interrater agreement (convergent validity) and the ability of raters to make distinctions in performance across dimensions (discriminant validity), a matrix listing dimensions as rows and raters as columns might be prepared. However, across-organizational-level interrater agreement for ratings on all performance dimensions is an unduly severe expectation, and may also be erroneous. Although we should not always expect agreement, we should expect that the construct underlying the measure used be equivalent across raters. It does not make sense to assess interrater agreement without first establishing measurement equivalence (or measurement invariance), because a lack of agreement could be due to a lack of measurement equivalence. A lack of equivalence means that the underlying characteristics being measured are not on the same psychological measurement scale, implying that differences across sources are possibly artifactual, contaminated, or misleading.
Measurement equivalence needs to be established before ratings can be assumed to be directly comparable. Several methods exist for this purpose, including confirmatory factor analysis (CFA) and item response theory.
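As a minimal, informal illustration of the convergent/discriminant idea described above (formal invariance tests such as CFA require dedicated modelling software), the sketch below uses hypothetical ratings from two sources on two dimensions; all variable and dimension names are assumptions for illustration only. Correlations between raters on the same dimension (convergent) should exceed correlations involving different dimensions (discriminant).

```python
# Hypothetical example: two raters (self, supervisor) rate the same employees
# on two dimensions. Convergent evidence: same dimension across raters should
# correlate highly; discriminant evidence: different dimensions should correlate less.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 100
true_quality = rng.normal(size=n)
true_teamwork = rng.normal(size=n)

ratings = pd.DataFrame({
    "quality_self":        true_quality + rng.normal(scale=0.8, size=n),
    "quality_supervisor":  true_quality + rng.normal(scale=0.8, size=n),
    "teamwork_self":       true_teamwork + rng.normal(scale=0.8, size=n),
    "teamwork_supervisor": true_teamwork + rng.normal(scale=0.8, size=n),
})

corr = ratings.corr()
print("Convergent (same dimension, different raters):")
print(corr.loc["quality_self", "quality_supervisor"],
      corr.loc["teamwork_self", "teamwork_supervisor"])
print("Discriminant (different dimensions):")
print(corr.loc["quality_self", "teamwork_supervisor"],
      corr.loc["teamwork_self", "quality_supervisor"])
```

This correlation check is only a first look; establishing measurement equivalence proper requires the CFA or item response theory approaches mentioned above.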
Are there judgmental biases in rating?
In the traditional view, judgmental biases result from some systematic measurement error on the part of a rater. As such, they are easier to deal with than errors that are unsystematic or random. But every bias has been defined and measured differently in the literature, which can lead to opposite conclusions even within the same study. Moreover, in the minds of managers, these behaviours are not errors at all. Here we list some of the most commonly observed judgmental biases, along with ways to minimize them.
Leniency and severity
Objectivity is often violated. Raters subscribe to their own sets of assumptions, and most people have encountered raters who seem either inordinately easy or difficult. Leniency and severity biases can be controlled/eliminated by:
- Allocating ratings according to a forced distribution, in which ratees are apportioned according to an approximately normal distribution.
- Requiring supervisors to rank order their subordinates.
- Encouraging raters to provide feedback on a regular basis, reducing rater and ratee discomfort with the process.
- Increasing raters’ motivation to be accurate by holding them accountable for their ratings.
Central tendency
When political considerations predominate, raters may assign their subordinates ratings that are neither too good nor too bad and avoid using extremes of rating scales. This can be minimized by specifying clearly what the various anchors mean.
Halo
A rater subject to the halo bias assigns ratings on the basis of a general impression of the ratee. The rater fails to distinguish among levels of performance on different performance dimensions.
What types of performance measures exist?
- Objective measures: include product data, as well as employment data. These variables directly define the goals of the organization but also suffer from performance unreliability and modification of performance by situational characteristics. They are intuitively attractive, but theoretical and practical limitations often make them unsuitable.
- Subjective measures: depend on human judgment and are thus prone to the kinds of biases previously discussed. To be useful they have to be based on a careful analysis of the behaviours viewed as necessary and important for effective job performance.
What is the difference between relative and absolute rating systems?
We can classify rating systems into two types: relative and absolute, within which various methods may be distinguished.
Relative rating systems (employee comparisons)
- Rank ordering: simple ranking requires that a rater order all ratees from highest to lowest. Alternation ranking requires the rater to list all ratees and then alternately select the best and the worst of those remaining until everyone has been ranked.
- Paired comparisons: systematic ratee-to-ratee comparison. The rater must choose the better of each pair, and each individual’s rank is determined by counting how often they were rated superior.
- Forced distribution: ratees are allocated across rating categories according to a predetermined (often approximately normal) distribution, as described earlier under leniency and severity.
Absolute rating systems
Enable a rater to describe a ratee without making a direct reference to other ratees.
- Essay: rater is asked to describe, in writing, an individual’s strengths, weaknesses, and potential, and make suggestions for improvement. They can provide detailed feedback to ratees regarding their performance. But they are also unstructured and vary in length and content.
- Behavioural checklist: rater is provided with a series of descriptive statements of job-related behaviour. They have to indicate statements that describe the ratee.
- Forced-choice system: a technique developed to reduce leniency errors and establish objective standards of comparison between individuals. The rater chooses the statements that are most or least descriptive of the ratee; statements are constructed according to discriminability and preference indices.
- Critical incidents: reports by observers of things employees did that were especially effective or ineffective in accomplishing their jobs.
- Graphic rating scale: most widely used method of performance appraisal. Each point is defined on a continuum, so to make meaningful distinctions in performance within dimensions, scale points must be defined unambiguously for the rater (anchoring).
- Behaviourally anchored rating scale (BARS): a graphic rating scale whose scale points are anchored with behavioural statements, typically developed from critical incidents.
What factors affect subjective appraisals?
Factors that can influence subjective appraisals include personal characteristics (gender, race, age, education, interests etc.) and job-related variables (accountability, job experience, leadership style, job satisfaction etc.). These factors are relevant to both the rater and the ratee, as well as to how the two interact during the rating process.
How do we evaluate the performance of teams?
Different types of teams require different emphases on performance measurement at the individual and team levels. Depending on the complexity of the task and the membership configuration, we can identify three types of teams:
- Work or service teams.
- Project teams.
- Network teams.
Assessing team performance should be seen as complementary to the assessment and recognition of (1) individual performance, and (2) individuals’ behaviours and skills that contribute to team performance.
How are raters trained?
There are three broad objectives in rater training: (1) to improve the observational skills of raters by teaching them what to attend to, (2) to reduce or eliminate judgmental biases and systematic errors, and (3) to improve the ability of raters to communicate performance information to ratees in an objective and constructive manner.
- Rater error training (RET) exposes raters to different errors and their causes.
- Frame-of-reference (FOR) training is effective in improving the accuracy of performance appraisals. It provides trainees with a theory of performance that allows them to understand the various performance dimensions.
What is the social and interpersonal context of performance management systems?
In implementing a system, information about the social and interpersonal contexts is just as important as knowledge of systematic errors and biases. This reinforces the view that context must be taken into account and that performance management must be tackled as both a technical and an interpersonal issue. The following are recommendations regarding issues that should be explored further:
- Social power, influence, and leadership.
- Trust.
- Social exchange.
- Group dynamics and close interpersonal relationships.
Performance feedback: appraisal and goal-setting interviews
One of the central purposes of performance management systems is to serve as a personal development tool. To improve, there has to be feedback regarding present performance. A formal system for giving feedback should be implemented because, without such a system, some employees are far more likely than others to seek out and benefit from feedback. Ideally, there should be a continuous feedback process between superior and subordinate. There are several activities supervisors should engage in before, during, and after appraisal interviews. These include:
- Before: communicate frequently; get training in appraisal; judge your own performance first; encourage subordinate preparation; use “priming” information.
- During: warm up and encourage participation; judge performance, not personality or self-concept; be specific; be an active listener; avoid destructive criticism and threats to the employee’s ego; set mutually agreeable and formal goals.
- After: continue to communicate and assess progress toward goals regularly; make organizational rewards contingent on performance.
Performance management is a continuous process of identifying, measuring, and developing the performance of individuals and teams and aligning performance with the strategic goals of the organization. Performance is assessed at regular intervals, and feedback is provided so that it can be improved on an ongoing basis. But researchers’ inability to resolve definitively the knotty technical and interpersonal problems of performance appraisal has led to it being called the “Achilles’ heel” of HRM. Performance management systems will not be successful if they are not linked to broader work unit and organizational goals. This chapter will focus on both the measurement and the social/motivational aspects of performance management.
Measurement of individual differences is the heart of personnel psychology. Individual differences in physical and psychological attributes may be measured on nominal, ordinal, interval, and ratio scales. Psychology’s first law is that “People are different.” Physical and psychological variability is all around us.
What is measurement?
Measurement is the assignment of numerals to objects or events according to rules. But this definition says nothing about the quality of the measurement procedure, only that somehow numerals are assigned to objects or events. Psychological measurement is principally concerned with individual differences in psychological traits. A trait is a descriptive label applied to a group of interrelated behaviours that may be inherited or acquired.
What are scales of measurement?
The first step in a measurement procedure is to specify the dimension or trait to be measured. Then a series of operations can be developed that will allow us to describe individuals in terms of that dimension or trait. Variation among individuals can be:
- Qualitative: variation in kind (e.g., sex, hair colour); this calls for classification.
- Quantitative: variation in amount or degree; this calls for measurement.
There are four levels of measurement that are hierarchically related.
- Nominal scales: lowest level of measurement, representing differences in kind. Individuals are assigned to qualitative categories that cannot be ordered or numbered. The fundamental relation is equality (e.g., sex, hair colour).
- Ordinal scales: allow for classification by category but also provide an indication of magnitude. Categories are rank ordered according to greater or lesser amounts of the characteristic or dimension. They satisfy the requirements of equality and transitivity/ranking (e.g., age groups).
- Interval scales: have the properties of equality, transitivity/ranking, and additivity/equal-sized units. Equivalent distances along the scale can be established, and any linear transformation is permissible. There is no absolute zero (e.g., temperature).
- Ratio scales: highest level of measurement in science. In addition to equality, transitivity, and additivity, ratio scales have a natural or absolute zero point (e.g., height, distance, weight etc.).
How are scales used in psychological measurement?
Psychological measurement scales are mostly nominal- or ordinal-level scales, although many scales and tests commonly used in behavioural measurement and research approximate interval measurement well enough for practical purposes. Strictly speaking, intelligence, aptitude, and personality scales are ordinal-level measures. They indicate individuals’ rank order with respect to the traits in question, not the absolute amounts of those traits. Yet we can often assume an equal-interval scale.
Consideration of social utility in the evaluation of psychological measurement
Should the value of psychological measures be judged in terms of the same criteria as physical measurement? Physical measurements are evaluated in terms of the degree to which they satisfy the requirements of order, equality, and addition. In behavioural measurement, the operation of addition is undefined, since there seems to be no way physically to add one psychological magnitude to another to get a third, even greater in amount. But other more practical criteria exist by which psychological measures may be evaluated.
Psychological measures are more appropriately evaluated in terms of their social utility. The important question is how psychological measures’ predictive efficiency compares to other available procedures and techniques. It is important for HR specialists to be well grounded in applied measurement concepts.
How do you select and create the right measure?
We use the word test in the broad sense to include any psychological instrument, technique, or procedure. Testing is systematic in three areas: content, administration, and scoring. Item content is chosen systematically from the behavioural domain to be measured. Procedures for administration are standardized. Scoring is objective.
Steps for selecting and creating tests
- Determining a measure’s purpose.
- Defining the attribute.
- Developing a measure plan.
- Writing items.
- Conducting a pilot study and traditional item analysis.
- Conducting an item analysis using item response theory (IRT).
- Selecting items.
- Determining reliability and gathering evidence for validity.
- Revising and updating items.
Selecting an appropriate test: test-classification methods
When selecting a test, as opposed to evaluating its technical characteristics, important factors to consider are its content, the ease with which it may be administered, and the method of scoring.
- Content: tests may be classified in terms of the task they pose for the examinee, but also in terms of process – that is, what the examinee is asked to do. Cognitive tests measure the products of mental ability and frequently are subclassified as tests of achievement and aptitude. Aptitude and achievement tests are measures of ability. Affective tests are designed to measure aspects of personality.
- Administration: tests can be classified in terms of the efficiency with which they can be administered or in terms of the time limits they impose on the examinee. Individual tests are less efficient than group tests. Pure speed tests consist of many easy items, but time limits are stringent. A pure power test has a time limit generous enough to allow everyone the chance to try all items, but the questions are harder.
- Standardized and non-standardized tests: standardized tests have fixed directions for administration and scoring. To standardize a test, it has to be given to a large, representative sample of individuals. This group (normative sample) is used to establish norms to provide a frame of reference.
- Scoring: the method of scoring a test may be objective or nonobjective. Objective scoring is appropriate for employment because there are fixed, impersonal standards for scoring. On the other hand, scoring essay tests and personality inventories may be subjective, and considerable “rater variance” may be introduced.
Further considerations in selecting a test
Additional factors need to be considered in selecting a test – cost, interpretation, and face validity.
- Measurement cost is a practical consideration. Most users operate within a budget and have to choose a procedure that will satisfy their cost restraints.
- Managers frequently assume that since a test can be administered by almost any educated person, it can be interpreted by almost anyone. This is not the case but is one aspect of staffing that is frequently overlooked.
- Face validity is whether the measurement procedure looks like it is measuring the trait in question. It does not refer to validity in that technical sense but is concerned with establishing rapport and good public relations.
Reliability as consistency?
The process of creating new tests involves evaluating the technical characteristics of reliability and validity. But reliability and validity information should be gathered not only for newly created measures but also for any measure before it is put to use. Why is reliability so important? The main purpose of psychological measurement is to make decisions about individuals, and if measurement procedures are to be practically useful, they have to produce dependable scores. The reliability of a measurement procedure refers to its freedom from unsystematic errors of measurement.
How do we estimate reliability?
Since all types of reliability are concerned with the degree of consistency or agreement between two sets of independently derived scores, the correlation coefficient (or reliability coefficient) is a particularly appropriate measure of such agreement. In practice, reliability coefficients may serve one or both of two purposes: (1) estimating the precision of a particular procedure as a measuring instrument, and (2) estimating the consistency of performance on the procedure by the examinees. These purposes can easily be seen in the various methods used to estimate reliability.
- Test-retest: the simplest and most direct estimate of reliability is obtained by administering the same form of a test to the same group of examinees on two different occasions. Scores from both occasions are correlated to yield a coefficient of stability.
- Parallel (or alternate) forms: theoretically it is possible to construct a number of parallel forms of the same procedure. With parallel forms, we seek to evaluate the consistency of scores from one form to another (alternate) form of the same procedure. The correlation between scores obtained on the two forms (known as the coefficient of equivalence) is a reliability estimate.
- Internal consistency: whereas the estimates above indicate consistency over time or over forms of a test, internal-consistency estimates (e.g., split-half, Kuder-Richardson, coefficient alpha) indicate the degree to which the items within a single administration measure the same attribute (a minimal coefficient-alpha sketch follows this list).
- Stability and equivalence: a combination of the test-retest and parallel-forms methods, obtained by lengthening the time interval between administrations of the two forms. The correlation between the two sets of scores represents a coefficient of stability and equivalence.
- Interrater reliability: can be estimated using three methods: (1) interrater agreement, (2) interclass correlation, and (3) intraclass correlation. Interrater agreement focuses on the exact agreement between raters. Interclass correlation is used when two raters are rating multiple objects or individuals. Intraclass correlation estimates how much of the variance in ratings is due to individual differences on the attribute measured and how much is due to errors of measurement.
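As referenced in the internal-consistency item above, here is a minimal sketch of one internal-consistency estimate, coefficient alpha, computed on hypothetical item responses; the function name and data are illustrative assumptions, not part of the source.

```python
import numpy as np

def coefficient_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)        # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 200 examinees answering 10 items scored 1-5,
# all items partly driven by a common underlying attribute.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + ability + rng.normal(scale=1.0, size=(200, 10))), 1, 5)
print(f"alpha = {coefficient_alpha(items):.2f}")
```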
How can we interpret reliability?
There is no fixed value below which reliability is unacceptable and above which it is satisfactory; it depends on what one plans to do with the scores. The more important the decision to be reached, the greater the need for confidence in the precision of the measurement procedure and the higher the required reliability coefficient. A procedure used to compare individuals should have a reliability above 0.90, but many standard tests with reliabilities as low as 0.70 prove to be very useful, and even lower values may be useful for research purposes.
This statement needs to be tempered by considering other factors that may influence the size of an obtained reliability coefficient, including:
- Speed.
- Test length.
- Interval between administrations.
- Range of individual differences.
- Difficulty of the measurement procedure.
- Size and representativeness of sample.
- Standard error of measurement (see the sketch below).
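The standard error of measurement in the last item can be made concrete. Under classical test theory it equals the observed-score standard deviation times the square root of one minus the reliability; the numbers below are hypothetical.

```python
import math

# Hypothetical values: observed-score standard deviation and reliability coefficient.
sd_x = 10.0
r_xx = 0.85

sem = sd_x * math.sqrt(1 - r_xx)   # classical-test-theory standard error of measurement
print(f"SEM = {sem:.2f}")          # about 3.9 score points

# Approximate 95% confidence band around an observed score of 50.
score = 50
print(f"95% band: {score - 1.96 * sem:.1f} to {score + 1.96 * sem:.1f}")
```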
What is scale coarseness?
Scale coarseness is related to measurement error, but it is a distinct phenomenon that also results in lack of measurement precision. A measurement scale is coarse when a construct that is continuous in nature is measured using items such that different true scores are collapsed into the same category. Errors are introduced because continuous constructs are collapsed.
The lack of precision introduced by coarse scales has a downward biasing effect on the correlation coefficient computed using data collected from such scales for the predictor, the criterion, or both variables.
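A quick simulation with hypothetical data (not from the source) illustrates this downward bias: collapsing a continuous construct into a small number of categories attenuates its correlation with another variable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_x = rng.normal(size=n)
y = 0.6 * true_x + rng.normal(scale=0.8, size=n)

# Continuous measurement of x versus a coarse 3-point version of the same construct.
coarse_x = np.digitize(true_x, bins=[-0.5, 0.5])   # collapses scores into 3 categories

r_continuous = np.corrcoef(true_x, y)[0, 1]
r_coarse = np.corrcoef(coarse_x, y)[0, 1]
print(f"r with continuous scale:   {r_continuous:.2f}")
print(f"r with coarse 3-point scale: {r_coarse:.2f}   # noticeably smaller")
```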
What is generalizability theory?
Generalizability theory conceptualizes the reliability of a test score as the precision with which that score, or sample, represents a more generalized universe value of the score. Observations are seen as samples from a universe of admissible observations. An examinee’s universe score is defined as the expected value of his or her observed scores over all admissible observations.
How can we interpret results of measurement procedures?
In personnel psychology, knowledge of each person’s individuality is essential in programs designed to use human resources effectively. This allows us to make predictions about how individuals are likely to behave in the future. To interpret the results of measurement procedures intelligently, we need some information about how relevant others have performed on the same procedure. Norms must provide a relevant comparison group for the person being tested.
- Percentile ranks are easy to compute and understand but have two major limitations. First, they are ranks and therefore ordinal-level measures that cannot legitimately be added, subtracted, multiplied, or divided. Second, percentile ranks have a rectangular distribution, while test score distributions generally approximate the normal curve.
- Standard scores are interval-scale measures and can be subjected to the common arithmetic operations. They allow direct comparison of an individual’s performance on different measures.
- Normalized standard scores are satisfactory for most purposes, since they serve to smooth out sampling errors, but all distributions should not be normalized as a matter of course. Normalizing transformations should be carried out only when the sample is large and representative and when there is reason to believe that the deviation from normality results from defects in the measurement procedure rather than from characteristics of the sample or other factors affecting the behaviour under consideration.
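A brief sketch of the three kinds of norm-referenced scores listed above, using hypothetical raw scores: standard (z) scores, percentile ranks, and normalized standard scores obtained by mapping percentile ranks back through the normal curve.

```python
import numpy as np
from scipy import stats

raw = np.array([12, 15, 15, 18, 20, 22, 22, 25, 28, 35])  # hypothetical test scores

# Standard (z) scores: interval-level, directly comparable across measures.
z = (raw - raw.mean()) / raw.std(ddof=1)

# Percentile ranks: ordinal, rectangular distribution.
pct = np.array([stats.percentileofscore(raw, x, kind="mean") for x in raw])

# Normalized standard scores: percentile rank converted through the inverse normal.
normalized_z = stats.norm.ppf(pct / 100)

for r, zz, p, nz in zip(raw, z, pct, normalized_z):
    print(f"raw={r:3d}  z={zz:+.2f}  percentile={p:5.1f}  normalized z={nz:+.2f}")
```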
Measurement of individual differences is the heart of personnel psychology. Individual differences in physical and psychological attributes may be measured on nominal, ordinal, interval, and ratio scales. Psychology’s first law is that “People are different.” Physical and psychological variability is all around us.
Scores from measures of individual differences derive meaning only insofar as they can be related to other psychologically meaningful characteristics of behaviour. Reliability is a necessary but not sufficient property for two scores to be useful in HR research and practice.
What is the relationship between reliability and validity?
Theoretically it’s possible to develop a perfectly reliable measure whose scores were wholly uncorrelated with any other variable. Such a measure would have no practical value, nor could it be interpreted meaningfully, since its scores could be related to nothing other than scores on another administration of the same measure. It would be highly reliable but have no validity.
High reliability is a necessary, but not sufficient, condition for high validity; the two concepts are closely interrelated. We can’t understand whether the inferences made based on test scores are correct if our measurement procedures are not consistent. So, reliability places a ceiling on validity, and the use of reliability estimates in correcting validity coefficients requires careful thought about the sources of error affecting the measure in question and how the reliability coefficient was computed.
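A worked illustration of the “ceiling” idea, using hypothetical numbers: the classical correction for attenuation divides the observed validity coefficient by the square root of the product of predictor and criterion reliabilities, and that same square root is the theoretical maximum observable validity.

```python
import math

# Hypothetical values.
r_xy = 0.30   # observed validity (predictor-criterion correlation)
r_xx = 0.80   # reliability of the predictor
r_yy = 0.60   # reliability of the criterion

ceiling = math.sqrt(r_xx * r_yy)                 # maximum possible observed validity
r_corrected = r_xy / math.sqrt(r_xx * r_yy)      # correction for attenuation in both measures

print(f"ceiling on observed validity: {ceiling:.2f}")               # about 0.69
print(f"validity corrected for unreliability: {r_corrected:.2f}")   # about 0.43
```

In many selection applications the correction is applied for criterion unreliability only (dividing by the square root of r_yy), since decisions must be made on observed, not true, predictor scores.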
What evidence is there for validity?
Validity was traditionally viewed as the extent to which a measurement procedure actually measures what it is designed to measure. This view is inadequate because it implies that a procedure has only one validity, determined by a single study. On the contrary, a thorough knowledge of the interrelationships between scores from a particular procedure and other variables typically requires many investigations. The investigative process of gathering or evaluating the necessary data is called validation. Methods of validation revolve around two issues:
- What a test or other procedure measures (i.e., the hypothesized underlying trait or construct).
- How well it measures (i.e., the relationships between scores from the procedure and some external criterion measure).
Thus, validity is not a dichotomous variable but a matter of degree. It is also a unitary concept: there are not different “kinds” of validity, only different kinds of evidence for evaluating validity. Though numerous procedures are available for evaluating validity, the Standards for Educational and Psychological Testing describes three principal strategies: content-related evidence, criterion-related evidence, and construct-related evidence.
Content-related evidence
Inferences about validity based on content-related evidence are concerned with whether or not a measurement procedure contains a fair sample of the universe of situations it is supposed to represent. An evaluation of content-related evidence is made in terms of the adequacy of the sampling. The criterion is expert judgment. Three assumptions underlie the use of content-related evidence:
- The area of concern to the user can be conceived as a meaningful, definable universe of responses.
- A sample can be drawn from the universe in some purposeful, meaningful fashion.
- The sample and sampling process can be defined with sufficient precision to enable the user to judge how adequately the sample of performance typifies performance in the universe.
Criterion-related evidence
When measures of individual differences are used to predict behaviour, and it’s technically feasible, criterion-related evidence is called for. With this approach, we test the hypothesis that test scores are related to performance on some criterion measure. The criterion is a score or rating that is either available at the time of predictor measurement (concurrent evidence) or will become available at a later time (predictive evidence).
- Predictive studies: are oriented toward the future and involve a time interval during which events take place. These designs for obtaining evidence of criterion-related validity are the cornerstone of individual differences measurement. They demonstrate in an objective, statistical manner the actual relationship between predictors and criteria in a particular situation. The procedure for conducting a predictive study is as follows: (1) measure candidates for the job, (2) select candidates without using the results of the measurement procedure, (3) obtain measurements of criterion performance at some later date, and (4) assess the strength of the relationship between the predictor and the criterion.
- Concurrent studies: are oriented toward the present and reflect only the status quo at a particular time. These designs for obtaining evidence of criterion-related validity are useful to HR researchers in several ways. Concurrent evidence of validity is important in the development of performance management systems and also in evaluating tests of job knowledge or achievement, trade tests, work samples, or any other measures designed to describe present performance.
Requirements of criterion measures in predictive and concurrent studies
Any predictor measure will be no better than the criterion used to establish its validity. As is true for predictors, anything that introduces random error into a set of criterion scores will reduce validity. It is too often simply assumed that criterion measures are relevant and valid; it is important that the criteria be reliable. The performance domain must be defined clearly before we proceed to developing tests that will be used to make predictions about future performance. Finally, we should beware of criterion contamination in criterion-related validity studies: it is essential that criterion data be gathered independently of predictor data.
Construct-related evidence
Neither content- nor criterion-related validity strategies have as their basic objective the understanding of a trait or construct that a test measures. Content-related evidence concerns the extent to which items cover the intended domain, and criterion-related evidence is concerned with the empirical relationship between a predictor and a criterion. A conceptual framework is required to organize and explain our data and provide direction for further investigation. The framework specifies the meaning of the construct, distinguishes it from other constructs, and indicates how measures of the construct should relate to other variables. This is the function of construct-related evidence of validity; it provides the evidential basis for the interpretation of scores. The construct is defined not by an isolated event, but by a nomological network – a system of interrelated concepts, propositions, and laws that relates observable characteristics to theoretical constructs and constructs to one another.
What factors affect the size of obtained validity coefficients?
- Range enhancement.
- Range restriction.
- Position in the employment process.
- Form of the predictor-criterion relationship.
What is cross-validation?
The prediction of criteria using test scores is often implemented by assuming a linear and additive relationship between the predictors and the criterion. These relationships are typically operationalized using ordinary least squares (OLS) regression, in which weights are assigned to the predictors so that the difference between observed criterion scores and predicted criterion scores is minimized. Cross-validity refers to whether the weights derived from one sample can predict outcomes to the same degree in the population as a whole or in other samples drawn from the same population. There are two approaches:
- Empirical cross-validation: consists of fitting a regression model in a sample and using the resulting regression weights with a second, independent cross-validation sample. The multiple correlation coefficient obtained by applying the weights from the first sample to the second sample is used as an estimate of cross-validity (ρc).
- Statistical cross-validation: consists of adjusting the sample-based multiple correlation coefficient (R) by a function of sample size (N) and the number of predictors (k); a sketch follows this list.
- Cross-validation including rescaling and reweighting of items should be continual, for as values change, jobs change, and people change, so does the appropriateness and usefulness of inferences made from test scores.
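As referenced in the statistical cross-validation item above, here is a minimal sketch with hypothetical inputs. It uses two commonly cited shrinkage formulas: Wherry's adjustment for the population multiple correlation and a cross-validity estimate often attributed to Browne (1975); the exact formula choices are assumptions to verify against the source you follow.

```python
def wherry_adjusted_r2(r2: float, n: int, k: int) -> float:
    """Shrunken estimate of the population squared multiple correlation."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def browne_cross_validity_r2(rho2: float, n: int, k: int) -> float:
    """Estimated squared cross-validity, given the adjusted (population) R^2."""
    return ((n - k - 3) * rho2**2 + rho2) / ((n - 2 * k - 2) * rho2 + k)

# Hypothetical study: R^2 = .25 with k = 4 predictors and N = 120 people.
r2, n, k = 0.25, 120, 4
rho2 = wherry_adjusted_r2(r2, n, k)
print(f"adjusted R^2 = {rho2:.3f}")
print(f"estimated cross-validity R^2 = {browne_cross_validity_r2(rho2, n, k):.3f}")
```

The estimated cross-validity is smaller than both the sample R² and the adjusted R², which is the expected pattern: weights optimized in one sample predict less well in new samples.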
How do we gather validity evidence when local validation isn’t feasible?
Often, local validation may not be feasible due to logistics or practical constraints, including lack of access to large samples, inability to collect valid and reliable criterion measures, and lack of resources to conduct a comprehensive validity study. There are several strategies available to gather validity evidence in such situations: synthetic validity, test transportability, and validity generalization.
Synthetic validity
The process of inferring validity in a specific situation from a systematic analysis of jobs into their elements, a determination of test validity for these elements, and a combination or synthesis of the elemental validities into a whole. Research shows that synthetic validation is feasible and legally acceptable and resulting coefficients are comparable to validity coefficients resulting from criterion-related validation research.
Test transportability
To be able to use locally a test that has been validated elsewhere, without conducting a local validation study, evidence must be provided regarding the following:
- Results of a criterion-related validity study conducted at another location.
- Results of a test fairness analysis based on a study conducted at another location where technically feasible.
- Degree of similarity between the job performed locally and that performed at a location where the test has been used.
- Degree of similarity between the applicants in the prior and local settings.
Validity generalization (VG)
Meta-analyses are literature reviews that are quantitative, as opposed to narrative, in nature. They aim to understand the relationship between two variables across studies and the variability of this relationship across studies. Meta-analyses conducted with the goal of testing the situational specificity hypothesis have been labelled psychometric meta-analyses or VG studies.
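A bare-bones sketch of the core VG computation, using hypothetical validity coefficients: the sample-size-weighted mean correlation, the observed variance across studies, and the portion of that variance attributable to sampling error alone.

```python
import numpy as np

# Hypothetical local validity studies: observed correlations and sample sizes.
r = np.array([0.20, 0.35, 0.15, 0.28, 0.40])
n = np.array([90, 150, 60, 200, 120])

r_bar = np.sum(n * r) / np.sum(n)                        # sample-size-weighted mean r
observed_var = np.sum(n * (r - r_bar) ** 2) / np.sum(n)  # weighted observed variance
sampling_error_var = (1 - r_bar**2) ** 2 / (n.mean() - 1)
residual_var = max(observed_var - sampling_error_var, 0.0)

print(f"mean r = {r_bar:.3f}")
print(f"observed variance = {observed_var:.4f}")
print(f"variance expected from sampling error = {sampling_error_var:.4f}")
print(f"residual variance = {residual_var:.4f}  # small values argue against situational specificity")
```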
Empirical Bayes analysis
Local validation and VG both have weaknesses, so empirical Bayesian estimation has been proposed as a way to capitalize on the advantages of both approaches. This involves first calculating the average inaccuracy of a meta-analysis and of a local validity study under a wide variety of conditions and then computing an empirical Bayesian estimate, which is a weighted average of the meta-analytically derived and local study estimates.
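One simple way to form such a weighted average, sketched here under the assumption that both estimates come with estimates of their sampling variance (all numbers hypothetical), is to weight each by the inverse of its variance:

```python
# Hypothetical inputs: a meta-analytic validity estimate and a local-study estimate,
# each with an assumed estimate of its sampling variance.
rho_meta, var_meta = 0.30, 0.002     # meta-analytic mean and its uncertainty
r_local, var_local = 0.18, 0.010     # local validity coefficient and its sampling variance

w_meta, w_local = 1 / var_meta, 1 / var_local
ebayes = (w_meta * rho_meta + w_local * r_local) / (w_meta + w_local)
print(f"empirical Bayes estimate = {ebayes:.3f}")   # pulled toward the more precise estimate
```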
Application of alternative validation strategies: illustration
The various strategies available to gather validity evidence when the conduct of a local validation study isn’t possible are not mutually exclusive. There is evidence supporting validation efforts that include a combination of strategies.
Scores from measures of individual differences derive meaning only insofar as they can be related to other psychologically meaningful characteristics of behaviour. Reliability is a necessary but not sufficient property for two scores to be useful in HR research and practice.
Fairness is a social, not a statistical, concept. But when it is technically feasible, users of selection measures should investigate potential bias, which involves examining possible differences in prediction systems for racial, ethnic, and gender subgroups. A complete test bias assessment involves an examination of possible differences in standard errors of estimate and in slopes and intercepts of subgroup regression lines, not just subgroup validity coefficients.
Measures of individual differences are discriminatory. This makes sense, since in employment settings random acceptance of candidates can only lead to misuse of human and economic resources. Ignoring individual differences means abandoning the potential economic, societal, and personal advantages to be gained by taking into account individual patterns of abilities and job requirements. Matching people and jobs accurately starts with appraising individual patterns of abilities through selection measures, which are designed to discriminate and must possess adequate validity. It is recommended that users of selection measures investigate differences in patterns of association between test scores and other variables for groups based on variables like sex, ethnicity, age, etc. – known as differential prediction or predictive bias. Differential validity, by contrast, refers to differences in validity coefficients across groups.
How can we assess differential validity?
In a bivariate scatterplot of predictor and criterion data, each dot represents a person’s score on the predictor and the criterion. Depending on how the dots cluster – in the shape of an ellipse running through opposite quadrants (positive validity) or spread uniformly around the center across all quadrants (zero validity) – we can see what kind of validity exists. When overall validity is zero, the predictor is useless because it supplies no information of a predictive nature. So, there is no point in investigating differential validity in the absence of an overall pattern of predictor-criterion scores that allows for the prediction of relevant criteria.
Differential validity and adverse impact
An important consideration in assessing differential validity is whether the test in question produces adverse impact. Adverse impact means that members of one group are selected at substantially greater rates than members of another group; to determine whether this is the case, one compares selection ratios across the groups considered. Numerous possibilities exist when heterogeneous groups are combined in making predictions. When differential validity exists, the use of a single regression line, cut score, or decision rule can lead to serious errors in prediction. While one may legitimately question the use of race or gender as a variable in selection, the problem is really one of distinguishing between performance on the selection measure and performance on the job. The implementation of differential systems is difficult in practice because the fairness of any procedure that uses different standards for different groups is likely to be viewed with suspicion.
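A minimal sketch of the selection-ratio comparison described above, using hypothetical applicant counts; the four-fifths (80%) benchmark mentioned in the comment is a commonly used rule of thumb for flagging adverse impact and is not part of the text above.

```python
# Hypothetical applicant and hiring counts for two groups.
applicants = {"group_A": 200, "group_B": 80}
hires      = {"group_A": 60,  "group_B": 12}

selection_ratio = {g: hires[g] / applicants[g] for g in applicants}
print(selection_ratio)                       # group_A: 0.30, group_B: 0.15

# Adverse impact ratio: lower selection ratio divided by the higher one.
ratios = sorted(selection_ratio.values())
ai_ratio = ratios[0] / ratios[-1]
print(f"adverse impact ratio = {ai_ratio:.2f}  # < 0.80 is often treated as a flag")
```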
Differential validity: the evidence
Evidence of differential validity provides information only on whether a selection device should be used to make comparisons within groups. Evidence of unfair discrimination between subgroups cannot be inferred from differences in validity alone; mean job performance must also be considered. A selection procedure may be fair and yet predict performance inaccurately or discriminate unfairly yet predict performance within a given subgroup accurately.
Differential validity exists when (1) there is a significant difference between the validity coefficients obtained for two subgroups and (2) the correlations found in one or both of these groups are significantly different from zero. Single-group validity is different from but related to differential validity, where a given predictor exhibits validity significantly different from zero for one group only, and there is no significant difference between the two validity coefficients.
How do we assess differential prediction and moderator variables?
The possibility of predictive bias in selection procedures is a central issue in any discussion of fairness and EEO. It requires a consideration of the equivalence of prediction systems for different groups. Lack of differential validity does not assure lack of predictive bias. When there is differential prediction based on a grouping variable such as gender or ethnicity, this grouping variable is called a moderator.
Testing for predictive bias involves using moderated multiple regression (MMR), where the criterion measure is regressed on the predictor score, subgroup membership, and an interaction term between the two. One can test the overall hypothesis of differential prediction by comparing the R² of a model containing only the predictor with that of the model that adds subgroup membership and the interaction term. If there is a statistically significant difference, one then explores whether the differential prediction is due to differences in slopes, intercepts, or both.
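A sketch of the MMR comparison just described, using simulated hypothetical data (all variable names are illustrative): the restricted model contains only the predictor, the full model adds subgroup membership and the predictor-by-subgroup interaction, and the R² increment is tested.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
group = rng.integers(0, 2, size=n)                 # 0/1 subgroup membership
x = rng.normal(size=n)                             # predictor (e.g., test score)
y = 0.5 * x + 0.3 * group + rng.normal(size=n)     # criterion with an intercept difference

df = pd.DataFrame({"y": y, "x": x, "group": group})

restricted = smf.ols("y ~ x", data=df).fit()
full = smf.ols("y ~ x + group + x:group", data=df).fit()

# F test of the R^2 increment due to subgroup membership and the interaction.
f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f"R^2 restricted = {restricted.rsquared:.3f}, full = {full.rsquared:.3f}")
print(f"F({int(df_diff)}, {int(full.df_resid)}) = {f_stat:.2f}, p = {p_value:.4f}")
# If significant, inspect full.params: x:group indicates slope differences,
# group indicates intercept differences.
```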
Differential prediction: the evidence
When prediction systems are compared, usually slope-based differences are not found, and intercept-based differences, if found, are such that they favour members of the minority group. Could it be that researchers find lack of differential prediction partially because the criteria themselves are biased? The assumption based on research is that if performance data are provided by supervisors of the same ethnicity as the employees being rated, the chances that the criteria are biased are minimized or even eliminated. Evidence indicates an overall lack of differential prediction based on ethnicity and gender for cognitive abilities and other types of tests. When differential prediction is found, results indicate that differences lie in intercept differences and not slope differences across groups and that the intercept differences are such that the performance of women and ethnic minorities is typically overpredicted, which means that the use of test scores supposedly favours these groups.
Problems in testing for differential prediction
Aguinis et al. (2010) challenged conclusions based on 40 years of research on test bias in preemployment testing. Results indicate that the established and accepted procedure to assess test bias is itself biased: slope-based bias is likely to go undetected, and intercept-based bias favouring minority-group members is likely to be “found” when it does not exist. Preemployment testing is often described as the cradle of the I/O psychology field. These results open an important opportunity for I/O psychology researchers to revive the topic of test bias and make contributions with measurable and important implications for organizations and society.
Suggestions for improving the accuracy of slope-based differential prediction assessment
There are several remedies for the low-power problem of the MMR test. These include:
- Planning research design so that sample size is large enough to detect expected effect size.
- Implementing a synthetic validity approach to the differential prediction test.
- Drawing random samples from the population.
- Developing and using reliable measures.
And many more. The bottom line is to carefully plan a validation study so that the differential prediction test is technically feasible and the results credible.
What are further considerations regarding adverse impact, differential validity, and differential prediction?
The previous section on validity and adverse impact showed that a test can be valid and still yield adverse impact. So, the presence of adverse impact is not a sufficient basis for a claim of unfair discrimination. A selection measure is unfairly discriminatory when some specified group performs less well than a comparison group on the measure but performs just as well as the comparison group on the job for which the selection measure is a predictor. This is what is meant by differential prediction or predictive bias.
Adverse impact can be reduced by using available testing procedures and strategies. The following strategies are available before, during, and after test administration:
- Improve the recruiting strategy for minorities.
- Use cognitive abilities in combination with noncognitive predictors.
- Use multiple regression and other methods for combining predictors into a composite.
- Use measures of specific, as opposed to only general, cognitive abilities.
- Use differential weighting for the various criterion facets.
- Use alternate modes of presenting test stimuli.
- Enhance face validity.
- Implement test-score banding to select among the applicants: Tests are never perfectly reliable, and the relationship between test scores and criteria is never perfect. Test-score banding is a decision-making process that is based on these two premises. In this process, individuals with similar scores on a test are grouped together on the basis of the reliability of the selection instruments and their standard errors of measurement. This method has generated a lot of controversy.
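Test-score banding in the last item above can be sketched as follows, with hypothetical numbers and a fixed top-down band: a common approach sets the band width as a multiple of the standard error of the difference between two scores, which is the SEM times the square root of two.

```python
import math

# Hypothetical test characteristics and applicant scores.
sd_x, r_xx = 8.0, 0.90
scores = [92, 90, 88, 87, 83, 80, 76]

sem = sd_x * math.sqrt(1 - r_xx)   # standard error of measurement
sed = sem * math.sqrt(2)           # standard error of the difference between two scores
band_width = 1.96 * sed            # scores closer than this are treated as indistinguishable

top = max(scores)
band = [s for s in scores if s >= top - band_width]
print(f"band width = {band_width:.1f}")
print(f"scores treated as equivalent to the top scorer: {band}")
```

Within a band, selection can then be based on other job-relevant considerations, which is exactly what makes the method controversial.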
Adverse impact may occur even when there is no differential validity across groups. But the presence of adverse impact is likely to co-occur with findings of differential prediction, specifically with differences in intercepts.
How are fairness and the interpersonal context linked to employment testing?
So far, we have emphasized mostly technical issues around test fairness, but we should not minimize the importance of social and interpersonal processes in test settings. An organization’s adherence to fairness rules is not required simply because it is part of good professional practice. When applicants and examinees perceive unfairness in the testing procedures, their perceptions of the organization and the procedures can be negatively affected. So, to understand fairness and the impact of the selection system in place, it is necessary to conduct technical analyses on the data as well as consider the perceptions of the people who are subjected to the system. There are two dimensions of fairness from the perspective of applicants:
- Distributive (perceptions of fairness of the outcome).
- Procedural (perceptions of fairness of the procedures used to reach a hiring decision).
Employers do not have control over the distributive aspect, but they do have control over the procedural perceptions of the test processes. Although tests may be technically fair and lack predictive bias, the process of implementing testing and making selection decisions can be such that applicants still perceive unfairness.
How are fair employment and public policy linked?
Unfair discrimination is hardly endemic to employment testing, but testing is certainly a visible target for public attack. Public interest in measurement embraces three essential functions:
- Diagnosing needs.
- Assessing qualifications to do a job.
- Protecting against false credentials.
As far as the future is concerned, it is the position of these authors that staffing procedures will yield better and fairer results when we can specify in detail the linkages between the personal characteristics of individuals and the requirements of jobs for which the procedures are most relevant, taking contextual factors into consideration.
Fairness is a social, not a statistical, concept. But when it is technically feasible, users of selection measures should investigate potential bias, which involves examining possible differences in prediction systems for racial, ethnic, and gender subgroups. A complete test bias assessment involves an examination of possible differences in standard errors of estimate and in slopes and intercepts of subgroup regression lines, not just subgroup validity coefficients.
Organizations recruit in order to add to, maintain, or readjust their workforces. Prior planning is critical to the recruiting process and includes:
- The establishment of workforce plans.
- Specification of time.
- Validation of employment standards.
The internet is revolutionizing the recruitment process, opening up labour markets and removing geographical constraints. Both cost and quality measures are necessary to evaluate the success of the recruitment effort.
When human resources must be expanded or replenished, a recruiting system of some kind must be established. Advances in technology, as well as growing intensity of competition in domestic and international markets, have made recruitment a top priority as organizations struggle continually to gain competitive advantage through people. Recruitment is a business that demands serious attention from management because any business strategy will falter without the talent to execute it. It is difficult to find good workers and talent acquisition is becoming more difficult. There is a levelling of the information playing field brought about by Web technology. As open systems, organizations demand a dynamic equilibrium for their own maintenance, survival, and growth.
What is recruitment planning?
The process of recruitment planning starts with a clear specification of HR needs and the time frame within which such requirements must be met. This is especially relevant to the setting of workforce diversity goals and timetables. Labour-force availability and internal workforce representation of women and minorities are critical factors in this process.
Two other important questions need to be addressed: whom to recruit and where to recruit. It is important to answer both questions to determine recruitment objectives. Objectives are also critical to recruitment evaluation, if an employer wants to compare what it hoped to accomplish with actual recruitment outcomes.
Among the questions an employer might address in establishing a recruitment strategy are:
- When to begin recruiting?
- What message to communicate to potential job applicants?
- Whom to use as recruiters?
Primed with a comprehensive workforce plan for the various segments of the workforce, recruitment planning can begin. To do this, three key parameters must be estimated: the time, the money, and the staff necessary to achieve a given hiring rate. The basic statistic needed to estimate these parameters is the ‘number of leads needed to generate a given number of hires in a given time’. The easiest way to do this is based on prior recruitment experiences.
- Yield ratios are the ratios of leads to invites, invites to interviews, interviews to offers, and offers to hires obtained over some specified time period.
- Time-lapse data provide the average intervals between events, like the extension of an offer to a candidate and acceptance or between acceptance and addition to the payroll.
If no experience data exist, it is necessary to use best guesses or hypotheses and then monitor performance as the operational recruitment program unfolds (a short sketch of the calculation follows).
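As referenced above, a short sketch with hypothetical yield ratios of working backwards from a hiring target to the number of leads needed:

```python
# Hypothetical stage-to-stage yield ratios from prior recruiting experience.
yield_ratios = {
    "leads_to_invites":      0.50,
    "invites_to_interviews": 0.60,
    "interviews_to_offers":  0.40,
    "offers_to_hires":       0.75,
}

target_hires = 12
needed = target_hires
# Work backwards through the pipeline: divide by each yield ratio in reverse order.
for stage, ratio in reversed(list(yield_ratios.items())):
    needed = needed / ratio
    print(f"{stage:25s} ratio {ratio:.2f} -> need {needed:.0f} at the earlier stage")
# 'needed' now approximates the number of leads required to generate 12 hires.
```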
A labour market is a geographical area within which the forces of supply interact with the forces of demand and thereby determine the price of labour. But it is impossible to define the boundaries of a local labour market in a clear-cut manner since geographical areas where employers extend their recruiting efforts depend partly on the type of job being filled.
Traditionally, employees are brought into organizations through a small number of entry-level jobs and are then promoted up through a hierarchy of increasingly responsible and lucrative positions. But recently, internal labour markets have weakened, high-level jobs have not been restricted to internal candidates, and new employees have been hired from the outside at almost all levels.
Staffing requirements and cost analyses
Yield ratios and time-lapse data are valuable for estimating recruiting staff and time requirements. Recruitment planning is not complete, however, until the costs of alternative recruitment strategies have been estimated. At the most general level, the gross cost-per-hire figure can be determined by dividing the total cost of recruiting (TCOR) by the number of individuals hired (NH). This is a useful first step but falls short of the cost information necessary for thorough advance planning and later evaluation of the recruiting effort (a brief calculation is sketched after this list). The following cost estimates are also essential:
- Staff costs.
- Operational costs.
- Overhead.
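A small sketch with hypothetical figures of the gross cost-per-hire calculation described above, with the cost categories broken out; as the text notes, the gross figure alone is too coarse for planning.

```python
# Hypothetical recruiting costs for one period.
costs = {"staff": 120_000, "operational": 45_000, "overhead": 30_000}
hires = 25

total_cost_of_recruiting = sum(costs.values())          # TCOR
gross_cost_per_hire = total_cost_of_recruiting / hires  # TCOR / NH

print(f"TCOR = {total_cost_of_recruiting:,}")
print(f"gross cost per hire = {gross_cost_per_hire:,.0f}")
for category, amount in costs.items():
    print(f"  {category:12s} per hire: {amount / hires:,.0f}")
```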
Source analysis
Analysis of recruiting sources facilitates effective planning. Three types of analyses are typical: cost per hire, time lapse from candidate identification to hire, and source yield. The most expensive sources generally are private employment agencies and executive-search firms. Time-lapse studies of recruiting sources are very useful for planning purposes, since the time from initial contact to report on board varies across sources. Source yield is the ratio of the number of candidates generated from a particular source to hires from that source. After examining source yield, we are almost ready to start recruiting operations. Recruiting efficiency can be heightened once employment requirements are defined thoroughly in advance. Research shows clearly that characteristics of organizations and jobs have greater influence on the likelihood of job acceptance by candidates than do characteristics of the recruiter. But there are at least three reasons why the recruiters might matter. Different recruiters may be important because:
- They vary in the amount of job-related information they possess (and therefore can share).
- They differ in terms of their credibility in the eyes of recruits.
- They signal different things to job candidates.
Operations
The first step in recruiting operations is to examine internal sources for qualified or qualifiable candidates. This is especially true of large organizations with globally distributed workforces that are likely to maintain comprehensive talent inventories with detailed information on each employee. These inventories can facilitate global staffing and expatriate assignments.
External sources for recruiting applicants
There are a variety of external recruiting sources available. Available sources include:
- Advertising.
- Employment agencies.
- Educational institutions.
- Professional organizations.
- Military.
- Labour unions.
- Career fairs.
- Outplacement firms.
- Direct application (walk-ins, write-ins, online applicants).
- Intracompany transfers and company retirees.
- Employee referrals.
In terms of the most popular sources used by employers, evidence indicates that:
- Informal contacts are widely used and effective at all occupational levels.
- Use of the public employment service declines as required job skills increase.
- The internal market is a major recruitment method.
- Larger firms are the most frequent users of direct applications, the internal market, and rehires of former employees.
- Concerning job advertisements, those that contain more information result in the job opening being viewed as more attractive.
Managing recruiting operations
Administratively, recruitment is one of the easiest activities to foul up – with potentially long-term negative publicity for the firm. Traditionally, recruitment was intensively paper based, but today the entire process is computer based. For example, with Yahoo! Resumix’s Hiring Gateway, automation replaces the entire manual process. These online recruiting processes often reduce the cost per hire and shorten hiring cycles.
How are measurement, evaluation, and control involved in the recruiting process?
If advance recruitment planning has been thorough, later evaluation of the recruitment effort is simplified considerably. A number of cost and quality analyses might be performed, but it is critical to choose those that are strategically most relevant to a given organization. Another consideration is choosing measures of recruiting success that are most relevant to various stages in the recruitment process. Ultimately, the success of recruitment efforts depends on the number of successful placements made. Other possible metrics include:
- Cost of operations.
- Cost per hire.
- Cost per hire by source.
- Total résumés received.
- Source yield and source efficiency.
- Acceptance/offer ratio.
- Offer/interview ratio.
What does the job search look like from the applicant’s perspective?
How do individuals identify, investigate, and decide among job opportunities? Research has found that many job applicants:
- Have an incomplete and/or inaccurate understanding of what a job opening involves.
- Are not sure what they want from a position.
- Lack self-insight with regard to their own knowledge, skills, and abilities.
- Cannot accurately predict how they will react to the demands of a new position.
At the same time, evidence indicates that the job-choice process is highly social, with friends and relatives playing a large role in the active phase of job searching.
- Networking is very important because it is often casual contacts that point people to their next jobs.
- Applicants should exploit the vast capabilities of the Internet, using various search engines and tools.
- Interviews can focus on recruitment per se but also be dual-purpose interviews whose objective is recruitment as well as selection.
- Meta-analytic evidence revealed that work environment and organizational image are strong predictors of organizational attraction. But attraction is not directly related to job choice; the relationship is at least partially mediated by job-pursuit and acceptance intentions.
- Organizational image, but not mere familiarity, has been found to be related to attitudes toward an organization.
- It was found that most applicants prefer decentralized organizations and performance-based pay to centralized organizations and seniority-based pay. But this varies with subjects’ need for achievement.
Realistic job previews (RJP)
Individuals possessing inflated job expectations are thought to be more likely to become dissatisfied with their positions and more likely to quit than applicants who have more accurate expectations. A way to counter this tendency is to provide realistic information to job applicants. Generally, though, it has been found that when the naïve expectations of job applicants are lowered (through a realistic job preview) to match organizational reality, results show that:
- There is a small tendency of applicants to withdraw.
- Job-acceptance rates are lower.
- Job performance is unaffected.
- Job survival tends to be higher.
Finally, the effect of RJPs on voluntary turnover is moderated by job complexity. Smaller reductions in turnover can be expected in low-complexity jobs than in high-complexity jobs.
At the level of the individual job applicant, RJPs are likely to have the greatest impact when the applicant:
- Can be selective about accepting a job offer.
- Has unrealistic job expectations.
- Would have difficulty coping with job demands without the RJP.
RJPs should be balanced in their orientation: they should be conducted to enhance overly pessimistic expectations and to reduce overly optimistic ones. This helps bolster the applicant’s perceptions of the organization as caring, trustworthy, and honest. However, a multimethod approach to RJPs makes the most sense if the objective is to develop realistic expectations among job applicants.
Organizations recruit in order to add to, maintain, or readjust their workforces. Prior planning is critical to the recruiting process and includes:
- The establishment of workforce plans.
- Specification of time.
- Validation of employment standards.
The internet is revolutionizing the recruitment process, opening up labour markets and removing geographical constraints. Cost and quality analyses are necessary to evaluate the success of the recruitment effort.
Despite dramatic changes in the structure of work, individual jobs remain the basic building blocks necessary to achieve broader organizational goals. The objective of job analysis is to define each job in terms of the behaviours necessary to perform it and to develop hypotheses about the personal characteristics necessary to perform those behaviours. Job descriptions specify the work to be done. Job specifications indicate the personal characteristics necessary to do the work.
To appreciate why the analysis of jobs and work is relevant and important, consider the following situation. If we start a brand-new organization, or a new division of a larger organization, we are immediately faced with a host of problems, many of which involve decisions about people. What are the broad goals of the new organization/division, and how should it be structured in order to achieve these goals? How many positions will we have to staff, and what will be the nature of these positions? What knowledge, skills, abilities, and other characteristics (KSAOs) will be required? Before any decisions can be made, we must first define the jobs in question, specify what employee behaviours are necessary to perform them, and then develop hypotheses about the personal characteristics necessary to perform those work behaviours. This process is known as job analysis.
Job analysis can underpin an organization’s structure and design by clarifying roles and is a fundamental tool that can be used in every phase of employment research and administration.
What terminology is relevant?
HR has its own jargon:
- An element: the smallest unit into which work can be divided without analyzing the separate motions, movements, and mental processes involved.
- A task: a distinct work activity carried out for a distinct purpose.
- A duty: includes a large segment of the work performed by an individual and may include any number of tasks.
- A position: consists of one or more duties performed by a given individual in a given firm at a given time. There are as many positions as there are workers.
- A job: a group of positions that are similar in their significant duties. May involve only one position, depending on the size of the organization.
- A job family: a group of two or more jobs that either call for similar worker characteristics or contain parallel work tasks as determined by job analysis.
- An occupation: a group of similar jobs found in different organizations and times.
- A vocation: similar to an occupation, but the term vocation is more likely to be used by a worker than by an employer.
- A career: a sequence of positions, jobs, or occupations that one person engages in during their working life.
Aligning method with purpose
It is important to emphasize that there’s a wide variety of methods and techniques for collecting information about jobs and work. They vary on a number of dimensions and this variation creates choices.
A job analyst is confronted with at least eight different choices. These choices, briefly, include the following:
- Activities or attributes?
- General or specific?
- Qualitative or quantitative?
- Taxonomy-based or blank slate?
- Observers or incumbents and supervisors?
- KSAs or KSAOs?
- Single job or multiple-job comparison?
- Descriptive or prescriptive?
How do we define a job?
Job analysis consists of defining a job, specifying what employee behaviours are necessary to perform it, and developing hypotheses about the personal characteristics necessary to perform those work behaviours. The analyst produces a job description or written statement of what a worker actually does, how he or she does it, and why. This information can then be used to determine what KSAOs are required to perform the job. Elements include:
- Job title.
- Job activities and procedures.
- Working conditions and physical environment.
- Social environment.
- Conditions of employment.
This is a traditional, task-based job description. But some organizations are starting to develop behavioural job descriptions, which comprise broader abilities that are easier to alter as technologies and customer needs change.
What are job specifications?
Job specifications represent the KSAOs deemed necessary to perform a job. For example, astronauts and test pilots are required to have 20/20 uncorrected vision. But many job specifications are not rigid and inflexible and serve only as guidelines for recruitment, selection, and placement. Job specifications depend on the level of performance deemed acceptable and the degree to which some abilities can be substituted for others. Some people might be restricted from certain jobs because the specifications are inflexible, artificially high, or invalid. So, job specifications should indicate minimally acceptable standards for selection and later performance.
Establishing minimum qualifications
Job specifications identify personal characteristics that are valid for screening, selection, and placement. How are minimal qualifications (MQs) set? Levine et al. (1997) developed a methodology for determining MQs that was court approved.
- Working independently with a draft list of tasks and KSAs for a target job, separate groups of subject matter experts (SMEs) rate tasks and KSAs on a set of four scales.
- Ratings are subsequently aggregated in terms of means or percentages, so consensus is not required.
- Tasks and KSAs meeting the criteria are used to form domains of tasks and KSAs from which MQs are derived.
- SMEs provide suggested types or amounts of education, work experience, and other data they view as appropriate for MQs.
- Job analysts prepare a draft set of MQ profiles.
- Then a new set of SMEs establishes a description of a barely acceptable employee; decides if the list of MQ profiles is complete; and rates the finalized profile on level and clarity.
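As a rough illustration of the aggregation step in this methodology, the sketch below averages hypothetical SME ratings per task and keeps only tasks whose mean rating meets an illustrative cutoff on every scale. The scale names and the cutoff are assumptions for the example, not part of Levine et al.’s procedure.

```python
# Illustrative sketch of the aggregation step: SME ratings are averaged (no
# consensus needed) and tasks enter the MQ domain only if their mean rating
# meets a cutoff on every scale. Scale names and the cutoff are assumptions.
ratings = {
    "prepare case files": {"importance": [4, 5, 4], "needed_at_entry": [5, 4, 5]},
    "draft appellate briefs": {"importance": [2, 3, 2], "needed_at_entry": [2, 2, 3]},
}
CUTOFF = 3.5   # hypothetical minimum mean rating

def mean(values):
    return sum(values) / len(values)

mq_domain = [
    task for task, scales in ratings.items()
    if all(mean(scale_ratings) >= CUTOFF for scale_ratings in scales.values())
]
print(mq_domain)   # -> ['prepare case files']
```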
How reliable and valid is job-analysis information?
A recent meta-analysis identified average levels of inter- and intra-rater reliability of job-analysis ratings. Job descriptions are valid to the extent that they accurately represent job content, environment, and conditions of employment. Job specifications are valid to the extent that persons possessing the personal characteristics believed necessary for successful job performance in fact do perform more effectively on their jobs than persons lacking such personal characteristics. However, many job-analysis processes are based on human judgment, and this judgment is often fallible. Sources of inaccuracy can be due to social and cognitive factors.
- Social sources: settings where groups, rather than individuals, make job-analysis judgments.
- Cognitive sources: reflect problems that result primarily from our limited ability to process information.
In actual organizational settings, there is not a readily available standard to assess the accuracy of job analysis. Job analysis reflects subjective judgment and is best viewed as an information-gathering tool to aid researchers in deciding what to do next.
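As a deliberately simple illustration of the inter-rater reliability mentioned above, one could correlate two analysts’ importance ratings across the same set of tasks. The meta-analytic estimates referred to in the text are based on more sophisticated indices; the ratings below are invented.

```python
# Simple illustration: the correlation between two analysts' importance
# ratings across the same tasks as one index of inter-rater reliability
# (ratings invented; cited meta-analytic work uses more refined estimates).
import numpy as np

rater_a = np.array([5, 4, 2, 3, 5, 1, 4])   # importance ratings for tasks 1-7
rater_b = np.array([4, 4, 3, 3, 5, 2, 4])

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"inter-rater reliability (Pearson r) = {r:.2f}")
```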
How can we obtain job information?
Many methods exist for describing jobs, but they differ widely in the assumptions they make. Some are work oriented and some are worker oriented, and each method has its own advantages and disadvantages. These are discussed below.
Direct observation and job performance
Observation of job incumbents and actual performance of the job by the analyst are two methods of gathering job information. Job observation is appropriate for jobs requiring a lot of manual, standardized, short-cycle activities, and job performance is appropriate for jobs that the job analyst can learn readily. Observations should include a representative sample of job behaviours. A job analyst should also be unobtrusive, lest the measuring process itself distort what is being measured; they shouldn’t get ‘in the way’. Observation and job performance are inappropriate for jobs requiring mental activity and concentration (lawyer, architect, network analyst, etc.).
A functional job analysis (FJA) is often used to record observed tasks and attempts to identify exactly what the worker does in the job, as well as results of the worker’s behaviour (what gets done). There are certain work settings where direct, in-person observation isn’t feasible, for example in restaurants. But it is possible to obtain good views of work activity using digital cameras. The video can then be reviewed and coded offline.
Interview
The interview is probably the most commonly used technique for establishing the tasks, duties, and behaviours necessary both for standardized or non-standardized activities and for physical as well as mental work. The worker acts as their own observer and can report activities/behaviours that wouldn’t often be observed as well as those occurring over long time spans. The worker can report information that may not be available to the analyst from any other source. Questions used by interviewers may be checked for their appropriateness against the following criteria:
- The question should be related to the purpose of analysis.
- Wording should be clear and unambiguous.
- Question shouldn’t ‘lead’ the respondent.
- Question should not be ‘loaded’.
- Question should not ask for knowledge or information the interviewee doesn’t have.
- There shouldn’t be personal or intimate material that the interviewee might resent.
SME panels
Panels of SMEs are often convened for different purposes in job analysis:
- To develop information on tasks or KSAOs to be used in constructing job-analysis questionnaires.
- In test development, to establish linkages between tasks and KSAOs, KSAOs and test items, and tasks and test items.
Failure to include a broad cross-section of experience in a sample of SMEs could lead to distorted ratings. SMEs are encouraged to discuss issues and resolve disagreements openly.
Questionnaires
Questionnaires are usually standardized and require respondents either to check items that apply to a job or to rate items in terms of their relevance to the job in question. Generally, they are cheaper and quicker to administer than other job-analysis methods and can be completed at the respondent’s leisure, thereby avoiding lost production time. However, they are often time consuming and expensive to develop, and ambiguities or misunderstandings that might have been clarified in an interview are likely to go uncorrected. It can also be more difficult to follow up and augment information obtained in the questionnaires. Task inventories and checklists are questionnaires that are used to collect information about a particular job or occupation.
The position analysis questionnaire
Since task inventories are work oriented and make static assumptions about jobs, behavioural implications are hard to establish. Conversely, worker-oriented information describes how a job gets done and is concerned with generalized worker behaviours. The position analysis questionnaire (PAQ) is based on statistical analyses of primarily worker-oriented job elements and lends itself to quantitative statistical analysis. It consists of 194 items/job elements that fall into the following categories:
- Information input.
- Mental processes.
- Work output.
- Relationships with other persons.
- Job context.
Individual items require the respondent either to check a job element if it applies or to rate it on an appropriate rating scale such as importance, time, or difficulty. Personal and organizational factors seem to have little impact on PAQ results. Research seems to indicate that the PAQ is more suited for use with blue-collar manufacturing jobs than it is for professional, managerial, and some technical jobs. Some limitations include:
- Behavioural similarities in jobs may mask genuine task differences between them.
- A second problem with the PAQ is readability: a college-graduate reading level is required to comprehend the items.
To make the worker-oriented approach more widely applicable, the job element inventory (JEI) was developed: a structured questionnaire modelled after the PAQ, but with a lower reading level.
Fleishman job analysis survey (F-JAS)
The F-JAS is one of the most thoroughly researched approaches to job analysis. Its objective is to describe jobs in terms of the abilities required to perform them. The ability-requirements taxonomy is intended to reflect the fewest independent ability categories that describe performance in the widest variety of tasks.
Critical incidents
The critical-incidents approach involves the collection of a series of anecdotes of job behaviour that describe especially good or especially poor job performance. The method has value as it typically yields static and dynamic dimensions of jobs. Each anecdote describes:
- What led up to the incident and the context in which it occurred.
- Exactly what the individual did that was so effective or ineffective.
- The perceived consequences of this behaviour.
- Whether or not such consequences were within the control of the employee.
What are other sources of job information and job-analysis methods?
Several sources of job information are available and may serve as useful supplements to the methods already described.
- The job analysis wizard: it capitalizes on advances in computer technology and the availability of sophisticated information search-and-retrieval methods.
- Incorporating personality dimensions into job analysis.
- Strategic or future-oriented job analyses.
- Competency models.
What are the interrelationships among jobs, occupational groups, and business segments?
The general problem of how to group jobs together for purposes of cooperative validation, validity generalization, and administration of performance appraisal, promotional, and career-planning systems has a long history. Jobs may be grouped based on the abilities required to do them, task characteristics, behaviour description, or behaviour requirements.
Occupational information – from the Dictionary of Occupational Titles (DOT) to the O*Net
The DOT contains descriptive information on more than 12,000 jobs. But that information is job specific and doesn’t provide a cross-job organizing structure that would allow comparisons of similarities and differences across jobs. To solve this, the U.S. Department of Labor sponsored a large-scale research project called the Occupational Information Network (O*Net). O*Net is a national occupational information system that provides comprehensive descriptions of the attributes of workers and jobs. It is based on four broad design principles:
- Multiple descriptor domains that provide ‘multiple windows’ into the world of work.
- A common language of work and worker descriptions that covers the entire spectrum of occupations.
- Description of occupations based on a taxonomy from broad to specific.
- A comprehensive content model that integrates the previous three principles.
The O*Net remains a work in progress. The basic framework for conceptualizing occupational information is now in place and future research will enhance the value of the O*Net. Once behavioural requirements have been specified, organizations can increase their effectiveness if they plan judiciously for the use of available human resources.
People are among any organization’s most critical resources; yet systematic approaches to workforce planning (WP), forecasting, and action programs designed to provide trained people to fill needs for particular skills are still evolving. Ultimate success in WP depends on many factors: the degree of integration of WP with strategic planning activities, the quality of the databases used to produce the talent inventory and forecasts of workforce supply and demand, the calibre of the action programs established, and the organization’s ability to implement the programs.
The judicious use of human resources is a perpetual problem in society. Emphasis on improved HR practice has arisen as a result of recognition by many top managers of the crucial role that talent plays in gaining and sustaining a competitive advantage in a global marketplace. It’s the source of innovation and renewal.
What is workforce planning?
The purpose of WP is to anticipate and respond to needs emerging within and outside the organization, determine priorities, and allocate resources where they can do the most good. WP can mean different things to different people, but general agreement exists on its ultimate objective – the wisest, most effective use of scarce or abundant talent in the interest of the individual and the organization. So, we can broadly define WP as an effort to anticipate future business and environmental demand on an organization and to meet the HR requirements dictated by these conditions. This view of WP suggests several interrelated activities that together comprise a WP system:
- Talent inventory: assess current resources and analyze current use of employees.
- Workforce forecast: predict future HR requirements.
- Action plans: enlarge the pool of qualified individuals by recruitment, selection, training, placement, transfer, promotion, development, and compensation.
- Control and evaluation: provide closed-loop feedback to the rest of the system and monitor the degree of attainment of HR goals and objectives.
With a clear understanding of the surpluses or deficits of employees in terms of their numbers, skills, and experience that are projected at some future point in time, it is possible to initiate action plans to rectify projected problems.
Strategic business and workforce plans
Strategies are the means that organizations use to compete, for example, through innovation, quality, speed, or cost leadership. How firms compete with each other and how they attain and sustain competitive advantage are the essence of what is known as strategic management. But organizations need to plan in order to develop strategies. Planning leads to success and helps organizations do a better job of coping with change. It also requires managers to define the organization’s objectives (thereby providing context, meaning, and direction for work), and without objectives, effective control is impossible.
- Levels of planning: planning can take place at strategic, operational, or tactical levels. Strategic planning is long range in nature and differs from shorter-range operational or tactical planning. Strategic planning decisions involve substantial commitments of resources, resulting in either a fundamental change in the direction of a business or in the speed of its development along the path it’s travelling.
- The strategic planning process: strategic planning is the process of setting organizational objectives and deciding on comprehensive action programs to achieve these objectives. Strategic planning typically includes the following processes:
- Defining company philosophy
- Formulating company and divisional statements of identity, purpose, and objectives
- Evaluating the company’s strengths, weaknesses, opportunities, and threats (SWOT)
- Determining the organization design
- Developing appropriate strategies for achieving objectives
- Devising programs to implement the strategies
An alternative approach
The methodology above is a conventional view of the strategy-development process, and it answers two fundamental questions that are critical for managers:
- What business are we in?
- How shall we compete?
While this is an exciting exercise for those crafting the strategy, it is not particularly engaging to those charged with implementing the strategy. In the alternative, or values-based, approach to developing strategy, organizations begin with a set of fundamental values that are energizing and capable of unlocking the human potential of their people. They then use these values to develop, or evaluate, management policies and practices that express organizational values in pragmatic ways on a day-to-day basis.
Payoffs from strategic planning
The biggest benefit of strategic planning is its emphasis on growth, as it encourages managers to look for new opportunities rather than simply cutting workers to reduce expenses. But the danger of strategic planning is that it may lock companies into a particular vision of the future – one that may not come to pass. So how does one plan for the future when the future changes so quickly? The answer is to make the planning process more democratic: it needs to include a wide range of people, from line managers to customers to suppliers.
Relationship of HR strategy to business strategy
HR strategy parallels and facilitates implementation of the strategic business plan. HR strategy is a set of priorities a firm uses to align its resources, policies, and programs with its strategic business plan. It requires a focus on planned major changes in the business and on critical issues. Planning proceeds top-down while execution proceeds bottom-up. There are four links in Boudreau’s model of HR strategy, starting from the top:
- How do we compete?
- What must we execute well?
- How do we delight our internal and external customers?
- What competencies, incentives, and work practices support high performance?
What is a talent inventory?
A talent inventory is a fundamental requirement of an effective WP system. It is an organized database of the existing skills, abilities, career interests, and experience of the current workforce. Prior to actual data collection, certain questions must be addressed:
- What should be included in the inventory?
- What specific information must be included for each individual?
- How can this information best be obtained?
- What is the most effective way to record such information?
- How can inventory results be reported to top management?
- How often must this information be updated?
- How can the security of this information be protected?
Answers to these questions will provide direction and scope to subsequent efforts. When a talent inventory is linked to other databases, the set of such information can be used to form a complete human resource information system that is useful in a variety of situations.
Information type
Specific information to be stored in the inventory varies across organizations. At a general level, information is typically included in a profile developed for each individual, including the following:
- Current position information
- Previous positions
- Other significant work experience
- Education
- Language skills
- Training and development programs attended
- Awards received
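A talent-inventory profile of this kind maps naturally onto a simple record structure. The sketch below is illustrative only; the field names follow the list above and would differ across organizations.

```python
# Illustrative record structure for one individual's talent-inventory profile;
# field names follow the list above and are assumptions, not a standard schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TalentProfile:
    employee_id: str
    current_position: str
    previous_positions: List[str] = field(default_factory=list)
    other_work_experience: List[str] = field(default_factory=list)
    education: List[str] = field(default_factory=list)
    language_skills: List[str] = field(default_factory=list)
    training_programs: List[str] = field(default_factory=list)
    awards: List[str] = field(default_factory=list)

profile = TalentProfile(
    employee_id="E1027",
    current_position="Plant scheduler",
    education=["BSc Industrial Engineering"],
    language_skills=["English", "Spanish"],
)
print(profile.current_position, profile.language_skills)
```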
Uses
Although secondary uses of the talent-inventory data may emerge, it’s important to specify the primary uses at the concept-development stage. This provides direction and scope regarding who and what kinds of data should be included. Some common uses of a talent inventory include identification of candidates for promotion, succession planning, assignments to special projects, transfer, training, work-force diversity planning, and more.
How can we forecast workforce supply and demand?
Talent inventories and workforce forecasts must complement each other; an inventory of present talent is not particularly useful for planning purposes unless it can be analyzed in terms of future workforce requirements. But workforce requirement forecasts are useless unless they can be evaluated relative to the current and projected future supply of workers available internally.
Workforce forecasts are attempts to estimate future labour requirements. There are two component processes in this task:
- Anticipating the supply of human resources (inside and outside the organization) at some future time period.
- Anticipating organizational demand for various types of employees.
Forecasts of labour supply should be considered separately from forecasts of demand because each depends on a different set of variables and assumptions.
External workforce supply
When an organization plans to expand, recruitment and hiring of new employees may be anticipated. Even when an organization isn’t growing, the aging of the present workforce, coupled with normal attrition, makes some recruitment and selection a virtual certainty for most firms. It’s therefore wise to examine forecasts of the external labour market for the kinds of employees that will be needed.
Internal workforce supply
An organization’s current workforce provides a base from which to project the future supply of workers. It is a form of risk management. Perhaps the most common type of internal supply forecast is the leadership-succession plan.
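Before turning to succession planning, one commonly used quantitative technique for projecting internal supply (not prescribed by the text) is a transition-matrix, or Markov-type, model: current headcounts by job category are multiplied by historical movement rates to estimate next period’s internal supply. The categories, rates, and headcounts below are invented.

```python
# Sketch of a transition-matrix (Markov-type) projection of internal supply.
# Job categories, movement rates, and headcounts are invented; the text does
# not prescribe a specific forecasting method.
import numpy as np

states = ["junior", "senior", "manager", "exit"]
# Row i: probability of moving from state i to each state over one year.
T = np.array([
    [0.70, 0.15, 0.00, 0.15],   # junior
    [0.00, 0.75, 0.10, 0.15],   # senior
    [0.00, 0.00, 0.85, 0.15],   # manager
    [0.00, 0.00, 0.00, 1.00],   # exit (absorbing)
])
headcount = np.array([200.0, 120.0, 40.0, 0.0])   # current workforce by category

projected = headcount @ T   # expected internal supply one year from now
for state, n in zip(states, projected):
    print(f"{state}: {n:.0f}")
```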
Leadership-succession plan
Succession planning is one activity that is pervasive, well accepted, and integrated with strategic business planning among firms that do WP. Succession planning focuses on a few key objectives:
- Identify top talent (high-potential individuals).
- Develop pools of talent for critical positions.
- Identify development plans for key leaders.
What is workforce demand?
Demand forecasts are largely subjective, mainly because of multiple uncertainties regarding trends like changes in technology; consumer attitudes and patterns of buying behaviour; local, national, and international economics; number, size, and types of contracts won or lost; and government regulations that might open new markets or close off old ones. These forecasts are consequently more subjective than quantitative, though in practice a combination of the two is often used.
Pivotal jobs
Pivotal jobs drive strategy and revenue and differentiate an organization in the marketplace. The objective is to deconstruct the business strategy to understand its implications for talent.
Assessing future workforce demand
To develop a reasonable estimate of the numbers and skills mix of people needed over some future time period, it’s important to tap into the collective wisdom of managers who are close to the scene of operations.
How accurate must demand forecasts be?
Accuracy in forecasting the demand for labour varies considerably by firm and by industry type. Factors like the duration of the planning period, the quality of the data on which forecasts are based, and the degree of integration of WP with strategic business planning all affect accuracy.
Integrating supply and demand forecasts
If forecasts are to be genuinely useful to managers, they must result in an end product that is understandable and meaningful. Initial attempts at forecasting may result in voluminous printouts, but what’s really required is a concise statement of projected staffing requirements that integrates supply and demand forecasts.
Matching forecast results to action plans
Workforce demand forecasts affect a firm’s programs in many different areas, including recruitment, selection, performance management, training, transfer, and many other types of career-enhancement activities. These activities comprise “action programs” that help organizations adapt to changes in their environments.
What are control and evaluation?
Control and evaluation are necessary features of any planning system, but organization-wide success in implementing HR strategy won’t occur through disjointed efforts. Broader systems are necessary to monitor performance. The function of control and evaluation is to guide the WP activities through time, identifying deviations from the plan and their causes.
Goals and objectives are fundamental to this process; they serve as yardsticks in measuring performance. Qualitative as well as quantitative standards may be necessary in WP, though quantitative standards are preferable, since numbers make the control and evaluation process more objective and deviations from desired performance may be measured more precisely.
Monitoring performance
Effective control systems include periodic sampling and measurement of performance. In long-range planning efforts, shorter-run, intermediate objectives must be established and monitored in order to serve as benchmarks on the path to more remote goals. Shorter-run objectives allow the planner to monitor performance through time and to take corrective action before the ultimate success of longer-range goals is jeopardized.
Identifying an appropriate strategy for evaluation
We noted earlier that qualitative and quantitative objectives can both play useful roles in WP. But the nature of evaluation and control should always match the degree of development of the rest of the WP process. An obvious advantage of quantitative information is that it highlights potential problem areas and can provide the basis for constructive discussion of the issues.
Responsibility for workforce planning
WP is a basic responsibility of every line manager in the organization. The line manager ultimately is responsible for integrating HR management functions, which include planning, supervision, performance appraisal, and job assignment. The role of the HR professional is to help line managers manage effectively by providing tools, information, training, and support.
To summarize, we plan in order to reduce the uncertainty of the future. We don’t have an infinite supply of any resources, and it’s important not only that we anticipate the future, but that we also actively try to influence it. Ultimate WP success rests on the quality of the action programs established to achieve HR objectives and on the organization’s ability to implement these programs.
There are many selection methods available. When selection is done sequentially, the earlier stages often are called screening, with the term selection being reserved for the more intensive final stages. New technological developments now allow for the collection of information using procedures other than the traditional paper-and-pencil format. These technologies allow for more flexibility regarding data collection, but also present some unique challenges.
What are recommendations and reference checks?
Most initial screening methods are based on the applicant’s statement of what he or she did in the past. But recommendations and reference checks rely on the opinions of relevant others to help evaluate what and how well the applicant did in the past. Generally, four kinds of information are obtainable:
- Employment and educational history.
- Evaluation of the applicant’s character, personality, and interpersonal competence.
- Evaluation of the applicant’s job performance ability.
- Willingness to rehire.
Certain preconditions must be satisfied for a recommendation to make a meaningful contribution to the screening and selection process.
- The recommender must have had an adequate opportunity to observe the applicant in job-relevant situations.
- They must be competent to make such evaluations.
- They must be willing to be open and candid.
- Evaluations must be expressed so that the potential employer can interpret them in the manner intended.
Decisions are often made on the basis of letters of recommendation, although some consider them of little value because they do not discriminate sufficiently between candidates. If the letters are to be meaningful, they should contain the following information:
- Degree of writer familiarity with the candidate.
- Degree of writer familiarity with the job in question.
- Specific examples of performance.
- Individuals or groups to whom the candidate is compared.
Records and reference checks are the most frequently used methods to screen outside candidates for all types and levels of jobs. Reference checking is a valuable screening tool. To be useful, however, reference checks should be:
- Consistent
- Relevant
- Written
- Based on public records, if possible
Reference checking can also be done via telephone interviews, implementing a procedure called the structured telephone reference check (STRC). Questions focus on measuring three constructs: conscientiousness, agreeableness, and customer focus. Recruiters ask each referee to rate the applicant compared to others they have known in similar positions and to elaborate on their responses.
Recommendations and reference checks can provide valuable information despite some sources providing sketchy information for fear of violating some legal or ethical constraint. Few organizations are willing to abandon the practice of recommendation and reference checking despite the shortcomings. A key issue to consider is the extent to which the constructs assessed by recommendations and reference checks provide unique information above and beyond other data collection methods, such as the employment interview and personality tests.
How is personal history data used?
Selection and placement decisions can often be improved when personal history data are considered along with other information. One of the most widely used selection procedures is the application form. Application forms can be used to sample past or present behaviour briefly but reliably. To avoid potential problems, consider omitting questions that:
- Might lead to an adverse impact on members of protected groups,
- Do not appear job related or related to a bona fide occupational qualification, or
- Might constitute an invasion of privacy.
The scoring of application forms capitalizes on three hallmarks of progress in selection: standardization, quantification, and understanding.
Weighted application blanks (WABs)
One might suspect that certain aspects of an individual’s total background should be related to later job success in a specific position. The WAB technique provides a means of identifying which aspects reliably distinguish groups of effective and ineffective employees. Weights are assigned according to the predictive power of each item.
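A much-simplified sketch of one classical weighting scheme follows: each item is weighted by the difference in endorsement rates between criterion groups of effective and ineffective employees, and an applicant’s WAB score is the sum of the weights for the responses endorsed. Item names and rates are invented.

```python
# Simplified sketch of WAB item weighting: each item is weighted by the
# difference in endorsement rates between effective and ineffective criterion
# groups (one of several classical schemes; all data invented).
effective   = {"lives_locally": 0.80, "prior_sales_job": 0.65}
ineffective = {"lives_locally": 0.55, "prior_sales_job": 0.30}

weights = {
    item: round(10 * (effective[item] - ineffective[item]))  # scale to integers
    for item in effective
}
print(weights)   # -> {'lives_locally': 2, 'prior_sales_job': 4}

def score_applicant(responses, weights):
    """Sum the weights for the items an applicant endorses."""
    return sum(weights[item] for item, endorsed in responses.items() if endorsed)

print(score_applicant({"lives_locally": True, "prior_sales_job": False}, weights))
```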
Biographical information blanks (BIBs)
It is a self-report instrument, but items are exclusively in a multiple-choice format, typically a larger sample of items is included, and items are included that are not normally covered in a WAB. Usually, BIBs are developed specifically to predict success in a particular type of work.
Response distortion in application forms and biographical data
Can application forms and biographical data be distorted intentionally by job applicants? Yes, they can. For example, the ‘sweetening’ of résumés is not uncommon; one study reported that 20-25% of all applications include at least one major fabrication. The extent of self-reported distortion was found to be even higher when data were collected using the randomized-response technique, which guarantees response anonymity and allows for more honest self-reports.
There are numerous situational and personal characteristics that can influence whether someone is likely to fake. Some of these characteristics include beliefs about faking, which are beyond the control of an examiner. But there are situational characteristics that an examiner can influence. For example, the extent to which information can be verified. More objective and verifiable items are less amenable to distortion.
Some have advocated that only historical and verifiable experiences, events, or situations be classified as biographical items. Using this approach, most items on an application blank would be considered biographical. But if only historical, verifiable items are included on a BIB, then questions like “Did you ever build a model airplane that flew?” would not be asked.
Validity of application forms and biographical data
Properly cross-validated WABs and BIBs have been developed for many occupations. Criteria include turnover, absenteeism, rate of salary increase, performance ratings, number of publications, success in training, creativity ratings, sales volume, credit risk, and employee theft. Evidence shows that the validity of personal history data as a predictor of future work behaviour is quite good.
However, biodata keys are commonly developed on samples of job incumbents, and it is assumed that the results generalize to applicants. The implication is to match incumbent and applicant samples as closely as possible, and not to assume that predictive and concurrent validities are similar when deriving and validating BIB scoring keys.
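The logic of cross-validating a biodata key can be sketched as follows: weights are derived in one sample and then applied, unchanged, in a separate holdout sample, where validity is re-estimated. The weighting rule (item-criterion correlations) and all data below are illustrative assumptions, not a prescribed procedure.

```python
# Sketch of cross-validation for a biodata key: weights derived in one sample
# are applied unchanged in a holdout sample, where validity is re-estimated.
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([0.6, 0.1, 0.4, 0.0, 0.3])   # hypothetical item effects

def simulate(n):
    """Item responses (0/1) and a noisy criterion for n people."""
    X = rng.integers(0, 2, size=(n, 5)).astype(float)
    y = X @ true_weights + rng.normal(0.0, 1.0, n)
    return X, y

X_deriv, y_deriv = simulate(200)   # derivation sample
X_hold, y_hold = simulate(200)     # holdout sample

# Derive keying weights: item-criterion correlations in the derivation sample.
key = np.array([np.corrcoef(X_deriv[:, j], y_deriv)[0, 1] for j in range(5)])

# Apply the fixed key to the holdout sample and estimate cross-validated validity.
scores = X_hold @ key
print(f"cross-validated validity r = {np.corrcoef(scores, y_hold)[0, 1]:.2f}")
```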
Bias and adverse impact
Since the passage of Title VII of the 1964 Civil Rights Act, personal history items have come under intense legal scrutiny. While not necessarily unfairly discriminatory, such items legitimately may be included in the selection process only if it can be shown that (1) they are job related and (2) they do not unfairly discriminate against either minority or non-minority subgroups. Results from several studies have concluded that biodata inventories are relatively free of adverse impact, particularly when compared to the degree of adverse impact typically observed in cognitive ability tests.
What do biodata mean?
Criterion-related validity is not the only consideration in establishing job relatedness. Items that bear no rational relationship to the job in question are unlikely to be acceptable to courts or regulatory agencies, especially if total scores produce adverse impact on a protected group. The rational approach is more prudent and reasonable: it uses job-analysis information to deduce hypotheses concerning success on the job under study and seeks, from existing, previously researched sources, either items or factors that address these hypotheses. The rational approach has the advantage of enhancing both the utility of selection procedures and our understanding of how and why they work. It is also probably the only legally defensible approach for the use of personal history data in employment selection. In this approach, however, the validity of biodata items can be affected by the life stage in which the item is anchored. Framing an item around a specific, hypothesized developmental time is likely to help applicants provide more accurate responses by giving them a specific context to which to relate their response.
What are honesty tests?
Written honesty tests (integrity tests) fall into two major categories:
- Overt integrity tests: typically include two types of questions. One assesses attitudes toward theft and other forms of dishonesty. The other deals with admissions of theft and other illegal activities.
- Personality-oriented measures: are not designed as measures of honesty per se, but rather as predictors of a wide variety of counterproductive behaviours, like substance abuse, insubordination, absenteeism, bogus workers’ compensation claims, and various forms of passive aggression. They assess broader dispositional traits like socialization and conscientiousness.
Both these tests have a common latent structure reflecting conscientiousness, agreeableness, and emotional stability.
Although there are encouraging findings for the validity of honesty tests there are four issues that have yet to be resolved:
- As in the case of biodata inventories, there is a need for greater understanding of the construct validity of integrity tests given that they’re not interchangeable.
- Women tend to score approximately 0.16 standard deviation unit higher than men, and job applicants over 40 years of age tend to score 0.08 standard deviation unit higher than applicants under 40 (the sketch after this list shows how such standardized differences are computed).
- Many writers in the field apply the same language and logic to integrity testing as to ability testing even though there is an important difference.
- There is a real threat of intentional distortion.
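On the second issue above, the group differences are expressed as standardized mean differences. The sketch below shows the usual pooled-standard-deviation computation, with invented test-score figures.

```python
# Sketch of the standardized mean difference behind statements such as
# "about 0.16 standard deviation unit higher"; group means, SDs, and sample
# sizes below are invented.
import math

def standardized_difference(mean_1, sd_1, n_1, mean_2, sd_2, n_2):
    """Mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_1 - 1) * sd_1 ** 2 + (n_2 - 1) * sd_2 ** 2)
                          / (n_1 + n_2 - 2))
    return (mean_1 - mean_2) / pooled_sd

# e.g. women vs. men on an integrity test (illustrative figures only)
print(round(standardized_difference(52.0, 10.0, 400, 50.4, 10.0, 400), 2))  # 0.16
```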
Researchers are thus exploring alternative ways to assess integrity and other personality-based constructs. One promising approach is conditional reasoning, which focuses on how people solve what appear to be traditional inductive-reasoning problems. But the true intent of the scenarios presented is to determine respondents’ solutions based on their implicit biases and preferences.
How can we evaluate training and experience?
Judgmental evaluations of the previous work experience and training of job applicants are a common part of initial screening. Sometimes the evaluation is subjective and informal, and sometimes it’s accomplished in a formal manner according to a standardized method. Evaluating job experience isn’t as easy as you may think, because experience includes both qualitative and quantitative components that interact and accrue over time. Work experience is multidimensional and temporally dynamic.
The behavioural consistency method shows the highest mean validity. It requires applicants to describe their major achievements in several job-related areas (behavioural dimensions). A similar approach, one most appropriate for selecting professionals, is the accomplishment record (AR) method, which provides an objective way of evaluating applicants’ records of accomplishment. It’s a type of biodata/maximum-performance/self-report instrument that appears to tap a component of an individual’s history that isn’t measured by typical biographical inventories.
How does computer-based screening work?
At its simplest, CBS merely converts a screening tool from paper to an electronic format; this is called an electronic page turner. Another method is computer-adaptive testing (CAT), which presents all applicants with a set of items of average difficulty and then, depending on whether responses are correct or incorrect, items of higher or lower difficulty. CAT uses item response theory (IRT) to estimate an applicant’s level on the underlying trait based on the relative difficulty of the items answered correctly and incorrectly.
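A minimal sketch of the CAT idea follows, assuming a one-parameter (Rasch) IRT model, a small invented item bank, grid-search maximum-likelihood updating of the ability estimate, and selection of the unadministered item closest in difficulty to the current estimate. Operational CAT systems use much larger banks and more refined estimation.

```python
# Toy computer-adaptive testing loop under a Rasch (1PL) IRT model.
# Item bank, candidate answers, and the estimation grid are all invented.
import math

ITEM_DIFFICULTIES = [-1.5, -0.5, 0.0, 0.5, 1.0, 1.8]   # hypothetical item bank
THETA_GRID = [t / 10 for t in range(-40, 41)]           # candidate abilities, -4.0..4.0

def p_correct(theta, b):
    """Rasch model: probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Grid-search maximum-likelihood estimate; responses = [(difficulty, correct), ...]."""
    def log_likelihood(theta):
        return sum(math.log(p_correct(theta, b)) if correct
                   else math.log(1.0 - p_correct(theta, b))
                   for b, correct in responses)
    return max(THETA_GRID, key=log_likelihood)

def next_item(theta, administered):
    """Pick the unadministered item whose difficulty is closest to theta."""
    remaining = [b for b in ITEM_DIFFICULTIES if b not in administered]
    return min(remaining, key=lambda b: abs(b - theta))

# Simulate a short adaptive session with canned answers (True = correct).
answers = iter([True, True, False, True])
theta, responses = 0.0, []
for _ in range(4):
    b = next_item(theta, [difficulty for difficulty, _ in responses])
    responses.append((b, next(answers)))
    theta = estimate_theta(responses)
print(f"final ability estimate: {theta:.1f}")
```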
HR specialists now have the opportunity to implement CBS in their organizations. If implemented well, CBS can carry many advantages. The use of computers/the internet is making testing cheaper and faster, and it may serve as a catalyst for even more widespread use of tests for employment purposes.
How does drug screening work?
Drug screening tests started in the military, spread to the sports world, and now are becoming common in employment. Critics see it as an invasion of privacy, but they do concede that employees in jobs where public safety is crucial should be screened for drug use. If drug screening is to be used with employees and job applicants, they should be told in advance that drug testing will be a routine part of their employment. To enhance perceptions of fairness, employers should provide advance notice of drug tests, preserve the right to appeal, emphasize that drug testing is a means to enhance workplace safety, attempt to minimize invasiveness, and train supervisors.
How do polygraph tests work?
Polygraphs are intended to detect deception and are based on the measurement of physiological processes and changes in those processes. The polygraph’s accuracy in distinguishing actual or potential security violators from innocent test takers is insufficient to justify reliance on its use in employee security screening in federal agencies. In spite of the overall conclusion that polygraphs aren’t very accurate, potential alternatives (like measuring brain activity through electrical and imaging studies) haven’t yet been shown to outperform the polygraph. So it’s likely that polygraphs will continue to be used for employee security screening until other alternatives become available.
How can we use employment interviews?
Use of the interview in selection today is almost universal, perhaps because the interview serves as much more than just a selection device. It is a communication process in which the applicant learns more about the job and the organization and begins to develop some realistic expectations about both. As a selection device, the interview performs two vital functions:
- It can fill information gaps in other selection devices.
- It can be used to assess factors that can be measured only via face-to-face interaction.
Well-designed interviews can be helpful because they allow examiners to gather information on constructs that are not typically assessed via other means, like empathy and personal initiative.
Response distortion in the interview
Distortion of interview information is probable, the general tendency being to upgrade rather than downgrade prior work experience. Interviewees tend to be affected by social desirability bias. Applicants also tend to engage in influence tactics to create a positive impression by displaying self-promotion behaviours as well as impression-management behaviours. According to a study, candidates tend to report their GPAs and SAT scores more accurately to computers than in face-to-face interviews.
Reliability and validity
Interviewing is a difficult cognitive and social task. Managing a smooth social exchange while simultaneously processing information about an applicant makes interviewing uniquely difficult among all managerial tasks. Research continues to focus on cognitive factors.
Factors affecting the decision-making process
Literature attests to the fact that the decision-making process involved in the interview is affected by several factors. Posthuma et al. (2002) provided a useful framework summarizing and describing this research. This taxonomy considers factors affecting the interview decision-making process in the following areas:
- Social/interpersonal factors
  - Interviewer-applicant similarity
  - Verbal and non-verbal cues
- Cognitive factors
  - Pre-interview impressions and confirmatory bias
  - First impressions
  - Prototypes and stereotypes
  - Contrast effects
  - Information recall
- Individual differences
  - Applicant appearance and other personal characteristics
  - Applicant participation in a coaching program
  - Interviewer training and experience
  - Interviewer cognitive complexity and mood
- Structure
- Use of alternative media
Needed improvements
Emphasis on employment interview research within a person-perception framework should continue and consider the social and interpersonal dynamics of the interview, including affective reactions on the part of both the applicant and the interviewer. The interviewer’s job is to develop accurate perceptions of applicants and to evaluate those perceptions in light of job requirements.
Toward the future: virtual-reality screening (VRT)
As technology progresses, HR specialists will be able to take advantage of new tools. It’s suggested that VRT can be one such technological advance that has the potential to alter the way screening is done. The implementation of VRT presents some challenges, however.
- VRT environments can lead to sopite syndrome.
- It is costly and lacks commercial availability, although systems are becoming increasingly affordable.
- It has technical limitations: there is a noticeable lag between the user’s movement and the change of scenery, and some of the graphics, including the virtual representation of the user, may appear cartoon-like.
But given the pace of technological advances, we should expect that some of the present limitations will soon be overcome.
Managerial selection is a topic that deserves separate treatment because of the unique problems associated with describing the components of managerial effectiveness and developing behaviourally based predictor measures to forecast managerial effectiveness accurately. An assortment of data-collection techniques is currently available – cognitive ability tests, objective personality inventories, personal history data, peer ratings – each demonstrating varying degrees of predictive success in particular situations.
HR specialists engaged in managerial selection face special challenges associated with the choice of predictors, criterion measurements, and the many practical difficulties encountered in conducting rigorous research in this area. Results from several studies suggest that different knowledge, skills, and abilities are necessary for success at the various levels within management. It is appropriate to examine managerial selection in some detail.
What are the criteria of managerial success?
Objective and subjective indicators are frequently used to measure managerial effectiveness. Effective management can be defined in terms of organizational outcomes. To be a successful optimizer, a manager needs to possess implicit traits, like business acumen, customer orientation, results orientation, strategic thinking, innovation and risk taking, integrity, and interpersonal maturity. The emphasis in this definition is on managerial actions or behaviours judged relevant and important for optimizing resources. Many managerial prediction studies have used objective, global, or administrative criteria. But overall measures or ratings of success include multiple factors, so such measures often serve to obscure more than they reveal about the behavioural bases for managerial success.
To summarize the managerial criterion problem, we point out that global estimates of managerial success have proven useful in many validation studies but contribute little to our understanding of the wide varieties of job behaviours indicative of managerial effectiveness. Employers have to consider supplementing global criteria with systematic observations and recordings of behaviour, so that a richer, fuller understanding of all the paths to managerial success might emerge.
The importance of context
Management-selection decisions take place in the context of both organizational conditions and environmental conditions. This may partially explain why predictors of initial performance are not necessarily as good for predicting subsequent performance as other predictors. Contextual factors also explain differences in HR practices across organizations. A model of executive selection and performance should consider the person as well as situational characteristics.
Which instruments of prediction exist?
- Cognitive ability tests: in tests, the magnitude of the total score can be interpreted to indicate greater or lesser amounts of ability, and items have correct and incorrect answers; inventories do not. Cognitive ability tests measure, for example, general intelligence; verbal, nonverbal, numerical, and spatial relations ability; perceptual speed and accuracy; inductive reasoning; and mechanical knowledge and/or comprehension. General cognitive ability is a powerful predictor of job performance. Differences in intellectual competence are related to the degree of managerial success at high levels of management. Cognitive tests do attract criticism, however, and it is therefore suggested that they be used in combination with other instruments.
- Objective personality inventories: researchers agree that there are five robust factors of personality that can serve as a meaningful taxonomy for classifying personality attributes – extroversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Such a taxonomy makes it possible to determine if there exist consistent, meaningful relationships between particular personality constructs and job performance measures for different occupations. However, response distortion does happen in personality inventories. Fortunately, they are rarely the sole instrument used in selecting managers, so the effects of faking are somewhat mitigated.
- Leadership-ability tests: it might be expected that measures of leadership ability are more predictive of managerial success, because these measures should be directly relevant to managerial job requirements. Scales designed to measure two major constructs underlying managerial behaviour – consideration and initiating structure – have been developed and used in many situations. Our ability to predict successful managerial behaviours will likely improve if we measure more specific predictors and more specific criteria instead of general abilities as predictors and overall performance as a criterion.
- Projective techniques: projection refers to the process by which individuals’ personality structure influences the ways in which they perceive, organize, and interpret their environment and experiences. In a critical review of the application of projective techniques in personnel psychology since 1940, Kinslinger (1966) concluded that the need exists “for thorough job specifications in terms of personality traits and extensive use of cross-validation studies before any practical use can be made of projective techniques in personnel psychology”.
- Motivation to manage: one projective instrument that has shown potential for forecasting managerial success is the Miner Sentence Completion Scale (MSCS), a measure of motivation to manage. Another, nonprojective, approach to assessing motivation to manage has been proposed by Chan and Drasgow (2001), who defined motivation to lead (MTL) as an individual-differences construct that “affects a leader’s or leader-to-be’s decisions to assume leadership training, roles, and responsibility and that affects his or her intensity of effort at leading and persistence as a leader”.
- Personal-history data: biographical information has been used widely in managerial selection – capitalizing on the simple fact that one of the best predictors of future behaviour is past behaviour. Can biodata instruments developed to predict managerial success in one organization be similarly valid in other organizations, including organizations in different industries? The answer is yes, but this answer also needs to be qualified by the types of procedures used in developing the instrument.
- Peer assessment: in typical peer-assessment, raters are asked to predict how well a peer will do if placed in a leadership or managerial role. This information can be enlightening, for peers typically draw on a different sample of behavioural interactions in predicting future managerial success. Peer assessment is a general term for three more basic methods used by members of a well-defined group in judging each other’s performance: peer nomination, peer rating, and peer ranking. Important issues in peer assessment include the influence of friendship, the need for cooperation in planning and design, and the required length of peer interaction.
What are work samples of managerial performance?
We have discussed tests as signs or indicators of predispositions to behave in certain ways rather than as samples of the characteristic behaviour of individuals. Some argue, however, that prediction efforts are likely to be much more fruitful if we focus on meaningful samples of behaviour rather than on signs or predispositions. Because selection measures are really surrogates or substitutes for criteria, we should be trying to obtain measures that are as similar to the criteria as possible. In the context of managerial selection, two types of work samples are used.
- Group exercises: here participants are placed in a situation in which the successful completion of a task requires interaction among the participants.
- Individual exercises: here participants complete a task independently.
The most popular types of work samples will be discussed in the next sections.
Leaderless group discussion (LGD)
A group of participants is asked to carry on a discussion about some topic for a period of time. Face validity is enhanced if the discussion is about a job-related topic. Raters observe and rate the performance of each participant. Seven characteristics are rated: aggressiveness, persuasiveness/selling ability, oral communications, self-confidence, resistance to stress, energy level, and interpersonal contact.
The in-basket test
The in-basket test is an individual work sample designed to simulate important aspects of the manager’s position. Different types of in-basket tests may be designed, corresponding to the different requirements of various levels of managerial jobs. Each candidate faces the same complex set of problem situations, although the situation is relatively unstructured. At the conclusion of the test, each candidate leaves behind a packet of notes, memos, letters, and so forth, which constitutes the record of their behaviour. The test is scored in terms of job-relevant characteristics enumerated at the outset.
The business game
The business game is a ‘live’ case. For example, in assessing candidates for jobs as Army recruiters, two exercises required participants to make phone calls to assessors who role-played two prospective recruits and then to meet for follow-up interviews with these role-playing assessors. A desirable feature of the business game is that intelligence, as measured by cognitive ability tests, seems to have no effect on the success of players. A variation focuses on the effects of measuring ‘cognitive complexity’ on managerial performance, which concerns how people think and behave. It is independent of the content of executive thought and action and reflects a style that is difficult to assess with paper-and-pencil instruments.
Situational judgment tests (SJT)
SJTs are considered low-fidelity work samples. They consist of a series of job-related situations presented in written, verbal, or visual form. Because hypothetical rather than actual behaviours are assessed, it can be argued that they are not truly work samples.
What are assessment centers (AC)?
The AC is a method that brings together many of the instruments and techniques of managerial selection. Because of its nature, the likelihood of successfully predicting future performance is enhanced. They have been found successful at predicting long-term career success. The three most popular reasons for developing an AC are selection, promotion, and development planning. The duration of the center varies with the level of candidate assessment, as does the ratio of assessors to participants.
Some organizations mix line managers with members of the HR department or other staff as assessors. Generally, assessors hold positions about two organizational levels above that of the individuals being assessed. Few organizations use professional psychologists, despite evidence indicating that AC validities are higher when assessors are psychologists rather than line managers.
The performance-feedback process is crucial. Most organizations emphasize to candidates that the AC is only one portion of the assessment process. It’s just a supplement to other performance-appraisal information and each candidate has an opportunity on the job to refute negative insights gained from assessment.
Interrater reliabilities vary across studies from 0.60 to 0.95; raters tend to appraise similar aspects of performance in candidates. In terms of temporal stability, an important question concerns the extent to which dimension ratings made by individual assessors change over time. Standardizing an AC program, so that each candidate receives the same treatment, is important so that differences in performance can be attributed to differences in candidates’ abilities and skills, and not to extraneous factors.
Applicants tend to view ACs as more face valid than cognitive ability tests and, as a result, tend to be more satisfied with the selection process, the job, and the organization. Reviews of the predictive validity of AC ratings and subsequent promotion and performance generally have been positive. Adverse impact is less of a problem in an AC compared to an aptitude test designed to assess the cognitive abilities that are important for the successful performance of work behaviours in professional occupations.
The cost of the procedure is incidental compared to the possible losses associated with promotion of the wrong person into a management job. Given large individual differences in job performance, use of a more valid procedure has a substantial bottom-line impact.
Potential problems
- A growing concern is that assessment procedures may be applied carelessly or improperly.
- A subtle criterion contamination phenomenon may inflate assessment validities when global ratings or other summary measures of effectiveness are used as criteria.
- Studies have consistently found that correlations between different dimensions within exercises are higher than correlations between the same dimensions across exercises (construct validity).
Different combinations of predictors lead to different levels of predictive efficiency, and also to different levels of adverse impact. Both issues deserve serious attention when choosing (a combination) of selection procedures.
Managerial selection is a topic that deserves separate treatment because of the unique problems associated with describing the components of managerial effectiveness and developing behaviourally based predictor measures to forecast managerial effectiveness accurately. An assortment of data-collection techniques is currently available – cognitive ability tests, objective personality inventories, personal history data, peer ratings – each demonstrating varying degrees of predictive success in particular situations.
How can we put personnel selection in perspective?
If variability in physical and psychological characteristics were not so pervasive a phenomenon, there would be little need for selection of people to fill jobs. Without variability in abilities, aptitudes, interests, and personality traits, we’d forecast identical levels of job performance for all job applicants. In personnel selection, decisions are made about individuals; such decisions concern the assignment of individuals to courses of action whose outcomes are important to the institutions or individuals involved.
This chapter will look first at the traditional/classical validity approach to personnel selection. It will then consider decision theory and utility analysis and present alternative models. The overall aim is to arouse and sensitize the reader to thinking in terms of utility and the broader organizational context of selection decision making.
What is the classical approach to personnel selection?
Individual differences provide the basic rationale for selection. The goal of the selection process is to capitalize on individual differences in order to select people who possess the greatest amount of the particular characteristics judged important for job success. In the classical approach, job analysis is the cornerstone of the entire process. Based on this information, sensitive, relevant, and reliable criteria are selected. Simultaneously, predictors are selected that presumably bear some relationship to the criteria to be predicted. Predictors should be chosen based on competent job analysis information; this information provides clues about the type(s) of predictor(s) most likely to forecast criterion performance accurately. After this, the predictor/criterion relationship is assessed; if the relationship is strong, the predictor is accepted and cross-validated. If it is not strong, the predictor is rejected and another one is selected.
How efficient are linear models in job-success prediction?
The statistical techniques of simple and multiple linear regression are based on the general linear model. Linear models are very robust, and decision makers use them in various contexts. Suppressor variables can affect a given predictor-criterion relationship even though they bear little or no direct relationship to the criterion itself; they do, however, bear a significant relationship to the predictor. Horst (1941) pointed out that variables with exactly the opposite characteristics of conventional predictors may produce marked increments in the size of the multiple correlation. Suppressor variables are characterized by a lack of association with the criterion and a high intercorrelation with one or more other predictors. Since the only function suppressor variables serve is to remove redundancy in measurement, a comparable predictive gain often can be achieved by using a more conventional variable as an additional predictor.
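To make the suppressor idea concrete, the following simulation is a minimal sketch (the variable names, effect sizes, and data are invented, not taken from the text): a predictor whose valid variance is contaminated by test anxiety is paired with a suppressor that measures only the anxiety. The suppressor correlates with the criterion near zero, yet adding it to the regression raises the multiple R above what the predictor achieves alone, because it removes the irrelevant anxiety variance.

```python
# Minimal, hypothetical simulation of a suppressor effect.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)                      # what the criterion really depends on
anxiety = rng.normal(size=n)                      # unrelated to the criterion
criterion = ability + rng.normal(size=n)          # job performance
predictor = ability + anxiety                     # valid variance contaminated by anxiety
suppressor = anxiety + 0.3 * rng.normal(size=n)   # correlates with the predictor, not the criterion

def multiple_R(predictors, y):
    """Multiple correlation of y with the given predictors (ordinary least squares)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

print("r(suppressor, criterion):", round(np.corrcoef(suppressor, criterion)[0, 1], 3))      # ~ 0
print("R, predictor alone:      ", round(multiple_R([predictor], criterion), 3))            # ~ .50
print("R, suppressor added:     ", round(multiple_R([predictor, suppressor], criterion), 3))  # ~ .68
```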
What are data-combination strategies?
Following a taxonomy developed by Meehl (1954), we will distinguish between strategies for combining data and the modes used to collect them. Data-combination strategies are mechanical (statistical) if individuals are assessed on some instrument(s), if they are assigned scores based on that assessment, and if the scores subsequently are correlated with a criterion measure. Predictions are judgmental (clinical) if a set of scores or impressions must be combined subjectively in order to forecast criterion status. Data collection can also be judgmental or mechanical, leading to six prediction strategies.
- Pure clinical strategy: (data are collected and combined judgmentally)
- Behaviour rating: (data collected judgmentally, combined mechanically) in combining the data, the decision maker summarizes their impression on a standardized rating form according to prespecified categories.
- Profile interpretation: (data collected mechanically, combined judgmentally) a decision maker interprets a candidate’s profile without ever having interviewed or observed them (e.g., the candidate is given an objective personality inventory, which yields a pattern/profile of scores).
- Pure statistical: (data collected and combined mechanically) frequently used in the collection and interpretation of biographical information blanks (BIBs) or test batteries.
- Clinical composite: (data collected judgmentally and mechanically, but combined judgmentally)
- Mechanical composite: (data collected judgmentally and mechanically, but combined mechanically)
The best strategy of all (in that it always has proven to be either equal to or better than competing strategies) is the mechanical composite, in which information is collected both by mechanical and judgmental methods but is combined mechanically.
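As an illustration of the distinction, the sketch below shows a mechanical composite: an interview rating (collected judgmentally) and a test score (collected mechanically) are combined by a fixed statistical rule rather than by an overall judgment. The weights, rating scale, and scores are invented for illustration.

```python
# Hypothetical 'mechanical composite': judgmentally and mechanically collected
# data combined by a fixed linear rule rather than by an overall judgment.
intercept, w_test, w_interview = 1.2, 0.45, 0.30   # weights assumed to come from past validation data

applicants = {
    "A": {"test_z": 1.1, "interview_rating": 3.0},  # interview rated judgmentally on a 1-5 scale
    "B": {"test_z": 0.2, "interview_rating": 4.5},
}

def predicted_performance(a):
    """Apply the same fixed rule to every applicant (the mechanical combination)."""
    return intercept + w_test * a["test_z"] + w_interview * a["interview_rating"]

for name, a in applicants.items():
    print(name, round(predicted_performance(a), 2))
```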
What alternative prediction models exist?
Although the multiple-regression approach constitutes the basic prediction model, its use in any particular situation requires that its assumptions, advantages, and disadvantages be weighed against those of alternative models. We will now discuss other approaches. When the assumptions of multiple regression are untenable, then a different strategy is called for.
Multiple-cutoff approach
In some selection situations, proficiency on one predictor cannot compensate for deficiency on another. When some minimal level of proficiency on one or more variables is crucial for job success and no substitution is allowed, a simple or multiple cutoff approach is appropriate. Selection is then made from the group of applicants who meet or exceed the required cut-offs on all predictors.
Multiple-hurdle approach
In multiple hurdle, or sequential, decision strategies, cutoff scores on some predictor may be used to make investigatory decisions. Applicants are provisionally accepted and assessed further to determine whether or not they should be accepted permanently. The investigatory decisions may continue through several additional stages of subsequent testing before final decisions are made regarding all applicants.
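The sketch below contrasts a compensatory (weighted-sum) rule with the multiple-cutoff and multiple-hurdle rules just described. The predictors, weights, and cutoffs are hypothetical.

```python
# Hypothetical contrast between compensatory, multiple-cutoff, and multiple-hurdle selection rules.
applicant = {"cognitive": 0.9, "integrity": -0.4, "structured_interview": 1.2}
weights   = {"cognitive": 0.5, "integrity": 0.3, "structured_interview": 0.2}
cutoffs   = {"cognitive": 0.0, "integrity": 0.0}   # minimums that cannot be compensated

# Compensatory: a weighted sum, so strengths can offset weaknesses
composite = sum(weights[k] * applicant[k] for k in weights)

# Multiple cutoff: every minimum must be met, regardless of the composite
passes_cutoffs = all(applicant[k] >= c for k, c in cutoffs.items())

# Multiple hurdle: predictors applied in sequence; a failure ends the process early,
# so later (often more expensive) assessments are never administered
def multiple_hurdle(a, hurdles):
    for predictor, cutoff in hurdles:
        if a[predictor] < cutoff:
            return f"rejected at {predictor}"
    return "provisionally accepted"

print("compensatory composite:", round(composite, 2))
print("multiple cutoff:", "accept" if passes_cutoffs else "reject")
print("multiple hurdle:", multiple_hurdle(applicant, [("cognitive", 0.0), ("integrity", 0.0)]))
```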
Extending the classical validity approach to selection decisions: decision-theory approach
The general objective of the classical validity approach can be expressed concisely: the best selection battery is the one that yields the highest multiple R. This will minimize selection errors; total emphasis is therefore placed on measurement and prediction. Overall, there is a need to consider broader organizational issues so that decision making is not simply legal-centric and validity-centric but organizationally sensible.
Taylor and Russell (1939) pointed out that utility depends not only on the validity of a selection measure but also on:
- The selection ratio (SR): the ratio of the number of available job openings to the total number of available applicants.
- The base rate (BR): the proportion of persons judged successful using current selection procedures.
They published tables illustrating how the interaction among these three parameters affects the success ratio (the proportion of selected applicants who subsequently are judged successful); the simulation sketched below illustrates the same interaction. Unlike the classical approach, a decision-theory approach considers not only validity but also the SR, the BR, and other contextual and organizational issues.
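The Monte Carlo sketch below reproduces the Taylor-Russell logic under a bivariate-normal assumption; the validity, SR, and BR values are chosen purely for illustration and are not taken from the published tables.

```python
# Hypothetical illustration of how validity, the selection ratio (SR), and the
# base rate (BR) jointly determine the success ratio.
import numpy as np

rng = np.random.default_rng(1)

def success_ratio(validity, sr, br, n=200_000):
    predictor = rng.normal(size=n)
    criterion = validity * predictor + np.sqrt(1 - validity**2) * rng.normal(size=n)
    success_cut = np.quantile(criterion, 1 - br)   # top BR proportion counted as 'successful'
    select_cut = np.quantile(predictor, 1 - sr)    # top SR proportion of applicants are selected
    selected = predictor >= select_cut
    return np.mean(criterion[selected] >= success_cut)

for sr in (0.1, 0.5, 0.9):
    print(f"validity=.40, BR=.50, SR={sr}: success ratio = {success_ratio(0.40, sr, 0.50):.2f}")
```

Holding validity and the base rate constant, the simulated success ratio rises as the selection ratio falls, which is exactly the interaction the tables describe.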
Utility considerations
There are four decision-outcome combinations (erroneous rejections, correct acceptances, erroneous acceptances, and correct rejections). The classical validity approach treats both kinds of decision errors as equally costly, but in most practical selection situations, organizations attach different utilities to these outcomes. The classical approach is deficient to the extent that it emphasizes measurement and prediction rather than the outcomes of decisions.
Evaluation of the decision-theory approach
By focusing only on selection, the classical validity approach neglects the implications of selection decisions for the rest of the HR system. When an organization focuses on selection to the exclusion of other related functions, the performance effectiveness of the overall HR system may suffer considerably. The procedure must be evaluated in terms of its total benefits to the organization. The main advantage of the decision-theory approach to selection is that it addresses the SR and BR parameters and compels the decision maker to consider explicitly the kinds of judgments they have to make.
Speaking the language of business: utility analysis
Operating executives demand estimates of the expected costs and benefits of HR programs. Unfortunately, few HR programs actually are evaluated in these terms, although techniques for doing so have been available for years. The utility of a selection device is the degree to which its use improves the quality of the individuals selected beyond what would have occurred had that device not been used. Quality may be defined in terms of (see the numerical sketch after this list):
- The proportion of individuals in the selected group who are considered “successful”.
- The average standard score on the criterion for the selected group.
- The dollar payoff to the organization resulting from the use of a particular selection procedure.
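A common way to express the dollar payoff is a utility estimate in the spirit of the Brogden-Cronbach-Gleser model: the gain equals the number hired, times the validity coefficient, times the dollar standard deviation of job performance, times the average standardized predictor score of those hired, minus the cost of testing. The text’s own formulation may differ, and all figures below are invented.

```python
# Hedged numerical sketch of a dollar-payoff (utility) estimate; every number is hypothetical.
n_hired = 50            # people selected per year
validity = 0.40         # predictor-criterion correlation
sd_y = 12_000           # standard deviation of job performance, in dollars
mean_z_selected = 1.0   # average predictor score (z units) of those hired
cost_per_applicant = 300
n_applicants = 500

gain = n_hired * validity * sd_y * mean_z_selected   # expected payoff from better selection
cost = cost_per_applicant * n_applicants             # total cost of using the procedure
print(f"Estimated net utility per year: ${gain - cost:,.0f}")   # $90,000 with these figures
```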
What is the strategic context of personnel selection?
While certain generic economic objectives (profit maximization, cost minimization) are common to all private-sector firms, strategic opportunities are not, and they don’t occur within firms in a uniform, predictable way. As strategic objectives (economic survival, growth in market share) vary, so must the “alignment” of labour, capital, and equipment resources. Strategic goals change over time, so assessment of the relative contribution of a selection system is likely to also change.
To be more useful to decision makers, utility models should therefore be able to provide answers to the following questions:
- Given all other factors besides the selection system, what is the expected level of performance generated by a manager?
- How much of a gain in performance can we expect from a new selection system?
- Are the levels of performance expected with or without the selection system adequate to meet the firm’s strategic need?
- Is the incremental increase in performance expected from selection instrument A greater than that expected from instrument B?
Russell et al. (1993) presented modifications of the traditional utility equation to reflect changing contributions of the selection system over time (validity and SDy) and changes in what is important to strategic HR decision makers (strategic needs). Such modifications yield a more realistic view of how firms benefit from personnel selection.
Training and development imply changes – changes in skill, knowledge, attitude, or social behaviour. Although there are many strategies for effecting changes, training and development are common and important ones. Various theoretical models can help guide training and development efforts. These include the individual differences model, principles of learning and transfer, motivation theory, goal setting, and behaviour modelling. Each offers a systematic approach to training and development, and each emphasizes a different aspect of the training process.
Change, growth, and development are bald facts of organizational life. As companies lose workers in one department, they are adding people with different skills in another, continually tailoring their workforces to fit the available work and adjusting quickly to swings in demand for products and services. Additionally, modern organizations face other major challenges:
- Hyper-competition
- A power shift to the customer
- Collaboration across organizational and geographic boundaries
- The need to maintain high levels of talent
- Changes in the workforce
- Changes in technology
- Teams
These trends suggest a dual responsibility: the organization is responsible for providing an atmosphere that will support and encourage change, and the individual is responsible for deriving maximum benefit from the learning opportunities provided.
Both training and development entail the following general properties and characteristics:
- Training and development are learning experiences.
- They are planned by the organization.
- They occur after the individual has joined the organization.
- They are intended to further the organization’s goals.
What is a training design?
We start by examining organizational and individual characteristics related to effective training. Then we consider fundamental requirements of sound training practice.
Characteristics of effective training
Surveys of corporate training and development practices have found consistently that four characteristics seem to distinguish companies with the most effective training practices:
- Top management is committed to training and development; training is part of the corporate culture.
- Training is tied to business strategy and objectives and is linked to bottom-line results.
- Organizational environments are “feedback rich”; they stress continuous improvement, promote risk taking, and afford opportunities to learn from the successes and failures of one’s decisions.
- There is commitment to invest in the necessary resources, to provide sufficient time and money for training.
Additional determinants of effective training
Evidence indicates that training success is determined not only by the quality of training, but also by the interpersonal, social, and structural characteristics that reflect the relationship of the trainee and the training program to the broader organizational context. Variables like organizational support, and the individual’s readiness for training, can enhance or detract from the direct impact of training.
Fundamental requirements of sound training practice
To reach the full potential of the training and development enterprise, it is important to resist the temptation to emphasize technology and techniques; instead, define first what is to be learned and what the content of training and development should be. Program development comprises three major phases, each of which is essential for success:
- Needs assessment or planning phase: the foundation for the entire program.
- A training and development or implementation phase: design the training environment in order to achieve the objectives.
- An evaluation phase: involves establishing measures of training and job-performance success and using designs to determine what changes have occurred during the training and transfer process.
Defining what is to be learned
There are six steps in defining what is to be learned and what the substantive content of training and development should be:
- Analyze the training and development subsystem and its interaction with other systems.
- Determine the training needs.
- Specify the training objectives.
- Decompose the learning task into its structural components.
- Determine an optimal sequencing of the components.
- Consider alternative ways of learning.
The training and development subsystem
Failure to consider the broader organizational environment often contributes to programs that either result in no observable changes in attitudes or behaviour or, worse yet, produce negative results that do more harm than good. To promote better alignment, organizations should do three things:
- For any important change or organizational initiative, it is important to identify what new capabilities will be needed, how they compare to current capabilities, and what steps are necessary to bridge the gap.
- Leaders should periodically seek to identify key strategic capabilities that will be needed as the organization goes forward.
- Training organizations should compare their current programs and services against the organization’s strategic needs.
Assessing training needs
The purpose of needs assessment is determining if training is necessary before expending resources on it. In general, the methods proposed for uncovering specific training needs can be subsumed under a three-facet approach. These are:
- Organization analysis: identification of where training is needed within the organization. Managers set the organization’s goals.
- Operations analysis: identification of the content of the training. Managers specify how the organization’s goals are going to be achieved.
- Person analysis: identification of who needs training and what kind is needed. Managers and workers do the work and achieve those goals.
Each of these facets contributes something but to be most fruitful, all three must be conducted in a continuing, ongoing manner and at all three levels.
A fruitful approach to identify individual training needs is to combine behaviourally based performance-management systems with individual development plans (IDPs) derived from self-analysis. IDPs provide a road map for self-development and should include statements of aims, definitions, and ideas about priorities.
Specification of training objectives becomes possible once training and development needs have been identified. This is the fundamental step in training design. Each objective should describe:
- The desired behaviour.
- The conditions under which the behaviour should occur.
- The standards by which the trainee’s behaviour is to be judged.
Creating an optimal environment for training and learning
There are seven features of the learning environment that facilitate learning and transfer:
- Trainees understand the objectives of the training program – its purpose and expected outcomes.
- Training content is meaningful and relevant.
- Trainees are given cues that help them learn and recall training content, like diagrams, models, key behaviours, and advance organizers.
- Trainees have opportunities to practice.
- Trainees receive feedback on their learning from trainers, observers, video, or the task itself.
- Trainees have the opportunity to observe and interact with other trainees.
- The training program is properly coordinated and arranged.
The basic principles of training design consist of:
- Identifying the component tasks of a final performance.
- Ensuring that each of the component tasks is fully achieved.
- Arranging the total learning situation in a sequence that will ensure optimal mediational effect from one component to another.
Team training
There has been an increasing emphasis on team performance. A team is a group of individuals working together toward a common goal. Researchers have developed a systematic approach to team training that includes four steps.
- Conduct a team-training needs analysis.
- Develop training objectives that address both taskwork and teamwork skills.
- Design exercises and training events based on the objectives from step 2.
- Design measures of team effectiveness based on the objectives set at step 2, evaluate the effectiveness of the team training, and use this information to guide future team training.
How can theoretical models guide training and development efforts?
How people acquire appropriate responses is an important aspect to consider, because different people have their own favourite ways of learning. The growing popularity of technology-delivered instruction offers the opportunity to tailor learning environments to individuals and transfers more control to learners over how and what to learn. This can have a negative effect, especially among low-ability or inexperienced learners. Individual differences in abilities, interests, and personality play a central role in applied psychology. Mental ability alone predicts success in training in a wide variety of jobs; so does trainability.
Which principles enhance learning?
If training and development are to have any long-term benefit, then efficient learning, long-term retention, and positive transfer to the job situation are essential. Important learning principles include:
- Knowledge of results (feedback): knowledge of results (KR) provides information that enables the learner to correct mistakes and acts as reinforcement, as long as the learner is told why they are wrong and how they can correct the behaviour in the future.
- Transfer of training: the application of behaviours learned in training to the job itself determines the usefulness of organizational training programs.
- Self-regulation to maintain changes in behaviour: a novel approach to the maintenance of newly trained behaviours. Self-regulation refers to the extent to which executive-level cognitive systems in the learner monitor and exert control on the learner’s attention and active engagement of training content.
- Adaptive guidance: adaptive guidance is designed to provide trainees with information about future directions they should take in sequencing study and practice in order to improve their performance.
- Reinforcement: for behaviour to be acquired, modified, and sustained, it must be rewarded (reinforced). Punishment results only in a temporary suppression of behaviour and is a relatively ineffective influence on learning.
- Practice: there must be an opportunity to practice and actively use training content that is learned. It has three aspects:
- Active practice
- Overlearning
- Length of the practice session
- Motivation: one must want to learn in order to actually learn. Motivation is a force that energizes, directs, and maintains behaviour.
- Goal setting: a person who wants to develop themselves will do so; a person who wants to be developed rarely is. An effective way to raise a trainee’s motivation is to set goals.
- Behaviour modelling: based on social-learning theory which holds that we learn by observing others. This principle might profitably be incorporated into a four-step ‘applied learning’ approach to behaviour modelling:
- Modelling
- Role-playing
- Social reinforcement
- Transfer of training
The literature on training and development techniques is massive. Generally, however, it falls into three categories: information-presentation techniques, simulation methods, and on-the-job training. Selecting a technique will yield maximal payoff when designers of training follow a two-step sequence: first, specify clearly what is to be learned; then choose a specific method or technique that matches training requirements. When measuring training and development outcomes, be sure to include:
- Provision for saying something about the practical and theoretical significance of the results.
- A logical analysis of the process and content of the training.
- Some effort to deal with the ‘systems’ aspects of training impact.
Once we have defined what trainees should learn and what the content of training and development should be, the critical question becomes “how should we teach the content and who should do it?”. We will highlight some of the more popular techniques, with special attention to computer-based training, and then present a set of criteria for judging the adequacy of training methods. Training and development techniques fall into three categories:
- Information-presentation techniques: include lectures, conference methods, videos, reading lists, interactive multimedia, and systematic observation.
- Simulation methods: include case methods, role-playing, experiential exercises, business games, assessment centers, and behaviour or competency modelling.
- On-the-job training: include orientation training, apprenticeships, on-the-job training, job rotation, understudy assignments, and performance management.
Computer-based training (CBT)
CBT is the presentation of text, graphics, video, audio, or animation via computer for the purpose of building job-relevant knowledge and skill. It is a form of technology-delivered instruction. To be maximally effective, learner-centered instructional technologies have to be designed to encourage active learning in participants. To do that, consider incorporating the following four principles into the CBT design:
- Design the information structure and presentation to reflect both meaningful organization (or chunking) of material and ease of use.
- Balance the need for learner control with guidance to help learners make better choices about content and process.
- Provide opportunities for practice and constructive feedback.
- Facilitate meta-cognitive monitoring and control to encourage learners to be mindful of their cognitive processing and in control of their learning processes.
Selection of technique
A training method can be effective only if it is used appropriately. Here, appropriate use means rigid adherence to a two-step sequence: first, define what trainees are to learn, and only then choose a particular method that best fits these requirements. The following checklist is useful in selecting a particular technique; the technique should:
- Motivate the trainee to improve their performance.
- Clearly illustrate desired skills.
- Provide for the learner’s active participation.
- Provide an opportunity to practice.
- Provide feedback on performance while the trainee learns.
- Provide some means to reinforce the trainee while learning.
- Be structured from simple to complex tasks.
- Be adaptable to specific problems.
- Enable the trainee to transfer what is learned in training to other situations.
How can we measure training and development outcomes?
Either a program has value, or it does not. But in practice, matters are rarely so simple, for outcomes are usually a matter of degree. To assess outcomes, we need to document how trainees actually behave back on their jobs and the relevance of their behaviour to the objectives of the organization.
Why measure training outcomes?
There are at least four reasons to evaluate training:
- To make decisions about the future use of a training program/technique.
- To make decisions about individual trainees.
- To contribute to a scientific understanding of the training process.
- To further political or public relations purposes.
These can be summarized as decision making, feedback, and marketing.
What are essential elements for measuring training outcomes?
The task of evaluation is counting. The most difficult tasks of evaluation are deciding what things to count and developing routine methods for counting them. In the context of training, here is what counts:
- Use of multiple criteria to adequately reflect the multiple contributions of managers to the organization’s goals.
- The relationship between internal and external criteria is especially important.
- Enough experimental control to enable the causal arrow to be pointed at the training program.
- Provision for saying something about the practical and theoretical significance of the results.
- A thorough, logical analysis of the process and content of the training.
- Some effort to deal with the ‘systems’ aspect of training impact – that is, how training effects are altered by interaction with other organizational subsystems.
Trainers must address these issues before they can conduct a truly meaningful evaluation of training’s impact.
Additional considerations in measuring the outcomes of training
Regardless of the measures used, the goal is to be able to make meaningful inferences and rule out alternative explanations for results. To do this, it’s important to administer the measures according to some logical plan or procedure. Many designs are available for this purpose.
Strategies to measure the outcomes of training in terms of financial impact
There continue to be calls for establishing the return on investment (ROI) for training, particularly as training activities continue to be outsourced and new forms of technology-delivered instruction are marketed as cost effective. ROI includes the following:
- The inflow of returns produced by an investment.
- The offsetting outflows of resources required to make the investment.
- How the inflows and outflows occur in each future time period.
- How much what occurs in the future time periods should be ‘discounted’ to reflect greater risk and price inflation.
The major advantage of ROI is that it is simple and widely accepted: it blends all the major ingredients of profitability into one number that can be compared with other investment opportunities (a numerical sketch follows below). A major disadvantage is that some of its inputs are quite subjective, and ROI calculations focus on one HR investment at a time, failing to consider how those investments work together as a portfolio.
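The sketch below works through such an ROI estimate, discounting future inflows as described above; the program cost, annual returns, and discount rate are invented for illustration.

```python
# Hypothetical ROI estimate for a training program, with discounted future returns.
investment = 100_000                        # up-front outflow for the program
annual_returns = [45_000, 45_000, 45_000]   # inflows expected in years 1-3
discount_rate = 0.10                        # reflects risk and price inflation

present_value = sum(r / (1 + discount_rate) ** t
                    for t, r in enumerate(annual_returns, start=1))
roi = (present_value - investment) / investment
print(f"Discounted value of returns: {present_value:,.0f}")   # ~ 111,908
print(f"ROI: {roi:.1%}")                                      # ~ 11.9%
```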
Influencing managerial decisions with program-evaluation data
The real payoff from program-evaluation data comes when the data lead to organizational decisions that are strategically important. To achieve that, it is important to embed the measures in a broader framework that drives strategic change.
What is classical experimental design?
An experimental design is a plan, an outline for conceptualizing the relations among the variables of a research study. It also implies how to control the research situation and how to analyze the data. Experimental designs can be used with internal or external criteria.
The following table presents examples of several experimental designs. They are by no means exhaustive; they merely illustrate the different kinds of inferences that researchers may draw and, therefore, underline the importance of considering experimental designs before training.
Limitations of experimental designs
Exclusive emphasis on the design aspects of measuring training outcomes is rather narrow in scope. An experiment usually settles on a single criterion dimension, and the whole effort depends on observations of that dimension. So experimental designs are quite limited in the information they can provide. Ideally an experiment should be part of a continuous feedback process rather than just an isolated event or demonstration.
Meta-analytic reviews have shown that effect sizes obtained from single group pretest-posttest designs (design B) are systematically higher than those obtained from control or comparison-group designs.
It is important to ensure that any attempt to measure training outcomes through the use of an experimental design has adequate statistical power.
Lastly, experiments often fail to focus on the real goals of an organization.
What are quasi-experimental designs?
In field settings, there are often major obstacles to conducting true experiments, which require manipulation of at least one independent variable and the random assignment of participants to treatment groups. Some less-complete designs can provide useful data even though a true experiment is not possible. Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between variables. But unlike a true experiment, a quasi-experiment does not rely on random assignment; subjects are instead assigned to groups based on non-random criteria. Examples of quasi-experimental designs are:
- Time series designs: relevant for assessing outcomes of training and development programs.
- Non-equivalent control-group design: individuals from a common population are not randomly assigned to the experimental and control groups. For the rest it is similar to design C from the previous section.
- Non-equivalent dependent variable design or ‘internal-referencing’: based on a single treatment group and compares two sets of dependent variables (one that training should affect, and the other that training should not affect – experimental versus control variables).
- Recurrent institutional cycle design: a combination of two before-after studies that occur at different points in time. It controls history and test-retest effects, but not differences in selection.
What are statistical, practical, and theoretical significance?
The problem of statistical versus practical significance is relevant for the assessment of training outcomes. Demonstrations of statistically significant change scores may mean little in a practical sense. Researchers must show that the effects of training do make a difference to organizational goals in terms of lowered production costs, increased sales, fewer grievances, and so on. External criteria are important.
The real test is whether a new training program is superior to previous or existing methods for accomplishing the same objectives. To show this, firms need systematic research that evaluates the effects of the independent variables likely to affect training outcomes. The concept of statistical significance, while not trivial, doesn’t guarantee practical or theoretical significance.
Logical analysis
Experimental control is just one strategy for responding to criticisms of the internal or statistical conclusion validity of a research design. A logical analysis of the process and content of training programs can further enhance our understanding of why we obtained the results we did. The ‘systems’ aspects of training impact integrated with the consideration of qualitative and quantitative issues can make training outcomes much more meaningful.
Organizational responsibility (OR) is defined as context-specific organizational actions and policies that take into account stakeholders’ expectations and the triple bottom line of economic, social, and environmental performance. The challenge of being responsible and ethical in managing people does not lie in the mechanical application of moral prescriptions. It is found in the process of creating and maintaining genuine relationships from which to address ethical dilemmas that cannot be covered by prescription. One’s personal values play an important part in this process.
By taking into account stakeholders’ expectations, the chances of causing harm are reduced and, therefore, OR leads to more ethical actions and policies. To be ethical is to conform to moral standards or to conform to the standards of conduct of a given group.
The purpose of this chapter is to highlight emerging ethical concerns in several important areas. We cannot prescribe the content of responsible and ethical behaviour across all conceivable situations, but we can prescribe processes that can lead to an acceptable (and temporary) consensus among interested parties regarding an ethical course of action.
Some important definitions:
- Privacy: the interest that employees have in controlling the use that is made of their personal information and in being able to engage in behaviour free from regulation or surveillance.
- Confidentiality: treating information provided with the expectation that it will not be disclosed to others.
- Ethics and morality: behaviours about which society holds certain values.
- Ethical choice: considered choice among alternative courses of action where the interests of all parties have been clarified and the risks and gains have been evaluated openly and mutually.
- Ethical decisions about behaviour: those that take account not only of one’s own interests but also, equally, of the interests of those affected by the decision.
- Validity: in this context, the overall degree of justification for the interpretation and use of an assessment procedure.
Organizational responsibility: definition and general framework
The more encompassing term ‘organizational’ is used instead of the narrower term ‘corporate’ to emphasize that responsibility refers to any type of organization. Though OR was initially seen as the exclusive realm of large corporations, it is also possible and necessary for start-ups and small and medium-sized organizations if they want to be successful in today’s globalized and hypercompetitive economy. Finally, the term ‘responsibility’ is used instead of the narrower phrase ‘social responsibility’ to highlight that responsibility refers to several types of stakeholders, including employees and suppliers, and to issues that subsume but also go beyond topics defined as being in the social realm. The definition of OR refers to the triple bottom line of economic, social, and environmental performance. The traditional view is that these performance dimensions are negatively correlated.
In spite of the scepticism surrounding OR, there are two factors that now serve as important catalysts of OR: changes in twenty-first-century organizations and accountability. To summarize, twenty-first-century organizations find it increasingly difficult to hide information about their policies and actions. Additionally, the twenty-first-century organization is increasingly dependent on a global network of stakeholders who have expectations about the organization’s policies and actions. These factors have led to increased accountability, which is an important motivator for organizations to act responsibly.
What are the benefits of organizational responsibility?
Empirical evidence suggests that pursuing social and environmental goals is related to positive economic results. There are clear benefits for organizations that choose to pursue the triple bottom line instead of economic performance exclusively. Organizations are successful in the long run only if they please shareholders and also please other stakeholders. The challenge is ‘how to ensure that the firm pays wider attention to the needs of multiple stakeholders whilst also delivering shareholder value’.
Evidence thus far indicates that there is an overall positive relationship between social and environmental performance and financial performance, but the strength of this relationship varies depending on how one operationalizes social and/or environmental performance and financial performance.
How can we implement OR and what is the role of HRM research and practice?
Aguinis proposed a new concept of strategic responsibility management (SRM). It is a process that allows organizations to approach responsibility actions in a systematic and strategic manner. It involves the following steps:
- Creating a vision and values related to responsibility.
- Identifying expectations through dialogue with stakeholders and prioritizing them.
- Developing initiatives that are integrated with corporate strategy.
- Raising internal awareness through employee training.
- Institutionalizing SRM as a way of doing business on an ongoing basis by measuring and rewarding processes and results.
- Reporting on the status of the dialogue and the initiatives through a yearly OR report that is made available internally and externally.
Since its inception, the field of HRM has walked a tightrope trying to balance employee well-being with maximization of organizational performance and profits. This dual role is a source of tension, as is reflected in the test-score banding literature and the staffing decision-making literature. OR is consistent with HRM’s mission as well as the scientist-practitioner model. However, there is still concern and scepticism on the part of some that OR is more rhetoric and public relations than reality. OR gives HRM researchers and practitioners an opportunity to make contributions that are consistent with the field’s mission and that have the potential to elevate the field in the eyes of society.
What is employee privacy?
The U.S. Constitution, as well as federal and state laws and executive orders, defines legally acceptable behaviour in the public and private sectors of the economy. But while illegal behaviours are by definition unethical, meeting minimal legal standards does not necessarily imply conformity to accepted guidelines of the community. These legal standards have affected HR research and practice in several ways. Employees are aware of these issues and are willing to take legal action when they believe their privacy rights have been violated by their employers.
Attention in this area centers on three main issues: the kind of information retained about individuals, how that information is used, and the extent to which that information can be disclosed to others. Unfortunately, many companies are failing to safeguard the privacy of their employees. Privacy concerns affect applicants’ test-taking motivation, organizational attraction, and organizational intentions. Employees are likely to provide personal information electronically that they would not provide in person, so organizations should take extra care in handling electronically gathered information. It is important for employers to establish a privacy-protection policy that sets up guidelines on requests for various types of data, informs employees of information-handling policies, and is familiar with state and federal laws regarding privacy.
How does testing and evaluation work?
HR decisions to select, promote, train, or transfer are often major events in individuals’ careers. Often these decisions are made with the aid of tests, interviews, situational exercises, performance appraisals, and other techniques developed by HR experts, often I/O psychologists. They must be concerned with questions of fairness, propriety, and individual rights, as well as other ethical issues. They have obligations to their profession, to job applicants and employees, and to their employers.
- Obligations to one’s profession: psychologists are expected to abide by the standards and principles for ethical practice set forth by the APA.
- Obligations to those who are evaluated: in making career decisions about individuals, issues of accuracy and equality of opportunity are critical. Beyond these, ethical principles include guarding against invasion of privacy, obtaining informed consent before evaluation, imposing time limitations on data, and treating employees with respect and consideration, among others.
- Obligations to employers: ethical issues in this area go beyond the basic design and administration of decision-making procedures. They include:
- Conveying accurate expectations of evaluation procedures.
- Ensuring high-quality information for HR decisions.
- Periodically reviewing the accuracy of decision-making procedures.
- Respecting the employer’s proprietary rights.
- Balancing the vested interests of the employer with government regulations, with commitment to the profession, and with the rights of those evaluated for HR decisions.
Individual differences serving as antecedents of ethical behaviour
We have discussed regulations, policies, and procedures that encourage individuals to behave ethically. But there are individual differences in the ethical behaviour of individuals, even when contextual variables are the same. Although the implementation of ethics programs can certainly mitigate unethical behaviour, the ultimate success of such efforts depends on an interaction between how the system is implemented and individual differences regarding such variables as cognitive ability, moral development, gender, and personal values. One should expect variability in the success rate of corporate ethics programs.
What are the ethical issues in organizational research?
In field settings, researchers encounter social systems comprising people who hold positions in a hierarchy and who also have relationships with consumers, government, unions, and other public institutions. It’s proposed that most ethical concerns in organizational research arise from researchers’ multiple and conflicting roles within the organization where research is being conducted. Researchers have their own expectations and guidelines concerning research, while organizations, managers, and employees may hold different sets of beliefs concerning research. For example, a researcher may view the purpose of a concurrent validation study of an integrity test as a step to justify its use for selecting applicants. Management may perceive it as a way, unbeknown to employees, to weed out current employees who may be stealing. Ethical issues may arise in:
- The research-planning stage.
- Recruiting and selecting research participants.
- Conducting research: protecting research participants’ rights.
- Reporting research results.
Strategies for addressing ethical issues in organization research
Organizations can be viewed as role systems – as sets of relations among people that are maintained partly by the expectations people have for one another. Problems must be resolved through mutual collaboration and appeal to common goals. Ethical dilemmas arise as a result of:
- Role ambiguity: uncertainty about what the occupant of a particular role is supposed to do.
- Role conflict: the simultaneous occurrence of two or more role expectations such that compliance with one makes compliance with the other more difficult.
- Ambiguous or conflicting norms: ambiguous or conflicting standards of behaviour.
Tackling the sources of ethical issues will help minimize them. The achievement of ethical solutions to operating problems is plainly a matter of concern to all parties.
Globalization is a fact of modern organizational life; it refers to commerce without borders, along with the interdependence of business operations in different locations. This chapter emphasizes five main areas:
- Identification of potential for international management.
- Selection for international assignments.
- Cross-cultural training and development.
- Performance management.
- Repatriation.
Though the behavioural implications of globalization can be addressed from various perspectives, we choose to focus only on five of them.
Globalization, culture, and psychological measurement
The demise of communism, the fall of trade barriers, and the rise of networked information have unleashed a revolution in business. Market capitalism guides every major country on earth. Many factors drive change, but none are more important than the rise of Internet technologies. The Internet, as it continues to develop, has changed the ways that people live and work. Examples include:
- Research and development
- Software development
- Telecommunications
- Retail
Globalization and culture
As every advanced economy becomes global, a nation’s most important competitive asset becomes the skills and cumulative learning of its workforce. The one element that is unique about a nation or a company is its workforce. Triandis (1998; 2002) emphasizes that culture provides implicit theories of social behaviour that act like a ‘computer program’, controlling the actions of individuals. He notes that cultures include unstated assumptions about the way the world is. These assumptions influence thinking, emotions, and actions without people noticing that they do. To understand what cultural differences imply, consider the theory of vertical and horizontal individualism and collectivism.
Vertical cultures accept hierarchy as a given, whereas horizontal cultures accept equality as a given. Individualistic cultures emerge in societies that are complex and loose. Collectivism emerges in societies that are simple and tight. Additional culture-specific attributes define different kinds of individualism or collectivism. The following four may be the universal dimensions of these constructs:
- Definition of the self – autonomous and independent from groups (individualist) vs. interdependent with others (collectivist).
- Structure of goals – priority given to personal goals (individualist) vs. priority given to in-group goals (collectivist).
- Emphasis on norms vs. attitudes – attitudes, personal needs, perceived rights, and contracts as determinants of social behaviour (individualist) vs. norms, duties, and obligations as determinants of social behaviour (collectivist).
- Emphasis on relatedness vs. rationality – collectivists emphasize relatedness, whereas individualists emphasize rationality.
Culture determines the uniqueness of a group the same way that personality determines the uniqueness of an individual.
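The 2 x 2 typology above (vertical/horizontal crossed with individualism/collectivism) can be made concrete with a toy data structure. The Python sketch below is purely illustrative; the class and field names are our own shorthand, not part of Triandis’s framework.

```python
from dataclasses import dataclass

@dataclass
class CulturalProfile:
    """Toy representation of the vertical/horizontal x individualism/collectivism
    typology (illustrative only; names are assumptions, not Triandis's terms)."""
    hierarchy_accepted: bool      # True -> 'vertical', False -> 'horizontal'
    ingroup_goal_priority: bool   # True -> collectivist, False -> individualist

    def label(self) -> str:
        vertical = "vertical" if self.hierarchy_accepted else "horizontal"
        orientation = "collectivism" if self.ingroup_goal_priority else "individualism"
        return f"{vertical} {orientation}"

# A culture that accepts hierarchy and prioritizes in-group goals:
print(CulturalProfile(True, True).label())    # vertical collectivism
print(CulturalProfile(False, False).label())  # horizontal individualism
```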
Country-level cultural differences
Geert Hofstede identified five dimensions of cultural variation in values across more than 50 countries and three regions (East Africa, West Africa, and the Arab countries). A small comparison sketch follows the list.
- Power distance: the extent to which members of an organization accept inequality and perceive a large distance between those with power and those with little power.
- Uncertainty avoidance: the extent to which a culture programs its members to feel comfortable or uncomfortable in unstructured situations.
- Individualism: the extent to which people emphasize personal or group goals.
- Masculinity: found in societies that differentiate very strongly by gender. Femininity is characteristic of cultures where sex-role distinctions are minimal.
- Long-term vs. short-term orientation: the extent to which a culture programs its members to accept delayed gratification of their material, social, and emotional needs.
These five dimensions reflect basic problems that any society has to cope with, but for which solutions differ.
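To make the idea of country-level profiles concrete, the sketch below stores hypothetical scores on the five dimensions and compares two countries with a simple Euclidean distance. The scores and the distance measure are illustrative assumptions only; they are not Hofstede’s published values or an endorsed index of cultural distance.

```python
import math

# Hypothetical 0-100 scores on the five dimensions (placeholders, not
# Hofstede's published values): power distance, uncertainty avoidance,
# individualism, masculinity, long-term orientation.
country_scores = {
    "Country A": [70, 85, 25, 60, 80],
    "Country B": [35, 40, 80, 50, 30],
}

def cultural_distance(a: str, b: str) -> float:
    """Euclidean distance between two country profiles (a rough comparison,
    not a validated cultural-distance index)."""
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(country_scores[a], country_scores[b])))

print(round(cultural_distance("Country A", "Country B"), 1))
```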
The globalization of psychological measurement
Psychological measurement and research in applied psychology are increasing in importance worldwide. Topics like computerized adaptive testing, item-response theory, item analysis, generalizability theory, and the multitrait-multimethod matrix are currently being studied in several countries.
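As one concrete example of these measurement topics, the sketch below implements the standard two-parameter logistic (2PL) item-response function; the specific parameter values are illustrative assumptions.

```python
import math

def two_pl_probability(theta: float, a: float, b: float) -> float:
    """Probability of a correct response under the two-parameter logistic
    (2PL) IRT model: theta = person ability, a = item discrimination,
    b = item difficulty (all on the same latent scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) facing an item of average
# difficulty (b = 0) answers correctly half the time.
print(two_pl_probability(theta=0.0, a=1.2, b=0.0))  # 0.5
```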
Transporting psychological measures across cultures
Psychological measures are often developed in one country and then transported to another. The problem is that each culture views life in a unique fashion depending on the norms, values, attitudes, and experiences particular to that specific culture. So, the comparability of any phenomenon can pose a major methodological problem in international research that uses, for example, surveys, questionnaires, or interviews.
Before measures developed in one culture can be used in another, it is important to establish translation, conceptual, and metric equivalence. This will enhance the ability of a study to provide a meaningful understanding of cross-cultural similarities and differences.
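Establishing metric equivalence normally requires formal invariance tests (for example, confirmatory factor analysis), but a first, rough check is to compare the internal-consistency reliability of the original and translated scales. The sketch below computes Cronbach’s alpha on simulated data; the sample sizes, item counts, and simulation are illustrative assumptions, not a substitute for a full equivalence analysis.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses to the same 4-item scale in two samples, standing in
# for an original-language and a translated administration.
rng = np.random.default_rng(42)

def simulate(n: int) -> np.ndarray:
    trait = rng.normal(size=(n, 1))                     # shared latent trait
    return trait + rng.normal(scale=0.7, size=(n, 4))   # four noisy indicators

alpha_source = cronbach_alpha(simulate(200))
alpha_translated = cronbach_alpha(simulate(180))

# A markedly lower alpha in the translated sample would be one warning sign
# that metric equivalence has not been achieved.
print(round(alpha_source, 2), round(alpha_translated, 2))
```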
Terminology
- Expatriate: a foreign-service employee. A generic term applied to anyone working outside their home country with a planned return to that or a third country.
- Home country: the expatriate’s country of residence.
- Host country: the country where the expatriate is working.
- Third-country national: an expatriate who has transferred to an additional country while working abroad. E.g., a German working for a U.S. firm in Spain is a third-country national.
How can we identify potential for international management?
The work of the executive is becoming more international in orientation. An international executive is one who is in a job with some international scope, whether in an expatriate assignment or in a job dealing with international issues more generally. Early identification of individuals with potential for international management is important to a growing number of organizations. Spreitzer et al. (1997) speculated that four broad processes facilitate the development of future international executives:
- Gets organizational attention and investment.
- Takes or makes more opportunities to learn.
- Is receptive to learning opportunities.
- Changes as a result of experience.
These processes can provide a starting point for a theoretical framework that specifies how current executive competencies, coupled with the ability to learn from experience and the right kind of developmental experiences, may facilitate the development of successful international executives.
How can we select candidates for international assignments?
Validities of domestic selection instruments may not generalize to international sites, because different predictor and criterion constructs may be relevant, or, if the constructs are the same, the behavioural indicators may differ. Recent reviews indicate that the selection process for international managers is, with few exceptions, largely intuitive and unsystematic. A major problem is that the selection of people for overseas assignments is often based solely on their technical competence and job knowledge. But technical competence has nothing to do with one’s ability to adapt to a new environment, to deal effectively with foreign co-workers, or to perceive and, if necessary, imitate foreign behavioural norms. Various factors determine success in an international assignment (a simple composite-scoring sketch follows the list), including:
- General mental ability
- Personality
- Other characteristics related to success in international assignments:
- Tenacity-resilience
- Communication
- Adaptability
- Organizational and commercial awareness
- Teamwork
- Self-discipline
- Cross-cultural awareness
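The sketch below shows how such predictors might be combined into a single screening score. The predictor names and weights are hypothetical assumptions for illustration; in practice, weights would come from a validation study rather than judgment.

```python
# Hypothetical weighted composite for screening international-assignment
# candidates. Predictors and weights are illustrative only, not validated.
WEIGHTS = {
    "general_mental_ability": 0.30,
    "conscientiousness": 0.20,
    "adaptability": 0.20,
    "cross_cultural_awareness": 0.15,
    "communication": 0.15,
}

def composite_score(standardized_scores: dict) -> float:
    """Weighted sum of standardized (z-score) predictor ratings."""
    return sum(WEIGHTS[p] * standardized_scores[p] for p in WEIGHTS)

candidate = {
    "general_mental_ability": 1.1,
    "conscientiousness": 0.4,
    "adaptability": 0.8,
    "cross_cultural_awareness": -0.2,
    "communication": 0.5,
}
print(round(composite_score(candidate), 2))
```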
What is cross-cultural training?
To maximize the effectiveness of employees sent to other countries to conduct business, companies often provide cross-cultural training (CCT) prior to departure. CCT refers to formal programs designed to prepare people of one culture to interact effectively in another culture or to interact more effectively with people from different cultures. An effective way to train employees to adapt is to expose them to situations similar to those they will encounter in their assignments, situations that require adaptation. Such a strategy has two benefits:
- It enhances transfer of training.
- It is consistent with the idea that adaptive performance is enhanced by gaining experience in similar situations.
CCT typically includes several components. The first is awareness or orientation; the second is behavioural, providing opportunities for trainees to learn and practise behaviours that are appropriate to the culture in question.
Performance management
Performance management is just as important in the international context as it is in domestic operations. It refers to the evaluation and continuous improvement of individual or team performance and includes goal setting, appraisal, and feedback. The major difference is that implementation is much more difficult in the international arena. Factors that may affect the performance of expatriates include:
- Technical knowledge
- Host-country environment
- Headquarters’ support
- Personal and family adjustment
- Environmental factors
Performance criteria
A thorough review of research proposes the following working model of the dimensions of expatriate job performance:
- Establishment and maintenance of business contacts
- Technical performance
- Productivity
- Ability to work with others
- Communication and persuasion
- Effort and initiative
- Personal discipline
- Interpersonal relations
- Management and supervision
- Overall job performance
This list reflects intangibles that are often difficult to measure using typical performance appraisal methods. It also suggests that performance criteria for expatriates fall into three broad categories (a simple record sketch follows the list):
- Objective criteria: include measures like gross revenues, market share, and return on investment.
- Subjective criteria: include judgments, usually by local executives, of factors like the expatriate’s leadership style and interpersonal skills.
- Contextual criteria: consider factors that result from the situation in which performance occurs.
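A minimal way to keep these three categories distinct in an appraisal record is sketched below; the field names and example entries are assumptions for illustration, not a prescribed appraisal format.

```python
from dataclasses import dataclass, field

@dataclass
class ExpatriateAppraisal:
    """Illustrative record that keeps the three criterion categories separate."""
    objective: dict = field(default_factory=dict)    # e.g., market share, ROI
    subjective: dict = field(default_factory=dict)   # e.g., local managers' ratings
    contextual: dict = field(default_factory=dict)   # e.g., host-country constraints

appraisal = ExpatriateAppraisal(
    objective={"market_share_growth_pct": 4.0},
    subjective={"leadership_rating_1_to_5": 4},
    contextual={"notes": "Operated under new import restrictions in the host country."},
)
print(appraisal.subjective["leadership_rating_1_to_5"])
```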
What is repatriation?
The problems of repatriation, for those who succeed abroad as well as for those who do not, have been well documented. All repatriates experience some degree of anxiety in three areas: personal finances, reacclimation to the home-country lifestyle, and readjustment to the corporate structure. “Reverse culture shock” may be more challenging than the culture shock experienced when going overseas. Possible solutions to these problems fall into three areas:
- Planning: expatriation assignment and repatriation should be examined as parts of an integrated whole, not as unrelated events in an individual’s career.
- Career management: receiving a promotion upon repatriation signals that the organization values international experience and contributes to repatriates’ beliefs that the organization has met their expectations regarding training and career development.
- Compensation: the loss of a monthly premium to which the expatriate has become accustomed is a severe financial shock. Some firms replace the monthly foreign-service premium with a one-time ‘mobility premium’ for each move – overseas, back home, or to another overseas assignment. There is also a need for financial counselling for repatriates; providing it demonstrates that the company is willing to help with the financial problems repatriates may encounter in uprooting their families once again to bring them home.