This paper will argue that job evaluation systems are an attempt to objectify employment positions within an organization through the measurement of size. ‘The word “size” is used to indicate the relative significance or importance of a job in an organization’ (Armstrong & Murlis 1996:95).
This paper discusses formal and informal job evaluation systems as determinants of relative job value and compensable factors. The primary reason for conducting job evaluation is to increase equity in employee remuneration relative to that of other employees in similar and differing contexts.
Job evaluation is defined by Kramar, McGraw and Schuler (1997:420) as, ‘a formal and systematic process for objectively comparing the relative size and worth of jobs within an organization.’ It has also been described as, ‘an administrative procedure used to measure job worth’ (De Cieri and Kramar 2003:597). It is also important to recognize that size is not an absolute term, one which can be easily quantified and objectified. Armstrong and Murlis argue that ‘ultimately, there is no single unit of measurement which will tell us precisely how much a job is worth’ (Armstrong and Murlis 1996:98). Size can relate to the resources being controlled, for example finances, people, plant and equipment. It can also be related to outputs, such as sales, project completion or units processed. Size can also correspond to contribution made through the job to the achievement of organizational objectives. The concept of size will be further explored when examining the various analytical methods utilized in conducting a job evaluation.
It is also important to recognize that the evaluative tool of size sits within a much broader theoretical discussion of pay equity and business competitiveness, encapsulating direct labour costs, staffing and operational productivity, and staff recruitment and retention. Job evaluation recognizes the need for a rational means of identifying the relative value of jobs within and outside an organization. Determining pay structures solely through market comparisons will not provide a sufficiently reliable basis for an equitable pay structure, due to the volatility of the market (Armstrong and Murlis 1996:98).
Two main types of job evaluation
There are two main job evaluation methods: non-analytical, which relies on subjective whole-job ranking and paired comparison; and analytical, which focuses on a process of factoring.
Armstrong and Murlis define non-analytical methods as those in which ‘whole jobs [are] examined and compared, without being analyzed into their constituent parts or elements’, while in analytical methods ‘jobs are analyzed by reference to one or more criteria, factors or elements’ (1996:99).
Non-Analytical Methodologies
Job Ranking, Paired Comparisons and Job Classification are described in the literature as non-analytical job evaluation methodologies.
Kramar, McGraw & Schuler argue that ‘job ranking is the most convenient and effective job evaluation method where there are a limited number of jobs to evaluate and the job analysts are familiar with the jobs in question’ (1997:423). Job ranking compares whole jobs and does not try to assess different aspects of the job separately. The method ranks jobs in a hierarchical manner according to the perception of their relative size. It is Armstrong and Murlis’ (1996:100) opinion that ‘job ranking is the easiest and quickest form of job evaluation.’ This approach is sometimes used to benchmark more sophisticated analytical methods, to ensure they are ranking jobs appropriately.
Several disadvantages of this approach have been highlighted by Armstrong & Murlis (1996:101) including:
• There is no clear rationale which objectifies the rank order.
• Without a clear rationale, equity issues may surface.
• Judgements can become multi-dimensional when a number of jobs have to be ranked.
• Inconsistencies between assessors can arise when selective aspects of a job are weighted differently.
• Ranking does not provide sufficient quantification of the differences between jobs, which tends to make grading an arbitrary process.
• Whilst ranking can assist in recognizing extremes in rank order, it may make it difficult to discriminate between middling jobs.
Stone (1995:307) points out that ‘while ranking might measure relative worth, job ranking does not measure the magnitude of difference between jobs.’
Paired comparison is a method used to refine the job ranking process. ‘The underlying principle of paired comparisons is that direct comparison between two items is likely to be more sensitive and discerning than attempting to compare a number of items to one another’ (Armstrong & Murlis 1996:101). Each job is compared with every other job in turn: if its size or importance is judged greater, it scores two; if the two are judged the same size, each scores one; and if it is judged smaller, it scores zero. The scores for each job are then tallied and jobs are ranked on the basis of their totals. Appendix 1 shows an example of a paired comparison chart. While this process provides a more accurate ranking by confining comparisons to pairs of jobs, it still lacks a rationale for justifying the resulting rank order.
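As an illustration, the 2/1/0 scoring just described can be sketched in a few lines of code; the job names and panel judgements below are hypothetical, not drawn from the literature:

```python
from itertools import combinations

def paired_comparison_scores(jobs, compare):
    """Tally 2/1/0 scores: compare(x, y) returns 'greater', 'equal' or
    'less' for job x relative to job y."""
    scores = {job: 0 for job in jobs}
    for x, y in combinations(jobs, 2):
        verdict = compare(x, y)
        if verdict == "greater":
            scores[x] += 2          # x judged larger than y
        elif verdict == "equal":
            scores[x] += 1          # same size: one point each
            scores[y] += 1
        else:
            scores[y] += 2          # x judged smaller, so y scores two
    return scores

# Hypothetical panel judgements for four illustrative jobs
judgements = {
    ("Clerk", "Analyst"): "less",
    ("Clerk", "Manager"): "less",
    ("Clerk", "Technician"): "equal",
    ("Analyst", "Manager"): "less",
    ("Analyst", "Technician"): "greater",
    ("Manager", "Technician"): "greater",
}

scores = paired_comparison_scores(
    ["Clerk", "Analyst", "Manager", "Technician"],
    lambda x, y: judgements[(x, y)],
)
ranking = sorted(scores, key=scores.get, reverse=True)
```

Summing the pairwise scores in this way reproduces the rank order a paired comparison chart (such as the one in Appendix 1) yields, including ties.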
Job classification is similar to ranking. Whole jobs are compared to a predetermined scale, in this case a grade definition. Kramar, McGraw & Schuler contend that ‘a particular advantage of this method is that it can be applied to a large number and variety of jobs’ (1997:423). However, ‘a major disadvantage of the job classification method is the reliance on a whole-job comparison against a limited number, or overall summary of factors.’ The other key limitation is that the method cannot deal effectively with complex jobs.
The primary analytical method of job evaluation is based upon point factor rating. Jobs are broken down into factors or key elements, and each factor is seen as contributing to job size in a different proportion. Numerical scales are devised and points are allocated to each factor of a job, depending upon the degree to which it is present. These separate factor scores are then tallied to provide a total score for job size.
Armstrong & Murlis (1996:103) point out that the key features of the points factor method are:
• the factor plan;
• the factor rating scales;
• factor weighting.
A factor plan may have anywhere between three and twelve factors, broadly grouped around inputs (knowledge and skills), processes (mental effort, problem solving, complexity, originality, creativity, initiative, judgement, teamwork and dealing with people) and outputs (impact on end results). A multiplicity of factors does not necessarily mean a more effective job evaluation: Armstrong and Murlis (1996:103) argue that this is an illusion, that the more factors there are, the greater the likelihood of overlap and duplication, and that it is seldom necessary to have more than six factors.
Factor rating scales are based on the definition of the levels present in each factor, with points awarded for each level. An example of a factor rating scale can be seen in Appendix 2. The point progression for each level can be either arithmetic (e.g. 20, 40, 60, 80, 100) or geometric, as used, for example, in the Hay Chart-Profile method. Kramar, McGraw & Schuler (1997:424) describe the Hay System as ‘probably the best known point factor method in Australia’, one used extensively for evaluating administrative, professional, supervisory, managerial and executive positions.
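The two kinds of point progression can be sketched as follows; the step size and ratio are illustrative assumptions rather than values taken from any published scheme:

```python
def arithmetic_scale(levels, step=20):
    """Equal point increments per level, e.g. 20, 40, 60, 80, 100."""
    return [step * n for n in range(1, levels + 1)]

def geometric_scale(levels, base=20, ratio=1.5):
    """Each level is a fixed multiple of the level below, so the point
    gaps widen towards the senior end of the scale (the pattern used in
    geometric schemes such as the Hay method)."""
    return [round(base * ratio ** n) for n in range(levels)]
```

An arithmetic scale treats the step from level 1 to level 2 as equal in value to the step from level 5 to level 6; a geometric scale assumes that differences in job size grow proportionally at higher levels.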
A critical decision in the point factor model is whether the individual factors are factor weighted, whereby one factor is assigned a higher value than another.
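A weighted point-factor total is then simply a weighted sum over the factor scores; the factors, levels and weights below are illustrative assumptions only:

```python
# Hypothetical factor weights: knowledge counts for more than contacts here.
WEIGHTS = {"knowledge": 0.4, "problem_solving": 0.35, "contacts": 0.25}
POINTS_PER_LEVEL = 20  # arithmetic scale: level 1 -> 20 points, level 2 -> 40, ...

def job_size(levels, weights=WEIGHTS, step=POINTS_PER_LEVEL):
    """Total weighted points for a job, given its assessed level on each factor."""
    return sum(weights[f] * level * step for f, level in levels.items())

# Two hypothetical jobs rated on the same factor plan
clerk = {"knowledge": 2, "problem_solving": 1, "contacts": 2}
manager = {"knowledge": 5, "problem_solving": 4, "contacts": 5}
```

Changing the weights can reorder jobs without re-rating any of them, which is why the weighting decision is critical: it encodes a value judgement about which factors matter most.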
The various points factoring schemes offer several significant advantages over a job ranking approach, including:
• Because job evaluators must refer to at least three and often six or more factors, the likelihood of overly simplified judgements is dramatically lessened.
• There is greater transparency, and employees often perceive the process as fairer.
• Evaluators are able to refer to several external measurements, making it easier to determine relative size.
• The scoring of scales is easily adapted to a computerized environment.
While considerable value is added through the use of point factor analysis, there are some disadvantages, including:
• Point factoring can be costly, time consuming and complex to administer.
• Armstrong and Murlis argue that ‘they give a somewhat spurious impression of scientific accuracy … Averaging a group of subjective judgements made by a job evaluation panel does not increase their objectivity.’
• An assumption that it is possible to reduce a complex job to a series of factors and that skills can be added together within the framework of several scales of values.
• An assumption that the adopted factor weighting can be applied to all jobs.
• The rating system can be very bureaucratic and rigid and create an unwanted organizational hierarchy.
• The conventional application of point factoring ignores the fact that the workforce is, in essence, far more flexible, project-based and contract-driven.
Other Factor Methodologies
In an attempt to address the limitations of the point factor model, several other analytical methodologies have been developed: Graduated Factor Comparison, Factor Comparison and Single Factor, which can be based on skill, competency, decision bands or time span of discretion.
Graduated Factor Comparison compares job factors against a scale of factors graduated by descriptive levels, such as low, medium and high. No numerical score is assigned and the factors are not weighted. Armstrong and Murlis argue that ‘this analytical method is particularly useful in sorting out job relativities, especially in equal value cases’ (1996:106).
Factor Comparison compares jobs directly with other jobs against a number of factors instead of using a scale. It was developed in the US to overcome the problems of choice and weighting of factors associated with points factor schemes. Armstrong and Murlis point out that although it has many advantages, in particular that it does not use an abstract scale, it is complex to develop and administer (1996:107). The method has nevertheless had an important influence on the Hay Guide Chart Profile method.
Single factor methods of job evaluation are based on one key factor for the measurement of relative job size. According to Armstrong and Murlis, ‘the assumption is made that the process demands made on the job holder to deliver the expected outputs can be measured by the level of inputs required’ (1996:108). The shortcoming of the skill-based approach is that inputs are rewarded with possibly inadequate attention to the delivered results.
Competency-based evaluation measures the size of jobs according to the competency level required for successful performance. As with the skill-based approach, it risks paying inadequate attention to results.
Application to Human Resource Management
As this paper has argued, there are clear deficiencies in both non-analytical and analytical job evaluation systems. Nevertheless, there remains a Human Resource Management need to determine fair and equitable pay structures which adequately account for increasingly flexible workplace patterns and rapidly changing, complex job factors.
Clearly, the job ranking and classification approaches are effective in smaller organizations where there is a clear delineation between workplace roles. Armstrong and Murlis conclude that ‘formal job evaluations do indeed work well in stable, hierarchical organizations. But it has to be recognized that job evaluation methodologies which emphasise place in hierarchy, numbers of people supervised or resources directly controlled, without taking into account technical expertise or complex decision making, have little to contribute’ (1996:110).
Organizations and the Human Resource Management team also need to consider carefully which job evaluation approach to select before embarking on the evaluative process.
Firstly, the organization needs to consider carefully whether to implement an existing commercial package, such as:
• Hay Guide Chart Profile system – measures know-how, problem solving and accountability, with each factor scored on a two-dimensional matrix.
• Cullen Egan Dell system – measures cognition, education and decision making. Like the Hay system, each of these factors is broken down into a further eight sub-factors.
• Wyatt System – there are two systems FACTORCOMP™ and MULTICOMP™.
• Weighted Job Questionnaire (WJQ) – this measures five factors (skill and knowledge, contacts, working conditions, problem solving and scope of responsibility) through a multiple choice job analysis questionnaire.
Or they may design their own system in accordance with one of the non-analytical or analytical methodologies. Building a job evaluation system from the ground up can increase the probability of measuring the nominated factors, but it can be expensive and time consuming. A commercial system, by contrast, is proven and has a large sample size to draw on for analysis.
In determining how to proceed with a job evaluation scheme, Stone (1995:312) maps out a clear set of questions. These serve as a helpful starting place for a human resource management team and should be considered before undertaking any extensive job evaluation project:
• ‘What are the organisation’s objectives in introducing a job evaluation scheme? Will the expected benefits outweigh the time and costs involved?
• What is the size of the organization? As a general rule the smaller the organization the easier it will be to implement a simple ranking system.
• Are the personnel and expertise available to develop an internal plan? How much can the organization afford to spend on introducing and maintaining a plan?
• What do similar organizations in the same industry do?
• Is the selected job evaluation plan in harmony with the organization’s culture?’
Appendix 3 provides a useful chart for choosing the most appropriate job evaluation method.
Finally, Stone states, ‘no matter how good a job evaluation system is, it will fail if not understood and accepted by employees as being fair and equitable’ (1995:312).
This paper has analysed the various strengths and weaknesses of job evaluation methodologies, whether they be non-analytical or analytical. It has been argued that job evaluation methods need to be viewed as guides for assisting organizations move toward greater pay equity and role clarity. It is important that they are not viewed as final and definitive in themselves but as helpful tools which support the overall job evaluation process in the workplace.
Armstrong, M. & Murlis, H. (1996) Reward Management: A Handbook of Remuneration Strategy and Practice, Kogan Page Ltd, London.
Clark, R. (1996) Human Resources Management: Framework and Practice, McGraw-Hill, Sydney.
De Cieri, H. & Kramar, R. (2003) Human Resource Management in Australia: Strategy, People, Performance, McGraw-Hill, Sydney.
Ferris, G & Buckley, M (1996) Human Resources Management: Perspectives, Context, Functions and Outcomes, Prentice-Hall, New Jersey.
Kramar, R., McGraw, P. & Schuler, R. (1997) Human Resource Management in Australia, Addison Wesley Longman Australia, Sydney.
Mathis, R. & Jackson, J. (1994) Human Resource Management, West Publishing Corporation, Michigan.
Stone, R. (1991) Readings in Human Resource Management Volume 1, John Wiley and Sons, Brisbane.
Stone, R. (1995) Human Resource Management, John Wiley & Sons, Michigan.
Survey of Job Evaluation Practices, American Compensation Association, August 1989, 1-12.
Appendix 1 – Example of paired comparison
Job   A   B   C   D   E   Total Score   Rank Order
A     –   0   2   0   2   4             2
B     2   –   2   2   2   8             1
C     0   0   –   2   0   2             5
D     2   0   0   –   1   3             3
E     0   0   2   1   –   3             3
Armstrong and Murlis (1996)
Appendix 2 – Factor Rating example
Factor 6: Contacts
This factor considers the requirement in the job for contacts inside and outside the company. Contacts may involve giving and receiving information, influencing others, or negotiation. The nature and frequency of contacts should be considered, as well as their effect on the company.
Level 1: little or no contact except with immediate colleagues and supervisors.
Level 2: contacts are mainly internal and involve dealing with factual queries or exchange of information. (20 points)
Level 3: contacts may be internal or external and typically require tact or discretion to gain cooperation. (30 points)
Level 4: frequent internal/external contacts of a sensitive nature, requiring persuasive ability to resolve non-routine issues. (40 points)
Level 5: frequent internal/external contacts at senior level or on highly sensitive issues, requiring advanced negotiation/persuasive skills. (50 points)
Level 6: constant involvement with internal/external contacts at the highest level or involving negotiation/persuasion on difficult and critical issues. (60 points)
Armstrong and Murlis (1996)
Appendix 3 – Choice of Evaluation Method

Job ranking
Characteristics: whole job comparisons are made to place jobs in order of importance.
Advantages: easy to apply and understand.
Disadvantages: no defined standards of judgement; differences between jobs are not measured.

Paired comparisons
Characteristics: panel members individually compare each job in turn with all the others being evaluated. Points are awarded according to whether the job is more, less or equally demanding than each of the jobs with which it is being compared. These points are added to determine the rank order, usually with the help of a computer. The scores are analysed and discussed in order to achieve consensus among the members of the panel.
Advantages: ranking is likely to be more valid, on the principle that it is always easier to compare a job with one other job than with a whole range of disparate jobs.
Disadvantages: as with ranking, the system neither explains why one job is more important than another nor assesses the differences between them.

Job classification
Characteristics: job grades are defined and jobs are slotted into the grades by comparing the whole job description with the grade definition.
Advantages: simple to operate, and standards of judgement are provided in the shape of the grade definitions.
Disadvantages: difficult to fit complex jobs into a grade without using elaborate grade definitions.

Points factor rating and factor comparison
Characteristics: separate factors are scored to produce an overall points score for the job.
Advantages: the analytical process of considering separately defined factors reduces subjectivity and helps assess differences in job size. Consistency in judgement is helped by having defined factor levels. In accord with equal value law.
Disadvantages: complex to install and maintain. Objectivity is more apparent than real: subjective judgement is still required to rate jobs against different factors and level definitions.
Armstrong and Murlis (1996)