The Fallacies of Using Agree/Disagree Scales in Employee Surveys
Written by Peter Hutton   
08 Dec 2008


Peter Hutton
Author of the recently published book: ‘What Are Your Staff Trying to Tell You?  Revealing Best and Worst Practice in Employee Surveys’

A strange thing has happened in the world of employee surveys. 

Whereas market researchers use a wide range of question types to measure consumer preferences and attitudes, most leading employee researchers have defaulted to using just one type of question technique to the virtual exclusion of all others.  I am talking of the agree/disagree Likert Scale.

By my reckoning, it started in the 1980s.  A growing obsession in business with measurement (McKinsey’s maxim ‘What gets measured gets managed’), the rise of the service sector and a growing recognition of the value of employees and staff loyalty in delivering the brand promise led to an increasing demand for staff surveys.  The growth was fed, not so much by established market research firms, but by specialist consultants or specialist arms of management consultants who saw attitude surveys, or ‘cultural audits’, as a useful complement to their other services.  For reasons that are far from clear, these firms came to believe that the best, perhaps only, way to design an employee survey was to compile a list of statements and ask employees how strongly they agreed or disagreed with each one.

Over the years, many built up databases of normative questions (statements) that differentiated them from the competition and allowed them to charge a premium for asking their standardised proprietary sets of statements.

To me, this is the antithesis of what good employee research should be.  There is no rationale for adopting one question technique at the expense of all others.  Employee survey questions generally come in three types: scale questions, list questions and open-ended questions.  However, each of these can come in scores of versions.  Scales – e.g. five or seven- point - might measure positive vs. negative sentiments such as agreement/disagreement, satisfaction/dissatisfaction, good/poor, or acceptability/unacceptability - or degrees of positivity – e.g. awareness, understanding, usage, usefulness and so on.  Lists can enable selections to be made according to defined criteria – most important factors in a job, source of information used, characteristics of the organisation and so on.  Open-ended questions enable staff to say in their own words, for example, what they like and dislike about their job, how their managers should improve, why they express any of a number of views measured throughout the questionnaire.  The range of questions that can be asked provides a great deal of flexibility to the researcher to select the best question types to address the issues of the business.

So why has most of the industry defaulted to using just one question type – the agree/disagree Likert Scale?  It was a mystery to me, and one that inspired me to write a book on the subject:  ‘What are Your Staff Trying to Tell You?  - Revealing Best and Worst Practice in Employee Surveys’.

Defenders of the practice will say that the agree/disagree scale provides enormous flexibility; you can ask about almost anything using just the one format.  All you need to do is think up statements that sound like the kinds of things employees might say, then ask them how strongly they agree or disagree with each one.

They are right.  They do have enormous flexibility, but they overlook the fact that the question format is itself highly restrictive and brings with it a host of disadvantages.  Among its limitations are that:

• It is very poor at enabling staff to prioritise issues.  Just because more people agree with one statement than another does not mean that they consider it more important.  Asking staff which of a list of (essentially attitude) statements they consider most important is a nonsense question.  A list-format question is far better suited to this, as is some kind of importance scale for suitably worded factors.

• The agree/disagree scale was essentially designed for measuring attitudes and opinions.  It is very poor at measuring behaviours and motivations, both of which are critical to understanding how organisations work. 

• Attitude measures are rarely particularly actionable, especially where staff disagree with them.  They may tell you that people hold a negative view, e.g. about management or communications, but they are poor at telling you what the specific issues are that you need to address.

Many consultancies have defaulted to using their own standard list of statements that they insist on asking in each organisation they research.  This has enormous benefits, they claim, since you are able to compare yourself with the normative measures found across a large number of other organisations.  The problem, however, is that because the statements are meant to work in any organisation, they are often too general to mean very much in any particular one.  The statements ‘Communications are good in this company’ and ‘My information needs are well met’, for example, are too bland to tell you anything specific.  They are particularly problematic when people disagree with them, since you have no idea what they are referring to.

Other standardised statements suffer from the fact that different businesses use different terms and simply work differently, culturally.  For example, the statement ‘I believe strongly in the goals and objectives of (Employer)’ will not work well where the organisation uses different terminology such as ‘aims’ and ‘values’, or where different divisions or departments have their own goals and objectives that differ from the corporate ones.  The statement ‘I am inspired by the person leading this organisation’ might be clear in an organisation like Virgin, where Richard Branson would be widely recognised as ‘the person leading Virgin’, but it would not play out well in cultures where the emphasis is on devolving power and where there are organisations within organisations.  Such norms quickly become meaningless if the statements mean different things in different organisations.

Companies that adopt a questionnaire consisting of a standardised list of agree/disagree statements sacrifice relevance for conformity.  Rather than using a survey to help them achieve their unique goals, they effectively defer to a model that says that success is to be defined in terms of achieving a particular attitudinal profile.

There is little sound rationale behind this, in my view.  For one thing, attitudes are only one aspect of what defines an organisation.  For another, different organisations will, and should, have different priorities and one standardised profile need not fit an airline as well as a retailer or a manufacturer of dog food.  They are quite different businesses with quite different needs.

The reason some consultants insist on using the five- or seven-point agree/disagree scale for all, or most, of their questions is that it enables the answers to be converted into a uniform numerical scale that they can use for multivariate analysis.  The so-called ‘key driver analysis’ (multiple regression) is designed to identify which variables are ‘driving’ a defined dependent variable (e.g. job satisfaction).  I have a real problem with an analysis that says that one attitude is driven by a number of other attitudes.  I also have a problem with a model that only allows the inclusion of questions that can be asked in an agree/disagree format, because I know that many of the most important variables cannot be defined in this way.
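For readers unfamiliar with the mechanics, the ‘key driver analysis’ described above amounts to regressing a satisfaction score on the other scale items.  The sketch below is a minimal illustration on simulated data — the item names and coefficients are entirely hypothetical, not drawn from any real survey — showing how the item with the largest regression weight ends up labelled the top ‘driver’:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# Hypothetical 5-point agree/disagree answers to three statements
pay = rng.integers(1, 6, n)
manager = rng.integers(1, 6, n)
comms = rng.integers(1, 6, n)

# Simulated job satisfaction, constructed to depend mostly on 'manager'
satisfaction = np.clip(
    np.round(0.2 * pay + 0.6 * manager + 0.1 * comms + rng.normal(0, 0.8, n)),
    1, 5)

# 'Key driver analysis': ordinary least squares of satisfaction on the items
X = np.column_stack([np.ones(n), pay, manager, comms])
coefs, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
drivers = dict(zip(["intercept", "pay", "manager", "comms"], coefs))
print(drivers)  # 'manager' receives the largest weight and is called the top 'driver'
```

Note what the model cannot do: it only ranks the attitude statements fed into it, so anything not expressible in agree/disagree form — behaviours, motivations, priorities — is invisible to the analysis by construction.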

Gallup has built a whole employee research business around the idea that all you really need to ask is 12 standardised agree/disagree statements and a five-point job satisfaction question.  It claims that its ‘Q12’ questions ‘capture the most information and the most important information’ about the strength of your workplace.  Its claim is based on having whittled down a long list of questions from surveys conducted by Gallup over 25 years, then conducting correlation analysis between these ‘Q12’ questions and business performance measures (such as staff turnover, productivity and profitability) across the different outlets of a relatively small number of, mainly, retail and financial services companies.  What the company does not make clear in its promotional literature is that the correlations of the ‘Q12’ attitude statements with the performance variables were at best very small, at worst non-existent, and that arguably many more important variables were excluded from the analysis for various reasons, not least because they did not conform to the agree/disagree format.  In any case, even where the correlations were positive there is no evidence that the attitudes drive the performance; indeed, it is more likely the other way round.
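To see why ‘very small’ correlations matter, consider a simulation of the kind of outlet-level correlation analysis described above.  All numbers here are invented for illustration — this is not Gallup’s data or method, just a sketch of how a weak underlying link between attitudes and productivity produces a correlation that explains almost none of the variance:

```python
import numpy as np

rng = np.random.default_rng(1)
outlets = 60  # hypothetical retail outlets

# Simulated per-outlet mean agreement with an attitude statement (1-5 scale)
attitude = rng.normal(3.5, 0.4, outlets)

# Simulated productivity, only weakly linked to attitude (assumed slope of 2.0,
# swamped by outlet-to-outlet noise)
productivity = 100 + 2.0 * attitude + rng.normal(0, 15, outlets)

r = np.corrcoef(attitude, productivity)[0, 1]
r_squared = r ** 2
print(round(r, 2), round(r_squared, 3))
```

With a correlation this weak, r-squared is a few percent at most: attitude ‘explains’ almost none of the variation in performance.  And even a larger correlation would say nothing about direction — better-performing outlets may simply produce more content staff.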

The company ‘Best Companies’ has also built a business based on asking a standard set of agree/disagree statements across many companies.  Indeed, scores based on these statements are used to award accreditations to companies that adopt its methodology and to compile the annual ‘Sunday Times 100 Best Companies to Work For’ listing.  However, its methodology also raises serious questions about its claim to have ‘the most accurate and valid survey instrument in the UK for measuring employee attitudes to their work and their organisation’.  Despite its assertions, there is no validation that the statements included in its analysis are necessarily important to employees, nor any justification for the relative weightings given to each statement in the overall accreditation and Sunday Times listing scores.  Like Gallup, its failure to include any questions that do not conform to the agree/disagree format severely undermines the actionability of the survey results.

The way in which employee surveys are conducted needs a fundamental rethink.  In my view, surveys consisting almost exclusively of agree/disagree statements severely short-change clients.  They ask too many of the wrong questions, provide results that are frequently inactionable, and supply normative measures that focus on the wrong issues.  The notion of ‘employee engagement’ has been widely hijacked to lend spurious credibility to a poor set of measures, and multivariate statistical analysis has been misapplied to imply a scientific objectivity that is wholly absent.

Peter Hutton is founder and managing director of BrandEnergy Research Limited and author of ‘What Are Your Staff Trying to Tell You?  Revealing Best and Worst Practice in Employee Surveys’ published by
