Know Your Audience: Chapter 17 - Using Research Well
Written by John Goslino - Audience Dialogue
11 Nov 2011

Even though managers often bemoan the lack of research, a lot of research that's done is never acted on. For researchers, this is often frustrating, and for managers, irrelevant research is a money-wasting irritant.

The key principle in using research data is to plan the action at the same time you plan the survey. If the wrong population is sampled, or the questions are not fully relevant, the research will not be properly usable.

Here are five principles to follow, if you want to use research well:

1. Avoid intermediaries.

2. Haste is the enemy of quality.

3. "I knew that all along."

4. Make contingency plans.

5. Hold a planning meeting after the survey.

Difference between evaluative and creative research
One obstacle to research being used is that people feel threatened. If somebody has managed a department for years, and thinks he’s done it well, he’s not going to be receptive when the audience (via research) tells him there are better ways. And when people think the quality of their work is being judged by research, they can become very defensive. An obvious defence is to attack the research. This is often a problem with evaluative research.

But when a new project is being planned, people are often keen to use research, to find out more about the audience. In this case, the danger is the opposite one — that vague research findings will be believed too much. When a new concept is presented — perhaps a new type of TV program — viewers can’t really grasp the idea without seeing the program a few times. They tend to be polite to the researchers, to say that this proposal sounds like a good idea, and that they’d definitely watch the program. (Maybe they don’t add that they might watch it for only a few minutes.) The result is that new programs often gain smaller audiences than the research suggests. Experienced researchers are more skeptical in such situations, not interpreting weakly expressed interest as a definite intention to view.

(1) Avoid intermediaries
One sure way to produce unusable research is for the end-users of research and the researchers not to communicate fully. Here's an example of how not to do it:

A middle manager wants some research done, and sends a written request to his or her superior. The superior makes a few changes and sends it on to, say, a purchasing manager, who rephrases it again to comply with corporate policy. The purchasing manager then contacts a researcher. If that researcher is not then permitted to deal directly with the originator of the request, the original idea will by now be so distorted that any research on it will be useless!

Why is that? Because (a) the sample you really need is probably one you’ll never quite be able to reach, and (b) the art of question wording is a very subtle one. It usually takes three or four meetings between researcher and client before a questionnaire and sample design are adequate.

(2) Haste is the enemy of quality
The person who will use the results and the person who will manage the research need to spend an adequate amount of time discussing the research plan. Usually this will require at least two meetings and several hours of discussion. Time spent planning the research is never wasted.

Often an organization will spend months vaguely thinking that it needs some audience research done, and at the last moment will decide that it needs the results as soon as possible. A false sense of urgency is built up. With the resultant rush, mistakes are made. As soon as the first results are released - or even in the middle of a survey - people will begin saying "If only we'd thought to ask them such-and-such..."

I've often experienced this frantic haste, especially when advertising agencies are involved. There’s an old truism about research: "It can be cheap, it can be fast, and it can be high quality. Pick any two." So hasty research will either be of low quality, or very expensive. Take your pick.

(3) "I knew that all along."
A common criticism of survey data is that you spent a lot of money to find out what you already knew. Here’s an example.

Some years ago, I organized a survey on the effectiveness of educational radio programs. The manager of educational programs didn’t really want a survey, but the network manager insisted. So I met with the educational manager, and we worked out what he needed to know. He wasn’t so much interested in audience size, which he assumed would be small. He was more interested in what kinds of people had listened to each program. I commissioned a big research company to do a survey and the results came back in the form of hundreds of pages of computer-printed tables. From these, I wrote an intelligible report, and passed it to the educational program manager.

He flicked through it, stopping at a page about (I think) a program about car maintenance. It showed that the main listeners to this program were women, older people, and those with above-average education.

"That’s obvious," said the manager. "The young blue-collar men would know it already, and these are the people who need to catch up. Why did I need a survey to tell me that?"

But there were other things about the data that seemed strange, so I went back to the computer tables and took a closer look. I rang the research company, who sheepishly confirmed what I’d begun to suspect: that the table headings were transposed, and the real listeners to the program were the opposite of what I’d put in my report. I told the education program manager there was a problem, rewrote the report and took it back to him.

This time it showed that the listeners to the car maintenance program were chiefly young men with below-average education. "That’s obvious," said the manager. "Everybody knows that. I could have told you that all along."

This was an extreme example, because he was notoriously bloody-minded, and had never wanted a survey in the first place. His concentration on the demographic breakdown of the audience (rather than its size) was, I found later, a way of sidestepping his suspicion that the audience was almost nonexistent.

There’s a very educational way to overcome the "I knew it all along" attitude. When the questionnaire is complete, and the survey is ready to go, give all end-users a copy of the questionnaire. Ask them to estimate the percentage who will give each answer to each question, and write these figures on the questionnaire, along with their names. Collect the questionnaires, and summarize everybody’s guesses. When the survey has been finished, compare the actual results with the guesses. Then it will become obvious that:

- They didn’t know it all along. Even experienced researchers are doing well if they get as many as half the results within 20% of the actual figure;
- The act of guessing (OK, estimating) the answers will make the users more aware of the audience, and more interested in the results. I don’t know why this is so, but it always seems to work out that way.
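As a concrete illustration of scoring these guesses, here is a minimal Python sketch. It compares each person's estimates with the survey results, counts a guess as a "hit" if it falls within 20% of the actual figure, and picks the closest guesser overall. All names, question codes, and percentages are invented for illustration.

```python
# A minimal sketch of scoring the "guess the results" exercise.
# All names and figures below are hypothetical, not real survey data.

# Each end-user's pre-survey estimates: the percentage of respondents
# they expect to give each answer, keyed by (question, answer option).
guesses = {
    "Alice": {("Q1", "Yes"): 60, ("Q1", "No"): 40, ("Q2", "Daily"): 25},
    "Bob":   {("Q1", "Yes"): 45, ("Q1", "No"): 55, ("Q2", "Daily"): 10},
}

# Actual percentages once the survey results are in.
actual = {("Q1", "Yes"): 52, ("Q1", "No"): 48, ("Q2", "Daily"): 12}

TOLERANCE = 0.20  # "within 20% of the actual figure"

for name, estimates in guesses.items():
    # Count guesses falling within 20% of the true figure.
    hits = sum(
        1
        for item, guess in estimates.items()
        if abs(guess - actual[item]) <= TOLERANCE * actual[item]
    )
    mean_error = sum(
        abs(guess - actual[item]) for item, guess in estimates.items()
    ) / len(estimates)
    print(f"{name}: {hits}/{len(estimates)} guesses within 20%; "
          f"mean error {mean_error:.1f} points")

# The closest guesser overall (lowest mean absolute error) might get
# the small reward suggested under principle 5 below.
closest = min(
    guesses,
    key=lambda n: sum(abs(g - actual[i]) for i, g in guesses[n].items()),
)
print("Closest overall:", closest)
```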

(4) Make contingency plans
This involves deciding before the survey begins what will be done with the results. Is any action foreshadowed? Or is the purpose of the survey simply to increase broadcasters’ understanding of their audience? Or what? In practice, each question usually has a different purpose.

Here’s a useful exercise, which is best done while the questionnaire is being written. For each question, write down...

(a) the reason for its being asked, and
(b) how the results could be acted on.

A contingency plan is worth writing down because several months often pass between the questionnaire being written and the survey results becoming available, and it’s easy to forget why a question was asked.

Here’s an example of a contingency plan.

Question:
Why did you not renew your subscription to Radio Rhubarb?
(Please tick all boxes that apply.)
[1] Am no longer able to listen to it (e.g. moved to another town)
[2] Price increase was too large
[3] Don’t listen much to Radio Rhubarb these days
[4] Didn’t know subscription was due
[5] Haven’t got around to renewing, but may do so some day
[6] Subscribed mainly to get the program guide, now discontinued
[7] Other reason: . . . . . . . . . . . . . . . . . . . . . .  

Reason for asking this question:
Find out how to get more subscribers to renew.

Contingency plan:
If answer = 1 or questionnaire returned blank: delete from database
If 2 or 6: Send Letter A, pointing out increased benefits
If 3: Send Letter B, pointing out new programs
If 4 or 5: Send reminder letter C
If 7: determine which of the above is most appropriate.

That example was for a census (all subscribers) rather than a survey, and the question was very specific. Normally it would not be possible to have an individual reaction for each member of the population.
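Because the plan maps answer codes directly to actions, it can be written down as a short decision function, which is handy when the subscriber list is processed automatically. Here is a minimal Python sketch; the precedence among multiple ticked boxes is my own assumption (the plan above doesn’t specify one), and the function name and input format are hypothetical.

```python
# A minimal sketch of automating the contingency plan above.
# The precedence order when several boxes are ticked is an assumption.

def plan_action(ticked_boxes):
    """Return the follow-up action for one lapsed subscriber,
    given the set of box numbers ticked (empty set = returned blank)."""
    if not ticked_boxes or 1 in ticked_boxes:
        return "delete from database"
    if ticked_boxes & {2, 6}:          # price rise / program guide gone
        return "send Letter A, pointing out increased benefits"
    if 3 in ticked_boxes:              # doesn't listen much any more
        return "send Letter B, pointing out new programs"
    if ticked_boxes & {4, 5}:          # forgot, or hasn't got around to it
        return "send reminder Letter C"
    # Box 7 ("other reason") needs a person to read the written-in answer
    # and pick whichever of the above actions fits best.
    return "review manually"

# Example: a subscriber who ticked boxes 4 and 7.
print(plan_action({4, 7}))  # -> send reminder Letter C
```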

A contingency plan need not be followed exactly after the results arrive. In the month or two that may pass, many things can change — or you may realize that your plan didn’t take enough into account. Even so, producing a plan like this helps to clarify your thinking. And if a group of managers make a contingency plan together, it helps them all agree on what they are really trying to achieve, and on the real purpose of the survey.

Not all questions call for a specific action. To gain an understanding of the audience is also important - and you never know when previously-collected information might suddenly become relevant. But you can ask 1,000 questions and never ask the exact one for which you’ll need an answer next month. I suggest that questions which don’t lead to any action should be given a low priority in a questionnaire.

(5) Hold a planning meeting after the survey
When the survey results are out, the researchers need to do more than simply send out a report. The most effective follow-up occurs when the initial results are presented to a group of end-users, who can then ask questions and make it clear which questions need more detailed analysis. At this presentation, everybody’s initial estimates of the answers (see above) can be brought out, and a small reward perhaps offered for the closest guess.

If the report is written after this initial presentation, it will contain more relevant data, and less irrelevant material. When the report has been finished and sent out, it’s a good idea to hold a second presentation, this time focusing on how the results can be used, what has been learned, and what further research or information may be needed to make better decisions.

Updated 14 May 2003

Article contributed by Audience Dialogue, 19 October 2011
http://www.audiencedialogue.net/aboutus.html
