How "correct" respondent answers can distort survey results beyond recognition

When conducting a study, great attention is paid to data collection, so once respondents' answers have been collected, they are accepted a priori as correct, and a report based on them as objective. However, a closer look at individual answers often reveals that respondents clearly misunderstood the survey wording or the instructions for the questions.

1. Misunderstanding of professional terms or certain words. When writing a survey, consider which groups of respondents it is intended for: the age and status of the participants, whether they live in large cities or remote villages, and so on. Use specialized terms and slang with caution: not all respondents may understand them, or not all may understand them the same way. Such a misunderstanding rarely makes the respondent abandon the survey (which would, of course, be undesirable); instead, they answer at random (which is even worse, because it distorts the data).

2. Misunderstanding of the question. Many researchers are convinced that every respondent has an unambiguous, clearly formed opinion on every issue. This is wrong. Sometimes it is difficult for survey participants to answer a question because they have never thought about the subject at all, or never from this particular perspective. Faced with this difficulty, a respondent may drop the survey or give a completely uninformative answer. Help survey participants respond by formulating the question more clearly and offering a variety of response options.

[Image source: news.sportbox.ru]

3. Failure to understand the survey instructions or individual questions. Like the rest of the questionnaire, the wording of the instructions should be adapted to all groups of intended respondents. Try to avoid having many questions that require marking a specific number of answers ("Mark the three most important ..."), or at least require the same number of answers in all such questions. It is also worth cutting back on complex question types (matrices, ranking, etc.) and replacing them with simpler ones. If you expect respondents to answer the questionnaire from a mobile phone, simplify the survey structure even further.

4. Misunderstanding the rating scale. When using a rating scale in a questionnaire, explain its meaning to respondents, even if it seems obvious to you. For example, the usual scale from 1 to 5 is normally understood by analogy with the school grading system, but some respondents mark "1" meaning first place. In verbal scales, it is better to avoid subjective criteria. For example, the scale "never - rarely - sometimes - often" is very subjective; instead, offer concrete values ("once a month", etc.).
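To see how badly a reversed reading of the scale can skew results, here is a minimal sketch with hypothetical data: everyone is actually satisfied, but a couple of respondents treat "1" as first place (the top rating) rather than the lowest score.

```python
# Illustrative sketch (hypothetical data): distortion of a 1-to-5 scale
# when some respondents read "1" as first place, i.e. the best rating.

true_scores = [5, 5, 4, 4, 5, 4, 5, 5]  # everyone is actually satisfied

def invert(score):
    """Mirror a 1-5 score (5 -> 1, 4 -> 2, ...), as a confused respondent would."""
    return 6 - score

# The last two respondents invert the scale when recording their answers
recorded = true_scores[:6] + [invert(s) for s in true_scores[6:]]

print(sum(true_scores) / len(true_scores))  # 4.625 -- the real picture
print(sum(recorded) / len(recorded))        # 3.625 -- the distorted mean
```

Two confused respondents out of eight are enough to drag a uniformly positive result down a full point, which is why explaining the scale is worth the extra sentence.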

5. Generalized-positive and averaged ratings. Respondents' tendency toward generally positive ratings often gets in the way, for example, in surveys of software users and similar studies. If a user is satisfied with your program on the whole, it is hard for them to break it into parts and separately evaluate the personal account, a new feature, and so on. Most likely, they will give a high score everywhere. Yes, the survey report will look very positive, but the results will not allow a realistic assessment of the situation.
Averaged ratings often interfere, for example, in 360-degree personnel assessments. Employees tend to give the same middling score across all competencies: if their attitude toward a colleague is positive, you will see inflated scores throughout the questionnaire; if there is tension with a colleague, even that colleague's obviously strong leadership qualities will be underrated.

In both cases, it makes sense to carefully design the answer options, replacing the usual scales with detailed verbal answers tailored to each individual question.
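As a rough illustration (with made-up data), this kind of straight-line responding can also be flagged programmatically: a respondent whose scores barely vary across the whole questionnaire probably did not evaluate each competency separately. A low per-respondent variance is one simple heuristic:

```python
# Illustrative sketch (hypothetical data): flagging "straight-liners" --
# respondents who give nearly the same score to every question,
# e.g. in a 360-degree assessment.
from statistics import pvariance

responses = {
    "resp_1": [5, 5, 5, 5, 5, 5],  # uniformly positive -- suspicious
    "resp_2": [4, 2, 5, 3, 4, 2],  # differentiated answers
    "resp_3": [3, 3, 3, 3, 3, 4],  # almost flat "average" ratings
}

THRESHOLD = 0.5  # variance below this suggests straight-lining

flagged = [name for name, scores in responses.items()
           if pvariance(scores) < THRESHOLD]
print(flagged)  # ['resp_1', 'resp_3']
```

The threshold here is arbitrary; in practice it would be tuned to the scale length and the number of questions, and flagged questionnaires would be reviewed rather than discarded automatically.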

6. Manipulation of opinions. This point differs from the previous ones in that here the researchers consciously push respondents toward answers that benefit a "successful" report. Common manipulation techniques are the illusion of choice and an emphasis on positive characteristics. Managers reviewing positive survey results typically do not question how the data were interpreted. It is worth taking an objective look at the questionnaire itself: what is its logic, does it push a particular line, are positive and negative answer options evenly balanced? Another common technique for "stretching" the data is substitution of concepts. For example, if the majority of employees rated a new incentive program as "satisfactory", the report might claim that "the majority of the company's employees are satisfied with the new incentive program."
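A small sketch of this substitution with hypothetical numbers: the honest report shows the full distribution, while the misleading one lumps the lukewarm "satisfactory" answers in with the genuinely positive ones.

```python
# Illustrative sketch (hypothetical data): "substitution of concepts"
# in reporting. Most employees chose the lukewarm "satisfactory" option,
# yet a careless summary rebrands them all as "satisfied".
from collections import Counter

answers = (["excellent"] * 5 + ["good"] * 10 +
           ["satisfactory"] * 55 + ["poor"] * 30)

counts = Counter(answers)
total = len(answers)

# Honest report: show the full distribution
for option, n in counts.most_common():
    print(f"{option}: {100 * n / total:.0f}%")

# Misleading report: count "satisfactory" as a positive answer
positive = counts["excellent"] + counts["good"] + counts["satisfactory"]
print(f"'satisfied': {100 * positive / total:.0f}%")  # 70% -- sounds great
```

Only 15% of these hypothetical respondents actually chose a positive option, yet the second summary reports 70% "satisfied" without technically inventing a single number.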

Source: habr.com
