Questions - Heart of All Polls - continued
Intensity of feeling
Some closed-end questions attempt to gauge the intensity of respondents' feelings about the issues being investigated. Respondents are asked whether they are strongly in favor, somewhat in favor, neutral, somewhat opposed, or strongly opposed to a particular position or proposal. This five-point scale, developed by Rensis Likert to measure the strength of a respondent's views, is commonly called a Likert scale. In some instances it is expanded to a seven- or ten-point scale, with respondents asked to place their answer along a continuum from one extreme to the other.
When data are collected in this way, two options are available. First, the data can be reported in terms of the full Likert scale, giving the percentage in each category, such as the percentage strongly opposed and the percentage somewhat opposed. Second, the data can be collapsed, aggregating the number strongly opposed with the number somewhat opposed, to report simply the total percentage opposed. If the question has only dichotomous response categories, such as favor or oppose, no such option exists.
The trade-off is clear. When data are collapsed there are fewer numbers, and they are easier to comprehend. However, information is lost to the reader, who has no way to know whether aggregating the data reflected a bias.
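The two reporting options can be illustrated with a short sketch. The percentages below are hypothetical, invented only to show the mechanics of collapsing a five-point Likert item into a favor/oppose split; they are not from any actual poll.

```python
# Hypothetical results for a five-point Likert item (illustrative only).
responses = {
    "strongly favor": 18,
    "somewhat favor": 27,
    "neutral": 10,
    "somewhat opposed": 25,
    "strongly opposed": 20,
}

# Option 1: report the full Likert breakdown, one percentage per category.
for category, pct in responses.items():
    print(f"{category}: {pct}%")

# Option 2: collapse the scale, aggregating each side into a single total.
total_favor = sum(p for c, p in responses.items() if "favor" in c)
total_opposed = sum(p for c, p in responses.items() if "opposed" in c)
print(f"total favor: {total_favor}%")      # 45%
print(f"total opposed: {total_opposed}%")  # 45%
```

Note that the collapsed report (45% to 45%) hides information the full breakdown preserves: here, opposition is more intense (20% strongly opposed) than support (18% strongly in favor).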
Bias caused by selective reporting is greatest with the responses to open-ended questions. The myriad responses must be aggregated in a way that makes sense and does not force the answers into so few categories that the intentions of the respondents are distorted, or so many categories that all the percentages are very small and little useful information is reported.
Along with the results, polling organizations usually provide some interpretation of the findings. The more latitude the organization has in reporting the results, the more latitude it has in interpreting them. Possible bias in the interpretation will be difficult to detect when it involves omitting information. In other instances, bias may be noticeable when the conclusions go beyond the data. One example would be attributing strong feelings to respondents who were simply asked if they favored or opposed a proposal. Another is going beyond the wording of the question to contend a majority of Americans agree or disagree with a broader or narrower proposition than was put to them in the question.
Still another is making too much of small differences in the percentage of respondents on either side of an issue. If a national poll finds 52% in favor of a proposal and 48% opposed, the interpretation should not say unequivocally that a majority favors the proposal. For a standard sample of roughly 1,500 respondents, each of those percentages could be off by as much as three percentage points in either direction. In other words, the 52% in favor could be as high as 55% or as low as 49%. The 48% opposed could be as high as 51% or as low as 45%. We do not know whether the correct numbers are 55% to 45%, with a clear majority in favor, or a slim 51% to 49% majority opposed. We are forced to conclude that the difference reported is not statistically significant; it is too close to call.
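The "three percentage points" figure comes from the standard margin-of-error formula for a proportion at the 95% confidence level. A minimal sketch, assuming a simple random sample (real polls often report slightly larger margins to allow for rounding and design effects):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p from a sample of n.

    z = 1.96 is the standard normal multiplier for 95% confidence;
    the margin is largest when p = 0.5, so pollsters often quote that
    worst case.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Worst-case margin for a sample of 1,500 respondents.
moe = margin_of_error(0.5, 1500)
print(f"{moe * 100:.1f} percentage points")  # about 2.5
```

For n = 1,500 the simple formula gives roughly ±2.5 points, which published polls commonly round up to ±3; a sample of about 1,000 gives roughly ±3.1. Either way, a reported 52%-to-48% split falls inside the margin of error.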
The point is quite simple: exaggerated claims about the significance of results that fall within the margin of error for the survey are misleading distortions of the findings.