Some polling organizations incorporate additional steps in their sample selection. One popular procedure is called stratifying the sample. The purpose is to ensure that all geographic areas are represented in the sample. Beyond the number of telephones assigned to a given three-digit prefix, the pollster might take into account how many are to be found in large and small cities, or in different parts of the country, and structure the sample to match those proportions.
There may be as many as a dozen strata used in constructing the sample before the actual telephone numbers are selected. In order to be able to use the laws of probability and random selection, the pollster must guarantee that at each stage of the process every person in that stratum has an equal chance of being included in the final sample.
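The proportional allocation described above can be sketched in a few lines. The strata and their population shares here are hypothetical, purely for illustration:

```python
# Hypothetical strata: each stratum's share of the universe being studied.
strata = {"large cities": 0.35, "small cities": 0.40, "rural areas": 0.25}

def allocate(sample_size, strata):
    """Assign interviews to each stratum in proportion to its share,
    so the sample mirrors the structure of the population."""
    return {name: round(sample_size * share) for name, share in strata.items()}

print(allocate(1000, strata))
# → {'large cities': 350, 'small cities': 400, 'rural areas': 250}
```

Within each stratum, numbers would still be drawn at random, so every person in that stratum keeps an equal chance of selection.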
When the numbers called are generated randomly, there will obviously be instances in which some are not working numbers, some are businesses, and some have no answer. In those cases, additional four-digit random numbers can be generated without introducing any bias. A potential problem arises when the person contacted refuses to be interviewed.
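The redrawing step is unbiased because every four-digit suffix remains equally likely on each new draw. A minimal sketch, with a made-up `is_usable` check standing in for whatever determines that a number is a working residential line:

```python
import random

def draw_number(prefix, is_usable, rng=random.Random()):
    """Draw random four-digit suffixes for a given three-digit prefix,
    redrawing when a number is non-working, a business, or unanswered.
    Each redraw is a fresh uniform draw, so no bias is introduced."""
    while True:
        suffix = rng.randrange(0, 10000)        # 0000-9999, all equally likely
        number = f"{prefix}-{suffix:04d}"
        if is_usable(number):                   # hypothetical screening check
            return number

# Toy example: pretend only numbers ending in an even digit are usable.
print(draw_number("555", lambda n: int(n[-1]) % 2 == 0))
```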
People are more willing to hang up on a person calling on the telephone than they are to turn away a person at the door. The greater the number of people who refuse to participate in the survey, the greater is the possibility that those who do answer questions represent something other than an entirely random sample of the universe being studied.
Because they are often calling from a central location, telephone interviewers can be monitored to ensure that they are conducting the interviews properly. No such quality control check is possible for interviews being conducted in-person at the respondent's home.
For pre-election polls, especially during primary season, it is important for pollsters to distinguish between respondents who are likely to vote and those who are not. Clearly it does little good to report on the views of the general public if most of them are not going to vote. To get an accurate picture of the election's outcome, the polling organization needs to focus on respondents who will actually cast a ballot. That is not always easy to determine.
Knowing it is the socially acceptable answer, most respondents will say they intend to vote. (They will even say they voted in the last election when they did not. When they are surveyed shortly after an election, the percentage of people saying they voted is always higher than the actual turnout.) So polling organizations have their work cut out for them when they try to identify which respondents really are likely to vote in the upcoming election. Over the years polling organizations have developed different screening questions to identify likely voters.
Each organization is most confident in its own approach; none is foolproof. The screening can involve one question or several. Negative responses to one or two questions may be enough to keep a respondent from being classified as a likely voter. Other pollsters combine responses from several questions into a composite score. Some of the questions commonly asked are: whether the respondent voted in past elections; how closely the respondent is following the election; how certain the respondent is that he or she will vote this time; and whether the respondent knows where to vote.
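A composite score of this kind can be sketched as follows. The questions mirror those listed above, but the point values and cutoff are invented for illustration; real polling organizations guard their actual weightings:

```python
# Hypothetical composite likely-voter screen: each answer contributes points,
# and respondents at or above a cutoff are classified as likely voters.
def likely_voter_score(answers):
    score = 0
    score += 2 if answers["voted_last_election"] else 0
    score += answers["following_closely"]       # 0-2 scale
    score += answers["certain_to_vote"]         # 0-3 scale
    score += 1 if answers["knows_polling_place"] else 0
    return score

def is_likely_voter(answers, cutoff=5):
    return likely_voter_score(answers) >= cutoff

respondent = {"voted_last_election": True, "following_closely": 2,
              "certain_to_vote": 3, "knows_polling_place": True}
print(is_likely_voter(respondent))  # → True (score 8, above the cutoff)
```

A simpler screen, as the text notes, might instead drop any respondent who gives negative answers to one or two key questions rather than computing a score at all.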
No matter how a polling organization makes its determination, reporting results based on likely voters shows that the organization is willing to make an extra effort to get it right.