The OCR Glossary


Rod Carveth

Polling is a survey in which either all members of a particular group or a randomly chosen sample of respondents from a sector of the population are asked carefully constructed questions to elicit opinions about events, issues, organizations, and individuals. This entry reviews the nature of polling; then it examines public polls focused on corporate reputation.

The Nature of Polling

In a country such as the United States, with more than 320 million people, including approximately 240 million adults, it is impossible to measure the opinions of the entire population: doing so would be far too expensive and time-consuming. Indeed, a full count of the U.S. population is attempted only once every 10 years, in the decennial census. This section explains the types of polling, poll samples, poll questions, and the practice of mixing poll results.

Types of Polling

Polling can be conducted face-to-face, by telephone (with live interviewers or automated calls), by mail, or online. The rise of mobile phone–only households has complicated polling efforts, as has the increasing reluctance of Americans to participate in telephone polls. Nevertheless, telephone polls have a better record of accuracy than Internet-based polls. Whatever the technique used, it is important to understand how a poll was conducted and to be wary of reporting any poll that employed a questionable methodology.

Online surveys have become increasingly popular in recent years as the popularity and validity of phone surveys fell victim to advances in communication technology. The rise of cell phones and the increased use of caller ID have resulted in fewer people participating in phone surveys, and those who do participate skew older.

Online polling has advantages and disadvantages. On the advantage side, online polling eases data gathering: Internet surveys can collect responses more quickly and over greater geographical distances than other methodologies, and they are far less costly. When respondents complete online surveys, their answers are entered automatically into the survey database, further simplifying data collection. Online surveys tend to have higher response rates than other survey methods. Finally, complex surveys with a variety of response formats can be conducted easily online.

Yet online polls have important drawbacks. First, just as phone surveys skew toward older respondents, online polls are not available to everyone: they cannot be completed by those who lack access to the Internet, such as many older adults, lower-income households, and residents of remote rural areas. Second, the absence of an interviewer means that online surveys are poorly suited to open-ended questions. Third, it is difficult to verify who is actually completing the survey. Fourth, Internet users today are inundated with messages and can easily delete solicitations for online surveys as “spam.” Finally, and most important, in online surveys it is harder to draw probability samples based on e-mail addresses or website visits.

Poll Samples

Members of a poll sample—the respondents—are interviewed to estimate the opinions of the larger general population. The mathematical laws that govern probability indicate that if a sufficient number of individuals are randomly chosen to participate in a poll, their views will tend to be representative of the larger population. As a result, the sample size is key for any poll. The larger the sample size, the smaller the margin of sampling error.

In all scientific polls, respondents are chosen at random. In a random sample, every qualifying member of the population has a known chance of being selected for the survey. There are a number of ways of drawing a scientific sample. Telephone surveys often use random digit dialing, so that every household with a landline phone has an equal chance of being called. The key advantage of a random sample is that we can calculate how likely it is that the findings from the sample reflect what would be found in the overall population; that is, we can calculate the margin of sampling error. Surveys with self-selected respondents—for example, people interviewed on the street or who just happen to participate in a web-based survey—are intrinsically unscientific and do not allow for the calculation of a meaningful margin of sampling error.
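The logic of random digit dialing can be sketched in a few lines of Python. This is a minimal illustration, not any pollster’s actual procedure; the area code and sample count are hypothetical:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

def random_digit_dial(area_code="212", count=5):
    """Generate phone numbers by appending seven random digits to an
    area code, giving every line in that area an equal chance of selection."""
    return [area_code + "".join(random.choices("0123456789", k=7))
            for _ in range(count)]

sample_frame = random_digit_dial()
print(sample_frame)  # five random 10-digit numbers in area code 212
```

In practice, pollsters also screen out nonworking and business numbers and call back nonresponders, but the core idea is the same: selection is driven by chance, not by who volunteers.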

The margin of sampling error describes the range of the likely answers we would have received had we talked to everyone rather than to a sample. A properly drawn sample of 1,000 individuals has a sampling error of about ±3 percentage points, meaning that the finding would be within 3 points of the result if the sample had included all 240 million adults in the United States. That is, if 1,000 respondents were asked whether businesses acted ethically and 58 percent said yes, then between 55 and 61 percent would say yes if the entire population were asked. This is why larger samples are preferable to smaller ones. The margin of sampling error for a sample of 100 is ±9.8 percentage points; for a sample of 500, it is ±4.4 points; and for a sample of 2,500, it is ±2.0 points. If the results of a poll are based on a subsample of the total sample, then the margin of sampling error is based on the size of the subsample.
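The figures above follow from the standard worst-case formula at the 95 percent confidence level, MOE = 1.96 × √(p(1 − p)/n) with p = 0.5. A minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) sampling error at the 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000, 2500):
    print(f"n = {n:>4}: +/-{margin_of_error(n) * 100:.1f} points")
# n = 100 -> 9.8, n = 500 -> 4.4, n = 2500 -> 2.0 points, matching the
# figures above; n = 1000 -> 3.1 points, i.e., roughly the cited +/-3.
```

Note that the error shrinks with the square root of n, which is why quadrupling a sample only halves the margin of error.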

Poll Questions

Every poll involves a questionnaire containing a consistent set of questions asked of every respondent. How a question is asked can affect the answers respondents give in a number of ways. First of all, questions can be asked in two forms: (1) open-ended questions, which respondents answer in their own words, and (2) closed-ended questions, in which respondents select from the options offered by the pollster. The vast majority of polling questions are asked in closed-ended formats.

Advantages and disadvantages exist in using either form. Open-ended questions are good for getting at what is really on people’s minds and for having respondents discuss issues in their own words. But open-ended responses can be hard to code into meaningful categories, particularly within tight time frames; they take more time to administer, so the researcher must ask fewer questions; and they can be hard to draw conclusions from if only a small number of people provide a given response. Closed-ended questions, on the other hand, are considerably easier to administer and analyze, but they can make respondents feel constrained in their answers, particularly if the categories do not include the response a person wants to provide.

The answers to each question can be affected by the actual wording of the question. The general principle of question wording is that every respondent should understand the question and be able to answer it with reliability—that is, if a respondent were asked the same question again, he or she would give the same answer. A number of common problems have been identified, as well as solutions for dealing with them.

The use of a double negative can confuse a respondent, especially when he or she is asked to provide a simple “agree” or “disagree” response. Take this example: “Most of the time, I am unable to express how interested in business I am.” When the respondent answers “agree,” it is unclear whether the respondent means that he or she is not interested in business or that he or she is interested but just cannot express himself or herself well.

With a double-barreled question, two options are combined in a single item, but the response choices are limited to “yes” or “no,” so it is not possible to determine whether the response applies to one or both of the options. Take this example: “Do you believe Company C treats and pays its employees well?” Treating and paying employees well are two different concepts, so it is not clear which one the response refers to.

A leading question contains an initial phrase that leads the respondent by suggesting the position or stance of an authority, with which it might be difficult for the respondent to disagree. Leading questions introduce a bias in a particular direction linked to the authority. Take this example: “If it provided the increased jobs seen in other areas of the country, would you support a Store B being built in town?” The question discusses only the jobs Store B would create but not the jobs lost as a result of smaller businesses in the area closing down.

A balanced question represents both sides of an issue equally and/or provides the respondent with an equal number of answer options on each side. For example, the question “Do you support Company A providing paid maternity leave?” is unbalanced; a balanced version would be “Do you support or oppose Company A providing paid maternity leave?”

Some questions use complex language or are structured in a complicated way so that they won’t be clearly understood by all respondents—for example, “Do you think Apple should stop practicing corporate inversion?” This question includes references to a concept (“corporate inversion”) that is probably not commonly understood. With some complex issues—for example, the ethics of soft drink companies engaging in “leanwashing,” or making a product appear healthier than it is—pollsters have erroneously measured “nonopinions,” as respondents had not thought through the issue and voiced an opinion only because the polling organization contacted them. Poll results in this case can fluctuate wildly depending on the wording of the question.

In some surveys, the order of the questions may be designed to “lead” the respondent to a conclusion that produces a predictable response. For example, if a pollster asks questions about a specific issue, such as corporate inversion, before asking what the most important problem facing business today is, respondents will be more likely to say that it is corporate inversion. The order of the questions establishes the context for their answer; this is referred to as a context effect.

Social desirability bias occurs when respondents provide answers they think are socially acceptable rather than their true opinions. Such bias often occurs with questions on difficult issues such as abortion, race, sexual orientation, and religion.

Mixing Poll Results

Some poll results that get reported are based on a “poll of polls,” in which multiple polls are averaged together. This is usually done in the realm of politics. Prominent websites that engage in this practice are FiveThirtyEight, Real Clear Politics, and the Cook Political Report. There are, however, methodological arguments over how to do this accurately, and some statisticians object to mixing polls at all.
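The simplest form of a poll of polls can be sketched as a sample-size-weighted average. The figures below are hypothetical, and real aggregators such as FiveThirtyEight weight by much more than sample size (recency, pollster track record, methodology), so this is an illustration of the idea, not any site’s actual model:

```python
def poll_of_polls(polls):
    """Average several poll results, weighting each by its sample size."""
    total_n = sum(n for _, n in polls)
    return sum(pct * n for pct, n in polls) / total_n

# Hypothetical results for one question: (percent saying yes, sample size)
polls = [(52.0, 1000), (49.0, 800), (55.0, 1200)]
print(round(poll_of_polls(polls), 1))  # 52.4
```

One methodological objection to such averaging is visible even here: polls with different question wordings, field dates, and sampling methods are treated as interchangeable measurements of the same quantity.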

Polls and Corporate Reputation

The measurement of corporate reputation has become big business. Several major polling companies derive substantial portions of their revenue through assessing the corporate reputation of companies. Some of the polling companies involved in assessing corporate reputation are Ipsos, Leger, Nielsen, and YouGov.

The Harris Poll Reputation Quotient, owned by Nielsen, is a comprehensive method for measuring corporate reputation, created specifically to capture the perceptions of any corporate stakeholder group, such as consumers, investors, employees, or key “influentials” (individuals who influence many others). Developed by the Harris Poll, which has measured public opinion in the United States for more than 50 years, the instrument examines six dimensions of a company’s reputation: (1) emotional appeal, (2) products and services, (3) vision and leadership, (4) workplace environment, (5) financial performance, and (6) social responsibility. The dimension of emotional appeal has been operationalized in the scholarly literature as public esteem, the dimension of corporate reputation concerned with trust, admiration, respect, and overall favorability.

The Harris Poll Reputation Quotient study is conducted online each year in two phases. In the first (nominating) phase, respondents are asked to name, without assistance, the companies with the best and worst reputations; at the end of this phase, the 100 most visible companies, based on these nominations, are identified for measurement. In the second (rating) phase, respondents are randomly assigned to rate two of the companies with which they are “very” or “somewhat” familiar, on the 20 attributes that constitute the Harris Poll Reputation Quotient instrument. From there, the final list of the most reputable companies is determined. For the 2015 list, 4,034 respondents participated in the nominating phase and 27,278 in the rating phase. The first phase of the Reputation Quotient has been used as a measure of organizational prominence in the scholarly literature.

In the 2015 Harris Poll, respondents were selected from individuals who had agreed to participate in the Harris Poll and “sample partner surveys.” As a result, the sample may not be truly random; it likely skewed toward people who were younger, more urban, and somewhat higher earning, although the data were weighted to reflect the composition of the U.S. population. Because no estimate of theoretical sampling error can be calculated for such a sample, the results might not be an accurate estimate of public opinion. Consider that regional grocer Wegmans, located only in the northeastern United States, was the top-scoring company in 2015.
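Weighting data to reflect the population’s composition is typically done by post-stratification: each demographic group receives a weight equal to its population share divided by its sample share. The age groups and shares below are hypothetical illustrations, not Harris Poll figures:

```python
def poststratification_weights(sample_share, population_share):
    """Weight per group = population share / sample share; overrepresented
    groups get weights below 1, underrepresented groups above 1."""
    return {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical shares of the sample vs. the adult population, by age group
sample = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}
population = {"18-34": 0.30, "35-54": 0.33, "55+": 0.37}
weights = poststratification_weights(sample, population)
# Each respondent's answers then count weights[group] times in the estimates,
# e.g., every 55+ respondent counts 1.48 times, every 18-34 respondent 0.75.
```

Weighting can correct the demographic mix of a self-selected panel, but it cannot tell us whether panel volunteers within each group hold the same opinions as nonvolunteers, which is why no theoretical sampling error can be attached to the result.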


Polling is a common activity in the realm of politics. During election years, barely a week goes by without a new election poll. Business polls, especially those measuring public opinion about companies, are becoming more popular. Most polls are done well, but they are subject to a number of potential pitfalls, as discussed here. If done well, polls can provide businesses and the public with useful information about large, well-known companies.

Asher, H. (2010). Polling and the public: What every citizen should know. New York: CQ Press.

Carroll, C. E. (2009). The relationship between media favorability and firms’ public esteem. Public Relations Journal, 3(4), 1–32.

Carroll, C. E. (2010). Should firms circumvent or work through the news media? Public Relations Review, 36(3), 278–280.

Carroll, C. E. (2011). Corporate reputation and the news media in the United States. In C. E. Carroll (Ed.), Corporate reputation and the news media: Agenda setting within business news in developed, emerging, and frontier markets (pp. 221–239). New York: Routledge.

Fombrun, C. J., Gardberg, N. A., & Sever, J. M. (2000). The reputation quotient: A multi-stakeholder measure of corporate reputation. Journal of Brand Management, 7(4), 241–255.

Goidel, K., & Cook, C. (2011). Political polling in the digital age. Baton Rouge, LA: Louisiana State University Press.

Rindova, V. P., Williamson, I. O., Petkova, A. P., & Sever, J. M. (2005). Being good or being known: An empirical examination of the dimensions, antecedents, and consequences of organizational reputation. Academy of Management Journal, 48(6), 1033–1049.

See Also

Agenda-Setting Theory; Prominence; Public Esteem; Public Opinion; Research Methodology in Corporate Reputation; Research Methods in Corporate Reputation; Scales for Measuring Corporate Reputation; Spiral of Silence Theory