
Did you know that 2019 is an election year in 16 African countries?


…Welcome to polling season! 

Public opinion polls are used to determine and/or predict what people believe, how they feel and in what way they will act (i.e. vote). To do this, polls must be valid and must have value, failing which they are simply not worth the effort. Polling validity and value are dependent on the research technique employed, the honesty and objectivity of the pollsters and polling agency, the characteristics of those participating in the polling, and how the polling results are presented and used.

The proliferation of data sources, the complexity of data, and the speed at which information is now produced and consumed, together with rapid technological advancement, have created the perfect storm for some of the ills that plague not only polling itself, but especially the reporting of polling results. Addressing any one of these ills requires a significant effort to self-regulate on the one hand, and to inform and educate audiences on the other.

Separating bad polling from bad reporting of polling results here is deliberate. A discussion about bad polling invariably becomes very technical, and dare I say excruciatingly boring. Bad reporting of polling results, however, is much easier to spot, and the discussion is far more entertaining and usually heated, as it speaks to the integrity and independence of the media, loyalty to political parties, and the public's trust in journalists, publishers and politicians alike. Bad reporting of polling results is also a very public affair, so it really is short-sighted and self-defeating, not to mention seriously detrimental to a media outlet's brand, and often to a political party's brand as well.

The remainder of this post focuses on the reporting of polling results. Evaluating the quality of poll reporting can be done in three steps:

Step 1 At the very least, good reporting of polling results must cover:

  1. the name of the organisation that ran the poll
  2. the universe effectively represented (i.e. who was targeted to be interviewed)
  3. the achieved sample size (i.e. number of actual interviews reported on), geographical coverage and the number of sampling locations used
  4. the dates that the poll was conducted (i.e. when was the data collected)
  5. the sampling technique used, and in the case of full random probability samples the response rate achieved
  6. the method by which the information was collected (e.g. face-to-face, telephone interview, online panel, etc.)
  7. whether weighting or other statistical manipulations were used to adjust the results, and if so, the universe used to determine these
  8. the question(s) asked, and to avoid possible ambiguity and misunderstanding, the actual wording of the question(s) 
  9. the percentage of people who gave ‘don’t know’ or undecided answers and in the case of voting-intention studies, the percentage of people who said they will not vote, and refused to answer, especially where these percentages are likely to affect the interpretation of the polling results; when comparing findings over time, any changes in these percentages
  10. if the medium of publication has limitations (e.g. space), instructions for accessing any of the above items that could not be covered.
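The disclosure checklist above can be sketched as a simple completeness check on a published poll report. The field names below are my own shorthand for the ten items, not an official SAMRA schema, and the sample report is purely illustrative:

```python
# Hypothetical shorthand for the disclosure items listed above;
# not an official SAMRA schema.
REQUIRED_DISCLOSURES = [
    "polling_organisation",
    "universe_represented",
    "sample_size_and_coverage",
    "fieldwork_dates",
    "sampling_technique",
    "collection_method",
    "weighting_details",
    "question_wording",
    "dont_know_and_refusal_rates",
]


def missing_disclosures(report: dict) -> list:
    """Return the checklist items a published poll report fails to disclose."""
    return [item for item in REQUIRED_DISCLOSURES if not report.get(item)]


# An illustrative report that omits fieldwork dates and question wording.
report = {
    "polling_organisation": "Acme Research",  # fictitious pollster
    "universe_represented": "registered voters",
    "sample_size_and_coverage": "n=2000, national, 150 sampling points",
    "sampling_technique": "stratified random probability sample",
    "collection_method": "face-to-face",
    "weighting_details": "weighted to census demographics",
    "dont_know_and_refusal_rates": "8% undecided, 3% refused",
}

print(missing_disclosures(report))  # -> ['fieldwork_dates', 'question_wording']
```

Any non-empty result means the report falls short of the Step 1 minimum, and a reader should go looking for the missing details before trusting the numbers.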

Step 2 Even when all of the above is covered, the reporting of polling results only makes the quality grade when:

  • Findings, interpretation of findings, and recommendations are clearly distinguishable.
  • The form and content of the publication is approved by the organisation or person who did the research.
  • The published polling results are not misleading.
  • Conclusions are adequately supported by the poll data.
  • The publisher and/or the research organisation involved is prepared on request to supply additional information about the poll.
  • Technical information necessary to assess the validity of published polling results is made available.

Step 3 The last step in spotting bad reporting of polling results is virtually identical to spotting propaganda, and the well-described, well-researched tactics that help us to identify propaganda evidently also apply when evaluating the reporting of polling results. Answer these questions, and you will know:

  • Who is the source of the polling results? This includes finding out if the source and indeed the author actually exist, and if so, what their mission, purpose or agenda is likely to be. Are they independent and credible? Do they have a reputation for publishing quality data and information? 
  • Are there other (original) sources that corroborate the polling results presented? Finding at least one additional source that supports the information, and that is not just parroting the source in question, can be helpful. And remember that a retweet on Twitter or a repost or share on Facebook does not corroborate the information contained in it!
  • When were the polling results published? This gives context to the information. For example, anything published on 1 April should be carefully considered these days.
  • How absurd are the results, on a scale of 1 to 10? Good old common sense can sometimes spare us from embarrassment, by recognising satire, for example, or spotting gross inaccuracies.
  • What are your own biases, and is the content triggering them? What emotions are deliberately evoked by the author? Is the information presented in an inflammatory way?
  • What do the experts say or think about the content, independently from the source?
  • Is the information well presented? For example, poor editing and informal style in what is supposed to be a formal publication should raise a number of flags.

The public has the right to interrogate polling results; to exercise that right, readers must ask the above questions, and insist on answers, whenever they encounter published polling results.

Do you know how to evaluate the quality of poll reporting…?

Leonie Vorster, SAMRA Chief Executive Officer

Notes.

  • SAMRA members have to adhere to the global guideline for ethical polling.
  • AAPOR, ESOMAR and WAPOR have together launched an online learning module for journalists that helps anyone who is not a market researcher understand how polls are conducted, what to look for in methodology, and why even the most legitimate of polls sometimes miss the mark.
  • Some of the content in this post is from an interview that Leigh Andrews conducted with me in 2017 about fake news, which was published by Bizcommunity.
