Saturday, January 10, 2015

Polling accuracy

Leigh and Wolfers observed in 2006 that "the 'margin of error' reported by pollsters substantially over-states the precision of poll-based forecasts. Furthermore, the time-series volatility of the polls (relative to the betting markets) suggest that poll movements are often noise rather than signal" (p326). They went on to suggest, "for forecasting purposes the pollsters' published margins of error should at least be doubled" (p334).

Leigh and Wolfers are not alone. Walsh, Dolfin and DiNardo wrote in 2009, "Our chief argument is that pre-election presidential polling is an activity more akin to forecasting next year's GDP or the winner of a sporting match than to scientific probability sampling" (p316).

In this post I will examine these claims a little further. We start with the theory of scientific probability sampling.

Polling theory

Opinion polls tell us how the nation might have voted had an election been held at the time of the poll. To achieve this magic, opinion polls depend on the central limit theorem. According to this theorem, the arithmetic means of a sufficiently large number of random samples from the entire population will be normally distributed around the population mean (regardless of the distribution within the population).

We can use computer-generated pseudo-random numbers to simulate taking many samples from a population, and we can plot the distribution of arithmetic means for those samples. A Python code snippet to this effect follows.

# --- initial
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt

# --- parameters
sample_size = 400      # we will work with multiples of this
num_samples = 200000   # number of samples to be drawn in each simulation
threshold = 0.5        # proportion of the population that "vote" in a particular way

# --- model
fig = plt.figure(figsize=(8,4))
ax = fig.add_subplot(111)

for i in [1,2,3,4,6,8]:

    # get sample size for this simulation
    s = i * sample_size

    # draw num_samples, each of size s
    m = np.random.random((s, num_samples))

    # for each sample, get the proportion (as a percentage) that is less than the threshold
    m = np.where(m < threshold, 1.0, 0.0).sum(axis=0) / s * 100.0

    # perform the kernel density estimation
    kde = sm.nonparametric.KDEUnivariate(m)
    kde.fit()

    # plot the estimated density for this sample size
    ax.plot(kde.support, kde.density, lw=1.5,
        label='Sample size: {0:}   SD: {1:.2f} TSD: {2:.2f}'.format(s, np.std(m),
            100.0 * np.sqrt(threshold * (1 - threshold) / s)))

ax.legend(loc='best', fontsize=10)
ax.set_ylabel(r'Density', fontsize=12)
ax.set_xlabel(r'Mean Vote Percentage for Party', fontsize=12)
fig.suptitle('The Central Limit Theorem: Probability Densities for Different Sample Sizes')
fig.savefig('./graphs/model0', dpi=125)

In each simulation we draw 200,000 samples from our imaginary population. In the first simulation each sample has 400 cases; in the subsequent simulations the sample sizes are 800, 1200, 1600, 2400 and finally 3200 cases. For each simulation, we assume that half the individuals in the population vote for one party and half vote for the other. We can plot each simulation as a probability density, where the area under the curve is one unit. I have also reported the standard deviation (in percentage points) from the simulation (SD) against the theoretical standard deviation (TSD) you would expect for that sample size and vote share.

As the sample gets larger, the bell curve narrows. The mean of a larger sample, when randomly selected, is more likely to be close to the population mean than the mean of a smaller sample. And so, with a sample of 1200, we can say there is a 95 per cent probability that the population mean is within plus or minus 1.96 standard deviations (i.e. plus or minus 2.8 percentage points) of our sample mean. This is the oft-cited "margin of error", which derives from sampling error (the error that arises from observing a sample rather than the entire population).
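The theoretical standard deviation and margin of error used above can be computed directly (here for a sample of 1,200 and an assumed 50/50 vote split, matching the TSD formula in the code snippet):

```python
import math

n = 1200      # sample size
p = 0.5       # assumed vote share

# standard deviation of the sample proportion, in percentage points
sd = math.sqrt(p * (1 - p) / n) * 100

# 95 per cent margin of error: 1.96 standard deviations either side
moe = 1.96 * sd

print(round(sd, 2), round(moe, 1))   # 1.44 2.8
```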

So far so good. 

Polling practice

But sampling error is not the only problem with which opinion polls must contend. The impression of precision from a margin of error is (at least in part) misleading, as it "does not include an estimate of the effect of the many sources of non-sampling error" (Miller 2002, p225).

Terhanian (2008) notes that telephone polling "sometimes require telephone calls to more than 25,000 different numbers to complete 1,000 15-minute interviews over a five-day period (at least in the US)". Face-to-face polling typically excludes those living in high-rise apartments and gated communities, as well as people who are intensely private. Terhanian argues that inadequate coverage and low response rates are the most likely culprits when polls produce wildly inaccurate results. The reason for the inaccuracy is that the sampling frame or approach does not randomly select people from the entire population: segments of the population are excluded.

Other issues that affect poll outcomes include question design and the order in which questions are asked (McDermott and Frankovic 2003), both of which can shift poll results markedly, and house effects: the tendency for a pollster's methodology to produce results that lean to one side of politics or the other (Jackman 2005; Curtice and Sparrow 1997).

Manski (1990) observed that while some people hold firm opinions, others do not. For some people, their voting preference is soft: what they say they would do and what they actually do differ. Manski's (2000) solution to this problem was to encourage pollsters to ask people about the firmness of their voting intention. In related research, Hoek and Gendall (1997) found that strategies to reduce the proportion of undecided responses in a poll may actually reduce poll accuracy.

A final point worth noting is that opinion polls tell us an historical fact: on the date people were polled, they claim they would have voted in a particular way. Typically, the major opinion polls do not seek to forecast how people will vote at the next election (Walsh, Dolfin and DiNardo 2009, p317). Notwithstanding this limitation, opinion polls are often reported in the media in a way that suggests a prediction of how people will vote at the next election (based on what they said last weekend when they were polled). In this context, I should note another Wolfers and Leigh (2002) finding:
Not surprisingly, the election-eve polls appear to be the most accurate, although polls taken one month prior to the election also have substantial predictive power. Polls taken more than a month before the election fare substantially worse, suggesting that the act of calling the election leads voters to clarify their voting intentions. Those taken three months prior to the election do not perform much better than those taken a year prior. By contrast, polls taken two years before the election, or immediately following the preceding election, have a very poor record. Indeed, we cannot reject a null hypothesis that they have no explanatory power at all... These results suggest that there is little reason to conduct polls in the year following an election.


The central limit theorem allows us to take a relatively small but randomly selected sample and make statements about the whole population. These statements have a mathematically quantified reliability, known as the margin of error.

Nonetheless, the margins of error that are often reported with opinion polls overstate the accuracy of those polls. The published margin of error refers to only one of the many sources of error that affect accuracy. While the other sources of error are rarely as clearly identified and quantified as sampling error, their impact on poll accuracy is no less real.
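To see why a doubled margin of error is plausible, note that the margin of error scales with the square root of the total variance. So the non-sampling sources would need to contribute three times the sampling variance before the honest margin of error is double the published one. This is illustrative arithmetic only, not an estimate of actual non-sampling error:

```python
import math

n, p = 1200, 0.5
sampling_var = p * (1 - p) / n                     # variance of the sample proportion
published_moe = 1.96 * math.sqrt(sampling_var) * 100

# hypothetically, if non-sampling sources add three times the sampling variance,
# the total margin of error is exactly double the published figure
total_var = sampling_var + 3 * sampling_var
honest_moe = 1.96 * math.sqrt(total_var) * 100

print(round(published_moe, 1), round(honest_moe, 1))   # 2.8 5.7
```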

There are further complications when you want to take opinion polls and predict voter behaviour at the next election. Only polls taken immediately prior to an election are truly effective for this purpose.

All-in-all, it is not hard to see why Leigh and Wolfers (2006) said, "for forecasting purposes the pollsters' published margins of error should at least be doubled" (p334).


John Curtice and Nick Sparrow (1997),  "How accurate are traditional quota opinion polls?", Journal of the Market Research Society, Jul 1997, 39:3, pp433-448.

Janet Hoek and Philip Gendall (1997), "Factors Affecting Political Poll Accuracy: An Analysis of Undecided Respondents", Marketing Bulletin, 1997, 8, pp1-14.

Simon Jackman (2005), "Pooling the polls over an election campaign", Australian Journal of Political Science, 40:4, pp499-517.

Andrew Leigh and Justin Wolfers (2006), "Competing Approaches to Forecasting Elections: Economic Models, Opinion Polling and Prediction Markets", Economic Record, September 2006, Vol. 82, No. 258, pp325-340.

Monika L McDermott and Kathleen A Frankovic (2003), "Horserace Polling and Survey Method Effects: An Analysis of the 2000 Campaign", The Public Opinion Quarterly, Vol. 67, No. 2 (Summer, 2003), pp244-264.

Charles F Manski (1990), “The Use of Intentions Data to Predict Behavior: A Best-Case Analysis.” Journal of the American Statistical Association, Vol 85, No 412, pp934-40.

Charles F Manski (2000), "Why Polls are Fickle", Op-Ed article, The New York Times, 16 October 2000.

Peter V Miller (2002), "The Authority and Limitations of Polls", in Jeff Manza, Fay Lomax Cook and Benjamin J Page (eds) (2002), Navigating Public Opinion: Polls, Policy and the Future of American Democracy, Oxford University Press, New York.

George Terhanian (2008), "Changing Times, Changing Modes: The Future of Public Opinion Polling?",  Journal of Elections, Public Opinion and Parties, Vol. 18, No. 4, pp331–342, November 2008.

Elias Walsh, Sarah Dolfin and John DiNardo (2009), "Lies, Damn Lies and Pre-Election Polling", American Economic Review: Papers & Proceedings 2009, 99:2, pp316–322.

Justin Wolfers and Andrew Leigh (2002), "Three Tools for Forecasting Federal Elections: Lessons from 2001", Australian Journal of Political Science, Vol. 37, No. 2, pp223–240.

Saturday, December 20, 2014

Aggregated Polls: the first 15 months of the Abbott Government

The Federal Election was held on 7 September 2013, and the Abbott Ministry was sworn in on 18 September 2013. As the Government's fortunes have ebbed and flowed over the 15 months since the change of government, I thought it time to reflect on those changes.

First let's look at the polls over the period: we will focus on Coalition two-party preferred voting intention. In the next two charts these results are first presented as a scatter plot and then a line plot.

To these raw poll results we can apply the magic of a hierarchical Bayesian model to identify underlying movements in the national voting intention. [This is pretty much the same model as I used in the lead up to last year's Federal election (except, I have dropped the rounding effects from the model). It still assumes that the house effects sum to zero].
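The data-generating process the model assumes can be sketched in a stylised simulation: a slowly evolving latent voting intention, observed through polls that add a house effect and sampling noise. The dimensions and parameter values below are hypothetical, chosen for illustration; the real model estimates the latent series by MCMC rather than simulating it:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- hypothetical dimensions (illustration only)
n_days = 450        # days since the election
n_houses = 5        # number of polling houses
n_polls = 80        # number of published polls

# latent national voting intention: a slowly evolving random walk
intention = 47.0 + np.cumsum(rng.normal(0, 0.15, n_days))

# house effects constrained to sum to zero (the model's identifying assumption)
house_effects = rng.normal(0, 1.0, n_houses)
house_effects -= house_effects.mean()

# each poll observes the latent intention plus its house effect plus sampling noise
days = rng.integers(0, n_days, n_polls)
houses = rng.integers(0, n_houses, n_polls)
sampling_sd = 100 * np.sqrt(0.5 * 0.5 / 1200)   # about 1.4 points for n = 1200
polls = intention[days] + house_effects[houses] + rng.normal(0, sampling_sd, n_polls)
```

The Bayesian model works this generative story in reverse: given the polls, it infers the daily latent intention and the house effects jointly.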

And to get a closer look at the estimated voting intention, I will strip away the poll results.

We will apply a smoother (using a 61-term Henderson moving average) to remove the kinks but retain the broad shape of the curve.
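The weights for a Henderson moving average can be computed from the classical closed-form expression. This is a sketch of the standard formula, not necessarily the exact code behind the chart:

```python
import numpy as np

def henderson_weights(terms):
    """Weights for a Henderson moving average of odd length `terms`."""
    p = (terms - 1) // 2          # half-width, e.g. 30 for a 61-term filter
    n = p + 2
    j = np.arange(-p, p + 1)
    num = (315 * ((n - 1)**2 - j**2) * (n**2 - j**2) * ((n + 1)**2 - j**2)
           * (3 * n**2 - 16 - 11 * j**2))
    den = 8 * n * (n**2 - 1) * (4 * n**2 - 1) * (4 * n**2 - 9) * (4 * n**2 - 25)
    return num / den

# the weights are symmetric and sum to one; applying them with a
# "valid" convolution trims (terms - 1) / 2 points from each end
w = henderson_weights(61)
smoothed = lambda series: np.convolve(series, w, mode='valid')
```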

From this chart, I think there have been five distinct periods in the polling fortunes of the Abbott Government.
  • First, a short honeymoon, which had ended by early December 2013.
  • We then see a four-month period of stability from December 2013 to March 2014.
  • The period immediately prior to and including the Federal Budget in April and May 2014 saw a four to five point decline in the polls.
  • This was followed by a period of rebuilding from late June 2014 to mid October 2014. Much of the ground that had been lost with the Budget was recovered during this period.
  • Finally, we see a second phase of rapid decline, from mid October to mid December, in which that previous gain has been all but lost.

The key message here is that the 2016 Federal Election is too early to call. This is not the dead-cat bounce of the final Keating, Howard or Gillard years. The late June 2014 to mid October 2014 rebound shows that a sustained recovery is not beyond the realms of possibility. But, nor is it clear sailing for the government. It has suffered three periods of decline over the past 15 months and it is currently in a difficult position from which it needs to extract itself.

A quick word on house effects (largely for completeness): The Bayesian model estimated the house effects for each polling house as follows (relative to each other and subject to a "sum to zero constraint").

We can consolidate the above charts, adjust poll results for house effects and overlay the smoothed trajectory.

And a quick check for outliers: those polls, once adjusted for house effects, that are still beyond 1.96 standard deviations from the smoothed median Bayesian estimate of daily voting intention. Across the 80 polls used for this analysis, there were three outliers, which is pretty consistent with what one would expect for sampling error (and better than I had expected, given that polls are typically over-dispersed). These are highlighted in red (with slightly enlarged markers) in the next chart.
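The outlier test just described can be sketched as follows. The arrays are hypothetical placeholders; in practice they come from the published polls and the Bayesian model output:

```python
import numpy as np

def flag_outliers(polls, house_effects, trend, sample_sizes):
    """Flag house-adjusted polls more than 1.96 SDs from the smoothed trend.

    All arguments are aligned 1-D arrays, one entry per poll: the published
    TPP (per cent), the house's estimated effect, the smoothed median trend
    on the poll's date, and the poll's sample size.
    """
    adjusted = polls - house_effects
    # sampling SD in percentage points for a proportion near 50 per cent
    sd = 100 * np.sqrt(0.5 * 0.5 / np.asarray(sample_sizes, dtype=float))
    return np.abs(adjusted - trend) > 1.96 * sd
```

With 80 polls and a 95 per cent band, one would expect around four polls to fall outside the band by chance alone, so three observed outliers is unremarkable.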


  • 21 December 2014, 8.30am - updated to include ReachTEL data
  • 10 January 2015 - consolidated and outlier charts added

Saturday, September 7, 2013

Saturday morning update

This morning's polls:

  • Morgan 45.5 to 54.5 in the Coalition's favour
  • Newspoll 46 to 54 in the Coalition's favour
  • Nielsen 46 to 54 in the Coalition's favour

Which gives an aggregation:

At this point in the blog, it is my normal practice to remind people that I anchor the above Bayesian aggregation with the assumption that the net bias across all of the polling houses sums to zero. You will need to come to your own view about where the actual level of collective systemic bias lies for all the pollsters. At the 2010 Election (with a different set of pollsters), the population voting intention was about one percentage point more in the Coalition's favour compared with the pollster average (see here). In light of the 2010 experience, it is arguably plausible to subtract (say) half a percentage point or more from the above aggregation to adjust for the collective systemic bias across all of the polling houses. [As an aside, you will note that Simon Jackman, who seeks to anchor his Bayesian models with respect to the outcome of past elections, regularly produces an aggregated poll that tracks well below the vast majority of individual poll results].

If we limit our analysis to Newspoll and Nielsen.

The latest Newspoll

The latest Nielsen

My prediction? Yesterday I thought it might be in the high 50s for the number of seats won by Labor. Today, with this latest suite of polls, I suspect the low 50s for Labor is more likely. But the high 40s cannot be ruled out.

Thursday, September 5, 2013

Betting market update

This morning's 7.30am run revealed a swag of safe Coalition seats where Centrebet is not offering odds at the moment (bottom 13 rows). The results follow.

Seat State Labor Coalition Other Current Favourite Change
Brand WA 48.0 48.0 3.9 Labor ? ?
Lingiari NT 44.4 48.1 7.5 Labor Coalition TRUE
Blair QLD 49.0 46.5 4.5 Labor Labor FALSE
Lyons TAS 42.8 51.7 5.5 Labor Coalition TRUE
Kingsford Smith NSW 43.4 52.3 4.3 Labor Coalition TRUE
Hindmarsh SA 52.5 43.5 4.0 Labor Labor FALSE
Capricornia QLD 52.8 41.5 5.7 Labor Labor FALSE
Petrie QLD 43.4 52.9 3.7 Labor Coalition TRUE
Page NSW 53.1 41.8 5.1 Labor Labor FALSE
Indi VIC 3.3 53.6 43.1 Coalition Coalition FALSE
McEwen VIC 55.4 40.3 4.3 Labor Labor FALSE
Werriwa NSW 55.6 40.5 3.9 Labor Labor FALSE
Barton NSW 56.4 38.0 5.6 Labor Labor FALSE
Lilley QLD 38.0 58.3 3.7 Labor Coalition TRUE
Franklin TAS 59.4 33.8 6.8 Labor Labor FALSE
Bendigo VIC 62.3 31.7 5.9 Labor Labor FALSE
Melbourne VIC 62.8 2.6 34.6 Other Labor TRUE
McMahon NSW 64.2 32.7 3.1 Labor Labor FALSE
Moreton QLD 30.3 65.0 4.7 Labor Coalition TRUE
Eden-Monaro NSW 30.4 65.3 4.3 Labor Coalition TRUE
Richmond NSW 66.1 26.8 7.1 Labor Labor FALSE
Parramatta NSW 27.3 67.1 5.6 Labor Coalition TRUE
Brisbane QLD 27.5 67.8 4.7 Coalition Coalition FALSE
Rankin QLD 69.4 26.1 4.5 Labor Labor FALSE
Chisholm VIC 69.4 24.8 5.8 Labor Labor FALSE
Griffith QLD 70.3 24.4 5.3 Labor Labor FALSE
Flynn QLD 22.8 70.9 6.3 Coalition Coalition FALSE
Bruce VIC 70.9 22.8 6.3 Labor Labor FALSE
Bonner QLD 21.9 73.0 5.1 Coalition Coalition FALSE
La Trobe VIC 20.5 74.0 5.5 Labor Coalition TRUE
Wakefield SA 74.2 20.6 5.2 Labor Labor FALSE
Newcastle NSW 74.9 18.0 7.1 Labor Labor FALSE
Adelaide SA 75.3 19.4 5.3 Labor Labor FALSE
Oxley QLD 75.7 21.0 3.3 Labor Labor FALSE
Greenway NSW 21.1 75.8 3.1 Labor Coalition TRUE
Solomon NT 18.3 76.5 5.1 Coalition Coalition FALSE
Dawson QLD 17.5 76.6 5.9 Coalition Coalition FALSE
Dobell NSW 15.6 76.7 7.7 Other Coalition TRUE
Melbourne Ports VIC 76.9 16.4 6.8 Labor Labor FALSE
Fairfax QLD 5.1 78.3 16.6 Coalition Coalition FALSE
Forde QLD 16.7 78.3 5.0 Coalition Coalition FALSE
Throsby NSW 78.9 14.4 6.7 Labor Labor FALSE
Fisher QLD 12.2 79.0 8.8 Other Coalition TRUE
Swan WA 16.9 79.2 3.9 Coalition Coalition FALSE
Robertson NSW 11.2 79.3 9.5 Labor Coalition TRUE
Fowler NSW 79.4 16.9 3.7 Labor Labor FALSE
Chifley NSW 79.9 14.6 5.5 Labor Labor FALSE
Perth WA 80.0 13.3 6.7 Labor Labor FALSE
Longman QLD 14.7 80.3 5.0 Coalition Coalition FALSE
Reid NSW 14.8 80.5 4.8 Labor Coalition TRUE
Fremantle WA 80.7 13.4 5.9 Labor Labor FALSE
Makin SA 81.2 14.9 4.0 Labor Labor FALSE
Hasluck WA 15.0 81.7 3.3 Coalition Coalition FALSE
Dunkley VIC 10.8 82.1 7.1 Coalition Coalition FALSE
Ryan QLD 10.1 82.2 7.8 Coalition Coalition FALSE
Hunter NSW 82.2 12.6 5.2 Labor Labor FALSE
Macquarie NSW 12.6 82.3 5.1 Coalition Coalition FALSE
Jagajaga VIC 82.6 11.6 5.8 Labor Labor FALSE
Braddon TAS 12.6 82.6 4.7 Labor Coalition TRUE
Leichhardt QLD 10.9 82.8 6.3 Coalition Coalition FALSE
Wright QLD 5.6 83.0 11.4 Coalition Coalition FALSE
Hinkler QLD 6.5 83.1 10.4 Coalition Coalition FALSE
Herbert QLD 11.8 83.2 5.1 Coalition Coalition FALSE
Casey VIC 9.6 83.3 7.1 Coalition Coalition FALSE
Blaxland NSW 83.3 12.7 3.9 Labor Labor FALSE
Fadden QLD 7.7 83.5 8.7 Coalition Coalition FALSE
Ballarat VIC 83.9 10.3 5.9 Labor Labor FALSE
Isaacs VIC 83.9 11.0 5.1 Labor Labor FALSE
Batman VIC 83.9 5.7 10.4 Labor Labor FALSE
Gilmore NSW 9.6 84.1 6.2 Coalition Coalition FALSE
Sydney NSW 84.3 5.7 10.1 Labor Labor FALSE
Denison TAS 10.6 5.0 84.4 Other Other FALSE
Deakin VIC 9.2 84.5 6.4 Labor Coalition TRUE
Watson NSW 84.6 11.1 4.4 Labor Labor FALSE
Stirling WA 8.6 84.7 6.7 Coalition Coalition FALSE
Banks NSW 11.1 84.7 4.2 Labor Coalition TRUE
Corangamite VIC 9.2 84.8 6.0 Labor Coalition TRUE
Bowman QLD 9.2 84.9 5.9 Coalition Coalition FALSE
Shortland NSW 85.2 9.7 5.1 Labor Labor FALSE
Bass TAS 9.9 85.4 4.7 Labor Coalition TRUE
Cunningham NSW 85.4 8.7 5.9 Labor Labor FALSE
Aston VIC 8.7 85.4 5.9 Coalition Coalition FALSE
Kooyong VIC 6.6 85.5 7.9 Coalition Coalition FALSE
Lalor VIC 85.7 7.2 7.1 Labor Labor FALSE
Bennelong NSW 9.9 85.7 4.4 Coalition Coalition FALSE
Canning WA 9.9 85.7 4.4 Coalition Coalition FALSE
Kingston SA 86.0 9.3 4.7 Labor Labor FALSE
McMillan VIC 8.0 86.0 6.0 Coalition Coalition FALSE
Lyne NSW 8.8 86.1 5.2 Other Coalition TRUE
Canberra ACT 86.1 8.0 5.9 Labor Labor FALSE
Dickson QLD 8.8 86.1 5.1 Coalition Coalition FALSE
Hume NSW 7.3 86.3 6.4 Coalition Coalition FALSE
Corio VIC 86.4 7.3 6.4 Labor Labor FALSE
Flinders VIC 6.7 86.5 6.8 Coalition Coalition FALSE
Cook NSW 6.7 86.5 6.8 Coalition Coalition FALSE
Lindsay NSW 9.4 86.7 3.9 Labor Coalition TRUE
Maribyrnong VIC 86.8 7.3 5.9 Labor Labor FALSE
Cowan WA 8.9 86.8 4.3 Coalition Coalition FALSE
Sturt SA 8.0 86.8 5.1 Coalition Coalition FALSE
Higgins VIC 6.7 86.8 6.4 Coalition Coalition FALSE
Paterson NSW 8.0 86.9 5.1 Coalition Coalition FALSE
McPherson QLD 6.8 86.9 6.3 Coalition Coalition FALSE
Grey SA 6.8 87.0 6.3 Coalition Coalition FALSE
Gippsland VIC 6.8 87.0 6.2 Coalition Coalition FALSE
Barker SA 6.8 87.1 6.2 Coalition Coalition FALSE
Calare NSW 7.3 87.1 5.6 Coalition Coalition FALSE
Boothby SA 8.1 87.1 4.8 Coalition Coalition FALSE
Grayndler NSW 87.2 5.2 7.6 Labor Labor FALSE
North Sydney NSW 5.9 87.2 6.9 Coalition Coalition FALSE
Goldstein VIC 6.8 87.3 5.9 Coalition Coalition FALSE
Scullin VIC 87.3 6.8 5.9 Labor Labor FALSE
Moore WA 6.8 87.3 5.9 Coalition Coalition FALSE
Holt VIC 87.5 7.4 5.1 Labor Labor FALSE
Hotham VIC 87.5 7.4 5.1 Labor Labor FALSE
Hughes NSW 8.1 87.5 4.4 Coalition Coalition FALSE
Macarthur NSW 8.1 87.5 4.4 Coalition Coalition FALSE
Forrest WA 6.6 87.7 5.7 Coalition Coalition FALSE
Wide Bay QLD 5.9 87.7 6.4 Coalition Coalition FALSE
Warringah NSW 5.2 87.8 7.0 Coalition Coalition FALSE
Charlton NSW 87.8 2.6 9.6 Labor Labor FALSE
Calwell VIC 88.0 6.8 5.2 Labor Labor FALSE
Gorton VIC 88.0 6.8 5.2 Labor Labor FALSE
Fraser ACT 88.1 5.9 6.0 Labor Labor FALSE
Curtin WA 4.2 88.1 7.7 Coalition Coalition FALSE
Gellibrand VIC 88.3 5.2 6.4 Labor Labor FALSE
Menzies VIC 5.3 88.4 6.4 Coalition Coalition FALSE
Cowper NSW 7.5 88.6 3.9 Coalition Coalition FALSE
Mayo SA 5.3 88.7 6.0 Coalition Coalition FALSE
Tangney WA 6.0 88.8 5.2 Coalition Coalition FALSE
Port Adelaide SA 89.1 6.0 4.9 Labor Labor FALSE
Pearce WA 5.2 89.2 5.6 Coalition Coalition FALSE
Farrer NSW 6.0 89.5 4.4 Coalition Coalition FALSE
Wills VIC 89.9 3.5 6.6 Labor Labor FALSE
Kennedy QLD 4.1 5.1 90.8 Other Other FALSE
Wannon VIC 4.4 91.5 4.1 Coalition Coalition FALSE
O'Connor WA 3.3 93.0 3.7 Coalition Coalition FALSE
Durack WA 3.3 93.9 2.8 Coalition Coalition FALSE
Berowra NSW NA 0.0 0.0 Coalition Coalition FALSE
Bradfield NSW NA 0.0 0.0 Coalition Coalition FALSE
Mackellar NSW NA 0.0 0.0 Coalition Coalition FALSE
Mitchell NSW NA 0.0 0.0 Coalition Coalition FALSE
New England NSW NA 0.0 0.0 Other Coalition TRUE
Parkes NSW NA 0.0 0.0 Coalition Coalition FALSE
Riverina NSW NA 0.0 0.0 Coalition Coalition FALSE
Wentworth NSW NA 0.0 0.0 Coalition Coalition FALSE
Groom QLD NA 0.0 0.0 Coalition Coalition FALSE
Maranoa QLD NA 0.0 0.0 Coalition Coalition FALSE
Moncrieff QLD NA 0.0 0.0 Coalition Coalition FALSE
Mallee VIC NA 0.0 0.0 Coalition Coalition FALSE
Murray VIC NA 0.0 0.0 Coalition Coalition FALSE

The summary results (on the favourite wins basis):

Coalition Labor Other
Seat count 94.5 53.5 2

Cathy McGowan's chances have improved materially in the seat of Indi. Adam Bandt seems to be doing better in the seat of Melbourne. Clive Palmer's chances look unchanged in Fairfax. Andrew Wilkie looks safe in Denison and Bob Katter looks safe in Kennedy.

Wednesday, September 4, 2013

Wednesday: ReachTEL 48-52

Today's ReachTEL has the Coalition on 52 per cent (down one point from the previous ReachTEL) and Labor on 48 (up one). In terms of the primary vote: Palmer United is on 4.4 per cent nationally (and one guesses, much higher in Queensland).

ReachTEL, alone among pollsters, has Labor on the same TPP as it did when Rudd was initially restored in late June. It will be interesting to see whether the other pollsters record a late flow to Labor.

Moving to the Bayesian aggregation: we are at one of those turning points where a single data point can significantly affect the aggregation. I will await confirmation from the other polling houses this week.