Wednesday, July 31, 2013

Further explorations in non-linearity

On the weekend I began exploring a non-linear model for aggregating polls. My test case produced nice-looking graphs, but the results were in large part an artifact of the priors I had chosen for the model (not good).

In the comments to that post, I suggested that part of the problem may have been how I had defined the model.

Thinking about this some more, I believe the problem was in this term: << beta-for-pollster * (poll-result - minimum-poll-result) >>. Rather than the minimum, the poll result should be centred on a measure of central tendency of some type (mid-point, mean, median, etc.).

I have now redefined the model to use the midpoint for each polling house (defined as the (min+max)/2 for that house). The use of house-specific mid-points acknowledges that the raw x scores include the alpha house effect I am trying to estimate. The revised model is:

house-effect-for-pollster = alpha-for-pollster + beta-for-pollster * (poll-result - pollster-mid-point)
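
To make the change concrete, here is a toy R illustration (all numbers invented) of why centring on the house mid-point rather than the house minimum matters: with the minimum, the beta term is always non-negative and so bleeds into the alpha estimate; with the mid-point, it averages out across the observed range.

    # Toy illustration only: alpha, beta and the poll results are invented numbers
    polls    <- c(0.44, 0.47, 0.50, 0.53)       # hypothetical results from one house
    alpha    <- 0.01                            # additive house effect
    beta     <- 0.2                             # "fizziness" for this house
    midpoint <- (min(polls) + max(polls)) / 2   # house-specific mid-point, as in the revised model

    old <- alpha + beta * (polls - min(polls))  # previous formulation: centred on the minimum
    new <- alpha + beta * (polls - midpoint)    # revised formulation: centred on the mid-point

    round(rbind(old, new), 3)
    # the old terms average well above alpha, confounding alpha and beta;
    # the new terms average to alpha across the observed range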

The priors on the beta effect have been completely freed up, so that they are uninformative. As a result, this model behaves better internally than the previous model.

However, I have added a constraint such that the aggregation assumes the "fizziness" factors (the betas) also sum to zero. This may be a bollocks assumption, and it will need some further analysis.

So - with the caveat that this is still very early exploratory analysis - the results of this new model follow.




On a plain reading, the above charts suggest that the current Rudd effect may actually be better for Labor than other aggregations have found. However, I am not convinced this is anything more than an artifact of the model. I am particularly concerned about the inclusion of data from polling houses that do not have data points spanning the highs and lows of the Gillard period, or spanning both the Gillard and Rudd periods. Such data do not fully occupy the model, which can produce artifacts.
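
A quick way to check which houses contribute polls on both sides of the leadership change is shown below (a sketch only; it assumes the data frame and discontinuity date used in the code later in this post).

    # Sketch: count polls per house before and after the leadership change
    # (assumes 'data' has House and Date columns and 'discontinuity' is the change date,
    #  as in the data-preparation snippet below)
    coverage <- table(data$House,
        ifelse(data$Date >= as.Date(discontinuity), 'Rudd', 'Gillard'))
    print(coverage)  # a zero in either column means that house does not span both periods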

If we limit the analysis to data from Nielsen and Newspoll for a comparison using data that does fully occupy the model, we get the following.




Because I am committed to presenting poll analysis in a transparent and unbiased fashion, the next two code snippets show how I package up the data for the model (in R), and the model itself (in JAGS).

    # - prepare the data ...
    data$y <- data[ , y] / 100 # vote share between 0-1
    data$variance <- data$y * (1 - data$y) / data$Sample #pq/n
    data$standardError <- sqrt(data$variance)
    data$samplePrecision <- 1 / data$variance
    HOUSES <- levels(data$House)
    HOUSECOUNT <- length(levels(data$House))
    NUMPOLLS <- length(data$y)
    cat('Number of polls: '); print(NUMPOLLS)
    midpoints <- rep(NA, HOUSECOUNT)
    for(i in seq_len(HOUSECOUNT)) {
        # house-specific mid-point: halfway between that house's minimum and maximum result
        midpoints[i] <- (max(data[data$House == HOUSES[i], 'y']) +
            min(data[data$House == HOUSES[i], 'y'])) / 2
        cat('Midpoint ', i, ' (', HOUSES[i], '): ', midpoints[i], '\n', sep='')
    }
    
    # - remember these dots 
    dotsFile <- paste('./files/', fPrefix, 'original.csv', sep='')
    write.csv(data, file=dotsFile)

    # - manage dates
    day0 <- min(data$Date) - 1  # walk starts from earliest date
    if(endDate == TODAY)
        endDate <- max(data$Date)
    PERIOD <- as.numeric(endDate - day0) # length of walk in days
    DISCOUNTINUITYDAY <- as.numeric(as.Date(discontinuity) - day0)
    cat('Discontinuity: '); print(DISCOUNTINUITYDAY)
    tPrefix <- paste(format(day0+1, '%d-%b-%Y'), ' to ', format(endDate, '%d-%b-%Y'), sep='')
    data$day <- as.numeric(data$Date - day0)

    # - do the MCMC thing ...
    
    parameters <- list(PERIOD = PERIOD,
        HOUSECOUNT = HOUSECOUNT,
        NUMPOLLS = NUMPOLLS,
        DISCOUNTINUITYDAY = DISCOUNTINUITYDAY, 
        NEWSPOLL = which(levels(data$House) == 'Newspoll'),
        y = data$y,
        x = data$y,
        day = data$day,
        house = as.integer(data$House),
        samplePrecision = data$samplePrecision,
        midpoints=midpoints
    )
                
    jags <- jags.model(textConnection(model),
        data=parameters,
        n.chains=4,
        n.adapt=n.adapt
    )

    # - burn in
    update(jags, n.iter=n.update) # burn-in the chains

    # - capture results
    jags.capture <- c('walk', 'alpha', 'beta', 'discontinuityValue')
    codaSamples <- coda.samples(jags, jags.capture, n.iter=n.iter, thin=n.thin)
    coda.matrix <- as.matrix(codaSamples)

model {
    ## Derived from Simon Jackman's original model 
    
    ## -- observational model
    for(poll in 1:NUMPOLLS) { 
        # note: x and y are the original polling series
        houseEffect[poll] <- alpha[house[poll]] + 
         beta[house[poll]]*(x[poll]-midpoints[house[poll]]) 
        mu[poll] <- walk[day[poll]] + houseEffect[poll]
        y[poll] ~ dnorm(mu[poll], samplePrecision[poll]) 
    }
            
    ## -- temporal model
    for(i in 2:PERIOD) { # for each day under analysis ...
        day2DayAdj[i] <- ifelse(i==DISCOUNTINUITYDAY, 
            walk[i-1]+discontinuityValue, walk[i-1])
        walk[i] ~ dnorm(day2DayAdj[i], walkPrecision)
    }
    sigmaWalk ~ dunif(0, 0.01)            ## uniform prior on std. dev.  
    walkPrecision <- pow(sigmaWalk, -2)   ##   for the day-to-day random walk
    walk[1] ~ dunif(0.4, 0.6)             ## vague prior on the starting point
    discontinuityValue ~ dunif(-0.2, 0.2) ## uninformative prior

    ## -- house effects model
    for(i in 2:HOUSECOUNT) { ## vague uniform priors for house effects
        alpha[i] ~ dunif(-0.1,0.1)
        beta[i] ~ dunif(-1,1) 
    }
    alpha[NEWSPOLL] <- -sum(alpha[2:HOUSECOUNT]) ## sum to zero
    beta[NEWSPOLL] <- -sum(beta[2:HOUSECOUNT]) ## sum to zero
}
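
For completeness, the charts above are built from the daily walk estimates; a minimal sketch of how these might be summarised from coda.matrix follows (it assumes the objects created in the R snippet above; plotting details are omitted).

    # Sketch: posterior median and 95 per cent credible interval for each day of the walk
    walkNames <- paste('walk[', seq_len(PERIOD), ']', sep='')
    walkCI <- t(apply(coda.matrix[, walkNames], 2, quantile, probs=c(0.025, 0.5, 0.975)))
    walkSummary <- data.frame(date=day0 + seq_len(PERIOD),
        lower=walkCI[, 1], median=walkCI[, 2], upper=walkCI[, 3])
    head(walkSummary)  # one row per day under analysis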

Tuesday, July 30, 2013

Weekly aggregation

This week's polls:

  • Galaxy was 50-50, which was a one point movement in Labor's favour.
  • Essential was 49-51, which was the same as last week, but a one point movement to Labor on the previous fortnight.
  • Morgan was 50.5 to 49.5 (with preferences distributed as they were at the last election), which was a 1.5 point movement to the Coalition.

All of which sees an aggregation that is largely unchanged from last week.





Sunday, July 28, 2013

Exploring a non-linear house effects model

I have spent a couple of hours today exploring a non-linear house effects model. I have been troubled by the "fizziness" of individual polling houses. When population voting intention moves to Labor, some polling houses are much more sensitive to this movement than others. There appears to be a consistency to this tendency between polling houses (i.e. some polling houses appear consistently over-sensitive to these movements, and other houses appear consistently under-sensitive).

One of the reasons I limit my weekly aggregation data to six months is to avoid problems from this non-linearity. If I can better model this non-linearity, I might be able to model longer time-sequences of data.

In today's exploration, I have modeled the non-linearity with a degree-1 polynomial for each pollster. In simplified terms, the house-effect for each pollster is given by the following equation.

house-effect-for-pollster = alpha-for-pollster + beta-for-pollster * (poll-result - minimum-poll-result)

In this model, the constant alpha is analogous to the house effect in my previous model. The value for beta indicates the extent to which a polling house is more or less sensitive to a population shift in voting intention. Returning to the term I introduced above, beta is my measure of fizziness.
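
As a toy illustration of what beta does (invented numbers only; in the model the beta term applies to the observed poll itself, but here it is applied to the true value to keep the arithmetic simple), consider two houses watching the same five-point shift in population voting intention.

    # Toy illustration: all numbers invented
    population <- c(0.45, 0.50)   # true TPP before and after a five-point shift
    min_x      <- 0.45            # minimum observed poll result (for centring)

    # house A is "fizzy" (over-sensitive); house B is flat (under-sensitive)
    pollA <- population + 0.3 * (population - min_x)
    pollB <- population - 0.3 * (population - min_x)

    diff(pollA) * 100  # house A reports a 6.5 point move
    diff(pollB) * 100  # house B reports only a 3.5 point move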

I have used a sum-to-zero constraint across polling houses to anchor the alpha values. I have assigned Newspoll a beta of zero to anchor the beta values. The part of this approach I am least comfortable with is the selection of priors for the beta values. These are neither uninformative nor vague. They significantly influence the final result. Clearly, some more thinking is needed here.

The initial experimental results follow (using all of the polling data since the previous election).




The code for the non-linear model follows.

model {
    ## Developed from Simon Jackman's original model
    
    ## -- observational model
    for(poll in 1:NUMPOLLS) { 
        # note: x and y are the original polling series
        houseEffect[poll] <- alpha[house[poll]] + beta[house[poll]]*(x[poll]-min(x)) 
        mu[poll] <- walk[day[poll]] + houseEffect[poll]
        y[poll] ~ dnorm(mu[poll], samplePrecision[poll]) 
    }
            
    ## -- temporal model
    for(i in 2:PERIOD) {
        day2DayAdj[i] <- ifelse(i==DISCOUNTINUITYDAY, 
            walk[i-1]+discontinuityValue, walk[i-1])
        walk[i] ~ dnorm(day2DayAdj[i], walkPrecision)
    }
    sigmaWalk ~ dunif(0, 0.01)            ## uniform prior on std. dev.  
    walkPrecision <- pow(sigmaWalk, -2)   ##   for the day-to-day random walk
    walk[1] ~ dunif(0.4, 0.6)             ## vague prior
    discontinuityValue ~ dunif(-0.2, 0.2) ## uninformative prior

    ## -- house effects model
    for(i in 2:HOUSECOUNT) { 
        alpha[i] ~ dunif(-0.1,0.1) ## vague prior
        beta[i] ~ dunif(-0.1,0.1)  ## could be problematic!!
    }
    alpha[NEWSPOLL] <- -sum(alpha[2:HOUSECOUNT]) ## sum to zero
    beta[NEWSPOLL] <- 0 ## Newspoll as benchmark on non-linearity
}

Saturday, July 27, 2013

How much was Kevin Rudd worth?

I was a little surprised when I saw Simon Jackman suggest that Kevin Rudd had moved the two-party preferred voting intention by seven percentage points in Labor's favour. It was not consistent with my own analysis, and only one pollster (Morgan) has data that supports a seven-point movement. Data from all the remaining pollsters suggest the "Rudd Effect" was less than seven percentage points.

Now don't get me wrong, I have enormous respect for Professor Jackman. I purchased and read his 600-page text, Bayesian Analysis for the Social Sciences. It is a tour de force on Bayesian statistics. I cannot recommend this book enough. His understanding and knowledge in this area far surpass my own. Unashamedly, I have used Jackman's approach as the basis for my own aggregation efforts.

However, I suspect he has not noticed that the data since the second ascension of Kevin Rudd violates a number of the linear assumptions implicit in his model. In particular, some of the house effects before and after Kevin are radically different. I blogged on this under the rubric: When models fail us. As I noted previously, the violation of these underpinning assumptions results in the model producing incorrect results.

Revisiting the discontinuity model I initially used following Rudd's restoration, I have treated the Morgan, Galaxy and Essential data before and after the restoration as different series. I have also centred the aggregation on the assumption that the house effects for Newspoll and Nielsen sum to zero (this may turn out to be problematic, but it is sufficient for the moment). Notwithstanding some remaining doubts, I think this approach overcomes many of the problems my earlier discontinuity model had. I will cut to the results before reviewing the R and JAGS code.

The key finding is that Kevin was worth 5.6 percentage points in Labor's two-party preferred vote share.



Turning to the house effects, we can see some of the variability in the pre-Rudd (PR) and after-Rudd (AR) values.


The revised model follows. In the first code block is the R code for managing the Morgan sample size and for separating the relevant polls into pre-Rudd (PR) and after-Rudd (AR) series. The second code block has the JAGS code. (As an aside, I have been playing with Stan lately, and might make a switch down the track).

# fudge sample size for Morgan multi - adjustment for observed over-dispersion
output.data[output.data[, 'House'] == 'Morgan multi', 'Sample'] <- 1000

# treat before and after for Morgan, Galaxy and Essential as different series
output.data$House <- paste(as.character(output.data$House), 
    ifelse(as.character(output.data$House) %in% c('Essential', 'Morgan multi', 'Galaxy'),
        ifelse(output.data[, 'Date'] >= as.Date(discontinuity), ' AR', ' PR'), ''), 
    sep='')
# reorder the factor levels so that Newspoll is house number one in the factor;
# the JAGS sum-to-zero loop over houses 2:HOUSECOUNT relies on this ordering
l <- levels(factor(output.data$House))
n <- which(l == 'Newspoll')
l[n] <- l[1]
l[1] <- 'Newspoll'
output.data$House <- factor(output.data$House, levels=l)


model {
    ## Based on Simon Jackman's original model 
    
    ## -- observational model
    for(poll in 1:NUMPOLLS) { 
        y[poll] ~ dnorm(walk[day[poll]] + houseEffect[house[poll]], samplePrecision[poll]) 
    }
            
    ## -- temporal model
    for(i in 2:PERIOD) { # for each day under analysis ...
        day2DayAdj[i] <- ifelse(i==DISCOUNTINUITYDAY, walk[i-1]+discontinuityValue, walk[i-1])
        walk[i] ~ dnorm(day2DayAdj[i], walkPrecision)
    }
    sigmaWalk ~ dunif(0, 0.01)            ## uniform prior on std. dev.  
    walkPrecision <- pow(sigmaWalk, -2)   ##   for the day-to-day random walk
    walk[1] ~ dunif(0.01, 0.99)           ## uninformative prior
    discontinuityValue ~ dunif(-0.2, 0.2) ## uninformative prior

    ## -- sum-to-zero constraint on house effects 
    for(i in 2:HOUSECOUNT) { ## vague normal priors for house effects
        houseEffect[i] ~ dnorm(0, pow(0.1, -2))
    }
    #houseEffect[NEWSPOLL] <- -sum(houseEffect[2:HOUSECOUNT])  ## all sum to zero
    houseEffect[NEWSPOLL] <- -houseEffect[NIELSEN]   ## Newspoll and Nielsen sum to zero
    #houseEffect[NEWSPOLL] <- 0 ## centred on Newspoll as zero
}
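
Note that the model above references NEWSPOLL and NIELSEN indices that are not constructed in the R snippet; a sketch of how they might be passed in with the other model data (following the pattern of the parameter list in the 31 July post above) is:

# Sketch: indices of the anchoring houses, passed to JAGS with the other data
# (mirrors the parameter list shown in the 31 July post above)
NEWSPOLL <- which(levels(output.data$House) == 'Newspoll')
NIELSEN  <- which(levels(output.data$House) == 'Nielsen')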

Tuesday, July 23, 2013

Weekly aggregation

This week's polls (noting that at 7.30am Essential is not out):

  • ReachTEL at 49-51 moved a point in Labor's favour
  • Morgan at 52-48 moved half a point in Labor's favour
  • Newspoll at 48-52 moved two points in the Coalition's favour

These polls produced an aggregation that is largely unchanged. [But note: I have down-weighted the Morgan sample size in this week's aggregation to better match the variance we see in that polling series. I have also relaxed the day-to-day change constraint in the temporal part of the model, returning to the one-percent figure I originally used.]
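
In code terms, the two bracketed tweaks amount to something like the following sketch (the house label, the effective sample size and the data frame name are assumptions; the prior line mirrors the JAGS snippets elsewhere on this blog).

    # Sketch of this week's two adjustments
    # 1. down-weight the Morgan sample size to better match that series' observed variance
    data[data$House == 'Morgan', 'Sample'] <- 1000   # assumed effective sample size

    # 2. day-to-day change constraint back on the original one-percent figure (in the JAGS model):
    #    sigmaWalk ~ dunif(0, 0.01)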



On a uniform swing basis, this would see the following outcome as the most likely:


Of note: Simon Jackman observed that the most recent Newspoll (below) has seen the probability of a Labor win (as reflected by betting markets) move from having a three in front of it to having a two in front.


Update 8.30pm


Now that the Essential poll (49-51 in the Coalition's favour; a one point move to Labor) has been factored into the aggregation, we see an ever so slightly more favourable result for the Government (but really not that different from what we had above).




Monday, July 15, 2013

When models fail us

The change of prime minister has had an unexpected impact on the polling data and on my attribution of house effects. This impact makes the polls particularly difficult to interpret at the moment.

The critical question is how much of a bounce Kevin Rudd brought to the polling fortunes of the Labor Party. To answer this question, I will look at the average two-party preferred Labor vote share for each polling house. I will take the polls in May and June (prior to 26 June) and compare them with the polls since then.

House            Before        After         Bounce
Essential        45.3 (n=7)    48.0 (n=3)    2.7
Galaxy           45.5 (n=2)    49.0 (n=1)    3.5
Nielsen          44.5 (n=2)    50.0 (n=1)    5.5
ReachTEL         42.0 (n=1)    48.0 (n=1)    6.0
Newspoll         43.3 (n=4)    49.5 (n=2)    6.2
Morgan (multi)   44.5 (n=8)    51.7 (n=3)    7.2

While this is a little rough and ready (some might say arbitrary), it reveals substantial differences of view between the polling houses on the boost Kevin Rudd's return gave Labor.
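
For the record, a minimal R sketch of the comparison in the table above (the data frame name, column names and vote-share scale are all assumptions):

    # Sketch: mean Labor TPP by house before and after 26 June 2013
    # (assumes a data frame 'polls' with Date, House and a Labor TPP column 'ALP' on the 0-100 scale)
    recent <- polls[polls$Date >= as.Date('2013-05-01'), ]
    period <- ifelse(recent$Date >= as.Date('2013-06-26'), 'After', 'Before')
    before <- tapply(recent$ALP[period == 'Before'], recent$House[period == 'Before'], mean)
    after  <- tapply(recent$ALP[period == 'After'],  recent$House[period == 'After'],  mean)
    round(cbind(Before=before, After=after, Bounce=after - before), 1)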

Of note, Essential has gone from being among the most Labor-leaning polling houses to among the most Coalition-leaning.

The discontinuity model I had been using in recent weeks assumed the polling biases during the reigns of prime ministers Gillard and Rudd remained much the same. Clearly this is not the case. While I now have serious doubts about the utility of these charts, for reasons of historical continuity, they follow:




If we limit our analysis to the data since the second ascension of Kevin Rudd, the story is a little different. This analysis suggests a 75 per cent chance that the Coalition has 50 per cent or more of the TPP vote share, and a 25 per cent chance that Labor is in front. 
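
A probability like that one is read straight off the MCMC output; a sketch, assuming MCMC output in the form of the coda.matrix object used elsewhere on this blog, with the walk expressed as Labor's TPP share, is:

    # Sketch: probability that the Coalition is on 50 per cent or more of the TPP
    finalDay <- coda.matrix[, paste('walk[', PERIOD, ']', sep='')]   # last day of the walk
    mean(finalDay <= 0.5)   # share of posterior samples with Labor at or below 50 per cent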




At this stage I would urge some caution in interpreting the second Rudd-era polls. As more polling data becomes available, we will be able to better calibrate our models.

Tuesday, July 9, 2013

Weekly aggregation: Labor on 50.5 per cent

Three polls came out in the past 24 hours:

  • Newspoll is at 50-50
  • Morgan has Labor well in front at 52.5 to 47.5 (using the preference flows from the last election)
  • Essential has the Coalition well in front on 48-52

Graphically, this looks as follows.



Turning to the Bayesian aggregation.




Tuesday, July 2, 2013

Correction to weekly aggregation

Yesterday I excluded this week's Essential poll because I mistakenly thought the poll included some of Julia Gillard's prime ministership. A few people emailed me with the simple news: I was wrong. The one-week sample from Essential is solely from the second Rudd period.

So this is me eating humble pie. I have added the weekly Essential report to the dataset and re-run the Bayesian model. Rather than Labor being a touch ahead (on 50.1 per cent TPP, as I erroneously reported yesterday), the model now has the Coalition on 50.2 per cent TPP. Not a lot of difference.



The Rudd Resurrection Effect is looking like 5.6 percentage points. Still substantial, and still a potential game changer.


A uniform national swing would have us back at a hung Parliament. However, as we noted yesterday, the swing to Labor is unlikely to be uniform. Everyone expects a larger swing in Queensland, where Coalition seats would fall in quick succession.
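
The uniform-swing arithmetic itself is simple; a sketch follows (the pendulum data frame is an assumption; real seat margins would come from the AEC).

    # Sketch: net seats changing hands under a uniform national swing to Labor
    # 'pendulum' is assumed to have one row per seat, with the holder and margin (in points)
    uniformSwingSeats <- function(pendulum, swingToALP) {
        gains  <- sum(pendulum$holder == 'Coalition' & pendulum$margin < swingToALP)
        losses <- sum(pendulum$holder == 'Labor'     & pendulum$margin < -swingToALP)
        gains - losses   # net change in Labor's seat count
    }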


Monday, July 1, 2013

Weekly aggregation: Labor in front

If you have been following my blog, you will know I have been exploring a before-and-after Bayesian model that allows me to quantify the size of the discontinuity in voting intention between the reigns of Julia Gillard and Kevin Rudd. I have called this the Rudd Resurrection Effect (RRE).

In addition to the polls I captured in the past few days, today (as promised) I have backed out the snap Morgan SMS poll (I don't have sufficient data to calibrate its house effect). In its place I have dropped in the latest Morgan poll, which has Labor in front 51-49. I have also added Newspoll, which had it the other way around.

I have decided to exclude today's Essential poll, as its polling period spans both Julia and Kevin. Exclusion was necessary to maintain the integrity of the before-and-after capacity of the model. This is a bit of a pain, because I suspect I won't get a clean Kevin-only poll report from Essential for another two weeks.

The final aggregation has Labor just ahead (and a Kevin effect approaching 6 percentage points). For a number of reasons, I think we need a couple more weeks' worth of data before we can be confident of the level of TPP support for Labor suggested by today's polls.





Elsewhere, Pollytics has an interesting post on the impact of uneven state swings. His conclusion is that a 5% swing to Labor in Queensland would deliver around 9 seats, simply because so many Coalition-held seats sit on relatively small margins. He argues that Queensland's innate support for Kevin Rudd could see him win an election with a national TPP of 51-49 in the Coalition's favour.