FAQ & User Guide
About the Model
Most seat calculators apply the same swing to every constituency. This one works differently. It looks at what kind of people live in each seat and uses that to predict how voting patterns would shift locally if national polls change. The approach is similar to MRP (multilevel regression with post-stratification) but uses published polling averages and Census data rather than its own survey respondents.
For example, a rise in Reform support doesn't add the same amount everywhere. It adds more in seats with lots of Leave voters and non-graduates, and less in young, urban seats. That's why the same national poll can produce very different maps depending on which party is moving.
What Data Does it Use?
- Census demographics: Age, education, ethnicity, religion, housing type and employment for every constituency, drawn from Census 2021, plus each seat's estimated Brexit vote. These are the main ingredients that tell the model how different seats will react.
- 2024 election results: The starting point. Every projection begins with what actually happened in 2024 and adjusts from there.
- Polls: A house-effect adjusted average of recent national polls, plus separate Scottish, Welsh and regional polls where available. House effects are systematic biases that individual pollsters tend to show, and these are removed before averaging.
- Vote switching data: Polling crosstabs showing how 2024 voters are moving between parties (e.g. Con to Reform, Lab to Green), blended with the demographic predictions.
- Council elections: Current council seat shares and recent by-election results, which pick up local momentum the national polls miss.
- Local overrides: A handful of seats where national models reliably get it wrong, such as the Speaker's seat or seats with strong independent candidates.
What it Can and Can't Do
It's good at capturing broad patterns: how education, age, ethnicity and the Brexit vote push different areas in different directions. It handles Scotland and Wales with their own polling data, and picks up signals from recent local elections.
It can't predict surprises: a popular independent candidate, an MP's personal vote, a local campaign surge, or a completely new voting pattern that didn't exist in 2024. Use it to explore "what if" scenarios, not as a crystal ball.
How Does the Prediction Model Work?
Most swing calculators apply the same national swing to every seat. This tool uses a demographic regression model that learns how age, education, ethnicity, religion, housing tenure, employment and Brexit vote relate to voting patterns. Given a set of national vote shares, it predicts each constituency's result based on its unique demographic profile.
For example, a 5-point Labour rise doesn't add 5 points everywhere. It adds more in seats with demographics that correlate with Labour support (younger, more diverse, more renters) and less in seats where those demographics are absent.
The model was trained on the 2024 General Election results across all 632 GB constituencies, using Census 2021 demographic data as predictors. See the model overview above for the full list of data sources.
How is This Different From a Uniform Swing Calculator?
A Uniform National Swing (UNS) calculator applies the same percentage-point change to every constituency. If Labour go up 3 points nationally, they go up 3 points in every seat. This is simple but unrealistic, because real swings vary enormously between seats.
This model distributes changes demographically. A rise in Reform support concentrates in Leave-voting, non-graduate seats. A rise in Green support concentrates in graduate, younger, urban seats. The same national shift produces different local effects depending on each seat's population.
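The idea of distributing a national swing by demographic profile can be sketched in a few lines. Everything below is illustrative: the seat names, demographic shares and coefficients are invented for the example, not taken from the tool's real training data.

```python
def demographic_swing(national_swing_pp, seats, coeffs):
    """Give each seat a share of the national swing proportional to how
    strongly its demographics predict support for the moving party."""
    # Raw responsiveness score per seat: weighted sum of demographic shares.
    scores = {
        name: sum(coeffs[k] * demo[k] for k in coeffs)
        for name, demo in seats.items()
    }
    # Scale so the seat-average swing equals the national swing.
    mean_score = sum(scores.values()) / len(scores)
    return {name: national_swing_pp * s / mean_score
            for name, s in scores.items()}

seats = {
    "Leave-leaning town": {"leave": 0.65, "degree": 0.20},
    "Graduate city seat": {"leave": 0.35, "degree": 0.55},
}
# Hypothetical Reform coefficients: Leave share pushes the swing up,
# graduate share pushes it down.
coeffs = {"leave": 1.0, "degree": -0.5}

swings = demographic_swing(5.0, seats, coeffs)
# The Leave-leaning seat absorbs more of a 5-point Reform rise than the
# graduate seat, while the average across seats stays at 5 points.
```

The same national shift therefore lands unevenly, which is exactly why the map looks different depending on which party is moving.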
How Accurate is the Model?
Backtested against real elections, the model correctly predicts the winning party in 90%+ of seats for 2015, 2017 and 2019 when given actual national vote shares. 2024 was harder, with a historic Labour landslide, large geographically uneven swings and Reform emerging as a new force, bringing accuracy to roughly 75%. Accuracy is higher in safe seats and lower in tight marginals where small errors can flip the predicted winner.
It performs best when swings are moderate. In extreme scenarios (e.g. a party doubling its vote) or very tight marginals, accuracy drops. It cannot predict independent candidates, local incumbency effects, or localised tactical surges that depend on constituency-specific factors the demographics don't capture.
Use it as an exploratory guide to patterns, not a precise forecast.
How are the Seat Totals Calculated?
The headline seat counts come from running the model's predictions through first-past-the-post: whichever party has the highest projected vote share in a constituency wins that seat. Add up the winners across all 632 seats and you get the seat totals.
Behind the scenes, the model also runs Monte Carlo simulations to estimate uncertainty. It takes the central prediction and re-runs it hundreds of times, each time adding small random perturbations: noise in the national polling estimate, variation in how strongly the national swing translates locally, party-specific brand effects that can shift a party up or down across correlated seats, and random constituency-level noise. Each simulation produces a slightly different set of 632 results.
The final seat number shown is the mean (average) across all those simulations. In a close election, there could be hundreds of marginal seats where a 3-point shift would flip the result, so the range of plausible outcomes from the simulations can be wide. The model finds that range and then displays the average as the headline number.
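A minimal sketch of this two-step process (FPTP winner per seat, then Monte Carlo averaging) looks like the following. The noise model here is deliberately simplified to independent per-party noise; the tool's description also mentions correlated national, brand-level and seat-level error terms, which are omitted. Seat names and shares are invented.

```python
import random

def seat_winner(shares):
    """First-past-the-post: the highest projected share wins the seat."""
    return max(shares, key=shares.get)

def simulate_seat_totals(central, n_sims=500, noise_sd=2.0, seed=1):
    """Monte Carlo sketch: perturb each seat's shares with random noise
    and average the resulting seat totals across simulations."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        counts = {}
        for shares in central.values():
            noisy = {p: v + rng.gauss(0, noise_sd) for p, v in shares.items()}
            w = seat_winner(noisy)
            counts[w] = counts.get(w, 0) + 1
        totals.append(counts)
    parties = {p for c in totals for p in c}
    return {p: sum(c.get(p, 0) for c in totals) / n_sims for p in parties}

central = {
    "Seat A": {"Lab": 38.0, "Con": 36.5, "Ref": 18.0},  # marginal
    "Seat B": {"Lab": 52.0, "Con": 25.0, "Ref": 16.0},  # safe
}
mean_seats = simulate_seat_totals(central)
# In the safe seat Labour win almost every simulation; in the marginal the
# winner flips between runs, so the mean seat counts come out fractional.
```

This is why the headline number can be non-integer before rounding: it is an average over hundreds of slightly different elections.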
What Controls Does the Tool Have?
The tool has three main control groups:
- Vote share sliders: set national vote share for each party. Use the poll selector dropdown to load a specific poll, or adjust manually.
- Turnout differentials: model "what if" scenarios for different demographic groups turning out at higher or lower rates.
- Tactical voting: simulate voters switching to the best-placed party on their side (progressive or right bloc).
Additional features:
- Poll selector: choose a specific recent poll to see its seat projection instantly.
- Lock buttons: freeze a party's vote share so it doesn't change when you adjust other parties.
- Interactive map: click any constituency for a detailed breakdown including demographics, vote shares and swing decomposition.
- Region filter: focus the map on a specific region.
- Constituency table: sortable, filterable list of all 632 seats with gains/holds.
What do the Map Colour Options Show?
The map colour dropdown (above the map) lets you switch between several views:
- Winner: the default. Each constituency is coloured by which party is projected to win it. This is the simplest view and matches what you'd see on election night.
- Party vote share heatmaps: select a specific party (e.g. "Labour vote share") to see a gradient showing how strong that party is across the country. Darker shading means a higher projected vote share. This is useful for spotting geographic patterns, like where Reform support is concentrated or where the Lib Dems have pockets of strength.
- Change vs 2024: a toggle that works alongside the party heatmaps. Instead of showing absolute vote share, it shows the change from the 2024 General Election result. Blue/positive means the party is up compared to 2024, red/negative means they're down. This helps you see where swings are largest.
- Demographic choropleths: select a demographic variable (e.g. "Age 65+", "Degree+", "Leave Vote") to shade each constituency by its Census 2021 demographic profile. This doesn't show any voting data. It's a reference layer that helps you understand why the model makes the predictions it does. If you can see that high-Reform seats line up with high-Leave areas, that's the demographic model at work.
You can combine these with the region filter to zoom in on Scotland, Wales, or a specific English region.
How Does the Poll Selector Work?
The dropdown at the top of the Vote Share section lets you select a specific recent poll. Choosing one sets the vote share sliders to that poll's published figures.
The default "7-poll average" uses a house-effect adjusted average of the 7 most recent polls. House effects are systematic biases that individual pollsters tend to show (e.g. consistently over-reporting one party), and these are subtracted before averaging to give a more accurate picture.
When you select a single poll, the raw published values are used (no house-effect adjustment), because you're specifically choosing to see that pollster's numbers.
Some polls don't include all parties (e.g. GB-only polls may omit SNP and Plaid Cymru). For missing parties, the model falls back to the baseline average value.
Why Don't the Vote Shares Sum to Exactly 100%?
Polls are conducted for Great Britain (or sometimes just England and Wales), so they don't always include the SNP or Plaid Cymru. When a poll is missing these parties, the tool fills in baseline values, which can push the total away from exactly 100%.
Published polls also round their figures, so they may not sum to exactly 100% even as published. The model handles non-100% totals gracefully and normalises internally; a total between 95% and 105% is fine and won't distort the results significantly.
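Internal normalisation of this kind is simple rescaling: divide every share by the actual total and multiply by 100, so the ratios between parties are preserved. A sketch, with invented poll figures:

```python
def normalise_shares(shares, target=100.0):
    """Rescale published poll figures so they sum to exactly `target`,
    leaving the ratios between parties intact."""
    total = sum(shares.values())
    return {p: v * target / total for p, v in shares.items()}

# Illustrative poll that sums to 101 because of rounding in the
# published tables.
poll = {"Lab": 28, "Con": 22, "Ref": 27, "LD": 12, "Grn": 9, "SNP": 3}
clean = normalise_shares(poll)
# Every party is nudged down slightly so the total is exactly 100.
```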
What do the Lock Buttons Do?
The lock icon next to each party's vote share slider freezes that party's value. When you then move another party's slider, the change is redistributed proportionally among the unlocked parties only.
This is useful for scenarios like "What if Reform rise to 35% and all the extra comes from the Conservatives?" Just lock every party except Conservative and Reform, then move Reform's slider up.
Why do the Greens Win so Few Seats?
The UK uses first-past-the-post (FPTP): the candidate with the most votes in each constituency wins the seat, and every other vote counts for nothing. The Greens' vote is spread relatively thinly across many seats rather than concentrated in a few.
Even at 12-15% nationally, the Greens typically come 3rd, 4th or 5th in most seats. They only win where their support is highly concentrated (e.g. Bristol Central in 2024). Under FPTP, a party needs roughly 30-40% in a seat to win it, and the Greens rarely reach this outside a handful of target seats.
This is a genuine feature of FPTP, not a model error. Under proportional representation, 12% of the vote would translate to roughly 12% of seats (~76 of 632). Under FPTP, it translates to between 1 and 4.
How do the Turnout Sliders Work?
Each turnout slider represents a demographic group (e.g. "Under 35s", "Graduates", "Social Renters"). Moving the slider right simulates that group turning out at a higher rate than the baseline; left means lower turnout.
Some sliders are composite, adjusting multiple demographics at once. The Under 35s slider adjusts both age and secularity (young people are disproportionately non-religious), so it correctly captures how youth turnout affects parties like Green whose support concentrates in secular, young areas.
The model pre-computes how each constituency's vote shares would change if a given group's turnout shifted. These sensitivities vary by seat. Muslim voter turnout only matters in seats with a significant Muslim population, while older voter turnout affects most seats.
Turnout differentials produce subtle, targeted effects rather than dramatic nationwide swings. This is realistic: even large turnout shifts only change a few seats.
Consider: if young people turn out at a rate 5 points higher than baseline, this shifts results in seats with lots of young people (university towns, urban centres) but has little effect in seats with few young people. The national seat count might only change by 5-15 seats, but those are precisely the seats where demographic composition makes a difference.
To see larger effects, try combining multiple sliders (e.g. higher graduate turnout + higher young turnout together) or use the extreme ends of the slider range (±8pp).
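The per-seat sensitivity idea can be sketched as follows. The sensitivity numbers below are illustrative placeholders for the tool's pre-computed values: each one says how much a party's share in that seat moves per +1pp of the group's turnout.

```python
def apply_turnout_shift(base_shares, sensitivities, shift_pp):
    """Adjust one seat's vote shares for a turnout shift in a single
    demographic group, then renormalise to 100."""
    adjusted = {p: base_shares[p] + sensitivities.get(p, 0.0) * shift_pp
                for p in base_shares}
    total = sum(adjusted.values())
    return {p: v * 100.0 / total for p, v in adjusted.items()}

# University-town seat: many under-35s, so youth turnout moves the numbers.
uni_seat = apply_turnout_shift(
    {"Lab": 40.0, "Grn": 20.0, "Con": 25.0, "Ref": 15.0},
    {"Lab": 0.25, "Grn": 0.20, "Con": -0.25, "Ref": -0.20},
    shift_pp=5.0,
)
# Retirement-coast seat: few under-35s, so the same slider barely registers.
coast_seat = apply_turnout_shift(
    {"Lab": 25.0, "Grn": 5.0, "Con": 40.0, "Ref": 30.0},
    {"Lab": 0.03, "Grn": 0.02, "Con": -0.03, "Ref": -0.02},
    shift_pp=5.0,
)
```

The same +5pp slider move shifts the university-town seat by over a point while leaving the retirement-coast seat essentially unchanged, which is the targeted, seat-specific effect described above.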
How Does Tactical Voting Work?
Tactical voting simulates voters switching to the best-placed party on their side to prevent the other side winning.
Progressive bloc (Labour, Lib Dem, Green): at 20% willingness, one-in-five progressive voters in seats where their party is 3rd or lower switch to whichever progressive party is best placed to win.
Right bloc (Conservative, Reform): same logic. Right-leaning voters consolidate behind the stronger right candidate.
The calculation is done per constituency. In a seat where Labour is 1st and Lib Dems are 3rd, progressive tactical voting would see some Lib Dem voters switch to Labour. In a seat where the Lib Dems are 2nd and Labour 4th, the reverse happens.
In 2024, real-world progressive tactical voting was estimated at roughly 10-15%.
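The per-constituency rule described above can be sketched like this. The party shares, bloc membership and 20% willingness figure are illustrative, and the placement rule (3rd or lower switches to the best-placed bloc party) follows the description above.

```python
def apply_tactical_voting(shares, bloc, willingness):
    """Voters for bloc parties placed 3rd or lower in this seat move a
    `willingness` fraction of their vote to the best-placed bloc party."""
    ranked = sorted(shares, key=shares.get, reverse=True)
    bloc_best = next(p for p in ranked if p in bloc)  # best-placed bloc party
    out = dict(shares)
    for party in bloc:
        if party == bloc_best:
            continue
        if ranked.index(party) >= 2:  # 3rd place or lower
            moved = shares[party] * willingness
            out[party] -= moved
            out[bloc_best] += moved
    return out

progressives = {"Lab", "LD", "Grn"}
seat = {"Con": 35.0, "LD": 30.0, "Lab": 20.0, "Grn": 10.0, "Ref": 5.0}
after = apply_tactical_voting(seat, progressives, willingness=0.20)
# The Lib Dems are the best-placed progressive party here, so one in five
# Labour and Green voters switch to them: LD 30 -> 36, Lab 20 -> 16.
```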
What are House Effects?
House effects are systematic biases that individual polling companies consistently show. For example, one pollster might consistently report Labour 2 points higher than average, while another consistently reports Reform 3 points higher.
These aren't random errors. They're persistent methodological differences caused by different sampling methods, weighting schemes, and question wording.
The 7-poll average (default) subtracts each pollster's estimated house effect before averaging, producing a more stable and less pollster-dependent estimate of true national opinion. When you select a single poll, you see the raw published values without this adjustment.
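The adjustment itself is straightforward: subtract each pollster's estimated house effect from its published figures, then average. The pollster names, effect sizes and shares below are invented for illustration.

```python
def house_adjusted_average(polls, house_effects):
    """Subtract each pollster's estimated house effect before averaging."""
    parties = polls[0]["shares"].keys()
    adjusted = []
    for poll in polls:
        effects = house_effects.get(poll["pollster"], {})
        adjusted.append({p: poll["shares"][p] - effects.get(p, 0.0)
                         for p in parties})
    return {p: sum(a[p] for a in adjusted) / len(adjusted) for p in parties}

polls = [
    {"pollster": "Alpha", "shares": {"Lab": 30, "Ref": 25}},
    {"pollster": "Beta",  "shares": {"Lab": 26, "Ref": 29}},
]
# Hypothetical effects: Alpha runs Labour 2 points high, Beta runs
# Reform 3 points high.
house_effects = {"Alpha": {"Lab": 2.0}, "Beta": {"Ref": 3.0}}
avg = house_adjusted_average(polls, house_effects)
# After adjustment the two pollsters broadly agree: Lab 27, Ref 25.5.
```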
How are Scotland, Wales and Northern Ireland Handled?
Scotland and Wales are modelled alongside England using the same demographic regression. However, the model includes regional effects that capture how voting patterns differ by region, for example the SNP baseline in Scotland and the Plaid Cymru baseline in Wales.
SNP and Plaid Cymru vote share sliders only affect their respective nations. National polls typically report GB-wide figures, so the SNP and Plaid Cymru values are derived from Scottish and Welsh sub-samples or separate polls where available.
Northern Ireland is excluded from the model entirely (its party system is completely separate).
What Happens to the Other Parties When I Move a Slider?
Vote shares should broadly sum to 100%. When you increase one party, the others need to decrease to compensate. The tool does this using proportional redistribution: the change is shared among the other unlocked parties in proportion to their current share.
For example, if Labour is at 30% and you increase Reform by 2 points, Labour might lose ~0.8 points and the Conservatives ~0.5 points (proportional to their current sizes). This is more realistic than taking the change from just one party.
Use the lock buttons to control exactly which parties absorb the change.
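Proportional redistribution with locks can be sketched in a few lines. The shares below are invented; the example reproduces the "lock everyone except the Conservatives, raise Reform" scenario described above.

```python
def adjust_share(shares, locked, party, new_value):
    """Move `party` to `new_value` and take the difference from the other
    unlocked parties, in proportion to their current shares."""
    delta = new_value - shares[party]
    pool = {p: v for p, v in shares.items()
            if p != party and p not in locked}
    pool_total = sum(pool.values())
    out = dict(shares)
    out[party] = new_value
    for p, v in pool.items():
        out[p] = v - delta * v / pool_total
    return out

shares = {"Lab": 30.0, "Con": 20.0, "Ref": 25.0, "LD": 12.0,
          "Grn": 8.0, "SNP": 3.0, "PC": 2.0}
# Lock every party except the Conservatives, then raise Reform to 35%:
locked = {"Lab", "LD", "Grn", "SNP", "PC"}
out = adjust_share(shares, locked, "Ref", 35.0)
# The Conservatives are the only unlocked party left, so the entire
# 10-point Reform rise comes out of their share: Con 20 -> 10.
```

With no locks, the same function spreads the 10 points across all the other parties in proportion to their sizes, which is the default slider behaviour.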
POLLCHECK
Demographic Swingometer