Seeding & Expert Fallibility
40 years of NCAA selection committee decisions under the microscope
2,518 tournament games • 1985–2024
Finding 1: The Committee Gets It Right About 73% of the Time
In the first round, the higher-seeded team wins 72.9% of the time. You might expect that number to climb in later rounds — after all, the surviving teams have proven themselves, and the committee's best-seeded teams should separate from the pack. Instead, the upset rate stays remarkably flat:
| Round | Games | Higher Seed Wins | Upset Rate |
|---|---|---|---|
| Round of 64 | 1,453 | 72.9% | 27.1% |
| Round of 32 | 828 | 69.3% | 30.7% |
| Sweet 16 | 178 | 68.9% | 31.1% |
| Elite 8 | 49 | 71.1% | 28.9% |
| Final Four | 10 | 71.4% | 28.6% |
This is a key economic insight: the seedings contain meaningful information, but in any given game the committee's ranking picks the wrong team roughly a quarter of the time. If seeding were a financial forecast, you'd say the model carries a consistent ~27% error rate that it can't seem to reduce, regardless of the sample or the round.
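The round-by-round rates above reduce to a simple tally. A minimal sketch of the calculation, using a toy list of games rather than the article's actual 2,518-game dataset:

```python
from collections import defaultdict

# Toy game records as (round, winner_seed, loser_seed) tuples.
# Illustrative only -- not the article's actual dataset.
games = [
    ("R64", 1, 16), ("R64", 12, 5), ("R64", 3, 14),
    ("R32", 4, 5), ("R32", 9, 1),
]

def upset_rate_by_round(games):
    """Share of games per round won by the worse-seeded (higher-numbered) team."""
    upsets, totals = defaultdict(int), defaultdict(int)
    for rnd, winner_seed, loser_seed in games:
        if winner_seed == loser_seed:
            continue  # same-seed matchups (possible from the Final Four on) carry no signal
        totals[rnd] += 1
        if winner_seed > loser_seed:
            upsets[rnd] += 1
    return {rnd: upsets[rnd] / totals[rnd] for rnd in totals}
```

Skipping same-seed matchups is why the game counts shrink so sharply in the late rounds: a 1-vs-1 Final Four game has no "higher seed" to evaluate.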
Finding 2: 40 Years of Data and the Committee Hasn't Gotten Better
First-round upset rate by decade
If anything, the rate has drifted slightly upward — from 23.9% in the first decade to 25.2% in the most recent. The committee has access to more data, more analytics, and more computing power than ever before. Yet its hit rate hasn't budged. This is consistent with a well-known finding in forecasting research: expert judgment tends to plateau, and more information doesn't automatically translate into better predictions.
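Whether a drift from 23.9% to 25.2% is real change or noise can be checked with a two-proportion z-test. A sketch below; the per-decade game counts are assumptions chosen to match the reported rates, since the article states only the percentages:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for H0: two binomial proportions are equal (pooled standard error)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (x2 / n2 - x1 / n1) / se

# Assumed counts: 76 upsets in 318 first-round games (23.9%, 1985-94)
# vs 96 upsets in 381 games (25.2%, 2015-24). The totals are illustrative
# guesses; only the rates come from the article.
z = two_proportion_z(x1=76, n1=318, x2=96, n2=381)
```

With samples of this size the z-statistic stays well under the conventional 1.96 threshold, so the "drift" is statistically indistinguishable from a flat line — which is exactly the no-improvement point.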
Finding 3: The 11-Seed Anomaly Is Systematic Mispricing
This is the single most striking pattern in the data and the clearest evidence of expert fallibility. If seeding were perfectly calibrated, performance should decline monotonically from seed 1 to seed 16. It doesn't. The 11-seed line is out of order:
Red marker = where 11-seeds "should" be based on their seed number
The numbers get more dramatic the deeper you go:
11-seeds have made 6 Final Fours. 9-seeds and 10-seeds have made 1 combined. This isn't a fluke — it's a 40-year pattern across 174 11-seed entries. Several possible explanations:

- Since 2011, many 11-seeds have arrived through the First Four play-in, entering the main bracket with a tournament win already behind them.
- The 11 line tends to collect the last at-large bids from major conferences: teams the committee rates low on resume but whose true strength is closer to the middle seeds.
- Some of the gap may simply be small-sample variance that 40 years hasn't washed out.
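The out-of-order pattern can be detected mechanically: a seed line is anomalous if it outperforms any better (lower-numbered) seed. A sketch using rounded, illustrative per-seed averages, not the article's exact figures:

```python
# Rounded, illustrative average-wins-per-entry by seed; the article's exact
# figures come from its 2,518-game dataset. Note the bump at the 11 line.
avg_wins = {1: 3.3, 2: 2.4, 3: 1.8, 4: 1.5, 5: 1.1, 6: 1.0, 7: 0.9,
            8: 0.7, 9: 0.6, 10: 0.6, 11: 0.75, 12: 0.5, 13: 0.25,
            14: 0.16, 15: 0.06, 16: 0.03}

def out_of_order_seeds(avg_wins):
    """Seeds that outperform at least one better (lower-numbered) seed."""
    return [seed for seed, wins in avg_wins.items()
            if any(avg_wins[better] < wins for better in range(1, seed))]
```

Under perfect calibration this function returns an empty list; with numbers shaped like the article's, only the 11-seed trips the monotonicity check.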
Finding 4: The 5-vs-12 Game Is Less of a "Lock" Than Advertised
The 5-vs-12 matchup is famous in bracket circles as the most common upset pick, and the data validates it:
| Era | 5-Seed Win % | 12-Seed Win % |
|---|---|---|
| 1985-1994 | 72% | 28% |
| 1995-2004 | 62% | 38% |
| 2005-2014 | 55% | 45% |
| 2015-2024 | 69% | 31% |
From 2005-2014, 12-seeds won 45% of the time — almost a coin flip. The rate has pulled back recently, but the structural issue remains: the committee consistently assigns 5-seeds to teams that aren't meaningfully better than 12-seeds. The gap between a "good but not great" major-conference team (typical 5-seed) and a "best team in a mid-major conference" (typical 12-seed) is smaller than the committee prices it.
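With four 5-vs-12 games per year, the 2005-2014 era contains exactly 40 matchups, so the 45% figure carries wide error bars. A Wilson score interval makes the "almost a coin flip" claim precise:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 2005-2014: 4 regional 5-vs-12 games per year for 10 years = 40 games;
# a 45% 12-seed win rate means 18 wins.
lo, hi = wilson_interval(18, 40)
```

The interval comfortably contains 50%: for that decade, the data cannot distinguish the 5-vs-12 game from a fair coin.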
Finding 5: When the Committee's Best Pick Loses Immediately
1-seeds are supposed to be the committee's four most confident selections. They get the easiest path — facing a 16-seed in round one, then an 8 or 9 seed. Yet:
| Year | 1-Seed | Lost To | Score | Context |
|---|---|---|---|---|
| 2018 | Virginia | #16 UMBC | 54-74 | First 1-vs-16 upset in history. 20-point loss. |
| 2023 | Purdue | #16 Fairleigh Dickinson | 63-66 | Second 1-vs-16 upset ever. Both in 5 years. |
Beyond the 16-seed losses, 6 different 1-seeds have been eliminated in the Round of 32 (a second-round exit).
The fact that both 1-vs-16 upsets happened in the last 6 years is noteworthy. Either the committee's top selections have gotten weaker, the 16-seed play-in game provides a warm-up advantage (similar to our 11-seed hypothesis), or random variance is clustering in a small sample.
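The clustering question can be put in rough numbers: if exactly two 1-vs-16 upsets were going to happen somewhere in the tournaments played since 1985, how likely is it that both land in the most recent six? A sketch (suggestive at best, since the six-year window was chosen after seeing the data):

```python
from math import comb

def prob_both_in_window(total_tourneys, window, upsets=2):
    """Chance that all `upsets` land in the last `window` tournaments,
    assuming each upset is equally likely to occur in any tournament."""
    return comb(window, upsets) / comb(total_tourneys, upsets)

# 38 tournaments were played 1985-2024 (2020 was cancelled); both
# 1-vs-16 upsets (2018, 2023) fall in the most recent 6.
p = prob_both_in_window(38, 6)
```

The probability comes out near 2%, low enough to be interesting but, given the post-hoc window and a sample of two events, nowhere near proof that 1-seeds have structurally weakened.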
Finding 6: The Committee Underseeds the ACC and Overseeds the Mountain West
If we compare each team's actual tournament wins to the expected wins for their seed, we can measure whether a conference's teams are systematically over- or under-seeded:
| Conference | Tourney Entries | Avg Seed | Actual Avg Wins | Expected for Seed | Over/Under |
|---|---|---|---|---|---|
| ACC | 214 | 4.9 | 1.82 | 1.63 | +0.19 |
| SEC | 200 | 5.9 | 1.44 | 1.33 | +0.10 |
| Big East | 223 | 5.6 | 1.51 | 1.45 | +0.06 |
| Big Ten | 236 | 5.6 | 1.43 | 1.43 | +0.01 |
| Big 12 | 159 | 5.5 | 1.36 | 1.48 | -0.11 |
| Pac-10 (old) | 110 | 5.8 | 1.28 | 1.39 | -0.11 |
| Mountain West | 65 | 8.8 | 0.48 | 0.80 | -0.33 |
The ACC consistently outperforms its seeding (+0.19 wins per entry), meaning the committee gives ACC teams slightly worse seeds than they deserve. Conversely, the Mountain West is the most "overseeded" conference — their teams win a third of a game less than their seeds would predict.
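The over/under column is actual wins minus a seed-based baseline, averaged per conference. A minimal sketch with hypothetical entries and an illustrative expected-wins table (the article derives its baseline from all 2,518 games):

```python
# Illustrative expected wins by seed, plus a few hypothetical team entries.
expected_wins = {1: 3.3, 4: 1.5, 5: 1.1, 9: 0.6, 12: 0.5}

entries = [  # (conference, seed, actual tournament wins) -- hypothetical
    ("ACC", 1, 4), ("ACC", 5, 2), ("MWC", 9, 0), ("MWC", 12, 0),
]

def over_under_by_conference(entries, expected_wins):
    """Average (actual wins - expected wins for seed) per entry, by conference."""
    sums, counts = {}, {}
    for conf, seed, wins in entries:
        sums[conf] = sums.get(conf, 0.0) + (wins - expected_wins[seed])
        counts[conf] = counts.get(conf, 0) + 1
    return {conf: sums[conf] / counts[conf] for conf in sums}

over_under = over_under_by_conference(entries, expected_wins)
```

A positive value means a conference's teams win more than their seeds predict (underseeded); a negative value means the committee is pricing them too generously.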
Finding 7: The Committee's Uncertainty Spikes in the Middle Seeds
The coefficient of variation (standard deviation divided by mean) tells us how "noisy" each seed's performance is relative to expectations:
Coefficient of variation by seed (higher = more unpredictable relative to expectation)
Seeds 1-2 are reasonably well-calibrated (CV around 50-63%). By seed 5, the CV crosses 100% — meaning the standard deviation is larger than the mean. The committee's predictions for seeds 5-12 contain more noise than signal. In economic terms, the "price" the committee assigns to these middle seeds is barely more informative than random assignment.
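A small sketch of the CV calculation itself, using hypothetical per-entry win counts for two seed lines (Python's `statistics` module, population standard deviation):

```python
from statistics import mean, pstdev

def cv_percent(wins):
    """Coefficient of variation: std dev of per-entry wins over their mean, in %."""
    return 100 * pstdev(wins) / mean(wins)

# Hypothetical per-entry win counts (illustration only, not the dataset):
seed2_wins = [3, 2, 2, 3, 1, 3]   # tightly clustered around its mean -> low CV
seed8_wins = [0, 0, 4, 0, 1, 0]   # boom-or-bust -> CV well above 100%
```

The boom-or-bust profile is what pushes middle seeds past 100%: most entries win nothing while an occasional deep run drags the mean up, so the standard deviation overwhelms it.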
Finding 8: The Hall of Shame — The Committee's Biggest Misses
Champions the committee underrated:
| Year | Champion | Seed | The Miss |
|---|---|---|---|
| 1985 | Villanova | #8 | Won the title as an 8-seed. Still the lowest seed to ever win it all. |
| 2014 | UConn | #7 | A 7-seed championship in the modern era. Committee badly mispriced them. |
| 1988 | Kansas | #6 | Danny Manning carried a 6-seed to the title. |
| 1997 | Arizona | #4 | Beat 3 number-1 seeds en route to the championship. |
| 2023 | UConn | #4 | Won every game by double digits — a dominant 4-seed the committee undersold. |
1-seeds that flamed out earliest:
| Year | 1-Seed | Result | The Miss |
|---|---|---|---|
| 2018 | Virginia | Lost in R64 to #16 UMBC | The committee's #1 overall seed lost by 20 to a 16-seed. Maximum error. |
| 2023 | Purdue | Lost in R64 to #16 FDU | Second 1-vs-16 upset in 6 years. Pattern or fluke? |
| 2011 | Pittsburgh | Lost in R32 to #8 Butler | Out in the opening weekend despite a #1 seed. |
| 2017 | Villanova | Lost in R32 to #8 Wisconsin | Defending champion bounced early. |
| 2022 | Baylor | Lost in R32 to #8 UNC | Defending champion also bounced early. |
The Expert Fallibility Story
The NCAA Selection Committee is a fascinating case study in expert judgment. They get the broad strokes right — 1-seeds are genuinely much better than 16-seeds, and the top 4 seeds win the title 92% of the time. But within that framework, the data reveals persistent, exploitable patterns:
1. The committee doesn't improve over time. Despite 40 years of feedback and increasingly sophisticated analytics, the first-round upset rate has been flat at ~24-25%. More data hasn't made the experts better.
2. 11-seeds are systematically mispriced. They outperform 9 and 10 seeds by every measure — average wins, Sweet 16 rate, Final Four appearances. This has been true for 40 years and the committee hasn't corrected it.
3. Mid-range seeds (5-12) are essentially noise. The coefficient of variation exceeds 100%, meaning the committee's signal is weaker than the randomness. The distinction between a 5-seed and an 8-seed is not very meaningful.
4. Conference bias exists. ACC teams outperform their seeds; Mountain West and Big 12 teams underperform. The committee appears to systematically under-rate some conferences and over-rate others relative to how their teams actually perform, and the bias doesn't close over time.
The economic lesson: Expert committees — whether they're seeding basketball tournaments, rating bonds, or setting policy — tend to be well-calibrated at the extremes but noisy in the middle, prone to status-quo bias, and resistant to self-correction even when feedback is immediate and clear.