I am posting my bowl season results with only the championship game remaining tonight. Here is how the model performed by game.
Win Prob. | Team | Score | Score | Team | Win Prob. |
--- | --- | --- | --- | --- | --- |
63% | BYU | 52 | 24 | UTEP | 37% |
54% | Northern Illinois | 40 | 17 | Fresno State | 46% |
52% | Ohio | 21 | 48 | Troy | 48% |
51% | Southern Miss | 28 | 31 | Louisville | 49% |
32% | Utah | 3 | 26 | Boise State | 68% |
45% | Navy | 14 | 35 | San Diego State | 55% |
32% | Tulsa | 62 | 35 | Hawaii | 68% |
46% | Florida Intl. | 34 | 32 | Toledo | 54% |
59% | Air Force | 14 | 7 | Georgia Tech | 41% |
55% | West Virginia | 7 | 23 | NC State | 45% |
62% | Missouri | 24 | 27 | Iowa | 38% |
32% | East Carolina | 20 | 51 | Maryland | 68% |
51% | Illinois | 38 | 14 | Baylor | 49% |
62% | Oklahoma State | 36 | 10 | Arizona | 38% |
41% | Army | 16 | 14 | SMU | 59% |
59% | Kansas State | 34 | 36 | Syracuse | 41% |
38% | North Carolina | 30 | 27 | Tennessee | 62% |
60% | Nebraska | 7 | 19 | Washington | 40% |
46% | South Florida | 31 | 26 | Clemson | 54% |
57% | Notre Dame | 33 | 17 | Miami (FL) | 43% |
51% | Georgia | 6 | 10 | UCF | 49% |
51% | South Carolina | 17 | 26 | Florida State | 49% |
44% | Northwestern | 38 | 45 | Texas Tech | 56% |
56% | Florida | 37 | 24 | Penn State | 44% |
57% | Alabama | 49 | 7 | Michigan State | 43% |
64% | Mississippi State | 52 | 14 | Michigan | 36% |
46% | Wisconsin | 19 | 21 | TCU | 54% |
31% | Connecticut | 20 | 48 | Oklahoma | 69% |
57% | Stanford | 40 | 12 | Virginia Tech | 43% |
52% | Ohio State | 31 | 26 | Arkansas | 48% |
39% | Mid Tennessee | 21 | 35 | Miami (OH) | 61% |
47% | LSU | 41 | 24 | Texas A&M | 53% |
58% | Pittsburgh | 27 | 10 | Kentucky | 42% |
65% | Nevada | 20 | 13 | Boston College | 35% |
51% | Oregon | | | Auburn | 49% |
The expected model accuracy was 58%. Actual model performance was 59%. If we look at the distribution of expected results, we get the following. The blue bars represent the relative probability that the model picks the correct result for the number of games shown on the x-axis; 34 games have been played. The dark line represents what you would expect from guessing each game as a 50% coin toss. The model is clearly a little better than guessing.
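For the curious, here is a rough sketch (Python, not the model's actual code) of how those numbers can be reproduced. The expected accuracy is just the average of the predicted winners' probabilities from the table above, the actual 59% works out to 20 of the 34 picks being correct, and the distribution of correct picks is a Poisson-binomial built up one game at a time.

```python
import numpy as np

# Model probability for the predicted winner of each of the 34 completed games,
# taken straight from the table above.
fav_probs = [0.63, 0.54, 0.52, 0.51, 0.68, 0.55, 0.68, 0.54, 0.59, 0.55,
             0.62, 0.68, 0.51, 0.62, 0.59, 0.59, 0.62, 0.60, 0.54, 0.57,
             0.51, 0.51, 0.56, 0.56, 0.57, 0.64, 0.54, 0.69, 0.57, 0.52,
             0.61, 0.53, 0.58, 0.65]

def correct_pick_distribution(probs):
    """Poisson-binomial: P(exactly k picks correct) for k = 0..len(probs)."""
    dist = np.array([1.0])              # zero games played: certainly zero correct
    for p in probs:
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - p)      # this pick misses
        new[1:]  += dist * p            # this pick hits
        dist = new
    return dist

model_dist = correct_pick_distribution(fav_probs)               # the blue bars
coin_dist  = correct_pick_distribution([0.5] * len(fav_probs))  # the dark 50% line

print(f"expected accuracy: {np.mean(fav_probs):.0%}")            # ~58%
print(f"P(>= 20 of 34 correct) under the model: {model_dist[20:].sum():.2f}")
print(f"P(>= 20 of 34 correct) by coin toss:    {coin_dist[20:].sum():.2f}")
```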
The model's accuracy was not uniform, either: it picked games better at the higher probabilities and not as well near 50%.
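One rough way to see that is to bucket the games by predicted probability and compare the predicted and actual accuracy in each band. The sketch below uses hypothetical names (`fav_probs`, `fav_won`) and only the first few rows of the table; fill in all 34 games to reproduce the pattern.

```python
import numpy as np

# Predicted-winner probability and whether that pick was correct, per game
# (only the first six games from the table shown here).
fav_probs = np.array([0.63, 0.54, 0.52, 0.51, 0.68, 0.55])
fav_won   = np.array([True, True, False, False, True, True])

for lo, hi in [(0.50, 0.55), (0.55, 0.60), (0.60, 0.70)]:
    in_band = (fav_probs >= lo) & (fav_probs < hi)
    if in_band.any():
        print(f"{lo:.2f}-{hi:.2f}: predicted {fav_probs[in_band].mean():.2f}, "
              f"actual {fav_won[in_band].mean():.2f}, games {in_band.sum()}")
```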
If we consider each result as the percentage of points scored by the predicted winner, the model's predicted probability correlates well with the outcome.
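A quick sketch of that comparison (again with hypothetical variable names and only the first few rows of the table filled in): compute the predicted winner's share of the total points in each game and correlate it with the predicted probability.

```python
import numpy as np

# Predicted winner's probability and both teams' scores
# (first six games shown; fill in the rest from the table above).
fav_probs  = np.array([0.63, 0.54, 0.52, 0.51, 0.68, 0.55])
fav_points = np.array([52, 40, 21, 28, 26, 35])
dog_points = np.array([24, 17, 48, 31, 3, 14])

point_share = fav_points / (fav_points + dog_points)
r = np.corrcoef(fav_probs, point_share)[0, 1]
print(f"correlation between predicted probability and point share: {r:.2f}")
```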
So, going into the championship game, I don’t think I could do any better than a coin toss on picking the winner. I would expect the game to go right down to the last couple of minutes of the 4th quarter.
Still: GO DUCKS!