Every year, undergraduate super fans follow every pass, every defensive stop, and every concussion of the season, attempting to will their team to a national championship. Alas, it is not meant to be for most. But fear not, for the powers that be have provided an ample, perennial scapegoat for each and every fan of the also-rans: the BCS.
Now, I have no problem with anyone blaming the BCS. I’d even argue that part of the excitement of college football is arguing about the stupidity of the BCS system (But it’s still absurd), and, apparently, getting fans to argue is a fantastic way to make money. (Lots of money, in fact, on the backs of essentially unpaid labor.)
But what exactly is the BCS? It’s some combination of humans and computers getting together in a black box and spitting out some sort of ranking system that is used to decide which schools go to the BCS bowls and which schools spend way too much money to end up in the Papa John’s Pizza Bowl. Of course, all of this is going away for the 2014-15 season, so this is our last year of BCS “joy”. That also means it’s my last chance to really take a look at how the BCS sausage is made. So, what I’m going to do is delve into each of the three components (the Harris Poll, the Coaches Poll, and the computer rankings) with a focus on the computer rankings. The first two components are relatively easy to understand. People/coaches vote, teams earn points, teams get ranked. The computer rankings are anything but easy to understand, but I’ll try to shed some light on what goes into them and how they arrive at what comes out. Here are the final 2012 BCS standings, and below is a brief explanation of the BCS straight from the source. BCSfootball.org describes the BCS like this:
Compilation of standings
Since the 2000 regular season, the BCS standings have been compiled by the National Football Foundation and College Hall of Fame.
The standings include three components: USA Today Coaches Poll, Harris Interactive College Football Poll and an average of six computer rankings. Each component will count one-third toward a team’s overall BCS score.
All three components shall be added together and averaged for a team’s ranking in the BCS standings. The team with the highest average shall rank first in the BCS standings. The BCS standings will be used for:
- Selecting the teams that will participate in the national championship game.
- Determining any other automatic qualifiers; and,
- Establishing the pool of eligible teams for at-large selection.
In the Harris Interactive College Football Poll and USA Today Coaches Poll, a team will be evaluated on the number of voting points it receives in each poll. Each voter ranks the top 25 teams and the highest ranked team receives 25 points, the next highest receives 24, etc. [Side note: This method of voting is referred to as the Borda count method after French mathematician Jean-Charles de Borda, and, according to Wikipedia, this method is actually used in some form in political elections in Slovenia, Iceland, Kiribati, and Nauru.]
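To make the Borda-count tallying concrete, here is a minimal sketch in Python. The ballots and team names are purely hypothetical, and `top_n` is set to 3 just to keep the example small (the actual polls use 25 slots):

```python
def borda_points(ballots, top_n=25):
    """Tally Borda-count points: a voter's #1 pick earns top_n points,
    #2 earns top_n - 1, and so on down to 1 point for the last slot."""
    points = {}
    for ballot in ballots:
        for position, team in enumerate(ballot):
            points[team] = points.get(team, 0) + (top_n - position)
    return points

# Two hypothetical three-team ballots (top_n=3 to keep it small):
tally = borda_points([["Alabama", "Florida", "Oregon"],
                      ["Florida", "Alabama", "Oregon"]], top_n=3)
# Alabama and Florida each collect 3 + 2 = 5 points; Oregon gets 1 + 1 = 2.
```

With the real parameters (25 slots, 114 voters), a unanimous #1 team would collect the full 2850 points described below.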
A team’s Harris Interactive score for the purposes of the BCS will be its points in the poll divided by its total possible voting points (if 114 voters, then 2850=114 x 25). The same formula will apply to the USA Today Coaches poll and its total voting points (if 63 voters, then 1575=63 x 25).
The number of actual voters, which can vary, is figured into the computation on a weekly basis in stating each team’s percentage of a possible perfect score.
Six computer rankings will be used: Jeff Sagarin, Anderson & Hester, Richard Billingsley, Colley Matrix, Kenneth Massey, and Dr. Peter Wolfe. Points will be assigned in inverse order of ranking from 1-25. A team’s highest and lowest computer ranking will be discarded in calculating its computer rankings average. The four remaining computer scores will be averaged and the total will be calculated as a percentage of 100.
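Putting the pieces of that description together, the full BCS score can be sketched as follows. The numbers in the final line are hypothetical, chosen only to exercise the formula:

```python
def poll_component(team_points, num_voters, top_n=25):
    """A team's share of a human poll: voting points earned divided by
    the maximum possible (number of voters x 25)."""
    return team_points / (num_voters * top_n)

def computer_component(ranks):
    """Computer component: each of the six rankings becomes points
    (25 for #1 down to 1 for #25, 0 if unranked), the single best and
    worst totals are discarded, and the remaining four are summed as a
    percentage of the maximum possible 100."""
    pts = sorted(max(0, 26 - r) for r in ranks)
    return sum(pts[1:-1]) / 100          # drop lowest and highest totals

def bcs_score(harris_pts, harris_voters, coaches_pts, coaches_voters, ranks):
    """Average the three components; the highest score ranks first."""
    return (poll_component(harris_pts, harris_voters)
            + poll_component(coaches_pts, coaches_voters)
            + computer_component(ranks)) / 3

# Hypothetical team: ranked #2 by five computers and #6 by the sixth.
score = bcs_score(2800, 114, 1500, 63, [2, 2, 2, 2, 2, 6])
```

Note that because four remaining point totals of at most 25 each sum to at most 100, “averaged … as a percentage of 100” amounts to simply summing the four kept totals and dividing by 100.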
The Human Polls (Briefly)
Harris Interactive College Football Poll
USA Today Coaches Poll
A full list of Harris Poll voters is published each season; the Coaches Poll voters for 2012 are listed below:
List of Voters: The USA TODAY Board of Coaches is made up of 59 head coaches at Bowl Subdivision schools. All are members of the American Football Coaches Association. The board for the 2012 season: David Bailiff, Rice; Frank Beamer, Virginia Tech; Tim Beckman, Illinois; Bret Bielema, Wisconsin; Terry Bowden, Akron; Art Briles, Baylor; Troy Calhoun, Air Force; Matt Campbell, Toledo; Gene Chizik, Auburn; Dave Christensen, Wyoming; Mark Dantonio, Michigan State; Tim DeRuyter, Fresno State; Dave Doeren, Northern Illinois; Sonny Dykes, Louisiana Tech; Jimbo Fisher, Florida State; Kyle Flood, Rutgers; James Franklin, Vanderbilt; Al Golden, Miami (Fla.); Todd Graham, Arizona State; Jim Grobe, Wake Forest; Darrell Hazell, Kent State; Brady Hoke, Michigan; Dana Holgorsen, West Virginia; Skip Holtz, South Florida; Mark Hudspeth, Louisiana-Lafayette; Curtis Johnson, Tulane; Ellis Johnson, Southern Miss; Butch Jones, Cincinnati; Brian Kelly, Notre Dame; Mike Leach, Washington State; Pete Lembo, Ball State; Tony Levine, Houston; Mike London, Virginia; Rocky Long, San Diego State; Dan McCarney, North Texas; Mike MacIntyre, San Jose State; Ruffin McNeill, East Carolina; Gus Malzahn, Arkansas State; Bronco Mendenhall, BYU; Les Miles, LSU; George O’Leary, Central Florida; Paul Pasqualoni, Connecticut; Bo Pelini, Nebraska; Chris Petersen, Boise State; Joker Phillips, Kentucky; Paul Rhoads, Iowa State; Mark Richt, Georgia; Mike Riley, Oregon State; Rich Rodriguez, Arizona; Nick Saban, Alabama; Steve Sarkisian, Washington; Frank Solich, Ohio; Steve Spurrier, South Carolina; Rick Stockstill, Middle Tennessee; Bob Stoops, Oklahoma; Dabo Swinney, Clemson; Jeff Tedford, California; Tommy Tuberville, Texas Tech; Kevin Wilson, Indiana.
The Computer Polls
First of all, what are the computers supposed to be doing? What is their objective? Are they attempting to rank the teams based on who has accomplished the most over the course of the season? Are they supposed to predict the results of future games? Are they supposed to find the two best teams at the end of the season? Or should all results be weighted equally? Trying to find the two “best” teams is not really a simple question, as “best” can mean many different things, and, in fact, the BCS has never given a clear answer to what they want “best” to mean. In the 2006 article “In favor of a quantitative boycott of the bowl championship series,” Jim Albert says in his abstract:
The Bowl Championship Series is designed to match the two top teams in college football in a title game. To determine the top ranked teams, polls are combined with a number of quantitative ratings of the teams known collectively as the “computer rankings”. The computer ranking serves primarily to offer a validation of the polls. The individuals whose quantitative ratings are used in the computer ranking have never been given a clear objective to design their ratings for and they are limited in the inputs that they can use (for example, they can not use score or site of game). For all of these reasons, I am advocating a boycott of the Bowl Championship Series by all quantitative analysts.
I’m not sure how a boycott could possibly affect the BCS in any way, but the point is that the BCS has never clearly defined what the computer polls are supposed to be doing. They’ve simply given them the mandate to rank the teams. Also, maybe in 2006 the site of the game was not allowed in the rankings, but it certainly is allowed at present.
The Sagarin ratings, which have been featured in USA Today since 1985 and have been part of the BCS formula since 1998, are maintained by Jeff Sagarin. Sagarin graduated from MIT in 1970 with a degree in mathematics and later received an MBA in quantitative analysis from Indiana. His rankings are based on the Elo Chess ratings. Actually, he produces two ratings, the aforementioned Elo Chess and the Predictor (or Pure Points). Elo Chess is based only on wins and losses and where the game was played, while Predictor further incorporates margin of victory. Sagarin is bound by the BCS to use only the Elo Chess method in his rankings for picking a national title game (the BCS forbids using margin of victory in rankings), but he admits that the Predictor is more accurate in picking upcoming games. He says on his USA Today page:
In Elo Chess, only winning and losing matters; the score margin is of no consequence, which makes it very “politically correct”. However, it is less accurate in its predictions for upcoming games than is the Pure Points, in which the score margin is the only thing that matters.
For a full review of the Elo Chess rating system (and other chess rating methods), check out “A comprehensive guide to chess ratings” by Mark Glickman. One of the most interesting quotes in the paper pertaining to Elo is:
While much of Elo’s development can be criticized for its lack of reliance on established statistical principles, he successfully implemented a system that appears to track players’ performances with reasonable accuracy.
If I can be sarcastic for a second, this is perfect for the bowl championship series: It doesn’t rely on established statistical principles, but it appears to work. Just like the BCS.
In all seriousness though, the Elo Chess rating system is a well-established rating system that has been applied to many forms of competition over the years. Further, the basic formulas for Elo are publicly available, and I suspect one could reproduce Sagarin’s ratings with some degree of accuracy if one wanted to. So, overall I generally approve of this computer ranking. It wouldn’t be my first choice for a ranking system, but it’s also not obviously bad. And besides, Nate Silver has used Elo, so it must be valid, right?
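For a sense of what an Elo-style system actually does, here is a minimal sketch of the standard Elo update rule: a logistic expected score over the rating gap, with the textbook chess values of scale 400 and K-factor 32. To be clear, these are the generic formulas, not Sagarin’s actual parameters, which he does not publish:

```python
def elo_update(r_winner, r_loser, k=32, scale=400):
    """One Elo update after a game. The winner's expected score is a
    logistic function of the rating gap; both ratings move by
    k * (actual - expected). Margin of victory never enters."""
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / scale))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta

# Two equal 1500-rated teams: the winner gains k/2 = 16 points,
# the loser drops 16. Upsets of higher-rated teams move more points.
new_w, new_l = elo_update(1500, 1500)
```

Note the zero-sum design: whatever the winner gains, the loser loses, so the total rating in the pool is conserved.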
Finally, I’ll close this section with a quote from Sagarin about his thoughts on Elo and the NCAA:
“If I were czar of the NCAA, I’d use Elo. It’s not like I’m suggesting affirmative action for the small conferences. But the way the RPI is set up, it’s like telling a kid in a ghetto high school, ‘No matter what you get on your SATs, you can’t go to college.’ “
Jeff Anderson and Chris Hester attended the University of Washington, where they were roommates. According to Wired.com, the two felt that Washington was consistently underrated in national polls, and they set out to fix this by developing their own ranking system.
According to their website their rankings are distinct in four ways:
1. Unlike the polls, these rankings do not reward teams for running up scores. Teams are rewarded for beating quality opponents, which is the object of the game. Posting large margins of victory, which is not the object of the game, is not considered.
2. Unlike the polls, these rankings do not prejudge teams. These rankings first appear after the season’s fifth week, and each team’s ranking reflects its actual accomplishments — on the field, to date — not its perceived potential.
3. These rankings compute the most accurate strength of schedule ratings. Each team’s opponents and opponents’ opponents are judged not only by their won-lost records but also, uniquely, by their conferences’ strength (see #4).
4. These rankings provide the most accurate conference ratings. Each conference is rated according to its non-conference won-lost record and the difficulty of its non-conference schedule.
First of all, none of the BCS computer models use margin of victory in their formulas now, because the BCS prohibits it. So that’s definitely not a distinct attribute of these rankings; everyone else is ignoring margin of victory because they have to. The claim was only distinctive before the BCS decided that margin of victory could not be used in determining the rankings. In this way, the A&H rankings are the only rankings that have remained unchanged over the entire history of the BCS.
As for point 2, I don’t believe that any of the other computer models “prejudge” teams either. They all let the data determine the rankings with no preconceived notion of strength (this is at least true by the end of the season).
The next two claims are simply subjective statements of superiority. They claim that they “compute the most accurate strength of schedule rating” and that they “provide the most accurate conference ratings.” Based on what? How are they measuring accuracy? There is just no way to factually back up these claims. In fact, Sagarin points out that his Pure Points model, which does incorporate margin of victory, is more accurate than his Elo Chess rating, which does not. So, choosing not to consider margin of victory back when it was still allowed in the rankings could well have made their rankings less accurate than the other systems that were using it.
I’m fairly skeptical of this ranking system. It’s two guys who seem to have very little statistical background (I could be wrong about this) who created a ranking in reaction to Washington being ranked lower than they thought it deserved. And the fact that they are using Excel makes me even more skeptical. You can do some nice things with Excel, but it’s just not that sophisticated. Maybe what they are doing is very good and statistically sound, but we have no way to know, because this is another computer model whose method isn’t made public. So no one has any idea what they are doing.
Finally, in a Wired.com article, Anderson describes the BCS (emphasis added):
And while mathematical rankings predate the BCS by decades, the new process promised to be the first system combining consensus opinion with numerical analysis into a single ranking. “I think it’s perfect,” Anderson says. “Two-thirds art, one-third science.”
I assume the “two-thirds art” he is referring to is the human polls and the “one-third science” is the six computer polls. Sure, go ahead and call the human polls art, but calling the computer models science is a pretty liberal use of the word. Actually, I take that back. According to their website, the Pac-12 (née Pac-10) Conference has referred to the A & H rankings as
“The most accurate” computer rankings
Though I don’t know what authority anyone working for the Pac-12 Conference has to make such a statement.
In review: this computer model, whose methodology is not public, was created by two former students of the University of Washington who, by their own admission, built the system because they felt Washington was being snubbed nationally in the rankings, and it is endorsed by the Pac-12. You can’t make it up!
After everything I have read about Richard Billingsley’s computer ranking, I’ve concluded that its inclusion as one of the BCS computer models is insane. Like most of the other modelers, he doesn’t release the methods used for his rankings, but he freely admits that he’s not “looking at it from a purely mathematical standpoint.” Isn’t that exactly the standpoint he should be looking at it from? If you’re not going to look at it from a purely mathematical standpoint, then…aren’t you just another subjective poll?
I’ll let Billingsley explain it himself:
“The fact is, in the six computers, there are the most time-tested brilliant mathematical minds – everybody but me, I’m not a mathematician – but one of the requirements is the BCS needed to see 10 years of rankings on a week-by-week and see how far back can you go. (Bloom) was amused that I had such a vast amount of information. How I ended up getting in the BCS was probably because they were impressed with my research.”
My system is probably the most different from the other computer systems. The other five guys are looking at it from a purely mathematical standpoint – don’t get me wrong, I applaud their systems and I have tremendous respect for what they do. But my system is not purely mathematically based. My rankings are based on rules that are put in place from a fan’s perspective, things I think are important to rank college football teams. My rankings are more closely related to human voters, an improved AP poll, if you will. It reacts to games more like a human voter but does it without biases like the name of the team, the conference they play in, etc. It’s mainly concerned with wins, losses, strength of schedule (SOS) and head-to-head results. The core of my system is not something you see in most computers. It’s not necessarily better – in purely mathematical terms, it’s not as good – but the public relates very well to the system.
Just to remind you, the BCS is using this guy’s rankings to decide which team gets to go to the national championship in football and, indirectly, which school gets the biggest cut of the BCS money pie. Did I mention this is insane?
Wes Colley received a Ph.D. in Astrophysical Sciences from Princeton University. So, it’s safe to say this guy is smart. He even has a very simple method for predicting elections, and he got 49 out of 50 states correct this year. His contribution to the BCS is the Colley Matrix, which, as far as I can tell, is the ONLY one of the six computer polls whose methods are publicly available in full detail. As a result, a mistake was found in the final BCS standings of the 2010-2011 season: Wes Colley had failed to include a game between Appalachian State and Western Illinois. Let that sink in a little bit. The omission of an FCS game between Appalachian State and Western Illinois from the Colley Matrix had the ability to change the rankings of a top-ten FBS team. However, rather than criticize Colley, I think he should be applauded for putting his model in a position where someone could find and point out this mistake. Colley’s is the only model in which a mistake like this could even potentially be caught. That means that, for all we know, the other five computer models could be constantly making mistakes. Or even just making stuff up entirely. How would we know?
Anyway, Colley lists five advantages of his method on his web site:
- First and foremost, the rankings are based only on results from the field , with absolutely no influence from opinion, past performance, tradition or any other bias factor. This is why there is no pre-season poll here. All teams are assumed equal at the beginning of the year. If you include some kind of human input, what’s the point of a computer poll in the first place? Garbage in, garbage out. NOTE: Bear in mind that because there is no pre-season poll, the early rankings will not look much like the press polls. The rankings are based on results so far within the season of play.
- Second, strength of schedule has a strong influence on the final ranking. Padding the schedule wins you very little. For instance, Wisconsin with 4 losses finished the 2000 season well ahead of TCU with only 2 losses. That’s because Wisconsin’s Big 10 schedule was much, much more difficult than TCU’s WAC schedule.
- Third, as with the NFL, NHL, NBA, and Major League Baseball, score margin does not matter at all in determining ranking, so winning big, despite influencing pollsters, does not influence this scheme. The object of football is winning the game, not winning by a large margin. Now, other games have other metrics. In golf we have strokes; in Texas hold ’em we have winnings; in NASCAR we have points standings; but in football, we have one simple overriding metric: did you win the football game? Ignoring margin of victory eliminates the need for ad hoc score deflation methods and home/away adjustments. If you have to go to great lengths to deflate scores, why use scores? What about home/away? Though reasonable arguments can be made for a home/away factor, I do not know of a simple, mathematically consistent means of rating the relative difficulty of playing at the Swamp vs. playing at Wallace-Wade Stadium. The home advantage for some teams is simply more than it is for others. There are further complicating factors, such as home weather for a northern team in November vs. home weather for a southern team in August. Even the pollsters seem to forgive or forget big scores or surprisingly close scores, home or away, after a few weeks. Usually, after a few weeks, a W is a W and an L is an L, as it should be anyway.
- Fourth, in this method, only very simple statistical principles, with absolutely no fine tuning or ad hoc adjustments, are used to construct a system of 120 equations with 120 variables, representing each team according only to its wins and losses (see Ranking Method). The computer simply solves those equations to arrive at a rating (and ranking) for each team.
- Fifth, comparison between this scheme and the final press polls (1998, 1999, 2000, 2001, 2002) proves that the scheme produces sensible results.
Summarizing these points: results from the Colley Matrix are based entirely on performance in the season of interest with no past performance considered, strength of schedule plays a major role, and only wins and losses are considered. Neither margin of victory nor location of the game is factored into the rankings, and the method uses simple statistical methods. Finally, the Colley Matrix provides results similar to the polls.
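Because Colley’s method is fully public, it is easy to sketch: for each team i, set the diagonal entry to 2 plus the team’s games played, set the off-diagonal entry for each opponent j to minus the number of games played against j, set b_i = 1 + (wins − losses)/2, and solve the resulting linear system. Here is a minimal Python version with a hypothetical three-team round-robin:

```python
def colley_ratings(teams, games):
    """Solve the Colley Matrix system C r = b, where for team i:
    C[i][i] = 2 + games played, C[i][j] = -(games vs team j),
    b[i] = 1 + (wins - losses) / 2. Only wins and losses matter."""
    n = len(teams)
    idx = {t: i for i, t in enumerate(teams)}
    C = [[0.0] * n for _ in range(n)]
    b = [1.0] * n
    for i in range(n):
        C[i][i] = 2.0
    for winner, loser in games:
        w, l = idx[winner], idx[loser]
        C[w][w] += 1.0; C[l][l] += 1.0
        C[w][l] -= 1.0; C[l][w] -= 1.0
        b[w] += 0.5; b[l] -= 0.5
    # Gaussian elimination (C is symmetric positive definite, so this is safe)
    for col in range(n):
        for row in range(col + 1, n):
            f = C[row][col] / C[col][col]
            for k in range(col, n):
                C[row][k] -= f * C[col][k]
            b[row] -= f * b[col]
    r = [0.0] * n
    for row in range(n - 1, -1, -1):
        r[row] = (b[row] - sum(C[row][k] * r[k]
                               for k in range(row + 1, n))) / C[row][row]
    return dict(zip(teams, r))

# Hypothetical round-robin: A beats B, B beats C, A beats C.
ratings = colley_ratings(["A", "B", "C"], [("A", "B"), ("B", "C"), ("A", "C")])
# The undefeated team rates highest, the winless team lowest.
```

Because all teams start from the same prior (each b_i begins at 1), an early-season run gives ratings clustered near 0.5 that spread out as games accumulate, which is exactly why Colley warns the early rankings won’t resemble the press polls.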
I would argue that these are all good points, with two small points of discussion. The first is that I believe it is a well-established fact that the home team has an advantage in many sports, certainly including football. For instance, Sagarin estimates the advantage to be about 3 points. So, I think ignoring this is a pretty significant weakness of this method. But the fact that someone can even form this opinion, because ALL of the details are publicly available, is nice.
My second small comment involves Colley’s fifth point, where he says that “comparison between this scheme and the final press polls proves that the scheme produces sensible results”. This assumes that the voters in the polls know what they are doing and are competent voters capable of evaluating all of the college football teams. I don’t assume that. A much better way to evaluate the method would be with prediction accuracy or some other objective metric. Sure, the human voters do a decent job ranking teams, but ranking this many teams in a rational way is a very difficult task for a human brain to handle. The purpose of the computer polls is to remove subjective judgement from the rankings. So, just because one computer ranking method matches the human polls more closely than another computer poll doesn’t make the method better or more valid. It simply means that the method more closely matches a subjective human poll.
One final note about the Colley Matrix from Ed Feng:
…..the Colley Matrix, the one fully transparent computer poll, does not use this game-specific information. The system considers a team’s win-loss record and strength of schedule; yet, the results of each individual game are not counted as an input. The method doesn’t care whether Kent State’s loss came against Kentucky or against Rutgers.
It doesn’t require an advanced math degree to verify this. Wesley Colley, the creator of the ranking method, has set up a web page where you can alter the results of games and see how the rankings change. If Kent State loses to Rutgers (25th in Colley’s rankings) but beats Kentucky (98th), common sense indicates its rank should improve. However, the Flashes actually drop one place. In this scenario, Rutgers has a 10-1 record instead of a 9-2 mark and therefore moves ahead of Kent State.
So, there are many issues to potentially object to with this method, but at least it’s transparent.
Kenneth Massey describes himself in a quick bio on his Massey Ratings page:
Kenneth Massey is a professor of mathematics at Carson-Newman College in Jefferson City, TN. He received his B.S. from Bluefield College and a master’s degree (ABD) in mathematics from Virginia Tech. His research involved Krylov subspaces in the field of numerical linear algebra.
Kenneth is a partner with Limfinity consulting, and produces the Massey Ratings, which provide objective team evaluation for professional, college, and high school sports. His college football ratings have been a component of the Bowl Championship Series since 1999. Massey has also worked with highschoolsports.net since 2008.
While Massey is relatively open about the details of his system, I believe that, even with the information he does release, it would be difficult to reproduce his rankings exactly.
Massey also makes it clear in his FAQ that he releases two different sets of rankings. The first is the true “Massey Ratings”, and the second is the version for the BCS. The actual ratings “utilize actual game scores and margins in a diminishing returns fashion” whereas the BCS version takes into account only wins and losses.
Massey’s BCS ratings are the equilibrium point for a probability model applied to the binary (win or loss) outcome of each game. All teams begin the season rated the same. After each week, the entire season is re-analyzed so that the new ratings best explain the observed results. Actual game scores are not used, but homefield advantage is factored in, and there is a slight de-weighting of early season games. Schedule strength is implicit in the model, and plays a large role in determining a team’s rating. Results of games between well-matched opponents naturally carry more weight in the teams’ ratings. The final rating is essentially a function of the team’s wins and losses relative to the schedule faced.
One other interesting question in Massey’s FAQ is ‘Are you satisfied with the BCS formula?’ His answer:
Over the years, the BCS has gotten criticized for fine-tuning its formula. Recent changes have simplified the system for the better and removed extraneous redundancies. The current setup is a good balance of the traditional human polls, which the fan base is most comfortable with, and the objective computer component. Over the years, the two methods have tended to converge as the computers have revealed to the human voters the dangers of regional bias and misunderstanding of schedule strength. There will always be controversy when the formula must split hairs between #2 and #3, but the system is stable and beginning to be accepted by the media and fans.
In summary, Massey’s ratings are based on mathematics, which is a good sign; they incorporate home-field advantage and weight games at the end of the season slightly more than games at the beginning. Also, his is the only one of the six computer polls that also does Hurling rankings.
Dr. Peter Wolfe (not to be confused with Peter Wolf or Peter and the Wolf) is the keeper of the final computer ranking system to be reviewed here. A short description of Wolfe’s computer model appears on his website. His model relies on maximum likelihood estimation. Wolfe points out that the method he is using is often referred to as a Bradley-Terry model, first proposed by Bradley and Terry in 1952. (He also points to Ford 1957, Elo 1986, and Keener 1993.) The model is associated with paired comparisons, where items are presented two at a time to a judge who chooses the superior item. Over the course of many paired judgments, data are collected and used to rank the objects based on the collection of the judges’ paired decisions. This is easily applied to sports, where two teams play against one another and superiority is decided by who wins the game rather than by a judge.
Wolfe fits this model by using any data that arises from teams that can be connected by schedules. This includes FCS and FBS teams as well as teams from Divisions II and III and the NAIA, for a total of 739 teams. He also allows for a home field advantage in his rankings. Many of the details of this method are publicly available, and, depending on how home field advantage is handled, I believe these rankings could probably be recreated very closely if you had access to all of the data (are NAIA results easy to get?).
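The Bradley-Terry model can be fit with the classic iterative (minorization-maximization) algorithm. This sketch is not Wolfe’s implementation: it ignores home-field advantage and uses hypothetical team names, and real implementations also regularize so that winless or unbeaten teams don’t run off to zero or infinity:

```python
def bradley_terry(teams, games, iterations=500):
    """Fit Bradley-Terry strengths s_i, where P(i beats j) = s_i / (s_i + s_j),
    via the classic iterative update:
        s_i <- wins_i / sum over i's games of 1 / (s_i + s_opponent).
    games is a list of (winner, loser) pairs."""
    s = {t: 1.0 for t in teams}
    for _ in range(iterations):
        new = {}
        for t in teams:
            wins = sum(1 for w, _ in games if w == t)
            # s[a] + s[b] equals s[t] + s[opponent] for any game involving t
            denom = sum(1.0 / (s[a] + s[b]) for a, b in games if t in (a, b))
            new[t] = wins / denom if denom > 0 else s[t]
        total = sum(new.values())          # normalize: strengths sum to n
        s = {t: v * len(teams) / total for t, v in new.items()}
    return s

# Hypothetical round-robin: A beats B, A beats C, B beats C.
strengths = bradley_terry(["A", "B", "C"], [("A", "B"), ("A", "C"), ("B", "C")])
```

The maximum-likelihood fit only depends on who beat whom, which is exactly why the BCS ban on margin of victory is no constraint on a Bradley-Terry model.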
Wolfe also provides on his website a fantastic resource containing a bibliography of college football rankings systems maintained by David Wilson. An interesting note on this page is:
I’ve found a neat audio/video recording of a 1997 lecture by Hal Stern on “Probability and Statistics in Sports,” complete with slides. The ~50 min lecture covers several sports but includes a discussion of ranking college football teams using least-squares. Stern includes a brief discussion of the 1997 Sagarin rankings, which he shows to be highly correlated with least-squares. At the conclusion of the talk, Stern presents several references for useful publications on sports statistics.
Summary of Computer Rankings
Before I summarize, one important aspect of the BCS computer rankings that seems notably suspect to me is that the ranking systems are not all using the same data as input. Some methods use only FBS games, another might use all Division I games, and another may use all NCAA and even NAIA games. This means that, in the words of a friend from grad school, a game played in front of family and friends in a farm town in Kansas on a Saturday in October could theoretically determine who gets to go to a BCS bowl and who gets left out. Now to summarize…
Sagarin, Colley, Massey, and Wolfe all release at least some details of their ranking systems, and all seem to be based on some sort of mathematical principle of varying degrees of statistical legitimacy. As far as I can tell, Colley is the only one of the computer rankings that releases full details of his method, and for this he should be commended. It’s absolutely insane that so much is riding on these computer polls and no one other than their creators knows the full details of these methods (with the exception of Colley). Or as Ed Feng puts it in this SI article:
…..five of the six computers do not reveal the details necessary to reproduce their results. Millions of BCS dollars are at the mercy of five black-box algorithms.
Also, along those same lines, Jerry Palm says:
…..the bigger point is that nobody checks the BCS computer data. We should all be grateful to Colley for having a system that is open, accountable and verifiable. The BCS owes us an entire system that is open, accountable and verifiable.
According to Anderson and Hester, their method has many desirable properties, but many of these properties are impossible to verify as they, like most of the others, do not release their methodology. Further, they do not appear to have any substantial mathematical or statistical background that would make them particularly well suited for being included in the BCS computer rankings (please correct me if I am wrong about this). This method is almost entirely a mystery as very few specific details of this method are released.
Finally, there is Richard Billingsley. He freely admits that his rankings are not created from a “purely mathematical point of view.” Since he doesn’t release his methodology, no one has any idea what he is doing exactly, but we can likely rule out “solid statistical inference” from the list of potential descriptions of his method. If you’re not using mathematical concepts, this is essentially a glorified human poll, which is, in fact, almost exactly as he describes his method: “an improved AP poll”. If the BCS’s goal is to have credible computer rankings, including Billingsley in the mix makes no sense. But then again, if the goal of the BCS was to make sense, it simply wouldn’t exist.
It really doesn’t matter anyway
In the end, the computer polls don’t really matter anyway. They are a farce set up by the BCS to give the appearance of legitimacy. If you look at the most recent BCS standings for the end of the 2012 regular season, you’ll see that 5 out of 6 of the computer models had Florida second (only Richard Billingsley had Florida lower, at 6). Yet Florida didn’t go to the national championship game and really had no shot at making it, even with such high computer rankings. That’s because the two human polls have so much weight that they effectively drown out the computers. The voters got what they wanted: Alabama in the BCS championship game. Because the computers don’t matter.
The same thing happened in 2011. 4 out of 6 of the computer polls had Oklahoma State ranked second and the remaining two had OSU ranked third, but the voters got their way. They voted Alabama into the title game against LSU. And this was after LSU beat Alabama in the regular season AT Alabama and then won the SEC title game. Alabama not only didn’t win its conference, it didn’t even win its own division in that conference. And yet the voters got what they wanted: Alabama in the BCS championship game. Because the computers don’t matter.
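The arithmetic behind “the computers don’t matter” is simple: the two human polls together control two-thirds of the score, so a team that trails in both polls can’t make up the gap even with a near-perfect computer component. The component percentages below are hypothetical, chosen only to show the effect:

```python
def bcs_average(harris_pct, coaches_pct, computer_pct):
    """Each component counts one-third; the two human polls together
    control two-thirds of a team's BCS score."""
    return (harris_pct + coaches_pct + computer_pct) / 3

# Hypothetical: the computers' favorite leads the computer component
# by a wide margin but trails in both human polls...
computer_fave = bcs_average(0.88, 0.88, 0.99)
# ...while the humans' favorite wins the overall average anyway.
human_fave = bcs_average(0.97, 0.97, 0.92)
```

In this toy example the 0.07 computer edge is swamped by a 0.09 deficit in each of the two human polls, which is the Florida-in-2012 and Oklahoma-State-in-2011 story in miniature.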
So, is the BCS good? Does it work? What does it accomplish? If the goal of the BCS is to match the two best college football teams in the country in a bowl game at the end of the season, it’s very difficult to definitively say one way or the other that this goal is accomplished. At the very least, it is absolutely an improvement over the old school bowl system prior to the BCS.
So what does the BCS definitely do? I’d sum it up like this: The BCS creates a game between the two highest-rated teams in the FBS as voted on by the voters in the two human polls, while giving the appearance of objectivity by including six computer polls, at least one of which is explicitly not based on any mathematics at all (Billingsley), and which are forced by the BCS to ignore information about margin of victory even though it is well known that this information improves the predictive power of the rankings (e.g. Sagarin, Massey).
So, in a word, I’d describe the BCS as a clusterfuck. Of course, all of this is moot in 2014 when they finally move to a play-off system for determining the national championship. Actually, we’re gonna need a way to pick those four teams………
Hopefully, choosing four teams will be less of a controversy than picking only two teams, and the BCS can worry about more important issues. Like paying the athletes who are risking brain trauma and debilitating injuries for almost no compensation (at least publicly) while coaches, schools, television, and just about everyone else involved makes money off of their hard work.