Thursday, July 2, 2009

The Director's Cup: ACC-ANALyzed

The National Association of Collegiate Directors of Athletics (NACDA) releases a rating system every year ranking every athletic program by the quality of its teams competing in NCAA-sanctioned sports.

This year the NACDA announced Stanford as the winner (again), with UNC as runner-up. However, upon further investigation I have found that the system for crowning a champion is designed strictly for big-ass athletic programs like Stanford to win, rather than smaller, more efficient programs such as FSU or Wake (small as in number of teams).

First, let me explain the scoring system. The NACDA uses an exponential function to assign points. The input is a team's final standing in its NCAA tournament (or the CFB polls), and the curve is scaled to the total number of postseason teams. The output ranges from 100 points (champion) down to 25 points (just making the tournament). Certain sports with odd field sizes or non-tournament formats have special scoring rules.
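
For the curious, here's a minimal sketch of what that kind of scale looks like, assuming a smooth exponential decay from 100 down to 25 across the field. The real NACDA tables are sport-specific; `nacda_points` and its parameters are purely my own illustration:

```python
def nacda_points(place: int, field_size: int,
                 top: float = 100.0, floor: float = 25.0) -> float:
    """Illustrative points curve: exponential decay from `top` for the
    champion (place 1) down to `floor` for the last team in the field.
    The real NACDA tables differ per sport; this only shows the shape."""
    if field_size < 2:
        return top
    decay = (floor / top) ** (1.0 / (field_size - 1))
    return top * decay ** (place - 1)

# Champion of a 64-team bracket gets 100; a first-round exit gets 25.
print(round(nacda_points(1, 64), 2))   # 100.0
print(round(nacda_points(64, 64), 2))  # 25.0
```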

The scoring system only counts each school's 20 best teams (10 men's / 10 women's), which automatically puts half of the ACC at a disadvantage, as only UNC, Maryland, UVA, Duke, BC, and NC State field 20 or more teams.
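
To make that cap concrete, here's a hypothetical helper (my own naming, not NACDA's) that tallies a school the way the Cup does, so a 31-team school and a 20-team school are summed over the same 20 slots:

```python
def directors_cup_total(mens_points: list[float],
                        womens_points: list[float]) -> float:
    """Sum only the 10 best men's and 10 best women's team scores,
    mirroring the Cup's 20-team cap (hypothetical helper)."""
    best_mens = sorted(mens_points, reverse=True)[:10]
    best_womens = sorted(womens_points, reverse=True)[:10]
    return sum(best_mens) + sum(best_womens)

# A school fielding 15 men's teams still only gets credit for 10 of them.
```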

The final overall tally (national rank, then NACDA points):
2. UNC - 1184.25
8. UVA - 1059.00
15. FSU - 945.00
17. Duke - 891.80
28. MD - 668.80
37. Wake - 580.25
43. Miami - 491.00
46. VT - 459.25
48. GT - 452.38
53. CU - 397.00
74. NCSU - 265.30
75. BC - 262.00

This is merely a total of the 20 best teams, so schools like Stanford and UNC, with 30 or so teams, can dominate, especially in sports with smaller postseasons where Final Four or Elite Eight status is almost guaranteed. For example, you have to beat six teams to win March Madness, but only four to win the Frozen Four. So schools competing in less-popular sports rack up big-time points almost automatically.
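
The bracket math bears that out: in a single-elimination field of n teams, a title takes about log2(n) wins, so the 64-team basketball bracket demands six while the 16-team hockey field takes only four:

```python
import math

def wins_needed(field_size: int) -> int:
    """Wins required to take a single-elimination bracket of this size."""
    return math.ceil(math.log2(field_size))

print(wins_needed(64))  # 6 -- men's basketball (March Madness)
print(wins_needed(16))  # 4 -- the Frozen Four's 16-team field
```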

A breakdown of actual efficiency shows a slightly different ranking of the ACC teams. If schools were ranked by the share of their teams earning points in the NACDA system, FSU would win, as 100% of its teams earned points (i.e., postseason berths). UNC was second at 82%. The worst was BC, with only 26% of its teams earning postseason berths. For comparison's sake, 59% of the ACC's competing teams earned points.
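
Those percentages are just teams-scoring divided by teams-fielded. The team counts are my back-of-envelope inputs: FSU's 17 is implied by its 945.00 points and 55.59 points/team figures elsewhere in this post, BC's 31 comes from the next paragraph, and 8 scorers is what reproduces the quoted 26%:

```python
def scoring_rate(teams_scoring: int, teams_fielded: int) -> float:
    """Share of a school's NCAA teams that earned Cup points."""
    return teams_scoring / teams_fielded

# Back-of-envelope inputs as described above.
print(f"FSU: {scoring_rate(17, 17):.0%}")  # 100%
print(f"BC:  {scoring_rate(8, 31):.0%}")   # 26%
```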

This BC fact is particularly disturbing: they field the most NCAA teams in the ACC (31), placed the lowest percentage of those teams into postseason play, and earned the fewest NACDA points in the entire conference (despite their voluminous team advantage). I know their hockey team normally kicks ass, but they've gotta do something about the rest of their dead-weight programs, in my opinion.

Another interesting way to look at the system is actual team efficiency, or "How many points did each school earn per competing team?" Once again FSU wins at 55.59 points/team, with UNC and UVA close behind at 44.97 and 42.36 points/team, respectively. BC brings up the rear at 8.45 points/team. (The arithmetic is sketched after the list.)

1. FSU - 55.59 p/t
2. UNC - 44.97 p/t
3. UVA - 42.36 p/t
4. Wake - 36.27 p/t
5. Duke - 34.20 p/t
6. Miami - 32.73 p/t
7. GT - 30.16 p/t
8. VT - 27.01 p/t
9. MD - 24.77 p/t
10. CU - 20.89 p/t
11. NCSU - 11.05 p/t
12. BC - 8.45 p/t
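
The calculation behind the list is plain division, points earned over teams fielded, with the team counts recoverable from the numbers themselves:

```python
def points_per_team(cup_points: float, teams_fielded: int) -> float:
    """Cup points divided by the number of NCAA teams a school fields."""
    return cup_points / teams_fielded

# Team counts implied by the overall tally and the ratios above.
print(round(points_per_team(945.00, 17), 2))  # 55.59 -- FSU
print(round(points_per_team(262.00, 31), 2))  # 8.45  -- BC
```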

I like the idea of awarding points based on postseason appearances, but my main problem is the over-valuation of smaller tournaments. It is much more difficult to earn a bid to the men's or women's basketball tournament than to the fencing championships, since far fewer teams compete in fencing. The system should account for tournament size and the size of the regular-season field (more teams = more points).
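
One way to implement that fix, purely a sketch of my suggestion and not anything NACDA actually does, would be to discount each sport's points by its bracket size relative to basketball's 64-team field (the log scaling and the reference size are my own assumptions):

```python
import math

def size_adjusted(points: float, field_size: int,
                  reference_field: int = 64) -> float:
    """Scale a sport's points by how its bracket compares to a 64-team
    reference, so a Frozen Four run is worth less than a Final Four run.
    The log scaling and 64-team reference are assumptions, not NACDA's."""
    return points * math.log2(field_size) / math.log2(reference_field)

print(round(size_adjusted(100, 64)))  # 100 -- basketball title keeps full value
print(round(size_adjusted(100, 16)))  # 67  -- hockey title gets discounted
```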

Otherwise, I find the system intriguing: it's good to have a uniform, "objective" metric for evaluating athletic departments, and it adds weight to sports that normally get very little (if any) coverage.

Thoughts are welcome.