In July 2012, thousands of athletes from around the world gathered to compete in the Olympic Games, the most prestigious of all multi-sport international competitions. Measuring individual performance in London is easy. Measuring a country’s Olympic success is far more problematic, because media outlets in different countries use different counting systems. Although the International Olympic Committee (IOC) doesn’t recognize country rankings, the medal table was constantly updated during the competition and both the media and the public in countries large and small followed its evolution with keen interest.
But the medal table isn’t standardized. In some countries, including the United States, the media rank countries according to the total number of medals won. Others use a system based on the number of gold medals. It’s very tempting to choose the system that makes your country look the best, but both of these systems are flawed. For example, let’s say that you choose to base rankings on gold medals won, which is the dominant method used around the world. Hypothetically, country A’s athletes could win 10 golds and no silver or bronze medals whereas country B’s athletes might come away with 9 golds, 25 silvers, and 25 bronzes. Who can legitimately argue that country A enjoyed more medal success than country B did?
Conversely, arguing for total medal count is just as flawed. In this hypothetical situation, let’s say that country A’s athletes earned no golds, 25 silvers, and 25 bronzes, while country B’s athletes won 49 gold medals. Again, it’s indefensible to say that country A ‘‘won’’ the medal count in this case. Who wouldn’t rather have 49 golds than a combination of 50 second and third place finishes? If the color of the medal didn’t matter, the medal podiums would be the same height, one flag wouldn’t wave above the others, and fans would hear the anthems of three countries, not just one.
We propose a new way to calculate a country’s Olympic medal success, called Medal Premium Calculations (MPC), which allocates a certain number of points based on the color of the medal; the country with the most points tops the Olympics medal table.
MPC uses a 5:3:2 ratio to weight medals, inspired by the financial bonuses the U.S. Olympic Committee pays to American athletes: $25,000 for gold, $15,000 for silver, and $10,000 for bronze. Under the new system, an athlete earns 5 points for his or her country by winning a gold medal, while silver and bronze medals are worth 3 and 2 points, respectively.
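As a minimal sketch, the 5:3:2 weighting can be applied to the hypothetical countries A and B from the gold-medal example above (10 golds versus 9 golds, 25 silvers and 25 bronzes):

```python
# MPC weights as described in the text: 5/3/2 points for gold/silver/bronze.
MPC_WEIGHTS = {"gold": 5, "silver": 3, "bronze": 2}

def mpc_score(gold, silver, bronze):
    """Return a country's Medal Premium Calculation (MPC) score."""
    return (MPC_WEIGHTS["gold"] * gold
            + MPC_WEIGHTS["silver"] * silver
            + MPC_WEIGHTS["bronze"] * bronze)

# Country A: 10 golds and nothing else -> 10 * 5 = 50 points
# Country B: 9 golds, 25 silvers, 25 bronzes -> 45 + 75 + 50 = 170 points
print(mpc_score(10, 0, 0))   # 50
print(mpc_score(9, 25, 25))  # 170
```

Under MPC, country B's broad medal haul outweighs country A's slim lead in golds, which matches the intuition argued for above.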
Unlike in 2008, when China finished with the most Olympic gold medals but the U.S. earned the most medals overall, there's no controversy about which of the two countries should top the medal table. In London, American athletes earned both the most medals overall and the most golds. However, the lack of logic behind the two prevalent counting systems surfaces again at position three on the table. Should it be Great Britain, with the third-highest tally of golds (29 to Russia's 24), or Russia, with the third-highest number of medals overall (82 to Team GB's 65)? There are discrepancies between the two tables at other positions as well. As we've argued, both systems are fundamentally flawed.
This straightforward system allows us to reach more precise conclusions about Olympic medal supremacy than either the gold medal count or the total medal count: MPC recognizes silver and bronze medal success (which a ranking based on gold medals ignores, except to break ties) while still placing a premium on gold medals (which a ranking based on total medal count fails to do).
With just a bit more work once MPC is calculated, media outlets could add new and interesting levels to the presentation of the medal table by introducing factors such as the size of a country’s Olympics contingent, the country’s overall population or even that country’s relative level of development. Using MPC scores as a starting point, we used regression analyses to determine whether those three factors were legitimate measures of Olympic success both in the summer and winter games. Both development and size of contingent had a positive effect on Olympic success. Population also had a positive effect on winter Olympic success, but it had a negative effect on summer Olympic success.
Factoring in population or contingent brought dramatic changes to the medal tables, but development had very little effect on the standings.
Additionally, some have argued that the size of a country's Olympic contingent should be a factor, as should other measures such as overall population. We agree, but basing those calculations on the number of golds or the overall number of medals is as flawed as the counting systems on which those statistics rest. Below are the MPC rankings adjusted for population and contingent size for those countries that earned at least one medal and sent at least the median number (10) of athletes to the London games. Our apologies to the people of Grenada, but putting the tiny country at the top of the table on the strength of its lone gold medal is like awarding the league's batting title to a baseball player who hit a home run in his only at-bat of the year.
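The adjustment described above can be sketched as dividing each country's MPC score by the size of its contingent, keeping only countries that won at least one medal and sent at least the median number (10) of athletes. The medal counts and contingent sizes below are illustrative placeholders, not actual London 2012 figures:

```python
def mpc_score(gold, silver, bronze):
    # 5/3/2 MPC weighting from the text.
    return 5 * gold + 3 * silver + 2 * bronze

# Hypothetical data: name -> (gold, silver, bronze, contingent size).
countries = {
    "Country X": (4, 6, 5, 50),
    "Country Y": (1, 2, 2, 12),
    "Country Z": (10, 8, 9, 300),
    "Country W": (1, 0, 0, 4),    # excluded: contingent below the cutoff
}

MIN_CONTINGENT = 10  # median contingent size used as the cutoff in the text

per_athlete = {
    name: mpc_score(g, s, b) / size
    for name, (g, s, b, size) in countries.items()
    if size >= MIN_CONTINGENT and (g + s + b) > 0
}

for name, score in sorted(per_athlete.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f} MPC points per athlete")
```

The cutoff performs the same role as the Grenada caveat: a tiny delegation with one medal cannot vault to the top of the per-athlete table. A per-capita version would divide by population instead of contingent size.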
More sophisticated analyses like these give smaller countries legitimate bragging rights and help level the playing field when it comes to charting Olympic success. Although all three could be calculated using gold medal count or total medal count rather than MPC, the analyses would be much less telling than when we employ a ranking system that addresses the shortcomings of the other two methods.
 Size of contingent values came directly from the IOC.
 Population numbers came from the Central Intelligence Agency’s World Factbook (2011) and other sources.
 The United Nations’ Human Development Index (HDI) was used to measure development; HDI combines indicators of life expectancy, educational attainment and income into a composite score.