Originally published at: https://discgolf.ultiworld.com/2019/02/20/ultiworld-disc-golf-power-rankings-pre-las-vegas-challenge-february-20-2019/
Welcome to the new and improved Ultiworld Disc Golf Power Rankings.
Around here we like to ask, “What have you done for me lately?” With that in mind, our new rating system was developed with an eye toward who will win the next tournament, not the year-end accolades (though they may go hand in hand).
This year, we combine the collective wisdom of the UWDG bullpen—the sole determining factor in 2018—with a proprietary objective metric. In other words, we marry the “eye test” with cold hard stats.
First, voters rank their top 25 MPO and FPO players based on recent play. Our pollsters primarily consider performances on the biggest stages. Worldwide: Majors with a capital M. Stateside: PDGA National Tour (NT) and Disc Golf Pro Tour (DGPT). Across the Pond: Euro Pro Tour (EPT) and Euro Tour (ET). But because it’s impossible to completely ignore standout performances at off-week A-tiers, they may be used as secondary voting criteria (note: they are not included in the objective metric).
After tallying the UWDG votes, we combine the cumulative subjective list with the objective metric to produce the power rankings.
Why did we choose to adopt this system? Well, we’re fans, too, and sometimes our unconscious bias privileges one player over another, whether an American over a European or a perennial champion over a newcomer, and that’s not the heart of a power ranking. The objective metric helps to moderate these tendencies.
So how do we determine the objective list?
The better you score, the more you win. To compare performances across tournaments more accurately, we focused on scoring. Players who routinely go low earn better objective metric values. Because total par and course difficulty change from week to week, and scoring conditions fluctuate with numerous confounding variables like layout, course type (e.g., open vs. wooded), and weather, we opted to use standardized scores instead of raw scores.
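For the statistically curious, the idea is easiest to see as a simple z-score against the field. The sketch below is illustrative only; the function name and the sample totals are made up, and the real metric may standardize differently:

```python
from statistics import mean, stdev

def standardized_score(player_total: int, field_totals: list[int]) -> float:
    """How many standard deviations a player's total sits below the
    field average. Lower raw totals are better in disc golf, so the
    sign is flipped to make bigger values better."""
    mu = mean(field_totals)
    sigma = stdev(field_totals)
    return (mu - player_total) / sigma

# A 54-hole total of 158 against a field averaging 170:
print(standardized_score(158, [158, 164, 168, 170, 172, 176, 182]))  # ~1.53
```

The same raw score on a brutally hard wooded track and on a wide-open pitch-and-putt produces very different standardized values, which is exactly the point.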
Competition and cash bonuses. To be the best, you need to beat the best week in and week out. Even within our relatively select cadre of qualifying events, not all tournaments are created equal. So we added a multiplier based on the number of 1000-rated players for MPO (920-rated players for FPO) and the purse (total prize money) for a given tournament. Our logic? A bigger purse typically attracts a deeper, more competitive field. In practice, a player who outscores a field with thirty 1000-rated (or 920-rated) players gets more credit than a player who outscores a field with ten.
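The exact weights aren’t worth obsessing over, but a toy multiplier shows the shape of the thing. The coefficients below are placeholders for illustration, not the values behind our metric:

```python
def competition_multiplier(elite_players: int, purse_thousands: float,
                           elite_weight: float = 0.05,
                           purse_weight: float = 0.02) -> float:
    """Scale a performance by field depth and payout.

    elite_players: count of 1000-rated (MPO) or 920-rated (FPO) players.
    purse_thousands: total prize money, in thousands of dollars.
    Both weights are illustrative placeholders.
    """
    return 1.0 + elite_weight * elite_players + purse_weight * purse_thousands

# Thirty elite players and a $50k purse vs. ten elite players and $20k:
print(competition_multiplier(30, 50))  # 1 + 1.5 + 1.0 = 3.5
print(competition_multiplier(10, 20))  # 1 + 0.5 + 0.4 = 1.9
```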
Decay Function. Remember, we’re asking “What have you done for me lately?” Events that occurred in the recent past are weighted more heavily than those in the distant past.
The standardized scores, competition multipliers, and recency weights are combined to give each player an objective metric value, and players are ranked based on their values.
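Putting the pieces together, the whole pipeline, decay function included, fits in a few lines. The exponential decay, its half-life, and the sample numbers are assumptions for the sake of illustration:

```python
def recency_weight(weeks_ago: float, half_life_weeks: float = 26.0) -> float:
    """Exponential decay: an event from half a year ago counts half as
    much as one played this week (the half-life is an illustrative choice)."""
    return 0.5 ** (weeks_ago / half_life_weeks)

def objective_metric(events: list[dict]) -> float:
    """Sum each event's standardized score, scaled by its competition
    multiplier and how recently it was played."""
    return sum(e["z_score"] * e["multiplier"] * recency_weight(e["weeks_ago"])
               for e in events)

# One recent strong finish and one from late last season:
events = [
    {"z_score": 1.8, "multiplier": 3.5, "weeks_ago": 2},
    {"z_score": 2.1, "multiplier": 2.0, "weeks_ago": 40},
]
print(objective_metric(events))  # the older result contributes far less
```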
Finally, we combine the objective ranking with the subjective voter polls. We modeled our system after the BCS selection system in college football. For each voter ranking and the objective ranking, the no. 1 player receives 25 points, the no. 2 player receives 24 points, and so on. Each player’s points are summed and divided by the total a player would earn if ranked no. 1 on every voter ranking and the objective ranking. This yields a value between 0 and 1 for each player, and the final power rankings are based on this value.
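In code form, the BCS-style tally looks something like this. The ballots are hypothetical and the player names generic; the objective ranking is simply treated as one more ballot:

```python
def power_ranking_values(ballots: list[list[str]], list_size: int = 25) -> dict[str, float]:
    """Combine ranked ballots (best player first) into a 0-to-1 value.

    The no. 1 spot on a ballot is worth `list_size` points, the no. 2
    spot one point less, and so on. Each player's total is divided by
    the maximum possible: first place on every ballot."""
    points: dict[str, int] = {}
    for ballot in ballots:
        for rank, player in enumerate(ballot, start=1):
            points[player] = points.get(player, 0) + (list_size - rank + 1)
    max_points = list_size * len(ballots)
    return {player: pts / max_points for player, pts in points.items()}

# Two tiny ballots with a three-player list:
ballots = [["Player A", "Player B", "Player C"],
           ["Player B", "Player A", "Player D"]]
print(power_ranking_values(ballots, list_size=3))
# Player A and Player B each earn 5 of a possible 6 points (~0.83).
```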
Without further ado, here are your top 25. Bear in mind, the inaugural ranking of 2019 is really an end-of-year ranking based on how the pros finished their 2018 campaigns. Will they pick up where they left off? Whose offseason training regimen will reign supreme? Will we get an answer to the age-old question, “Is it the archer or the arrow?” Two weeks in the desert should yield plenty of hot takes and a shuffling of the ranks.
Want to see how the voting shook out? Click here to see the complete objective and subjective voter tallies.