Jet915 wrote:
Has the committee ever acknowledged using ken pom?
Q. The KenPom analytic service that a lot of coaches use to evaluate teams, I'm curious if that's part of your discussion at all. There are some dramatic swings in what a team's RPI is and what a team's KenPom number is. I wonder if the Committee is aware of those situations.
RON WELLMAN: To me, the Committee is very aware of everything. The Committee looks at every piece of data that we can put our hands on.
Kenpom, RPI, all that data, we have a list of data points that we can use. Sagarin, the LRNC, it just goes on and on.
Various Committee members will emphasize and use that data to various degrees. Some of them will rely on certain data more than others. So it just depends upon the Committee member.
This process can be very subjective. Certain Committee members will value certain pieces of data more than others. All of that information is available and easily accessible by the Committee members.
stever20 wrote:Jet915 wrote:Has the committee ever acknowledged using ken pom?
last year, yes.
http://www.newsobserver.com/sports/spt- ... 20578.html
These common-sense changes aren’t nearly as significant, though, as the committee’s increasing willingness to move away from the simple but inevitably flawed RPI to evaluate teams. Barnes acknowledged on Selection Sunday that the committee looked beyond RPI in deciding to leave Temple and Colorado State out of the field, but this week confirmed the use of such metrics as Ken Pomeroy’s and Jeff Sagarin’s efficiency ratings was even more extensive than Barnes described that day in March.
“More this year than any prior year, we looked at other systems when there were gaps and inconsistencies in the RPI,” Barnes said. “We talked more about it, how the RPI doesn’t tell the whole story.”
This is a change from past practice, when the committee hewed so closely to its procedures and principles that, combined with its slavish devotion to RPI, it was possible for the increasingly educated consumer to mimic the process from outside the committee room. The average score on Bracket Matrix, a website which tracks bracketologists, went up each year from 2010 through 2014 as predictions, collectively, improved.
That changed in 2015, when teams like Texas and UCLA benefited from strong KenPom ratings to get into the field and Oklahoma was seeded ahead of Maryland. In each case, use of the RPI-based resume to compare teams would have suggested different results.
jaxalum wrote:So how many are we realistically looking at now? I'd be happy with 5.
ecasadoSBU wrote:stever20 wrote:Jet915 wrote:Has the committee ever acknowledged using ken pom?
last year, yes.
http://www.newsobserver.com/sports/spt- ... 20578.html
The Committee is just looking for ways to squeeze more Power-5 schools into the tournament. It's a shame that they are doing that.
If they are going to use multiple systems, they have to be absolutely transparent about how much weight each one carries and what the precise method is for choosing one team over another. Otherwise I see it as a way to benefit teams from the major conferences at the expense of mid-majors that may have a high RPI but rank lower in other metrics. Bottom line is that you need to choose a system and STICK to it and use it all the time. You can't be arbitrary about this...
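For context on why the article above calls the RPI "simple but inevitably flawed": it's just a weighted average of three winning percentages, which is why it's easy to mimic from outside the committee room and easy to game with scheduling. A minimal illustrative sketch (simplified; the actual NCAA version also weights home wins at 0.6 and road wins at 1.4 inside the WP term, which is omitted here):

```python
# Illustrative sketch of the RPI formula, simplified: the NCAA's real
# version also applies home/road weighting (0.6/1.4) to wins in the WP
# term, which is omitted here.

def rpi(wp, owp, oowp):
    """RPI = 0.25*WP + 0.50*OWP + 0.25*OOWP.

    wp   -- team's own winning percentage
    owp  -- opponents' winning percentage (excluding games vs. this team)
    oowp -- opponents' opponents' winning percentage
    """
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Example: a team winning 80% of its games, whose opponents won 60%
# and whose opponents' opponents won 50%:
print(rpi(0.80, 0.60, 0.50))  # prints 0.625
```

Note that half the weight sits on opponents' winning percentage, which is the core complaint: who you scheduled matters more than how you played, and it's what efficiency systems like KenPom's try to correct for.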