GGN member Buzzy made an excellent point in one of the threads recently regarding PFF grades. He included snap numbers when talking about overall grades for safeties, alluding to a huge hole in much of the use of these grades. Because they are called "grades," people think of them as the kind of qualitative grade given by a teacher or a football scout: you're an "A" or a "B" kind of player. But that really isn't how these grade values work. They are cumulative numbers, both positive and negative. They are additions of earned points, and the fact that you can earn negative values doesn't really change this. It stands to reason that if Player A earned a +15.5 season rating in 652 snaps, and Player B earned the same in 1,032 snaps, Player A was the far superior player at producing a net positive result (at least in the eyes of the respective graders). He was impacting the game positively, according to Pro Football Focus, at a much higher rate. (The problems of the grading itself are another matter altogether.) This is the answer to the lazy interpretation that reads PFF grades as: How Good Is This Player? And Was This Player Better Than That Player?
The Proposed Change
PFF isn't going to change how they show things on their site, but we at GGN can change a little how we talk about their grades. If there is going to be talk about them - something I don't really favor all that much - we should be looking at these comparative grades as cumulative totals judged by the rate at which those totals are produced, not just as qualitative marks given at season's (or game's) end. This means that a stat like total snaps / overall grade would tell us how many snaps it took to produce a +1.0 increase in grade (for high-ranking, positively graded players). Simple formulas like these work:
snaps / grade value
Thrown Ats (TAs) / grade value
attempts / grade value
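As a sketch of what these ratios look like in practice, here is the computation applied to the hypothetical Player A and Player B from above (same +15.5 grade, different snap counts):

```python
def grade_ratio(snaps, grade):
    """Snaps needed to produce a +1.0 change in PFF grade.

    For positively graded players, lower is better: fewer snaps
    per +1.0 of grade means a higher rate of positive impact.
    """
    return snaps / grade

# Player A and Player B from the example above.
player_a = grade_ratio(652, 15.5)   # ~42.1 snaps per +1.0
player_b = grade_ratio(1032, 15.5)  # ~66.6 snaps per +1.0
print(round(player_a, 1), round(player_b, 1))
```

The same function works with targets (TAs) or pass rush attempts in place of snaps, whichever denominator fits the grade being examined.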
Here are some examples of how this changes things, with the PFF Grade Ratio next to the PFF Grade:
Safety Overall Grades:
| Name | Team | PFF Grade Ratio | PFF Grade |
|---|---|---|---|
We see, for instance, that Byrd graded out much better on a rate basis than the straight grade would show: instead of the 9th best safety, he is the 3rd. And Berry showed better than Ward despite having the same grade.
For Safeties in Coverage you see this:
| Name | Team | PFF Grade Ratio | PFF Grade |
|---|---|---|---|
| Chris D. Clemons | MIA | 2.92 | 7.2 |
Byrd was the 7th best coverage safety by overall PFF grade, but by grade ratio he was actually the 2nd best. In fact, he was producing a positive +1.0 grade at nearly twice the rate of the 7th best coverage safety, a pretty significant difference. Chris Clemons was the 4th best safety, not the 9th best, and again, twice as productive as the 9th best safety by ratio.
A note: this comparison of safety grades and ratios also points to a significant weakness in PFF grades. When in direct coverage, the top safeties experience +1.0 grade changes about every 5 plays or so, but overall they experience +1.0 grade changes only around every 100 plays or so. This suggests that about 95% of the time their performance is falling outside the focus of the grading system. This hints at one of the larger problems with how PFF grades are read: the number quickly brands a player comprehensively, while much of the quality of that player can simply fall out of the purview of the grader.
Wide Receivers Overall Grades ranked by PFFR
And the WRs pass PFF Grade ranked by PFFR:
Running Backs Overall PFF Grade ranked by PFFR
This is of note to Jet fans considering Sproles. Instead of McCoy walking away with Best Graded RB in the league in a romp and Sproles showing at 7th, Sproles is actually the top-rated PFFR grade producer in the league at RB, followed by Ellington, who jumps up from 10th.
Here are just the RBs ranked by Run Grades alone, ranked by PFFR
| Name | Team | PFF Run Grade | PFFR |
|---|---|---|---|
| Adrian L. Peterson | MIN | 12.9 | 21.63 |
Ellington, Brown, Foster, Woodhead, and Blount all show well on a rate basis.
3-4 Defensive Ends - Pass Rush
In another tidbit of interest to Jet fans, when we look for the statistical or graded answer to "Who is the best pass rushing 3-4 defensive end?", we see interesting differences between PFF grades, the PFFR ratio, and the PRP stat used by ProFootballFocus to quantify pass rush productivity. PRP counts sacks, hurries and pressures, with hurries and pressures downgraded to .75, and it is a rate stat. This is one of the best advanced stats out there, in my opinion. Here is what the PRP rankings look like:
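For those curious, the PRP calculation as described can be sketched out roughly like this. The 0.75 weighting follows the description above; the per-100-rushes scaling is my assumption, made so the output lands in the range of the quoted values (like Watt's 12.8), since PFF's exact published formula isn't reproduced here:

```python
def prp(sacks, hurries, pressures, rush_snaps):
    """Rough sketch of Pass Rushing Productivity (PRP).

    Sacks count in full; hurries and pressures are downgraded
    to 0.75, per the description above. Scaling to a per-100-
    rushes rate is an assumption to match the quoted magnitudes.
    """
    return (sacks + 0.75 * (hurries + pressures)) / rush_snaps * 100

# Hypothetical season line: 10 sacks, 20 hurries, 30 pressures
# over 500 pass rush snaps.
print(prp(10, 20, 30, 500))  # 9.5
```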
What we have is a nice little rate graph, which shows, as we all know, that J.J. Watt is a beast, leading all other 3-4 DEs in a significant but not absurd way. For instance, he was a little less than twice as effective at pressuring the QB as the beloved Wilkerson. But what happens when we look at PFF grades? See below:
Suddenly J.J. Watt is 8 1/2 times (!) more effective as a pass rusher than Wilkerson. Wow, he must be good.
If we adjust the PFF grade rankings, using the same top DEs, what happens to the list? Below are those players ranked by PFFR ratio.
| Name | PFFR | PFF Rush Grade |
|---|---|---|
| Antonio D. Smith | 21.91 | 18.8 |
| Kyle D. Williams | 26.00 | 20.5 |
A player like Vinny Curry really jumps up in the grade list. Watt generates a +1.0 grade about every 10 times he rushes the passer, almost twice the rate of 2nd place Cameron Wake. Boy those Houston graders love hitting the +1 button on the keyboard. But look what happens to Wilkerson. He went from being almost half as effective (PRP), to 1/8 as effective (PFF rush grade), to now 1/10th as effective as Watt (PFFR ratio). As PFF graders see it, it takes Muhammad nearly 100 pass rushes to produce the same effect on the game as Watt does in 10.
In this case PFFR reveals maybe some of the bigger problems with the grading system, the way things really can get inflated (in perhaps both directions), but it also (possibly) reveals hidden graded pass rushing gems like Vinny Curry, who did not have enough rushes to qualify for the first PRP graph; he would have been 2nd on that list with a PRP of 11.6 to Watt's 12.8.
Above are just some examples of how taking rates of production into account changes rankings. Of course high-snap players have a kind of unspoken value to the team, and generally are better players, but PFF grades are supposed to be about performance quality, not so much performance volume. It can be argued that looking at rates of positive grade production is a little like using rate stats such as Yards Per Carry or even PRP. There are minimum qualifiers, but rate stats allow us to compare quality across variables like playing time, which may be affected by roster depth, scheme or injury.
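Putting the pieces together, a rate-based re-ranking with a minimum qualifier (the way YPC leaderboards work) might look like this. The player lines and the 400-snap cutoff below are hypothetical, just to show the mechanics:

```python
# Hypothetical player lines: (name, snaps, PFF grade). A real
# ranking would use PFF's published snap counts and grades.
players = [
    ("Player A", 652, 15.5),
    ("Player B", 1032, 15.5),
    ("Player C", 300, 4.0),
]

MIN_SNAPS = 400  # minimum qualifier, as with rate stats like YPC

qualified = [p for p in players if p[1] >= MIN_SNAPS]

# Lower ratio = fewer snaps per +1.0 of grade = higher rate of
# positive production, so sort ascending.
ranked = sorted(qualified, key=lambda p: p[1] / p[2])

for name, snaps, grade in ranked:
    print(f"{name}: {snaps / grade:.1f} snaps per +1.0")
```

Player C gets filtered out by the qualifier, and Player A ranks ahead of Player B despite their identical grades, which is the whole point of the ratio.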
Of course people aren't going to start calculating new grade rankings, but if these grade numbers are going to continue to grow in popularity - something I kind of dread because of how subjective and distortive they are - and if they are going to be used in player-to-player comparisons, which seems to be the trend, then we should also be looking at the snap, target and pass rush numbers that went into generating those grade values. It should be part of the conversation.
Thanks to Buzzy for thinking about this and making a point about it.