Earlier this offseason, Panthers tight end Greg Olsen made some comments about the website Pro Football Focus that got attention.
"I constantly see people reference @PFF for NFL analysis and I am baffled by it." — Greg Olsen (@gregolsen88) May 11, 2016
"For the purpose of data collection and raw info? Useful. For the purpose of GRADING a certain player/ play? Comical" https://t.co/6Gv3S3LRSP — Greg Olsen (@gregolsen88) May 11, 2016
To a large extent, I agree with Olsen. If you go through the archives of this site, you might see me criticize PFF's methodology in one post and then utilize some of their numbers in the next. This seems like as good a time as any to explain what PFF is and what I like and dislike about it.
To start off, it is important to note that PFF does a couple of different things.
PFF compiles numerous detailed stats that had not been previously available for wide public consumption. These can be very useful.
At their best, stats can help us understand the game better. Advanced stats such as the ones compiled by PFF enhance our knowledge and sharpen our arguments.
If you watched the Jets two years ago, you might have observed that Chris Ivory was a much more effective runner than Chris Johnson. Yet Ivory averaged only 4.1 yards per rush, while Johnson averaged 4.3. Was this a sign that stats are useless? It depends on which stats you use. PFF kept track of how many times each running back either broke a tackle or made a potential tackler miss. Ivory forced 52 missed tackles, the fourth most in the NFL at the running back position behind only Marshawn Lynch, Le'Veon Bell, and DeMarco Murray. This was a case where PFF's advanced stats provided fans with a deeper understanding of the game.
PFF compiles stats not only on sacks; they also tell you how many times a pass rusher hits and pressures the quarterback. Pass rushing is about much more than registering sacks. If you can get to a quarterback, you can destroy a play even if the end result is not a sack.
They keep track of things like drops by wide receivers. When people talk about quarterback play, a lot of times the analysis devolves into unprovable anecdotes. You might hear somebody say, "This quarterback's receivers drop more passes than any other quarterback's in the league." PFF's stats allow you to test that out. In some instances, you might find out that it wasn't true. PFF even provides the completion percentage for quarterbacks if you take out things like drops, throwaways, and passes batted down at the line.
There is never going to be one almighty stat that definitively compares players. When Geno Smith had one of the longest average times from the snap to throwing the ball in his first two seasons, it was arguably evidence he was not getting rid of the ball quickly enough. When Russell Wilson had even longer averages, you had to account for how he has built an effective style by running around and creating big plays. You always have to understand how to put stats into context, but more in-depth stats can help smart fans make smarter evaluations about the game. This is a real service of PFF.
It used to be, at least. PFF started phasing out public subscriptions to these stats shortly after the 2015 season started.
In addition to their objective stats, PFF provides numeric grades to players based on their performance in games. Unlike the stats, these are subjective judgments made by watching the games. These grades have become a source of controversy and have made many people critical of PFF.
I happen to agree with people who feel this way, and the difference between their stats and grades is what Olsen is discussing. Collecting data and raw information falls under the stats part. The grading is separate.
My big issue is I have yet to find any sort of explanation about how PFF has come to these grades. For years, I have searched through their website and found nothing.
Here is their current explanation.
We grade. An analyst grades every single player and every single play on a scale of -2 to +2.
Ok, but that doesn't explain anything. Who is grading this? What are their credentials to be grading a football game? How are these plays graded on a scale from -2 to +2? There is a passing mention that Brett Favre's interception in overtime of the 2007 NFC Championship Game was worth -2, and Eli Manning's pass to Mario Manningham in Super Bowl XLVI was a +2. That doesn't tell me a whole lot, though. How much does a left guard pulling and throwing a key block on a 20-yard run count for? How much is a left tackle docked when his quarterback is sacked after a pass rusher beats him with a spin move?
We grade again. The initial grading is reviewed by a second analyst to ensure accuracy.
That's great, but now we have to ask the same questions a second time.
We grade a third time. The second analyst’s grading is checked by a third analyst. You can never be too careful.
No question we should put safety first, but now we're asking the first question once again. And what happens when there are disagreements between the graders? Are their opinions averaged? Does one have sway over the other? And who are these guys?
We verify. Our grades are verified by the Pro Coach Network and their 400 years of combined NFL and college coaching experience.
Ok, now we're talking. PFF advertises its Pro Coach Network on a different page. We are finally naming names here, and they are names with credentials. This is on a page where PFF is advertising its services to pro and college teams. But does this mean the Pro Coach Network does not do any of the first three evaluations? And we still don't know the criteria on which they are grading the players. And the question pops up again of how they handle it when two parties disagree. Does somebody get overruled? Do they average it? Do they throw it out?
Advanced Normalization: The raw grades (as seen in our Premium Stats) are normalized to better account for the situation; this ranges from where the player lined up to the drop-back depth of the quarterback, to everything in between
This could have been written in hieroglyphic script and been just as easy to understand.
We set the grade. Convert to 1-100 scale.
PFF does provide this explanation for its grades.
For example, if a quarterback hits a wide receiver perfectly in stride on a post route in between two defenders, and the receiver drops the ball, it goes down in the box score as an incompletion. But in our system, that quarterback receives a positive grade for making a great throw. His statistics should not be punished based on the fact that his receiver dropped a pass.
It is a great idea in theory. Raw stats don't always present an accurate portrayal of what happens on the football field. The problem I have here is with the veil of secrecy. I see a couple of valid ways PFF could present a viable product.
They could be more transparent with their methodology. Does that block from the pulling left guard count for 0.6? Is the missed block by the left tackle -1.5? Are the criteria applied consistently across all plays? I could buy it. Then we could have a discussion. Maybe you think the left guard pulling should be worth more. I can at least factor that in when I'm considering a player's grade. I at least will know what I am dealing with.
Maybe they will argue that not all blocks thrown in the run game by a pulling guard are created equal. That might be true. But if I am going to trust a grader to properly tell me which block is a 0.3 and which is a 1.8, they need to tell me who that grader is and what his or her credentials are for grading a football game. If you are going to give somebody that much freedom, they had better understand the game in great detail.
At the very least, the graders are presumably taking notes. To come up with a grade at the end of the game, they have to keep a running tally. These notes should be made available so that the rest of us can compare what we see to how they graded.
Without these steps, PFF is asking people to buy into a system where nobody can explain how the numbers are compiled and nobody knows the qualifications of the people crunching the numbers.
This is particularly necessary because of some of the bizarre numbers these grades have produced.
Last year Aaron Rodgers got a negative PFF grade in a game where he threw five touchdown passes. PFF did write a response, but many felt like it did little to explain how such an anomaly could occur. Through the years, there have been a number of puzzling grades given out.
For these reasons, I alternate between thinking PFF has been a positive and a negative for football fans. The concept behind the site is great. There are a number of really useful stats. I also like the idea of grading players. That linebacker putting up a bunch of sacks might only be doing it because he is not getting blocked while somebody else is splitting double teams. Putting these things into context is great.
The problem is the lack of transparency in these grades. Nobody can judge their validity because nobody can explain where they come from.
The real issue is how the grades sour people on everything PFF does. Many see the puzzling grades and dismiss everything PFF does as useless: "These fancy stats don't mean anything. This is proof numbers don't work in football." The problem is those aren't stats. They are grades. I feel like PFF's grading system has led people to turn away from information that could otherwise help us understand the game better. The really sad thing is that PFF itself provides a lot of that information.