Weekly Commentary & Review #12
The running effectiveness and play-action discourse returns, part 1,093,876
This post looks at the storyline of the week (only one this week), along with relevant articles, analyses, or other news from the week that provide useful insight to absorb or missing context to add.
NERDS WILL NEVER UNDERSTAND
As expected, there was a lot of chest-pounding agreement in the replies and quote-tweets to the one below. And to be fair, it is framed in a way that would be hard for anyone but the most diehard, myopic numbers-followers to disagree with. It probably helps, in discussing the discourse around this tweet, to look at how the debate and research claims actually unfolded, and not assume it started with research saying the running game doesn’t influence the effectiveness of the play-action passing game.
It really starts with the coach/player/football-guy maxim that “you need to run the ball well to have an effective play-action passing game,” or “establishing the run sets up play action.” Research from Ben Baldwin, Josh Hermsmeyer and others was designed to show there wasn’t enough evidence to support these claims, the types of things you’d commonly hear go unquestioned on game broadcasts and studio shows. In statistical terms, they showed there wasn’t enough of a relationship between run effectiveness or volume in a game and play-action success to support the hypothesis, or, put more precisely, to reject the null hypothesis (that you don’t need good or established runs to be effective at play-action). This was disproving an overconfident piece of established wisdom (you must have X to then get Y), which is one of the best uses of data analysis.
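The shape of that kind of test can be sketched in a few lines. Everything below is simulated, not the actual research dataset or methodology: I generate game-level numbers where run success and play-action success are truly unrelated, then run a standard correlation test to show what "failing to reject the null" looks like.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate one season of game-level data (hypothetical numbers) where
# rushing success and play-action success are truly unrelated -- i.e.,
# the null hypothesis holds by construction.
n_games = 272
run_epa = rng.normal(loc=-0.05, scale=0.15, size=n_games)  # EPA per designed run
pa_epa = rng.normal(loc=0.15, scale=0.25, size=n_games)    # EPA per play-action dropback

# Pearson correlation between game-level run EPA and play-action EPA.
r, p = stats.pearsonr(run_epa, pa_epa)
print(f"r = {r:.3f}, p = {p:.3f}")

# A weak r with a large p-value means we fail to reject the null
# ("you don't need a good run game for play-action to work").
# Note the asymmetry: this is an absence of evidence for a link,
# not proof that no link exists.
```

The last comment is the whole point of the paragraphs that follow: the research result is "no detectable relationship," which is a much weaker statement than "no effect."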
What’s happened - it’s probably the fault of how findings were communicated by nerds and nerd-friendly analysts as much as goal-post shifting by football guys - is that the results showing no correlation between running efficiency and play-action effectiveness were transformed into a definitive statement of, “how well you run the ball has no effect on play-action passing.” This is now what analysts like Ruiz are calling into question, even though it’s not what the research shows.
I love the saying, “All models are wrong, but some are useful.” Models are better at disproving a definitive statement than at proving one. The bar is much lower for the former, and models can consistently clear it despite their inherent flaws. One of those inherent flaws, and the one that particularly comes into play in this debate, is how to quantify and measure the variables you’re trying to test. Running volume isn’t open to much interpretation, but what makes a “good” running game is. Nerds will rely on something like EPA per designed run, but that isn’t a great proxy for what we’re trying to capture: the defense’s perception of facing a great running game.
If I were to ask the average NFL observer, or even the average NFL coach, who the top-10 running teams in the NFL were last season, they’d get many of the top-10 in EPA per designed run, like the Eagles, Falcons and Browns. But they’d probably miss others, like the Chiefs, Panthers and Steelers. If we asked the same questions about passing teams, the hit rate would be higher, as what we perceive to be a good passing game more often matches what advanced metrics like EPA measure as valuable. If we’re using EPA per designed run as the proxy for defensive perception, the alignment is imprecise enough that the results won’t answer the question we actually care about to a strong degree.
But I do think there is some use in capturing the perception of good running games with EPA efficiency, and the fact that there is no correlation with play-action effectiveness is meaningful. Most likely, there is something to the perception of facing a good running game mattering for play-action effectiveness; it’s just a lot smaller than we think.
If you picture play-action effectiveness in terms of how much the defenders adjust their positioning to account for the run, on a scale of 0 (totally ignore the possibility of a run) to 100 (totally sell out and give up the middle of the field to passes), no degree of opponent rushing success is going to move them to either extreme. What we’re trying to measure here is whether facing the 49ers (FYI: 11th in run efficiency in 2022) moves them to 60 on the scale versus 50 for other teams, a marginal difference. No matter who a defense is facing, they can’t fully ignore the run or fully commit to stopping it. Where defenses end up on the 0-100 scale on average could have as much (or more) to do with the defensive coordinator’s philosophy as with the offense they’re facing. Plus, how much you can commit to stopping the run will be heavily affected by the perception of the opponent’s passing game. Facing the strong rushing game of the 49ers isn’t the same as facing the strong rushing game of the Falcons, and that delta could mean more on the 0-100 scale than the rushing game’s strength alone.
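The 0-100 thought experiment above can be made concrete with a toy model. Every number here is hypothetical, chosen only to illustrate the argument: a coordinator's baseline dominates, run strength moves the dial marginally, the pass game pulls it the other way, and clamping means no defense ever reaches either extreme.

```python
# Toy model of the 0-100 "run commitment" scale described above.
# All weights and inputs are made-up illustrations, not measured values.

def run_commitment(base, perceived_run_strength, perceived_pass_strength,
                   run_weight=10.0, pass_weight=15.0):
    """Where a defense lands on the 0-100 run-commitment scale.

    base: the coordinator's philosophical default (e.g., 50).
    perceived_*_strength: opponent strength on a -1..1 scale.
    The pass term pulls commitment *down* (you can't sell out against
    the run if you fear the pass), and the clamp enforces that no
    defense can fully ignore (0) or fully commit to (100) the run.
    """
    raw = (base
           + run_weight * perceived_run_strength
           - pass_weight * perceived_pass_strength)
    return max(10.0, min(90.0, raw))

# Strong run + strong pass (the 49ers-style case in the post):
niners_look = run_commitment(base=50, perceived_run_strength=0.8,
                             perceived_pass_strength=0.6)
# Equally strong run + weak pass (the Falcons-style case):
falcons_look = run_commitment(base=50, perceived_run_strength=0.8,
                              perceived_pass_strength=-0.4)
print(niners_look, falcons_look)  # 49.0 vs 64.0
```

With identical run strength, the pass-game delta moves the dial by 15 points while the run term alone is worth only 8, which is the post's point: the passing-game context can matter more to the defense's posture than rushing strength by itself.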
From a practicality standpoint, how would knowing that a better running game enhances play-action passing (marginally) even affect our decisions? I don’t think any team isn’t trying to maximize its value when running the ball. It all comes down to actions you can take outside of effort: Should we spend more draft capital? Should we run more often? Should we spend more time game-planning the run? Those questions all come with costs, meaningful costs, knowing that passing is generally more efficient and that the differences in efficiency between good and bad passing teams are greater than between good and bad running teams.
The bar for assuming that these investments in the running game justify the opportunity costs is high, and that has to factor into our decisions, especially when we’re trying to juice up a relative advantage on the field. For me, that bar hasn’t been reached, and I think the NFL still likely errs on the side of investing too much in the run relative to the pass. Research that shows differently, with better methodology or more representative data, would change my mind, but I’m going to need to see it.
Quickly, I’ll look at a couple of the objections to data analysis generally in the quote tweets, as an illustration of why research will never overcome skepticism.
The problem with the current research is that it does not take into account the changes made by the defense to keep the rushing success rate down, e.g., coverage, alignment, men in the box, etc.
@cameronsoran (Writer on College Football Xs & Os)
Ya don’t say… it’s almost like not understanding how the game works doesn’t give you good data?
@CoachVass (Host MDGA Podcast and Run-Vass Option)
Listen, you could build all kinds of factors into the model listed here (this reminds me of 4th-down models criticized for missing stuff that’s actually in them), and you could go out and better measure stuff with your superior knowledge to make more useful data (sounds profitable, actually). Will they do this? Of course not, because it’s easier to dismiss than to be productive, especially on social media.
Great read. Not being up on this analysis like you are - has anyone looked at one on one on outside WR's (with no single high safety) for a "running/play action" team, versus not? If the frequency is high, and we know such a look is important in opening up the passing game, that would give credence to the theory would it not?
San Fran, for example, does get more of those looks, and Purdy/Jimmy G do see more one on ones down the field (although Brock can't seem to hit them this year).