Some of you clicked on this just to ask, “WTF is PDO?” Which is fine – we take all kinds here.

The seeming acronym doesn’t stand for anything – it was the online handle of Brian King, who created the stat in hockey. The definition of the metric is listed below, but Wikipedia actually has a page for hockey analytics, so if you want to know more click here.

PDO – Uhhhh…

I’m just going to copy the definition from James Grayson.

PDO is the sum of a team’s shooting percentage (goals/shots on target) and its save percentage (saves/shots on target against). It treats each shot as having an equal chance of being scored – regardless of location, the shooter, or the identity or position of the ‘keeper and any defenders. Despite this obvious shortcoming it regresses heavily towards the mean – meaning that it has a large luck component. In fact, over the course of a Premiership season, the distance a team’s PDO is from 1000 is ~60% luck.
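That definition can be sketched in a few lines of Python. The numbers below are illustrative, not real team data, and PDO is scaled by 1000 as in the quote:

```python
# Hypothetical example of computing PDO from basic shot counts.
# All figures are invented for illustration.

def pdo(goals_for: int, shots_on_target_for: int,
        goals_against: int, shots_on_target_against: int) -> float:
    """PDO = 1000 * (shooting% + save%).

    shooting% = goals scored / shots on target for
    save%     = saves made   / shots on target against
    """
    shooting_pct = goals_for / shots_on_target_for
    save_pct = (shots_on_target_against - goals_against) / shots_on_target_against
    return 1000 * (shooting_pct + save_pct)

# A team that converts and concedes at identical rates sits at 1000
# by construction -- that is the "mean" PDO is said to regress toward.
print(pdo(50, 150, 50, 150))  # → 1000.0
```

Note that every shot on target carries identical weight in both terms, which is exactly the assumption the rest of this piece takes issue with.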

Now you may have seen an occasional tweet from me expressing displeasure with the use of this particular metric, but I’ve never actually sat down to detail why I think it’s dumb. Today I will do that.

Reason 1) It’s Theoretically Flawed
Why? Because it treats all shots as equal.

Here’s a clue: All shots in football are NOT equal. Not close.

Look at it visually. This is from one of many pieces by Michael Caley discussing expected goals metrics and it clearly shows all shots are not equal based on distance alone.


Then you add in the whole headers-are-a-lot-harder-than-shots-with-feet thing that Colin Trainor did way back when, and POOF – there goes your theory and your metric. And we haven’t even gotten to all the other factors that impact a shot’s probability of becoming a goal.

It’s kind of sort of fine in hockey I guess because shotqualityomgwtfbbq, but it’s just fantastically dumb to use anything that makes this assumption in football.

If you need an image in your head to help explain all of this in personal terms, picture yourself with a football on a football pitch facing a goalkeeper. You take 20 on target shots at the goal from 20 yards out in the center of the pitch. You also take 20 on target shots at the goal from 6 yards out in the center of the pitch.

Which one of those scenarios is going to yield more goals?
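The thought experiment can be made concrete with a toy calculation. The conversion rates here are made-up round numbers, not fitted values – the only point is that identical shot counts yield very different goal totals depending on distance:

```python
# Assumed (invented) probabilities that an on-target shot becomes a goal.
# Real expected-goals models fit these from data; these are placeholders.
ASSUMED_CONVERSION = {
    "20 yards, central": 0.05,  # long-range efforts rarely go in
    "6 yards, central": 0.40,   # close-range chances are far more valuable
}

shots_taken = 20
for location, p_goal in ASSUMED_CONVERSION.items():
    expected_goals = shots_taken * p_goal
    print(f"{location}: {expected_goals:.1f} expected goals from {shots_taken} shots")
```

Any metric that treats those two scenarios as interchangeable is throwing away the most important information about the shots.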

Reason 2) It Combines Attacking and Defensive Conversion As If They Are Remotely Related
They aren’t. Teams technically have infinite choices in how they attack and how they defend. They don’t have to be related at all. Therefore, why would we treat them as if they were?

You can have a normal, straightforward average attack and a league leading defense. Or you can have an attack that consistently creates insane chances and pairs it with a defense that gives up exactly the same. Or you can… well, anything. The point is that by combining the two separate phases of play into one metric, you miss out on the signal.
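Here is a toy illustration (invented numbers) of how summing the two phases hides the signal: these two hypothetical teams have completely different attacking and defensive profiles, yet identical PDO.

```python
# Two invented team profiles that collapse to the same PDO value.

def pdo(shooting_pct: float, save_pct: float) -> float:
    """PDO scaled by 1000, per the usual convention."""
    return 1000 * (shooting_pct + save_pct)

team_a = {"shooting%": 0.40, "save%": 0.60}  # red-hot attack, leaky defense
team_b = {"shooting%": 0.25, "save%": 0.75}  # ordinary attack, elite keeping

print(pdo(team_a["shooting%"], team_a["save%"]))  # → 1000.0
print(pdo(team_b["shooting%"], team_b["save%"]))  # → 1000.0
```

Looking at PDO alone, both teams read as perfectly average; the interesting stories live in the two components separately.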

“Hey, this team is overperforming PDO!”

Okay, why?

THIS IS ALWAYS THE NEXT QUESTION, and if it is always the next question, then maybe you can – I DUNNO – treat the two phases separately and immediately jump ahead a step.

“This team is giving up far fewer goals than expected in defense.”

Aha, now you have my interest. Tell me more.

“This team brought in an attacking assistant coach in the summer to try and boost the number of goals scored…”

Excellent, let’s analyze that. Wait… no team would actually do that in the current football landscape, but if they DID, then this would be a very good thing to analyze.

Reason 3) Every Team Does Not Completely Regress
This is a fundamental nerd point, but the fact of the matter is that every team’s PDO does not completely regress to the mean of 1000 – the deviation does not shrink to zero, even across multiple seasons.



There are systemic reasons why some teams allow far worse chances season after season than others. If a team’s defensive structure is such that the average shot distance it allows is from 20 yards instead of 15, your goalkeeper has more reaction time on average to make saves, there are likely more men between the ball and the goal, and the team is almost certainly going to post a better save percentage.
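A minimal sketch of that argument, using assumed (made-up) conversion rates by distance: a defensive structure that pushes shots out to 20 yards faces easier saves than one that concedes from 15, so its save percentage sits persistently higher for systemic, not lucky, reasons.

```python
# Assumed probability an on-target shot beats the keeper, by distance.
# These are placeholder values, not fitted from real data.
P_GOAL_BY_DISTANCE = {15: 0.35, 20: 0.20}

def expected_save_pct(shot_distances: list) -> float:
    """Average save probability over the mix of shot distances faced."""
    return sum(1 - P_GOAL_BY_DISTANCE[d] for d in shot_distances) / len(shot_distances)

deep_block = [20] * 100  # structure forces opponents to shoot from range
high_line = [15] * 100   # structure concedes closer shots

print(f"deep block save%: {expected_save_pct(deep_block):.2f}")
print(f"high line save%:  {expected_save_pct(high_line):.2f}")
```

Neither of those save percentages has any reason to regress toward the other as long as each team keeps playing the same way.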

Or if you are a crazy high pressing team that tends to keep the number of opposing shots low, but the trade-off is that when someone beats your press they get awesome chances right on top of your goal, then your save percentage numbers are also going to look weird and are unlikely to regress to anything approaching average.

The same applies for elite attacking systems. Some head coaches have an attack that consistently creates better chances than average, which means their shots are more likely to go in the goal, and the team is more likely to post abnormal PDO numbers that have very good reasons to stay that way.

And all of this is before we even touch the impact of super elite or sub-par players with regard to skill.

One reason why it may look like teams revert to the mean over the course of many years is that manager or head coach tenures last 12 to 15 months on average. Start tracking these things by head coach tenure (or tracking head coach performance across different teams) and you get a lot more clarity.

A team’s weird PDO might be random variation, but there’s a decent chance it isn’t, and for reasons you care about. Other ways of analyzing team performance would be a lot more insightful and should be examined first instead of simply assigning outliers to the random variation dustbin.

Regardless of its common usage in hockey, PDO is theoretically flawed in football and people need to stop using it. Yes, I know there may be data reasons why some analysts continue to use PDO, but as explained above, we should try to find a way past this at the earliest possible opportunity.

Do something smarter that better relates directly to the sport you are analyzing.

The good news here is that there is now a giant open space just waiting for a clever person to tell the world what they should be using in place of PDO, and that person could be you!

  • Afro Man

    Ted, agree with this from way back when. It’s a measure of luck more than anything else.

As a bit of an aside, I’ve long thought about building a model based on ratings of shots (e.g., from 6 yard box centre of goal to 40 yards by the touchline) and then the result (goal, saved, blocked, wide, etc.). My problem first off is in getting the data. gives pretty detailed analysis but developing it is a killer and I’d need to give up the day job. Do you have any other sites you could point me to? Much appreciated.

  • Ron IsNotMyRealName

    Does this also go for Expected Goals?

  • Michael Gormley

Just to echo what Afro Man has said, I’ve been toying with an idea all season of modelling expected goals based on shot positional data. However, aside from Opta (who’d never give me access) I’m not really sure where to find such data. Using my initiative I’ve managed to convert some of this season’s games into a format my tool can use and I’m encouraged by the results, but the process is very time consuming and prone to error.

If there are any free (or low cost) APIs out there that provide such data (x, y positions), or blogs (etc.) that have that kind of information available, then it’d be greatly appreciated!

  • Konsta Kojootti

It boggles the mind to see an analyst resort to such a feeble argument – where’s the data? The PDO, or whatever one might call it, is clearly a tool to determine dominance of one team over another besides the scoreline – whether the win/loss or a draw was to be expected according to this metric – and as I understand it, so do the xG models. So instead of resorting to an argument which is basically “I don’t quite like it”, you’d do well to compare the performances of these two metrics, as one would expect an analyst to do.

    So if it’s not too much trouble, why not put the two to the test? How often does PDO succeed in predicting the outcome of a match (W/D/L) versus the xG? Which one is more accurate? I’m not a statistician but I’d expect that with both metrics chance plays a big role due to the nature of the sport, but obviously you know the math and you’re capable of determining the sample size required to counter the variance.

PS. I have absolutely no preference on these metrics, but I simply found it curious that you didn’t actually compare the performance of the two.

  • allanderek

    This is a nice article.

Reason 1) Well it is theoretically wrong, but that’s because it’s a *model* – all models are wrong, some are useful – so the question is, do we have a useful model? That is of course dependent on what you are using it for. To digress a little, xG has a similar problem in that what we are really interested in is *chances*, not just chances that result in a shot. Marek wrote a really nice piece about a related topic, in which he pointed out that certain teams played a lot of low passes into the 6 yard box, a high proportion of which don’t complete and hence don’t show up in any shot statistic, but those that connect end in very good chances. Hence the low crosses/passes that don’t complete are probably equivalent to a shot with decent xG.

In summary, it is clearly fair to say that PDO is wrong, but that is true of any measure. To show that it is not useful you have to show that it over/under-estimates specific teams. That’s probably true, but I don’t think you’ve shown it here. As you pointed out, shots are more likely to be goals depending on location, so if you have a team that consistently shoots at any sight of goal, or consistently passes up shooting opportunities in favour of developing better ones, then it seems likely that their PDO will be skewed because of that. Still, I’m not quite convinced that that would make PDO “useless”.

As an aside, the graph you show is a little unfair: it shows the odds of scoring from any attempt plotted against adjusted distance from goal, but PDO only looks at shots on target. It could be that all of the difference between shots taken at different (adj) distances is absorbed by the chance of getting such shots on target. I doubt it, but it could be. I would certainly expect it to account for some of the difference.

Reason 2) This is a much fairer point in my opinion. A response might be to point out that it takes time to look through lots of statistics, so a good aggregated statistic *might* give you a good signal. Then again, it is pretty easy to automatically search for interesting statistics. Still, PDO could *arguably* represent a nice way to aggregate such statistics for *display*.

To add to your complaint here, I would point out that not only is PDO combining two seemingly unrelated characteristics, it could end up *masking* an interesting feature of a team, which may have a particularly bad save percentage but a particularly good shooting percentage that cancel each other out, so the team appears as an uninterestingly average PDO.

    We combine unrelated statistics all the time, and partly it may be because they are unrelated. If we are trying to compare the general luck of two teams, then we need to look at both their attacking luck and their defending luck. It might be that team A has better attacking luck but team B has better defending luck, so we need a combined measure. Now you can *certainly* complain that PDO does not combine the two correctly, but that is a different complaint.

    A final remark here is that PDO is at least interpreted as some measure of luck. If that is true, then we certainly would expect shooting and save percentage to be unrelated.

    Reason 3) This is essentially a much better argument for your first reason.

    Conclusion: I think PDO represents a useful *starting* point. Your article makes some very important points, but your conclusion (or rather title) is (for me) a bit of a stretch. But your actual conclusion that the space is ripe for investigation is certainly one I can agree with.

  • allanderek

    A further point with regards to PDO in relation to luck (which is not the only way you might wish to interpret PDO). Generally we are looking at PDO to try to determine whether a team’s league position/current points are commensurate with their actual skill-level and to therefore try to determine whether we expect this team to improve, decline or remain roughly where they are.

    It therefore seems strange that PDO does not reference points. What we really care about is whether or not a team has been really lucky *and* gained from that luck. For example, if a team lets in 10 shots on target, takes 1, and loses 2-1, their PDO will increase. But they lost the game so their luck hasn’t been particularly useful to them.

    Taking care of this is non-trivial, because a team may be very lucky (as measured by PDO), but deserve to win in any case. For example you may take 7 shots on target and score with 5 of them, whilst your opponents take 3 and score with zero. In this case the scoreline may well be flattering, but the result and hence points likely commensurate.

    Obviously these examples are deliberately extreme to make the point, but several less extreme examples summed together could produce a similar effect.
