Managerial Changes and the Perception of Form

Fourteen managers have been sacked during the last and current season of the Premier League. Almost every time we’re left wondering if they deserved it based on their team’s performance and if their successor did, or will do, significantly better. After sacking Chris Hughton, Norwich’s David McNally was quoted saying: “We are sad to see Chris go, but our form generally, and away from home, has been poor and this is a results business”. A simple, honest statement at first glance, but it raises a few questions:

  • Is he implying that ‘form’ and ‘results’ are the same thing?
  • If he isn’t, was Hughton let go due to poor form, poor results, or both?
  • Isn’t it his job as a chief executive to decide whether it’s a ‘results business’ or not? Doesn’t he know that results can be misleading, or does he merely use results to justify a decision based on something else?

Now I’m not the ultimate judge of a manager’s performance and I don’t intend to be. Form, either in terms of results or in terms of the underlying performance, is in the eye of the beholder. It’s a matter of perception. This perception of form is what I’m interested in. Here are three important points about how we perceive form:

  1. There is a temporal dimension. One match occurs after the other, which automatically causes us to perceive a trend – even if there is none (i.e. the trend may not have any predictive value beyond the long term average).
  2. It’s about relative performance/results. A loss may not be judged as harshly if it happens away against a good team, but do we correct for the strength of the opposition enough in our perception?
  3. And of course there’s the difference between results and the underlying performance (good or bad luck), as far as we can measure it.

A while ago I experimented with something I called ‘form charts’ (article in Dutch). The idea is that they are a graphical representation of a team’s attacking and defensive performance relative to the difficulty of the match, over time. In this article I present a slightly improved version.

How it works

Team A plays a match against Team B, and Team A gets an attacking ‘score’ by comparing their offensive output* (adjusted for home advantage) with the offensive output of all other teams in the league against that same opponent (Team B). We know Team B’s average amount of offensive output conceded, as well as the standard deviation. The number of standard deviations above or below the mean is Team A’s attacking score. Along the same lines we can calculate a defensive score by comparing the offensive output conceded by Team A with the average offensive output produced by Team B and its standard deviation.

*The offensive ‘output’ can be defined as goals, shots, expected goals or anything like that.

For example:

  • Norwich scored 2 goals at home against Everton.
  • A correction for home advantage means this really only counts as 1.74 goals.
  • Everton concede an average of 0.98 goals, with a variance of 0.62
  • Norwich’s attacking score is (1.74-0.98)/√(0.62) = 0.97, almost 1 standard deviation above average

For their defensive score we calculate:

  • Goals conceded corrected for home advantage: 2.3
  • Average goals scored by Everton: 1.63
  • Variance of goals scored by Everton: 0.37
  • (1.63 – 2.3)/√(0.37) = -1.1
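
A minimal sketch of these calculations in Python. The 1.15 home-advantage factor is inferred from the worked example (2 goals at home adjusting to 1.74), and the raw 2-2 scoreline is implied by the adjusted numbers; neither is stated outright above, and the away-team adjustment is assumed to be symmetric:

```python
from math import sqrt

HOME_ADV = 1.15  # inferred from the example: 2 goals at home ~ 1.74 adjusted

def attacking_score(goals_for, at_home, opp_conceded_mean, opp_conceded_var):
    """Standard deviations above/below what this opponent usually concedes."""
    adjusted = goals_for / HOME_ADV if at_home else goals_for * HOME_ADV
    return (adjusted - opp_conceded_mean) / sqrt(opp_conceded_var)

def defensive_score(goals_against, at_home, opp_scored_mean, opp_scored_var):
    """Standard deviations better/worse than what this opponent usually scores."""
    adjusted = goals_against * HOME_ADV if at_home else goals_against / HOME_ADV
    return (opp_scored_mean - adjusted) / sqrt(opp_scored_var)

# Norwich at home to Everton, per the worked example:
att = attacking_score(2, True, opp_conceded_mean=0.98, opp_conceded_var=0.62)
dfn = defensive_score(2, True, opp_scored_mean=1.63, opp_scored_var=0.37)
print(round(att, 2), round(dfn, 2))  # matches the example, rounding aside
```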

We can simply add the offensive and defensive scores to get an aggregate score of -0.13. This is what I view as the ‘perceived result’. If we do the same thing but with ExpG instead of goals, we get a metric of ‘perceived performance’ instead. This graph shows the (goal-based) attacking and defensive results over the course of Norwich’s season (catchy title huh?).

[Graph: Norwich Form Results]

As you can see, the values are all over the place. It quickly becomes clear that looking at individual matches isn’t very useful and that it only works as a moving average of, say, five matches. This makes sense, because a team is not usually judged on the basis of one match. If we’re talking about form, we are indeed talking about the perception of a handful of consecutive matches, which is exactly what this graph shows:

[Graph: Norwich Form Avg Results]

Norwich was in a bit of a slump when Hughton was fired, especially defensively. That’s the temporal dimension I was talking about right there. How about the other two points then?

  • To see the difference between perceived results and perceived performance we can simply use the difference between goals and expected goals.
  • To see the influence of the difficulty of the schedule we can calculate the form graph but remove the correction for home advantage, and instead of comparing to the average and standard deviation of a specific opponent, we compare with the league average and the average standard deviation.

This way we can illustrate all three points with one graph, because all three metrics fit on the same scale (using the aggregate of attack and defence):

[Graph: Norwich form]

Note that most of the time results with or without correction for difficulty don’t deviate that much, because difficulty tends to even out over five matches. Some particularly hard or easy stretches can be seen, though; the 7-0 away at City and the 5-1 at Liverpool stand out. This graph also indicates that there was more of a downward trend in performance than there was in results, so that might have been the real reason for McNally. Here’s Fulham as another example. The end of Jol’s reign shouldn’t have been a surprise to anyone, but sacking Meulensteen seemed like a strange decision. Magath certainly hasn’t done any better so far.

[Graph: Fulham Form]

Before I shower you with more graphs, let’s move on to some conclusions after looking at all 14 sackings. I have tried to measure to what extent the three problems with the perception of form were present at the time of each sacking.

  • “Bad luck” – Results score in the last 5 minus performance score in the last 5 (to what extent were results worse than the performance would suggest)
  • “Temporary slump” – Performance in last 5 minus performance in last 20 matches.
  • “Underestimation” – Results not adjusted for difficulty in the last 5 minus adjusted results in the last 5 (to what extent did the difficulty of the schedule make the situation look worse)
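
These three gauges are just differences between moving averages of the per-match scores described earlier. A minimal sketch, assuming pandas Series of aggregate scores ordered oldest to newest (the data layout and names are mine):

```python
import pandas as pd

def sacking_gauges(results: pd.Series, performance: pd.Series,
                   results_unadjusted: pd.Series) -> dict:
    """The three perception effects at the moment of sacking; each series
    holds per-match aggregate scores, the last row being the final match."""
    def last5(s):
        return s.tail(5).mean()
    return {
        "bad_luck": last5(results) - last5(performance),
        "temporary_slump": last5(performance) - performance.tail(20).mean(),
        "underestimation": last5(results_unadjusted) - last5(results),
    }
```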

The lower the number, the worse it makes the manager look:

Club Manager Bad luck Temporary slump Underestimation
Cardiff Mackay -0.54 0.04 -0.25
Chelsea Di Matteo 0.38 0.10 -1.21
Fulham Jol -0.94 -0.51 -0.47
Fulham Meulensteen -0.48 -0.58 -0.81
Manchester City Mancini -1.09 -0.25 0.94
Norwich Hughton -0.30 -0.38 0.31
QPR Hughes -0.36 -0.49 0.48
Reading McDermott 0.22 -1.25 0.52
Southampton Adkins 0.85 0.57 -0.94
Sunderland O’Neill -0.61 -0.48 0.51
Sunderland Di Canio -1.61 0.64 0.06
Swansea Laudrup 0.63 0.02 0.25
Tottenham Villas-Boas -1.34 -0.97 -1.07
West Brom Clarke -2.13 0.36 0.24
Average -0.52 -0.23 -0.10

The “bad luck” effect is the strongest. In 10 out of 14 cases it was present, and on average it makes these managers look half a standard deviation worse than they really are. The “temporary slump” effect is also at play, but it’s less obvious. Based on this data I couldn’t say for certain that underestimation is much of a problem.

In the introduction I left open the question whether a trend in performance has predictive value beyond the long-term average. In other words: is a temporary slump really temporary? Based on the last two seasons of the Premier League, I can say with some certainty that looking at the last 5 matches to predict the performance in the next match is no better than looking at the last 20 matches. The difference between the performance score in the last 5 matches and the next match is on average 1.24, and the difference between the last 20 and the next match is on average 1.17.

Graphs, graphs, graphs

Poor André Villas-Boas…

[Graph: Spurs Form]

A change of manager didn’t have much effect on performance in Cardiff.

[Graph: Cardiff Form]

Sunderland never looked good during the last few seasons, but Di Canio was particularly bad (click for big):

[Graph: Sunderland Form]

This one shows only the attacking score of Manchester United. The difference in results between this season and the last is very clear; the difference in performance, not so much.

[Graph: Man Utd Form]

As a bonus, here’s the current top 4. Or should I say top 3…

[Graph: Top 4 Form]

Final note: I didn’t read Ben’s take on Sacked Managers, Luck & Underlying Numbers before writing this. His approach is somewhat different, but it’s definitely a recommended read.

Attacking Styles and Defensive Weaknesses

An Expected Goals model works by categorizing chances and assigning a value to each category. In most cases we just add up all Expected Goals for a team or a player to measure performance, but it’s also interesting to take a closer look at those categories themselves. I’ve looked at a few recognizable ‘types’ of chances (based on shot type, location and assist type) and asked myself whether a team’s attacking style can be identified by the types of chances it creates, and similarly whether teams have certain defensive weaknesses against specific types of chances. In other words: how much do chances of a certain type contribute to the total ExpG or the total ExpG conceded? Here are the numbers for the Premier League as a whole over the last four full seasons:

[Table: %ExpG per chance type, league totals]

To see if these numbers are actually meaningful I’ve looked at how much they differ per team (relative standard deviation), and to what extent they are repeatable (correlation between the first and second half of a season). Any stat that tells us something about a team’s style should at least be repeatable. The first thing you’ll notice is that repeatability can hardly be found in types of chances conceded. Teams play to their own strengths much more than they play to their opponents’ weaknesses. This actually surprised me a bit. I would suspect that if it’s known that a team has trouble defending crosses, other teams would use that knowledge, but that may be easier said than done if you don’t have the players for it. On the other hand, a manager can change a team’s attacking style, as we will see later. A couple of examples:

[Graph: set pieces]

Teams that are consistently weak at defending set pieces? There’s no such thing, aside from one exception (the top right outlier): Arsenal during the 2010/2011 season. Penalties are all over the place and completely random:

[Graph: penalties]

Here you’ll see that the creation of a certain type of chance is more spread out and shows more correlation than the amount of chances conceded of the same type:

[Graph: headers]
[Graph: through balls]

All in all I would say that there are only three numbers that are definitely meaningful: chances created from through balls, headers and shots from outside the box. Between through balls and headers there’s also a negative correlation of -0.54. Without a doubt we’re looking at different attacking styles. Here’s a view of this season’s data, which I like to call arsenewenger.png:

[Graph: arsenewenger.png]

A closer look at Arsenal shows that although he fits right in, it’s not just Özil either. Here are the top ten seasons (out of the last four) in terms of %ExpG from through balls:

[Table: top ten seasons by %ExpG from through balls]

You want more manager fingerprints? Here’s the full picture from this season:

[Graph: all teams, this season]

Notice Crystal Palace as the leading team when it comes to headers? It’s no coincidence. Over the last four seasons under Tony Pulis, Stoke averaged more than 30%. Now Pulis gets the Palace job and immediately their percentage is up to 32.7%, from 24.6% under Holloway. And then there’s Swansea, currently the team with the lowest share of through balls. If they ever were a poor man’s Arsenal, they’re not doing a very good job of it now. Under Brendan Rodgers in 2011/2012 they managed 11.1%; in the first half season under Laudrup it was even up to 11.8%, but then it dropped to 4.6% and all the way down to 0.7% now (that is one shot from a through ball all season). This blows my mind, as Michael Laudrup was an absolute master of the through ball himself.
At the same time you can find Rodgers’ first season at Liverpool in the top ten above, right there as the highest non-Arsenal team. That leaves us with one high-profile managerial change which doesn’t show such a clear picture. After David Moyes’ move from Everton to Manchester United, Everton’s headers are down slightly and shots from outside the box are up (Barkley, Mirallas), but it’s not a huge difference. United’s through balls are down a bit, and headers are up, but that trend was already under way under Ferguson:

[Graph: Manchester United, chance types over time]
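
For what it’s worth, both checks described above (the per-type ExpG shares and the split-half repeatability) are straightforward to compute. A minimal sketch, assuming a shot-level pandas table with my own column names rather than anything from the actual data used here:

```python
import pandas as pd

def expg_shares(shots: pd.DataFrame) -> pd.Series:
    """Share of each team-season's total ExpG contributed by each chance type."""
    totals = shots.groupby(["season", "team"])["xg"].sum()
    by_type = shots.groupby(["season", "team", "chance_type"])["xg"].sum()
    return (by_type / totals).rename("share")

def split_half_repeatability(shots: pd.DataFrame, chance_type: str) -> float:
    """Correlate a chance type's ExpG share in the first half of a season
    with its share in the second half, across all team-seasons."""
    totals = shots.groupby(["season", "team", "half"])["xg"].sum().unstack("half")
    picked = (shots[shots["chance_type"] == chance_type]
              .groupby(["season", "team", "half"])["xg"].sum()
              .unstack("half"))
    shares = (picked / totals).dropna()  # columns are the half labels, e.g. 1 and 2
    return shares[1].corr(shares[2])

# e.g. split_half_repeatability(shots, "through_ball")
```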

The Premier League 2003-2013: Points Per League Position

This will be a really short post today. The topic: historic Premier League points totals per table position.

It is most likely that this type of study has been undertaken before, so if it seems like I may have repeated previous work, please don’t get mad. Assume I may not have seen said previous work, send me a link to said previous work, and I will link it at the top of this page.

I have pulled the last ten years’ worth of PL tables (2003/04 to 2012/13) and the points total for each of the 20 positions in the table. It looks like this:

 

Ten Year Table

Pos Mean Mode Median PPG
1 88.6 89 89 2.33
2 81.9 83 83 2.15
3 75.8 75 75 2.00
4 68.8 68 68.5 1.81
5 64.0 65 64 1.68
6 60.8 58 61 1.60
7 57.1 53 56 1.50
8 53.1 49 52.5 1.40
9 51.0 52 52 1.34
10 48.7 47 49.5 1.28
11 46.8 47 47 1.23
12 45.4 46 46 1.19
13 43.7 42 44 1.15
14 42.7 45 43 1.12
15 41.5 41 41.5 1.09
16 39.3 36 39 1.03
17 37.3 35 37.5 0.98
18 35.0 36 35 0.92
19 32.1 30 32.5 0.85
20 25.3 25 26.5 0.67

 

Ten Year Table In Graph Form

Every season previous to 2013/14 is plotted on this graph (grey lines) and the red line is the ten year average.

[Graph: Points_table_10_medium]

Of Interest:

  • The 10 year average says there is likely to be one terrible team cut off from the rest.
  • Top 3 are separated somewhat.
  • The points gap between positions narrows as we move down the table.
  • The two low points for Fourth place were 60 and 61 points, in 03/04 and 04/05.
  • I’m not entirely sure which way to solve this problem, but should any work on “points required to win the title/finish 4th/avoid relegation” use the average points won for 1st/4th/17th, or should we use 2nd + 1 point/5th + 1 point/18th + 1 point?

 

I’ve no idea what the answer to that last question is, but I recall a recent conversation on Twitter about this very subject between @theM_L_G and @JamesWGrayson. So, we have looked at the average number of points that each table position records, and we have seen that information in graph form. The question now is: how does the 13/14 season shape up in comparison to this ten-year average?

Are the top 6 over-performing? Are the bottom three weaker than the ten-year historical average?

Ten Year PPG Pace

 

[Graph: 2014_ppg_historical_medium]

Right now, positions 2-9 are recording points per game at a significantly higher clip than the ten-year average. This over-performance comes at the expense of positions 10-18. Also, the gap between 8th place and 10th place is really something: Newcastle (8th) 36 points, Villa (10th) 24 points.

Has the top of the league become stronger? Have the bottom ten teams become weaker? Are the results in the chart above simply variance of just 22 games played? Possibly.

It is also possible that the 2013/14 season is an outlier, just as it is possible that 2013/14 may be the start (or middle) of a trend which sees the richer, more successful clubs record points totals above historical averages.

I don’t want to dig too far into that today; instead, all the information needed to conduct your own investigations is at the bottom of the page. I’m lazy, see!

Working Out

Show yer working out!

Just copy and paste.

Pos S 12/13 S 11/12 S 10/11 S 09/10 S 08/09 S 07/08 S 06/07 S 05/06 S 04/05 S 03/04 Average
1 89 89 80 86 90 87 89 91 95 90 88.6
2 78 89 71 85 86 85 83 83 83 79 82.2
3 75 70 71 75 83 83 68 82 77 75 75.9
4 73 69 68 70 72 76 68 67 61 60 68.4
5 72 65 62 67 63 65 60 65 58 56 63.3
6 63 64 58 64 62 60 58 63 58 56 60.6
7 61 56 54 63 53 58 56 58 55 53 56.7
8 49 52 49 61 51 57 55 56 52 53 53.5
9 46 52 48 50 51 55 54 55 52 52 51.5
10 46 47 47 50 50 49 52 51 47 50 48.9
11 44 47 47 47 45 46 50 50 46 48 47
12 43 47 46 46 45 43 46 48 45 47 45.6
13 42 45 46 44 41 42 43 47 44 45 43.9
14 41 45 46 39 41 40 42 45 44 45 42.8
15 41 43 43 38 41 39 41 43 42 44 41.5
16 41 38 42 36 36 37 39 42 39 41 39.1
17 39 37 40 35 35 36 38 38 34 39 37.1
18 36 36 39 30 34 36 38 34 33 33 34.9
19 28 31 39 30 32 35 34 30 33 33 32.5
20 25 25 33 19 32 11 28 15 32 33 25.3
Total 1032 1047 1029 1035 1043 1040 1042 1063 1030 1032
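
If you do copy the table above, the ten-year summary statistics are only a few lines of Python away. A sketch (statistics.multimode returns every tied mode, a detail the single-mode column above glosses over):

```python
from statistics import mean, median, multimode

# Points per position, most recent season first; rows copied from the table above
# (only two positions shown here, fill in the rest the same way).
points = {
    1: [89, 89, 80, 86, 90, 87, 89, 91, 95, 90],
    20: [25, 25, 33, 19, 32, 11, 28, 15, 32, 33],
}

for pos, pts in points.items():
    ppg = mean(pts) / 38  # points per game over a 38-match season
    print(pos, round(mean(pts), 1), multimode(pts), median(pts), round(ppg, 2))
```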

Match Simulation: Score Effects and Beyond

During the short time that I’ve been involved in football analytics I’ve learned a few things about match prediction, or more specifically win percentage prediction, which is very interesting from a betting perspective because it allows you to directly compare your own predictions to the bookies’ odds and see if there’s value in a specific bet. As I see it, odds prediction consists of two major parts: predicting the relative strength of the two teams involved in a match, and estimating the likelihood of a certain outcome given this relative strength. This article is about the second part.

It’s common knowledge that, given an ‘expected goals’ value for one team in a match, you can calculate the probability of that team scoring a specific number of goals quite easily by using a Poisson or binomial distribution, which can then be turned into win percentages. This actually gives remarkably good results, but it’s not perfect. It can’t be. It’s ‘only’ simple mathematics, so it assumes that the probability of a goal being scored during the time frame of a match is fixed and independent of other events. We know that this isn’t the case in reality. For example, there’s something called ‘score effects’: the ‘game state’ (in this case the goal difference) influences the probability of scoring, and obviously the probability of scoring eventually influences the game state.

Measuring Effects

After analyzing data from the last four full Premier League seasons I’ve identified some more of these effects, and by putting them together you can see a sort of ‘system’ taking shape that explains/models how a match progresses and that can be used to simulate a match and figure out the chance of a certain outcome (see the simulator sketch at the end of this piece). To do this I’ve divided each of the 1520 matches into 10 sections and measured team performance (ExpG) during each section, comparing different initial game states (in the broadest sense, not just the score). Here’s the theory: assuming a random team at a random time and a random game state, all we know is a theoretical average scoring probability. For any extra ‘information’ (about the team, the game state, etc.) we can measure the effect that it has in terms of how much it causes the probability to deviate from this theoretical average. The probability of scoring is influenced by these (independent!*) effects:

  • Initial, pre-match expected goals (how good the team is on paper, including home advantage etc.). On average this causes a 43% deviation.
  • Time (it’s well known that the amount of goals significantly increases as the match goes on). Average deviation: 14.5%
  • Response to goal difference (score effects): 8.5%
  • Red card state (being a man up or down): 2.5%

This might seem counter-intuitive in the sense that a red card obviously has a much bigger effect on scoring probability, but the chance of the situation occurring in the first place is also taken into account here, and a team being a man short happens less than 10% of the time. Similarly, a goal difference other than 0 only happens about half the time, while the factor ‘time’ itself is always at play.

A note on score effects: I’ve noticed that score effects are much more pronounced in games where the teams are evenly matched. If a team is really dominant (on paper) they seem to stick to their plan and continue to create a similar amount of chances even when ahead. It’s also interesting that the total amount of goals scored has no clear effect on the future probability of scoring. Something can seem like an ‘open game’, but that’s mostly in retrospect, as it has little predictive value.

Finally, you can take this one step further, because the probability of a red card occurring isn’t fixed either. It’s heavily influenced by:

  • Time. Most red cards occur late in the game. Average deviation: 52%
  • Goal difference: the chance of receiving a red card somehow increases by about 50% when a team is trailing by one goal. On average this causes a 14.4% deviation.

At this point I’m really stretching my data, though, and as sample size is becoming a problem, that’s as much detail as I’m daring to go into. The full picture looks like this (the size of the arrows roughly corresponds to the strength of the effect):

[Diagram: Match Simulation]

To test this I’ve built a little “simulator” based on the underlying numbers. It works by taking only initial ExpG values and running through the match in a number of iterations in which the game state influences the scoring probability and the probability (potentially) influences the game state. It does seem to produce reasonable results, although the jury is still out on whether it’s a significant improvement upon Poisson. As far as betting goes, it does have the potential added benefit of being able to quickly run some numbers as the state of the actual game changes (for example after a red card).

*For example: to see the effect of goal difference, the performance I measure is relative to pre-match ExpG and after correcting for the influence of time.
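
Here is a minimal sketch of a simulator along those lines. The structure (ten match sections, with the game state feeding back into the scoring probability) follows the description above, but the time and score-effect multipliers are placeholders I made up for illustration, not the measured deviations:

```python
import random

def simulate_match(expg_home, expg_away, sections=10, runs=20000):
    """Monte Carlo match simulation with a crude game-state feedback loop."""
    outcomes = {"home": 0, "draw": 0, "away": 0}
    for _ in range(runs):
        goals = [0, 0]
        for s in range(sections):
            # Scoring increases as the match goes on (placeholder ramp).
            time_factor = 0.85 + 0.3 * s / (sections - 1)
            for i, expg in enumerate((expg_home, expg_away)):
                diff = goals[i] - goals[1 - i]
                # Score effects: trailing teams push, leading teams sit back.
                score_factor = 1.1 if diff < 0 else 0.9 if diff > 0 else 1.0
                p = (expg / sections) * time_factor * score_factor
                if random.random() < p:  # at most one goal per team per section
                    goals[i] += 1
        if goals[0] > goals[1]:
            outcomes["home"] += 1
        elif goals[1] > goals[0]:
            outcomes["away"] += 1
        else:
            outcomes["draw"] += 1
    return {k: v / runs for k, v in outcomes.items()}

print(simulate_match(1.8, 1.1))  # rough home/draw/away probabilities
```

Swapping the placeholder multipliers for the measured deviations above, plus a red card state, would bring it closer to what is described here.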

Can You Get Away With a Foul Early in the Match?

Referees are sometimes lauded for keeping the cards in their pockets for as long as possible, but are players taking advantage of this? This graph may raise a few eyebrows:

[Graph: Yellow Cards vs Free Kicks]

While free kicks are spread out evenly over the duration of the match, the number of yellow cards increases steadily as the game goes on. This suggests that the chance of getting a card when you concede a free kick increases as well. Unless fouls are in fact steadily getting more reckless, it doesn’t seem like they are judged entirely on their own merits.

Of course there are different reasons for a referee to give a free kick or a yellow card. I’ve looked at minute-by-minute data for every Premier League match since 2009/2010 from whoscored.com, which provides a description along with every free kick or yellow card event. The distinctions made in these descriptions are not terribly specific, but they go a long way. Let’s look at the classification of yellow cards:

[Graph: Yellow Cards Classification]

You can see the same increase in cards as in the previous graph, but it’s clear that the composition changes over time. The share of yellow cards for unspecified reasons (“other”) increases from almost non-existent to close to half of all cards. I can only assume that these are mostly for things like time-wasting, kicking the ball away, dissent, etc., which become more of an issue later in the match. For the most part this explains the increase in the second half, but it still doesn’t explain what happens in the first.

A possible answer can be found in the official rules, which state that “persistent infringement of the Laws of the Game” is also a cautionable offence. That means fouls don’t actually have to be judged on their own merits, and it makes sense to also look at the number of fouls committed by the player that receives the yellow card. In the next graph I’ve separated yellow cards received for “real” fouls (kicking/holding an opponent etc.) into those received for first and for subsequent offences:

[Graph: Yellow Cards First Foul]

Now it’s clear that the chance of getting a card for a single foul is fairly consistent during three quarters of the match, but the opening stages are still an anomaly. The only other explanation I can think of is that referees are conscious of the fact that a card early in the match is a harsher punishment than a card later in the match. In my previous article I calculated this effect for red cards, and to a certain extent it must be true for yellow cards as well. Players already on a yellow run the risk of getting a second and will be more careful about committing fouls in the rest of the match. The numbers show that on average, a player receiving a yellow card will have made 0.59 previous fouls and will only make another 0.36 fouls in the rest of the match. Of course this is skewed by the fact that the average yellow is given after 59% of the match. If we correct for that it’s 0.5 vs 0.42. It’s a minor effect, but it’s there.

To be certain I’ve also looked at handballs, which I expect to be judged on their own merits. Free kicks given for handballs are evenly spread out as well, but the risk of getting a yellow for one is 60% higher in the second half than in the first. All things considered, it looks like it’s true: it is easier to get away with a foul early in the match.
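
The time correction in the foul numbers above can be made concrete in a couple of lines. This is my reconstruction (the exact method isn’t spelled out here, and it lands near, rather than exactly on, the quoted 0.42):

```python
def time_corrected(fouls, fraction_of_match):
    """Rescale a foul count as if the booking had come at half-time,
    assuming a constant foul rate over the match."""
    return fouls * 0.5 / fraction_of_match

before = time_corrected(0.59, 0.59)      # 0.50, as quoted
after = time_corrected(0.36, 1 - 0.59)   # ~0.44, close to the quoted 0.42
```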

DOGSO and Punishment

This week UEFA revealed plans to make a case for an end to the ‘triple punishment’ of a penalty, a red card and a suspension for denying an obvious goal-scoring opportunity in the 18-yard box. It’s true that this punishment often seems harsh at first glance, but this move by UEFA seems like a good time to try to back that impression up with facts. The best way to do this is to assign an expected goals value to all of the factors that are involved, which are:

  • Penalties
  • Red cards
  • Suspensions
  • “Obvious goal-scoring opportunities” (OGSOs)

For example, we know that about three out of four penalties are scored, so we can say that a penalty is worth about 0.75 goals. The other factors are quite a bit harder to determine, though. I’ll even leave suspensions out of the equation altogether, because that would require an accurate measurement of the influence of an individual player on a team’s performance. A bit too ambitious…

Obvious goal-scoring opportunities

“OGSOs” in this case are almost by definition hard to assign a value to, because we’re specifically interested in those that are denied. That means we’re trying to measure the effect of something that didn’t happen. We also know that not all OGSOs are created equal, and that nobody can even agree on an all-encompassing definition. We can, however, look at some typical OGSO situations. For example, there’s the classic one-on-one with the goalkeeper. We have no readily available statistics on this either, but we do have this:

“From 1977 through 1984 the NASL had a variation of the penalty shoot-out procedure for tied matches. The shoot-out started 35 yards from the goal and allowed the player 5 seconds to attempt a shot. The player could make as many moves as he wanted in a breakaway situation within the time frame.” http://en.wikipedia.org/wiki/Soccer_Bowl

This crazy American experiment may turn out to be pretty useful, as it seems to be a decent simulation of a similar situation in a match. As the video below shows, five seconds is not a lot. It puts quite a bit of pressure on the attacker, not unlike having a defender on his heels. As you can see, it’s not at all easy to score.

[youtube id=”uJEnwi7otu0″]

From the available historical data on the internet I’ve gathered that in these kinds of shootouts about 48% of attempts were scored. That means this kind of one-on-one OGSO has an expected goals value of 0.48. I take it that this is the kind of situation UEFA has in mind, but of course there are also cases where it’s not merely an opportunity that is denied, but a (near-)certain goal. Think of Suarez’s infamous handball on the line to deny Ghana in the 2010 World Cup, or a keeper intentionally bringing down an attacker who only has to walk the ball into an empty net. Surely these have an expected goals value of >0.95.

Red cards

That leaves us with the factor of the red card. In theory the effect of a red card on expected goals can be measured well, but it’s a complicated matter:

  • Unlike penalties and goal-scoring opportunities, the effect of a red card isn’t constant over time. A red card in the 85th minute obviously doesn’t leave the opponent much time to capitalize on the advantage, while a red card early in the match can be a huge deal.
  • There’s a risk of confusing correlation and causation. Teams ship more goals after conceding a red card, but worse teams get more red cards anyway, so if the team simply has an off-day they can expect to concede more goals and more red cards.
  • When counting goals after a red card, we should exclude penalties resulting from the same incident, if we want to consider both factors separately.

Mark Taylor has done some interesting work here. As he points out, not only is the value of a red card not constant, it’s not even linear, since on average more goals are scored in the second half than in the first. This means that the rate at which the value of a red card degrades increases a little as the match goes on. I’ve confirmed that this is true even if matches with red cards themselves are excluded (which would be one explanation for this effect). Mark comes up with an expected goals value of 1.45 for a theoretical first-minute red card, but because I’m not entirely sure how he got there (and because double-checking is simply good science) I decided to take a shot at it myself.

I’ve taken minute-by-minute data from 4.5 Premier League seasons and looked specifically at the 204 matches in which exactly one red card was given. For these matches I’ve taken the average number of goals scored by the 11-man team and the 10-man team, both before and after the red card was given. After adjusting for the fact that the average dismissal comes after 66% of the match, taking into account that more goals are scored near the end, and subtracting the value of penalties given for the same incident as the red card (12% of cases), I get a value of 1.08 goals for a red card in the first minute. In this theoretical case, in which they still have to play the entire match, the 11-man team can expect to score 0.61 goals more, and the 10-man team will have to do with 0.47 goals less. If I exclude matches with red cards given before 20% or after 80% of the match has been played (cases which provide too little information to compare events before and after the red card), I still end up with the same number of 1.08.

The Ole Gunnar Solskjaer guide to taking one for the team

Is UEFA right? Well, the graph shows that the combination of a red card and a penalty can be almost four times as valuable as the goal-scoring opportunity that was denied. Harsh indeed! On average it will be about 2.5 times as valuable as a one-on-one situation. This has the nasty effect of making it very tempting for the attacker to go down easily instead of staying on his feet and taking the shot.

[Graph: DOGSO and Expected Goals]

This also serves as a handy guide for defenders. When they’re chasing an attacker who is through on goal, I suggest they refer to these simple rules, which they will now surely keep hidden in their sock, before deciding how to proceed:

  1. As long as you still run the risk of getting both a red card and a penalty, it’s never a good idea to make a foul inside the area…
  2. …Unless you are avoiding a near certain goal and it’s during the last minutes of the match (Suarez did the right thing).
  3. If he’s still outside the box and at least an hour has been played, go ahead and take him out (the Solskjaer special seen below).
  4. If UEFA’s suggested change goes through and you’re still in the first quarter of the match, let him enter the area and then take him out. You’re better off with a penalty than a red card.
  5. Under the new rules, a near-certain goal should be stopped by any means in almost all cases.

The last point makes clear that in reality a distinction would have to be made between DOGSOs and the denial of near-certain goals (DNCG?), and that the triple punishment would still have to apply to the latter. I feel that on average this new rule would be fairer, but I’m afraid that in specific cases there would be even more room for controversy.

[youtube id=”1waQJ3dC5ro”]
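
As a closing sketch, the defender’s decision rules above can be framed as a simple expected-value comparison. The numbers come from this piece (penalty 0.75, one-on-one 0.48, first-minute red card 1.08); the linear decay of the red card’s value is a simplification, since as noted it actually degrades a little faster late on:

```python
PENALTY = 0.75        # expected goals value of a penalty
ONE_ON_ONE = 0.48     # NASL shoot-out conversion rate
RED_CARD_T0 = 1.08    # first-minute red card, per the estimate above

def red_card_value(match_fraction):
    """Remaining value of a red card; linear decay is a simplification."""
    return RED_CARD_T0 * (1 - match_fraction)

def foul_in_box_pays(ogso_value, match_fraction, triple_punishment=True):
    """True if conceding the foul costs less than letting the chance play out."""
    cost = PENALTY + (red_card_value(match_fraction) if triple_punishment else 0)
    return cost < ogso_value

# Suarez vs Ghana: a near-certain goal (~0.95) in the final minute:
print(foul_in_box_pays(0.95, match_fraction=0.99))      # True: take the red
# A one-on-one in the first half:
print(foul_in_box_pays(ONE_ON_ONE, match_fraction=0.3))  # False: let him shoot
```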