For Leafs fans, the upcoming season will be an important one. Though it is (once again) extremely unlikely that the Leafs could win the big silver beer stein on offer at the end of the postseason tournament, fans of the team will be watching very closely for signs that any of the existing questions about the team might be answered. We’ll dig through the statistics like the oracles of old pawed through goat entrails, looking for evidence that augurs well for a brighter future. It is pretty safe to assume that Brian Burke and his staff will be engaging in a similar process.
Many of those questions concern individual players: what, for example, can we realistically expect from players like Jonas Gustavsson, Luke Schenn, Tyler Bozak and Nikolai Kulemin, all of whom are approaching their likely peak athletic potential in the next few years? Other questions concern more collective issues: what improvement can we expect from the Leafs’ power-play and penalty-killing units?
All of those questions merit discussion, but they all concern the players. With Ron Wilson entering his third season as Maple Leafs head coach, and keeping in mind that last season in particular represented a disappointing step backwards, it’s safe to say that questions must also remain about the suitability of the current coaching staff for the task ahead.
One of the things I like most about the hockey blogosphere is the very strong tendency to attempt to quantify and measure these sorts of issues, to make them concrete and expressible. When we speak of “issues” and “questions” about the coaching staff, the reality is that there must be some set of performance metrics against which it is reasonable to measure the observed outcome of this season, in an effort to dispassionately judge whether the coaches are making a discernible difference in the team’s play (and whether that difference represents an improvement).
Statistical analysis isn’t my strong suit, and I don’t pretend to have the facility with numbers that many other hockey bloggers have ably demonstrated, but I thought I’d try my hand at cobbling together an answer to this last question: what types of numbers should we look for when attempting to grade Messrs. Wilson, Hunter and Acton at the end of this season? Please accept this analysis for what I hope it is: a starting point for the discussion, and a jumping-off point for others with the statistical chops that are absent from my toolkit. Criticisms, comments and refinements are welcome – put ’em in the comments below!
I wish I could figure out a way to embed the tables I compiled directly into this post, but two hours of futzing about with Google, Google Docs, WordPress, Excel and Numbers failed to surrender any such secrets, assuming they exist. Unfortunately, therefore, I have to settle for a link to the table I compiled. All data are sourced from hockey-reference.com.
I thought the most logical place to start in assessing the performance of the coaches would be year-over-year changes in goals for and goals against. I compiled the goals for and goals against data for all 30 teams in each season since the lockout, and calculated the percentage change in each from the previous year. I then tried to normalize the percentage change data by calculating the average change and the standard deviation for each year. Finally, I flagged those results that lie between one and two standard deviations away from the mean (classified as “moderately exceptional”), and those results that lie two or more standard deviations away from the mean (classified as “significant”).
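For the statistically inclined, here’s a minimal sketch of that calculation in Python. The goals-against totals below are invented purely for illustration; the real inputs would be the 30-team season totals from hockey-reference.com.

```python
# A rough sketch of the year-over-year calculation described above.
# The goals-against totals here are illustrative, not real data.
from statistics import mean, stdev

def yoy_pct_change(prev_season, this_season):
    """Percentage change in a stat (GF or GA) from one season to the next."""
    return (this_season - prev_season) / prev_season * 100

# (team, last season's GA, this season's GA) -- invented numbers.
teams = [("TOR", 293, 267), ("WSH", 245, 233), ("OTT", 237, 252)]
changes = {team: yoy_pct_change(prev, curr) for team, prev, curr in teams}

league_mean = mean(changes.values())
league_sd = stdev(changes.values())
print(f"mean change: {league_mean:+.1f}%, standard deviation: {league_sd:.1f}%")
for team, change in changes.items():
    print(f"{team}: {change:+.1f}%")
```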
Link to Google docs spreadsheet re: YOY data: change in GF and GA
Assuming that the year-to-year changes are normally distributed (and if I remember my statistics class correctly), the interesting results are those that fall more than one or two standard deviations from the mean. Those are the results I mentioned above, with moderate desirable changes marked in light green, significant desirable changes in dark green, moderate undesirable changes in pink, and significant undesirable changes in red.
If I’m reading the data correctly, the standard deviation of the goals-against data is typically between about 9 and 12 per cent. Thus an increase or decrease of anything less than 9 to 12 per cent represents, statistically speaking, the mushy random middle: the roughly 68% of results that cluster around the mean in a normal distribution. If I am applying the theory correctly, it would be unwise to conclude that the team’s performance had either improved or deteriorated based on a change of that size. To make even a weak judgment about a real difference in performance, we would need to observe an increase (or reduction) of between roughly 9-12% and 18-24% (the results between one and two standard deviations from the mean). Variances of more than 18-24% from last year’s data could more confidently be said to indicate genuinely different performance.
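To make that bucketing concrete, here’s one way to express the colour-coding rule in code. Treating the league-average change as zero and the standard deviation as a flat 10% is a simplification on my part, purely for the sake of the example.

```python
# Bucket a team's percentage change by how many standard deviations it
# sits from the league mean -- the same rule as the spreadsheet colours.
def classify(change_pct, league_mean, league_sd):
    z = abs(change_pct - league_mean) / league_sd
    if z >= 2:
        return "significant"
    if z >= 1:
        return "moderately exceptional"
    return "mushy random middle"

# Assuming a league mean of 0% and a standard deviation of 10%:
for change in (-5.0, -15.0, -25.0):
    print(f"{change:+.0f}% GA change -> {classify(change, 0.0, 10.0)}")
```

On those assumptions, a 5% drop in goals against is statistically unremarkable, while a 25% drop would clear the two-standard-deviation bar.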
Two thoughts come to mind. First, the perhaps obvious but important point: increases or decreases in a team’s goals for or goals against are not solely attributable to coaching. In fact, it’s probably a live question whether coaching can be said to have a demonstrable effect upon the results at all. Certainly, the old saw is that “you can’t teach scoring,” though it is generally believed that coaches and their systems can and do have a more pronounced effect upon the defensive side of the game (and, by extension, the goals-against ledger). If anyone has any thoughts on how to examine the evidence in that regard, I’d love to hear about it.
Second, the numbers involved are fairly large. The data seem to be telling us that wide variances in the numbers may be expected from year to year for purely random (or at least statistically uninformative) reasons.
If that last conclusion is correct, then unless there is an enormous change in the Leafs’ goals-against total this year (more than +/- 20%, which in practice would translate into about a 54-goal swing either way), we ought not to make any judgements about the performance of the coaching staff based upon these numbers.
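As a sanity check on that arithmetic, here’s the conversion from percentage thresholds to raw goal swings. The baseline of roughly 270 goals against is my approximation for the Leafs’ most recent season total, not a sourced figure; swap in the exact number as needed.

```python
# Convert the percentage thresholds into raw goal swings, assuming a
# baseline of roughly 270 goals against (my approximation, not a sourced
# figure -- swap in the exact total as needed).
baseline_ga = 270

for pct in (9, 12, 18, 24):
    print(f"{pct}% of {baseline_ga} GA = {baseline_ga * pct / 100:.0f} goals")

# A 20% swing is about 54 goals -- the figure quoted above.
print(f"20% swing = {baseline_ga * 0.20:.0f} goals")
```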
Thoughts?
All I can say is that that trend had better improve for us this year, and significantly. We have to decrease GA by at least 35-40 goals if we want to seriously compete, and in order to move up the standings we need to at least make a start on improving GF, which we all know is a big ? this season. As to what all that means re: coaching, I just have no clear fucking idea. When I think of our coaching, it’s like a mystery wrapped up in an enigma – we need a Seer to figure it out.
Other than that, I totalled up the five-year differential for Washington, and they moved 154 goals in the right direction in that time frame.
Also, looking at the Sens’ trend lines puts them where we were about 2 or 3 years ago, confirming that their era of talent from high drafting is coming to a close. They are going to get worse in the next few years, and I wouldn’t be surprised at all if, at the very least, they finish out of the playoffs this year. The Habs are in the same place as well – the loss of Halak might be an even bigger crumbling brick in the facade than we hope. Woohoo!
Actually, the more I think of it, I believe trying to assess coaching efficiency/effectiveness is either like trying to catch a greased pig, or something that can only be seen clearly in hindsight, after the existing regime is gone. I mean, we generally know when a coaching staff has egregiously fucked up their jobs, but in general it’s very hard to assess how good or how bad they are. Cases in point:
1. Is Boudreau brilliant or merely a placeholder for a spectacular amount of talent? Would a better coached team have blown through the Habs last playoffs?
2. Is Quinn in Edmonton a terrible coach, a solid coach in a totally fucked-up situation with little talent, or does he simply not fit there? Or is his style not suitable for the new cap era of hockey?
3. Holland is another one: is he one of the best coaches in the league, or just the lucky recipient of a very sturdy and well-planned development system that also had some good/lucky scouting, along with a very, very clear team philosophy?
I think the only way we’ll be able to assess our present coaching regime is in hindsight, and even then it will be in large part a guessing game.
Good article. If the GA and GF aren’t reflective of coaching, then my guess is they might have something to do with Vesa Toskala and a forward group consisting exclusively of third-liners early last season. I think both numbers will improve this season, which hopefully translates into a playoff spot.
Great piece. I’m inclined to view GF/GA as a product of on-ice talent rather than coaching. I prefer to assess coaching by examining preparation, both over the season and per game, individual assignments, understanding of player assets and limitations, player response to coaching and pressure, maintaining a lead, playing full games, and the success or failure of implemented systems. For instance, Wilson likes to force point shots defensively, and have all his players block shots as often as possible. In theory this is good selfless play; in practice it gets your players injured more frequently. I doubt Crosby and Malkin are asked to do this as much, nor would they be publicly called out for not doing it. Keith Acton is supposedly responsible for special teams. How frustrating was it watching Kaberle at the point and Kessel at the half-boards last year? It was an exercise in futility, yet they stuck with the plan too long, IMO. Wilson has more wiggle room than Acton this season, afaik, but both need to realize that they won’t be around long if last year happens again.
Thanks, everybody. I’ve always felt that GF/GA are unlikely to be useful barometers for judging the coaches, for the reasons many of you have identified – the Toskala factor, the obvious impact of the players on the ice. I like to try and set the rules for judging performance in advance, though.
The way I read these data, the message is that no matter what happens this season, “no fair bitching about increases in GA or a failure to score and blaming it on the coaches”. I hope to take a similar look at the PK/PP and try to figure out what we can learn there too.