Okay, the last couple of posts I've put up have had tables showing how many runs the batting team expects to score starting from a given situation. You can do that math another way, and figure out how many runs the defending team expects to give up starting from a given situation, instead. This only makes a difference if you look at it team-by-team; league-wide, the "hitter" and "pitcher" run expectancies will be the same: every run a batter scores is a run some pitcher gave up. And, yes, I know this isn't really "pitcher" run expectancy so much as "defensive" run expectancy, since fielding plays a huge role (a huge negative role in our case). But enough preamble. Here's the Nats' 2009 pitcher (or defensive) run expectancy:
| Runners | no out | 1 out | 2 out |
|---------|--------|-------|-------|
Do you see teh suk? Right off, the Nats gave up 0.59 runs per inning (starting from no out, no one on) in 2009. League average is 0.48, and the Nats scored 0.47... Some more comparisons after the jump, including the Horrifying Truth About Sac Bunting.
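For anyone curious how a table like this gets built: you take every base/out state a defense faced, and average the runs that scored from that point to the end of the inning. A rough sketch below, with made-up play-by-play records (not the real 2009 data, which would come from something like Retrosheet):

```python
# Sketch of building a run-expectancy table from play-by-play records.
# Each record is (base_state, outs, runs_scored_from_here_to_end_of_inning).
# The records below are invented for illustration only.
from collections import defaultdict

plays = [
    ("---", 0, 1), ("---", 0, 0), ("---", 0, 0),  # bases empty, no outs
    ("1--", 0, 2), ("1--", 0, 0),                 # runner on 1st, no outs
    ("--3", 2, 1), ("--3", 2, 0),                 # runner on 3rd, two outs
]

totals = defaultdict(lambda: [0, 0])  # (state, outs) -> [runs, visits]
for state, outs, runs in plays:
    totals[(state, outs)][0] += runs
    totals[(state, outs)][1] += 1

# Run expectancy = average runs scored from each state onward
run_exp = {k: r / n for k, (r, n) in totals.items()}
print(run_exp[("---", 0)])  # avg runs from bases empty, no outs
```

Do the same thing with the pitching team on defense instead of the batting team at the plate, and you get the table above.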
Let's take a look at the difference between the table above and the 2009 NL run expectancy. In other words, this is how many runs more than league average a team could expect to score against the Nats in 2009 in each base/out state. (Positive number = Nats R teh suk; negative number = Nats R teh aw3s0m3.)
| Runners | no out | 1 out | 2 out |
|---------|--------|-------|-------|
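Where the "full run per game" figure comes from, as a quick sanity check (the 0.59 and 0.48 are the bases-empty, no-out numbers quoted before the jump):

```python
# The "extra run per game" arithmetic: bases-empty, no-out difference
# scaled up to nine innings. Both inputs are quoted in the post.
nats_re = 0.59      # Nats defensive RE, bases empty, no outs
league_re = 0.48    # 2009 NL average, same state
extra_per_inning = nats_re - league_re
extra_per_game = extra_per_inning * 9
print(f"{extra_per_inning:.2f} runs/inning, {extra_per_game:.2f} runs/game")
```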
The bottom line is what we pointed out before the jump: NL teams scored 0.11 runs per inning more than average just by playing the Nats--that's nearly a full run per game! The interesting part is how the extra runs are distributed: the Nats gave up extra runs, or were roughly comparable to league average, in almost every situation except the big no-out scoring opportunities (bases loaded with no outs, runner on third with no outs, runners on second and third with no outs). It's tempting to just wave the "Nats R not teh clutch" flag, as if they could only prevent runs when there were no outs, or something.
But what's really going on here is the small number problem. A runner on 3rd with no outs is pretty uncommon, and the Nats probably got lucky the few times it came up--especially when you consider that they didn't do any better than average with a runner on second and no outs (a much more common situation). Checking the numbers, the "--3", "-23", and "123" no-out situations each came up fewer than 30 times over the course of the season. Sad to say, but what we're seeing here is a couple of fluky islands of good in a vast ocean of awfulness.
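Flagging the flukes is easy once you have the state counts. A sketch (the counts here are invented placeholders, except for the fact, stated above, that those three no-out states each came up fewer than 30 times):

```python
# Flag base/out states observed too rarely to trust their RE estimate.
# Counts are illustrative placeholders, not the actual 2009 tallies.
state_counts = {
    ("--3", 0): 24,
    ("-23", 0): 18,
    ("123", 0): 22,
    ("1--", 0): 410,  # runner on 1st, no outs: very common
}

MIN_SAMPLE = 30
small_samples = [s for s, n in state_counts.items() if n < MIN_SAMPLE]
print(small_samples)  # the three fluky no-out states
```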
And what a lot of awfulness there is! Look at the run expectancy for no out, man on first: 0.99! On average, it's like the leadoff hitter always scored if he got on base (league average is about five times out of six, but still...).
The Nats were so bad they made bunting look good!
It's true. An NL-average team lowers its run expectancy by 0.18 runs by sac bunting the leadoff runner to second. Against the Nats, teams only lower their run expectancy by 0.10 runs (the Nats will let that guy score no matter how many outs you waste). This spurred me to take a look back at the Nats' hitter run expectancy, and I found an odd, horrifying pattern: in 2009, the Nats would only lower their run expectancy by 0.02 runs if they sac-bunted their leadoff runner. For some reason, the Nats were worse than average at scoring from first, but better than average from second--is Riggles some sort of evil genius? Maybe Dunn really was working on his speed game!
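The bunt comparison is just two table lookups: the RE change from trading the out for the base. A sketch with the deltas quoted above (the 0.99 input is the Nats figure from the post; the other RE inputs are illustrative placeholders chosen to be consistent with the quoted deltas, not real table values):

```python
def bunt_cost(re_first_no_out, re_second_one_out):
    """RE change from a successful sac bunt: runner on 1st with no outs
    becomes runner on 2nd with one out. Negative means the bunt costs runs."""
    return re_second_one_out - re_first_no_out

# League-average defense: bunt costs ~0.18 runs (placeholder inputs)
print(f"{bunt_cost(0.90, 0.72):.2f}")
# Vs. the Nats' defense: 0.99 is from the post; bunt only costs ~0.10 runs
print(f"{bunt_cost(0.99, 0.89):.2f}")
```

The closer that number gets to zero, the less a team gives up by bunting--which is how a defense this bad "makes bunting look good."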
That's it for run expectancy until July. Next week: back to WAR!