So...are you all ready for this?
Going back to my "annoyed at simpletons" post: commenters Zach Martin and Berkowit28 both questioned my explicitly deeming fielding prowess "negligible" in the analysis. Thanks to their comments, I've thought it through a bit more and amended my earlier analysis.
But before I walk through the amendment, I need to clarify one thing: the original post did NOT deem fielding ability negligible to a team's welfare (although this post partially does); it deemed fielding ability negligible to determining the validity of the "everyday player for MVP" argument. This distinction may seem subtle, but it's critical, so I hope it's understood.
Anyhow, as I thought it through, I identified three issues. Two support my contention that fielding ability is negligible, while one supports Zach & Berko's contention that it should be considered. Here are the issues:
Issue 1: Impact of Good vs Bad Fielding Itself
I believe a player's hitting (or pitching) abilities have far greater impact on his team's welfare than his fielding abilities. Stated another way, the difference in impact between the best fielder in baseball (at a given position) and the worst is dwarfed by the difference in impact between the best and worst hitter at those positions. I don't know much about sabermetrics, but I think I remember Moneyball making a similar point. Here's an attempt to measure the impact of excellent vs poor fielding at two selected positions (Shortstop and Left Field):
| Player | Fielding % | Avg Chances for SS's | Avg Errors/Season | Runs Cost/Error | Total Runs Cost/Season |
|---|---|---|---|---|---|
| Rollins | 0.988 | 617 | 7.4 | 0.61 | 4.5 |
| Hanley Ramirez | 0.967 | 617 | 20.4 | 0.61 | 12.4 |
| Player | Fielding % | Avg Chances for LF's | Avg Errors/Season | Runs Cost/Error | Total Runs Cost/Season |
|---|---|---|---|---|---|
As Zach stated, it's basically impossible to accurately quantify fielding ability. But since I know of no better metric, we're using fielding %. Since each position is unique in both average fielding % and # of chances, each position should be treated separately. Above I consider only SS and LF (feel free to do the other 7 positions on your own).
Let's look at shortstops first. Among them, Rollins had the best FLD% (0.988) and Hanley Ramirez the worst (0.967). Normalize by the shortstops' average of 617 chances per season to factor out differences in playing time, and you get 7.4 and 20.4 errors, respectively. Now the question is, how often does an error lead to a run? I have no idea. My best guesstimate is to take the MLB average of 60 unearned runs allowed per team and divide by the MLB average of 99 errors per team, arriving at about 0.61 runs per error. It would then follow that Ramirez's fielding would cost the Marlins 20.4 x 0.61 = 12.4 runs over the season, and Rollins's would cost the Phillies 7.4 x 0.61 = 4.5 runs. So by this methodology, over the course of a season, the worst-fielding shortstop would cost his team 8 runs more than the best-fielding shortstop. The same methodology for LF yields a difference of about 5 runs.
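If it helps, here's the shortstop arithmetic as a quick sketch (all numbers are the ones above; note that rounding only at the end gives 12.3 rather than 12.4 for Ramirez):

```python
# Rough sketch of the shortstop error-cost estimate above.
# 617 chances/season and 60 unearned runs per 99 errors are the
# league-average figures used in the post.
AVG_CHANCES = 617
RUNS_PER_ERROR = 60 / 99  # ~0.61 unearned runs per error (guesstimate)

def runs_cost(fielding_pct):
    """Season errors at this fielding %, times the runs-per-error rate."""
    errors = AVG_CHANCES * (1 - fielding_pct)
    return errors * RUNS_PER_ERROR

best = runs_cost(0.988)   # Rollins, best SS fielding %
worst = runs_cost(0.967)  # Hanley Ramirez, worst SS fielding %
print(round(best, 1), round(worst, 1), round(worst - best, 1))  # ~4.5, ~12.3, ~7.9
```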
Now let's look at hitting. This metric thankfully is done for us via Bill James' Runs Created statistic. The difference between the best-hitting SS (Hanley Ramirez, with 136 RC's) and the worst-hitting SS (Bobby Crosby, with 55 RC's) is 81 runs. At LF the difference is 51 runs (Manny's 145 RC's minus Fred Lewis' 94). Coincidentally, in both cases that's almost exactly 10x the impact of their fielding (no, I didn't backwards engineer this).
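Purely as a sanity check on that "almost exactly 10x" observation, using the gaps above:

```python
# Runs Created gaps from above, divided by the fielding-run gaps
# (~8 runs at SS, ~5 at LF) estimated in Issue 1.
rc_gap_ss = 136 - 55  # Hanley Ramirez minus Bobby Crosby
rc_gap_lf = 145 - 94  # Manny Ramirez minus Fred Lewis
print(rc_gap_ss, rc_gap_lf)          # 81 and 51 runs
print(rc_gap_ss / 8, rc_gap_lf / 5)  # both land right around 10
```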
In summary, this asserts that hitting ability has about 10 times the impact as fielding ability (again, feel free to do a similar analysis for pitchers. I will choose to skip this and assume it's roughly equal to hitting). I thus feel that this Issue 1 supports my contention that fielding is negligible compared to hitting or pitching.
Issue 2: Relative Amount of Time in Field
This is the factor that I feel supports Zach & Berko's contention. Similar to how I gauged the % of a team's PA's that a player was directly involved in (i.e., either at bat or on the mound), I realize it's important to do the same thing for fielding. Here it is:
| | HI's (half-innings)/game, both teams | HI's/game with player fielding | % of HI's/game with player fielding | Games player plays in | % of team's games player plays in | % of team's HI's with player fielding |
|---|---|---|---|---|---|---|
| Hitter | | ~8.5 | 46% | | 98% | 45% |
| Pitcher | | | | | | 7.8% |
As we know, a pitcher fields only when he's on the mound, so over the course of a season he fields in the same percentage of his team's plate appearances as he pitches in. This is confirmed above: the 7.8% figure matches the one from the previous post. A hitter, however, fields in a far greater percentage of his team's PA's - roughly 8.5 half-innings per game, or 46% of the PA's in a given game. 46% x 98% (the share of his team's games he plays in) = 45% of the PA's across his team's games. Compare this with the pitcher's 7.8%, and to me this says that over the course of a season, hitters are playing defense 5.7 times as often as pitchers. I feel this is a significant factor I failed to consider the first time around.
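Same sketch treatment for the time-in-field numbers (the ratio actually comes out just under 5.8 if you don't round the 45% first):

```python
# Hitter: fields ~46% of each game's half-innings, plays ~98% of games.
# Pitcher: fields only in the 7.8% of team PA's he's on the mound for.
hitter_share = 0.46 * 0.98  # ~0.45 of the team's half-innings
pitcher_share = 0.078       # from the previous post
print(round(hitter_share, 2), round(hitter_share / pitcher_share, 1))  # ~0.45, ~5.8
```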
Issue 3: MVP criteria
I'm not gonna try to quantify this, but my belief is that when selecting an MVP (or Cy Young), hitting and/or pitching stats dominate the voters' criteria. Being a poor or great fielder can help solidify a decision, but I don't think it's ever the primary criterion. Thus, in the context of whether it's valid to assert that an everyday player deserves more MVP consideration than a starting pitcher, I feel their respective fielding abilities are at best secondary.
So what does this all mean? I'm starting to get a little dizzy myself, but here's my conclusion:
| | % of team's PA's directly involving player | Relative impact of fielding vs hitting | Relative time in field, compared to pitchers | Overall impact of fielding | Overall potential impact of both fielding and hitting/pitching |
|---|---|---|---|---|---|
| Hitter | 5.3 | 10% | 5.7 | 3.0 | 8.3 |
| Pitcher | 7.8 | 10% | 1.0 | 0.8 | 8.6 |
Looking at hitters first, we take the 5.3% figure from the previous post (I'll drop the percentage unit from here on, since after this calculation it no longer applies). This represents the potential impact due solely to the player's time at bat. Multiply it by both the 10% factor (fielding being 10% as impactful as hitting, per Issue 1) and the 5.7 factor (hitters playing the field 5.7 times as often as pitchers, per Issue 2), and you arrive at an overall fielding factor of 3.0. Add the existing 5.3 from the previous post to this 3.0 fielding factor to get an overall impact factor of 8.3.
If you do the same exercise for pitchers, you get an overall impact factor of 8.6. Compared against the hitters' 8.3, it looks like once you consider both impact at the plate/mound and playing defense, hitters have raised their potential impact to just under that of pitchers, largely because they're on the field longer, but the "everyday player" MVP argument remains invalid. (Also keep in mind that Issue 3 isn't considered in these calculations; including it should once again widen the gap between hitters' and pitchers' impact.)
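For completeness, the conclusion's arithmetic in one place (the 10% weight is Issue 1's fielding-vs-hitting impact, the time-in-field ratio is Issue 2's; pitchers get a ratio of 1 since they're the baseline):

```python
def overall_impact(base, time_in_field_ratio, fielding_weight=0.10):
    """Base impact factor plus the fielding add-on."""
    return base + base * fielding_weight * time_in_field_ratio

hitter = overall_impact(5.3, 5.7)   # 5.3 at bat + 3.0 fielding
pitcher = overall_impact(7.8, 1.0)  # 7.8 on mound + 0.8 fielding
print(round(hitter, 1), round(pitcher, 1))  # 8.3, 8.6
```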
So there you go. I know there are immeasurable deficiencies in how I've quantified these things (catch the irony there?), plus I had to crank this post out pretty quickly, so please check my math. I hope someone out there reads through and follows what I'm trying to say. I'd love to hear your critiques (but if you do, please include at least some specifics - that means you, Bobmac!), comments, or questions. And thanks Zach & Berko for making me dig a little deeper.