Friday, December 26, 2014

"Unskewing" Polls of the 2015 Baseball Hall of Fame

Want to know who'll win an election? Take a poll. That's what dedicated election watchers like Darren Viola and Ryan Thibs do, in effect, for the Baseball Hall of Fame every year around this time. Their methodology is simple: find and record every Hall of Fame ballot released by a BBWAA writer on Twitter or explained by them in a column. The end results are Viola's toplines (which, like a political poll, list each candidate's simple overall percentage) and Thibs's crosstabs (which break down how specific voters voted).

The results of this poll are a great starting point for predicting who will make the Hall every year—but they're not perfect. Like any poll, this one has a margin of error. In politics, you would never go around quoting raw polling data as final; the results must be weighted to correct for sampling bias, creating a final snapshot that's representative of the whole voting pool. It turns out the same adjustments are necessary for polls of the Hall of Fame.

For the past few years, I've calculated these adjustments and used them to project final Hall of Fame vote totals. As it turns out, ballot aggregators consistently over- and underestimate certain candidates (i.e., players) by predictable margins. This makes sense—the polling sample is self-selected, and the kinds of voters who value transparency and choose to release their ballots (thus opening themselves up to all sorts of vitriol on Twitter and in comments sections) are very different from those who clam up. Generally, writers who make their ballots public skew more progressive, overstating support for steroid-tainted candidates (e.g., Barry Bonds and Roger Clemens) as well as those with more subtle, sabermetrics-based cases for induction (e.g., Tim Raines and Mike Mussina). The casters of private ballots, on the other hand, are more likely to be conservative voters—less likely to be on Twitter (a major medium for sharing ballots) and less likely to even still be covering baseball (it's hard to explain your ballot when you no longer have column inches to devote to it). This explains why private ballots will give old-school candidates like Lee Smith and Don Mattingly a significant boost in the final results as compared to the polls.

We know this because we've seen this public/private divergence year in and year out, and we can calculate each player's exact deviation from the polls by looking at those historical results. The gory details can be found at the bottom of this post, but in short, all we have to do is calculate a player's historical public-versus-private disparity, add it to or subtract it from the player's current polling numbers, and combine the public ballots we know about with the private ballots we're expecting. Voilà—a more accurate forecast of the final vote totals that will be unveiled on January 6.

Below are my final Hall of Fame projections as of January 6, 2015. As of that date, 204 public ballots had been polled out of my projected final turnout of 570. Currently, I'm projecting Pedro Martínez, Randy Johnson, John Smoltz, and Craig Biggio to be elected to the Hall. Mike Piazza currently looks to fall just short despite a 76.0% showing in the polls, thanks to a negative adjustment factor, although he's so close that a small error in my calculations could easily put him in. Meanwhile, I expect Sammy Sosa, Nomar Garciaparra, and Carlos Delgado to all (unfortunately) fall off the ballot.

I'll update this page daily with new, up-to-the-minute projections as more public ballots become known, so check back often.

If you're still with me, here is my methodology for all of the above. To find each player's adjustment, I compared 2014, 2013, and 2012 Hall of Fame polls from Viola and Twitter user @leokitty to the final results released by the BBWAA in order to figure out how private ballots voted. (For these numbers year by year, see this Google spreadsheet.) I took a simple average of each player's difference between public and private ballots in those three years. (Last year, multiple people suggested to me that I should weight more recent data more heavily in calculating my adjustments; this was a good idea, but when I went back and calculated it in a post mortem of my projections, a straight average was actually more accurate.) Then I added or subtracted that average deviation to/from players' polling numbers this year and extrapolated a projection for the final vote. I assumed that final turnout will be 570 BBWAAers (it has been very consistent at 571, 569, and 573 the past three years) and weighted public and private ballots proportionally—so, as more and more public ballots are released, they will assume a greater and greater share of the final projection, and private ballots will matter less and less. This will also reduce the error in my forecasts; obviously, as public ballots approach 100% of the total ballots, the effect of private ballots will shrink to approach zero. (In other words, in some magical land where every BBWAA voter announces his or her ballot in advance, there would be no polling error because every vote has been pre-counted.)
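The blending step above can be sketched in a few lines of code. This is just an illustration of the math, not my actual spreadsheet—the player percentage and deviation below are placeholder numbers, not real polling data.

```python
TOTAL_BALLOTS = 570   # assumed final turnout
PUBLIC_BALLOTS = 204  # public ballots polled so far

def project_final_pct(public_pct, avg_deviation):
    """Blend a player's public polling % with his expected private-ballot %,
    weighted by each group's share of the total ballots cast.

    avg_deviation is the simple average of (public % - private %) over the
    last three elections, so subtracting it estimates private support."""
    private_pct = public_pct - avg_deviation
    private_ballots = TOTAL_BALLOTS - PUBLIC_BALLOTS
    return (public_pct * PUBLIC_BALLOTS
            + private_pct * private_ballots) / TOTAL_BALLOTS

# A hypothetical player polling 76.0% publicly who has historically run
# 4 points worse on private ballots:
print(round(project_final_pct(76.0, 4.0), 1))  # → 73.4
```

Note how the public share (204/570 here) controls the weighting: as more public ballots are released, `PUBLIC_BALLOTS` grows and the private-ballot adjustment matters less and less, exactly as described above.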

For some players, 2015 is their first year on the Hall of Fame ballot, so there was no historical deviation to calculate. After two fancy attempts to guess first-timers' adjustment factors fell flat in 2013 and 2014, I decided to keep it simple this year. The one overriding pattern for first-timers is that public ballots tend to overstate them by a few points—specifically, they did an average of 5.1 points worse on private ballots than on public ballots the last two years. Therefore, this year, I docked each rookie candidate 5.1 points from the polls.

Saturday, December 20, 2014

Don't Vote for Pedro: A Strategic-Voting Guide to the Hall of Fame

This year, I was excited and honored to be accepted into the Internet Baseball Writers Association of America (IBWAA), an alternative to the Baseball Writers' Association of America (BBWAA) for bloggers and other web-based baseball writers. I'm a big fan so far; the IBWAA is open and accepting, it's full of knowledgeable baseball scribes I look up to, and it allows members to participate in BBWAA-style elections of their own—including for the Hall of Fame. As a result, on December 15, I submitted my first Hall of Fame ballot of any kind. My ballot was thoroughly researched, cross-referenced, and revised, and I'm now confident it represents my ideal vision of who should enter the Hall.

It's also completely different from how I'd want actual Hall of Fame voters to vote.

Wait, what?

Unfortunately, due to complicated and outdated Hall of Fame election rules, the list of players most deserving of the Hall of Fame and the ideal Hall of Fame ballot are two different things. This is especially true if, like me, you believe a lot more people deserve to be enshrined in the Hall than the 10 that the BBWAA allows its members to vote for. On the "big Hall vs. small Hall" spectrum, I'm pretty far to one extreme; there are 24 players on the BBWAA and IBWAA ballots that I believe should be Hall of Famers. In order of most slam-dunk to most borderline case, they are:

1. Barry Bonds
2. Roger Clemens
3. Randy Johnson
4. Pedro Martínez
5. Jeff Bagwell
6. Mike Piazza*
7. Curt Schilling
8. Mike Mussina
9. Tim Raines
10. Alan Trammell
(11. Barry Larkin)*
12. Craig Biggio*
13. Mark McGwire
14. Edgar Martínez
15. Larry Walker
16. John Smoltz
17. Sammy Sosa
18. Gary Sheffield
19. Jeff Kent
20. Fred McGriff
21. Brian Giles
22. Lee Smith
23. Nomar Garciaparra
24. Carlos Delgado

* Piazza and Biggio aren't on the IBWAA ballot, since the IBWAA "elected" them in previous years. However, Barry Larkin is on the IBWAA ballot, since he has never reached 75% with that organization, as he did with the BBWAA in 2012.

For the IBWAA ballot, which has no direct consequences for actually deciding players' fate, voting solely on the baseball merits will do just fine—hence the ballot I submitted of Bonds, Clemens, Johnson, Pedro, Bagwell, Schilling, Mussina, Raines, Trammell, Larkin, McGwire, Edgar, Walker, Smoltz, and Sosa. (The IBWAA imposes a cap of 15 votes per ballot, not the 10 allowed by the BBWAA.) But if I had an actual Hall of Fame vote, my ballot would look much, much different.

That's because baseball merits aren't the only consideration. As politically minded readers know, it's equally important to maximize the impact of your vote. A vote is much more likely to make the difference in a close election than in a landslide. For Hall of Fame voters who, like me, can't fit all their preferred candidates onto one ballot, it's important to vote strategically.

Due to the unique setup of the Hall of Fame election, there are exactly two important numbers: 75% (the percentage required for induction) and 5% (the minimum percentage a player must receive to not be dropped from the ballot next year). As a result, the election is only suspenseful for—and your vote will only matter for—players who flirt with these marks. Thus, any BBWAA voter agonizing over which deserving players to prioritize on his or her ballot should start with those expected to fall inside two important ranges: between about 70% and 80% of the vote and below 10%.
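The two-threshold rule above amounts to a simple filter over projected vote shares. Here's a minimal sketch—the candidate names and percentages are made up for illustration:

```python
def vote_matters(projected_pct):
    """True if a vote for this candidate is likely to make a difference:
    either he's on the cusp of the 75% election threshold, or he's in
    danger of missing the 5% survival cutoff."""
    on_the_cusp = 70 <= projected_pct <= 80
    endangered = projected_pct < 10
    return on_the_cusp or endangered

# Hypothetical projections: a lock, a cusp case, a mid-ballot case,
# and an endangered candidate.
candidates = {"Lock": 88.0, "Cusp": 76.0, "Stuck": 35.0, "Endangered": 8.5}
print([name for name, pct in candidates.items() if vote_matters(pct)])
# → ['Cusp', 'Endangered']
```

The lock and the mid-ballot candidate both fail the filter for the same reason: their outcomes are already decided, so a strategic voter's slot is better spent elsewhere.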

Based on last year’s results as well as early "exit polls" of this year’s ballots, there are only three players sitting on the cusp of 75%, whose Hall of Fame fate is in real suspense: Smoltz, Biggio, and Piazza. This trio should be the starting point for any strategic ballot. Much lengthier is the list of candidates who are at risk of missing the 5% cutoff. Last winter, Kent, McGriff, McGwire, Walker, Don Mattingly, and Sosa all received less than 16% of the vote; on a ballot growing more and more crowded by the year, their support will only shrink in 2015. Of newcomers, Sheffield, Giles, Garciaparra, and Delgado are the four best-qualified candidates most likely to garner less than 5% support. That’s 10 endangered species right there, whereas our imaginary ballot has just seven slots left. Not everyone thinks all 10 of these men are Hall of Famers, so reasonable people can disagree on how best to allot the valuable seven remaining slots. For me, Mattingly is not a Hall of Famer, and Garciaparra and Delgado are iffy cases. I could live with myself if they didn’t survive the election.

Therefore, my ideal BBWAA Hall of Fame ballot would read Smoltz, Biggio, Piazza, Kent, McGriff, McGwire, Walker, Sosa, Sheffield, and Giles. Without a doubt, this would be one of the screwiest ballots ever submitted. It excludes players who everyone, myself included, agrees are better than the players actually on it. This year, everyone agrees that Johnson and Pedro are first-ballot Hall of Famers. But that means they are also locks to get upward of 80% or 90% of the vote—making them the biggest no-brainers for a smart voter to leave off his or her ballot. Of course, this isn't always the easiest call to make on a gut level. Many writers are just uncomfortable with not voting for who they believe to be the most qualified individuals on the ballot; it seems perverse. Well, it is, but these are the contortions that the overcrowded ballot and the limiting 10-vote cap force voters into. Some would also argue that achieving a unanimous election is important for the most elite players, but this too is silly; the Hall has never given extra credit for being elected unanimously, and being a Hall of Famer is a binary state—either you are one or you're not.

Likewise, the ballot leaves off the two people I believe are most qualified for the honor—without whose inclusion the idea of a Hall of Fame seems silly and pointless. But thanks to their history with performance-enhancing drugs, Barry Bonds and Roger Clemens are again destined for baseball purgatory with about 35% of the vote. Bagwell, Schilling, Mussina, Raines, Trammell, and Edgar Martínez are other examples of worthier players than most of the men I champion; nevertheless, supporting them on a BBWAA ballot would be like throwing that vote away, as it is a virtual certainty that they will once again register somewhere in the no-man’s land between 10% and 70%.

Regardless of this reality, most, if not all, BBWAA members will vote with their hearts, not their heads. That's no surprise; after all, remember the myth of the rational voter. One of the enduring findings of political science is that voting is fundamentally irrational: the likelihood that any individual vote will decide an election is vanishingly small. Yet millions of people still do it. In baseball, even though many voters recognize that the system is broken, they'll keep casting irrational votes.

If I’m honored enough to have any BBWAA voters reading this, I urge you to break the mold and reconsider. I urge you to vote only for Smoltz, Biggio, Piazza, and whomever you wish to protect from elimination. Your ballot may not end up containing the most deserving players—not by a long shot. But it will contain the ones who need your vote most.

Friday, December 12, 2014

2014 Predictions in Review: A Winning Campaign

It's tough to make predictions, especially about the future. I made my fair share of them in 2014, and now that the year is drawing to a close, it's a good time to revisit them.

This election cycle, I left the big predictions—Senate, House, governor—to the experts. Instead, I tried my hand at handicapping some less celebrated but equally important races: those for constitutional office in the 50 states. By Election Day, I had issued Cook-style race ratings for the nation's lieutenant governor races, attorney general races, auditor races, comptroller/controller races, and state superintendent races. (Final ratings are, for a limited time, still up on my "2014 Ratings" tab but are archived forever at the bottom of this post.) I wanted to provide a guide to elections no one else was really bothering with, to help and encourage people to understand them at a glance—but that isn't much of a help if those race snapshots are totally off the mark.

Here in December, of course, we know who won each of those 72 races, so it's time to go back and see how my inaugural constitutional race ratings turned out. The main takeaway? Politics is easier to predict than baseball. Here's how I did by office:
  • Of the 17 lieutenant governors elected separately from governors, I predicted a post-election breakdown of 12 Republicans and five Democrats. That's exactly where it ended up.
  • I predicted a post-election split among the nation's 43 attorneys general of 22 Democrats and 19 Republicans, with two tossups. It was actually 20 Democrats to 23 Republicans. This was my worst category, but it also offered the most chances to be wrong, with 31 races to handicap.
  • I foresaw Republicans taking a 12–11 lead among auditors, with one tossup. The end result was a 14–10 Republican auditing majority.
  • For the nine comptrollers/controllers in the country, I predicted each party would win four seats, with one tossup. That rubber match ended up going to Republicans, who took a 5–4 lead among this group of officers.
  • Finally, I said Republicans would hold four superintendent jobs, Democrats would hold three, and two would be tossups. All the tossups went to the GOP, as they maintained their 6–3 superintendent advantage.
Not all of those picks were created equal, though; as with Senate or governor, many elections were foregone conclusions, while others were harder to forecast. You get a better picture of where I may have gone wrong when you look at the results of the races in each rating bracket:
  • Democrats won 17 of the 17 races I rated as Solid Democratic, including one uncontested race.
  • Democrats won three of the three races I rated as Likely Democratic.
  • Democrats won just two of five races I rated as Leans Democratic.
  • Republicans won all six races I rated as Tossup.
  • Republicans won seven of the seven races I rated as Leans Republican.
  • Republicans won 11 of the 11 races I rated as Likely Republican.
  • Republicans won 23 of the 23 races I rated as Solid Republican, including four uncontested races.
From this, it's clear that my ratings were pretty good, but not great. I had the right general sense for the spectrum on which races went from Democratic-leaning to Republican-leaning, but I underestimated Republicans across the board. However, I'm in some pretty good company; most election forecasters this year expected Republican gains but were taken aback by the Republican wave that ended up forming. The fact that general punditry, not to mention the polls, was overly friendly to Democrats explains why the GOP took all of my tossup races—and also won three of my Leans Democratic races, something that would normally be a red flag. In this election, though, I think it makes sense—even if I still wish I had known better.

This still doesn't tell the whole story, though. It's not just about who won these races, but how much they won by. (This is, after all, what separates the Solids from the Leanses.) There's an important caveat here: a candidate's final margin of victory doesn't necessarily reflect their pre-election likelihood of winning the race; some states or races are more elastic than others. (For instance, the 13-point race that was California lieutenant governor was never even remotely within Republicans' grasp, whereas the 16-point race for New Mexico AG was definitely winnable for the GOP; those last 13 points are much, much harder to scrape together in California than in New Mexico.) However, I do think final margins are important to look at as the most obvious indicator of a race's closeness. The following chart contains the Democratic margin of victory (positive numbers) or defeat (negative numbers) in all the races I handicapped (numbers are unofficial counts from the AP):

The average margins for each category are at least in the right order, from biggest at Solid Democratic (+21.3 points) to most negative at Solid Republican (–46.0 points). Likely Democratic (+9.7 points) is just where you'd expect a generic Likely Democratic race to be, as is Leans Democratic (+3.8 points). Tossup (–9.8 points) and Leans Republican (–14.0 points) are definitely miscalibrated, but again, that's a function of the GOP's overachieving night.

However, those averages mask some pretty wide deviation in some of the rating categories—most glaringly Leans Democratic. In retrospect, none of the five races given this rating were truly Leans Democratic. Rhode Island LG (+20.5 points) and New Mexico AG (+16.2 points) fit better as Likely Democratic, Nevada AG (–0.9 points) should have been a Tossup, and Arkansas AG as well as Delaware auditor (both –8.4 points) should have been Leans Republican.

Of my Tossup races, Arkansas auditor (–19.8 points) and Nevada controller (–14.9 points) look the worst. Most of my Leans Republican races should've been classified as Likely Republican, but especially Iowa auditor (–14.0 points), Arkansas lieutenant governor (–18.5 points), Ohio auditor (–19.1 points), and Nevada lieutenant governor (–25.9 points). And the 33-point Democratic loss for Nebraska attorney general was a real stretch to put as Likely Republican.

What lessons can I draw from my biggest forecasting misses? First, there's a pattern in the races I mischaracterized most badly. The same states keep popping up: Nevada. Arkansas. Ohio. Iowa. These are states where 2014 saw voters turning particularly hard, and particularly unexpectedly, toward Republicans. In Nevada and Ohio, non-serious top-of-the-ticket Democratic campaigns allowed GOP GOTV machines to operate completely uninhibited, turning these usual swing states into what Dave Wasserman called "orphan states." In Arkansas and Iowa, nominally competitive Senate races turned into laughers when polls failed to predict how utterly and completely the bottom would drop out for Democrats among certain voters there. Those Republican currents were strong enough to sweep away even downballot Democrats running separate, often quite capable campaigns, leading to blowouts of candidates who may have deserved better than they got.

To a certain extent, you can't guard against this. Even a constitutional race that is consistently tied or close in polls can fall victim to it and become a landslide. This is because even the best poll of a race like these includes far more undecided voters than your average Senate or gubernatorial survey—people just pay less attention to their state treasurer or insurance commissioner. The race may indeed be as close as it seems if each candidate wins over undecided voters equally—but often swing voters will all break the same way on Election Day. A race that was 38% to 38% in a poll (as Oklahoma superintendent was) thus can easily wind up as 56% to 44% (as Oklahoma superintendent did) without much imagination. Still, these currents are perceptible if you look carefully enough. In Oklahoma, undecideds were likely to break for the Republican given the overall conservativeness of the median voter there; in states like Nevada, there were warning signs that Democrats might roll over. I failed to pick up on the warning signs that the current was developing, and I underestimated how strong these currents would be.
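The Oklahoma arithmetic above is worth making explicit: a tied poll with a big undecided pool only stays tied if the undecideds split evenly. This sketch assumes, purely for illustration, that they break 3-to-1 Republican:

```python
def allocate(dem_poll, rep_poll, rep_break_share):
    """Allocate undecided voters from a two-way poll.
    rep_break_share is the fraction of undecideds assumed to break Republican."""
    undecided = 100 - dem_poll - rep_poll
    dem = dem_poll + undecided * (1 - rep_break_share)
    rep = rep_poll + undecided * rep_break_share
    return dem, rep

# Oklahoma superintendent polled 38%-38%, leaving 24% undecided.
dem, rep = allocate(38, 38, 0.75)  # assume a 3-to-1 Republican break
print(f"{dem:.0f}% to {rep:.0f}%")  # prints "44% to 56%"
```

Under that assumed break, the tied poll reproduces the actual 56%-44% blowout—no polling error required, just a late, uniform swing among voters the poll couldn't classify.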

I may have also been too idealistic in thinking that people would cast their votes based on the merits of each individual race. In many cases, I talked myself into seeing idiosyncratic strengths of, say, the Democratic candidate for Ohio auditor, or I banked on the scandal-tarred unpopularity of the South Carolina comptroller, when in fact I'd have been better off looking at the state fundamentals. Perhaps the biggest takeaway from this project is that it's essential to hone the sense for when to lean on a state's fundamentals and partisanship, and when a race is truly prominent enough to break through and stand on its own.

We had examples of both kinds of races this year. In certain states, it was clear from how closely downballot Democratic performances tracked with one another that bigger forces were at play. In states like Ohio, the race was decided by turnout; more of a certain shade of voter (in this case, red) was simply showing up to the polls than another shade. In states like Arkansas, voter anger at the amorphous scourge of "Obama" or "Reid" or "Pelosi" drove voters to make a statement and vote blindly against Democrats en masse. And in states like Texas and South Carolina, the normal cross-section of normally conservative-leaning voters just showed up like normal and didn't see anything in these lower-level races to cause them to break their Republican-voting habits.

Then there were other states where it was clear that voters were exercising independent judgment on each race. In Maryland, voters made a statement by electing Republican Larry Hogan governor—but my Solid Democratic ratings for Maryland's two constitutional officers proved right on the money. In Idaho, voters comfortably returned their Republican governor, lieutenant governor, and attorney general to office, but they almost elected a Democrat as superintendent of public instruction as well. That race gained a lot of independent attention because of how controversial education has been in Idaho in recent years. The key is knowing when a race qualifies as "special" enough that people will vote purposefully for it and not just follow their partisan instincts or the national mood. It's a subjective call to make, and it's why forecasting downballot races specifically can be so tricky.

Overall, I'm pleased with how my ratings turned out. I still pegged most races at their correct level of competitiveness, and I'm not too concerned about my less prescient calls. Constitutional race ratings will always involve a lot more guesswork than the traditional ratings on Cook or Daily Kos; downballot races have few polls or hard data to work off, and the polls that do exist tend to be less accurate than those of better-covered races. That inherent uncertainty will also always cause me to rate more races as "Tossups" or "Leans" than probably will deserve it in the final analysis. This year, only 13 races were within single digits, but considerably more than that were plausibly up for grabs, simply because no one knew enough to say otherwise. Some races I rate as close will inevitably be those 15-point routs, but that's a feature, not a bug. I stand by a couple of my "bad" ratings from this year for this reason: with as little as we knew about it, Nevada controller (–14.9 points) was indeed anyone's game.

Having tried my hand at race ratings for both Senate/governor/top-of-the-ticket races (in 2012) and downballot races (this year), I've definitely found these to be more of a challenge—and that's the way I like it. I plan on continuing to handicap and provide ratings for constitutional races in 2015 and 2016. I ran out of time in 2014, but with fewer seats on the ballot in the next two years, I hope and expect to branch out to secretaries of state, treasurers, and more, in addition to the offices I test-drove this year. My New Year's resolution to Baseballot readers: to preview every non-gubernatorial statewide office for you on these very pages. Stay tuned.

Archived 2014 Ratings

Lieutenant Governor

Attorney General

Saturday, December 6, 2014

2014 Predictions in Review: A Swing and a Miss

It's tough to make predictions, especially about the future. I made my fair share of them in 2014, and now that the year is drawing to a close, it's a good time to revisit them.

Everyone thinks they know exactly how the baseball season is going to go down every spring—and then everyone is proven totally and completely wrong every fall. (Exhibit A: the Giants-Royals World Series.) I've long since resigned myself to the fact that my preseason picks will never come true, and 2014 was no exception. At this point, it's simply entertaining to go back every winter and see what I expected the MLB season to have in store. Back in 2012 and 2013, this little exercise turned up a few gems, both good and bad. Now let's turn some 20/20 hindsight to my 2014 American League and National League predictions.

Prediction: The AL playoff teams would be the Rays, Red Sox, Yankees, Tigers, and Rangers. The NL playoff teams would be the Nationals, Braves, Cardinals, Reds, and Dodgers.
What Really Happened: I got four of the 10, including just one in the AL but all three division champs in the NL. As a general rule, my NL projections were better than my AL ones. I picked 60% of the AL East to make the playoffs, so of course its one representative, Baltimore, wasn't among them. I actually correctly picked the order of finish in the AL Central (Tigers, Royals, Indians, White Sox, Twins), but almost totally inverted the AL East and AL West. Here's how my predicted win totals for each team matched up with reality:

Prediction: Miguel Cabrera would hit more than 44 home runs en route to a third straight MVP, with competition from a 30/30 season by Shin-Soo Choo. In the NL, Clayton Kershaw would cruise to a Cy Young Award, while Bryce Harper and Ryan Braun would compete for MVP.
What Really Happened: Cabrera's physical ailments finally caught up to him, as he hit "only" 25 home runs, ceding the MVP to a deserving Mike Trout. Harper, Choo, and Braun dealt with power-sapping injuries all year long; while they all managed above-average OPSes, Harper hit just 13 homers, Choo also had only 13 (and stole three bases), and Braun slugged just 19. Meanwhile, Kershaw won not only the Cy Young (which, come on, was a gimme), but also the MVP.

Prediction: Despite moving Cabrera off third base, the Tigers defense would not improve from 2013; in fact, it would get worse at catcher, first base, and right field. Nevertheless, Brad Ausmus would win Manager of the Year.
What Really Happened: Ausmus showed real growing pains in his first year as a manager, especially with his nonsensical bullpen management. The Detroit defense improved at catcher and first base but took a nosedive at third and right field. Overall, the team that totaled –66 Defensive Runs Saved in 2013 ended up at –64 DRS in 2014.

Prediction: José Abreu would be an instant hit and slug 30 home runs, but it would be Xander Bogaerts who would carry home the AL Rookie of the Year award with a .342 OBP, 30 home runs, and 98 RBI.
What Really Happened: Abreu actually mashed 36, validating my faith and then some. But his .317/.383/.581 line let him walk away with the Rookie of the Year award after Bogaerts' tough first season: a .297 OBP, just 12 home runs, and 46 RBI. Bogaerts didn't even get a vote.

Prediction: Miami's Marcell Ozuna and Christian Yelich, no longer eligible for Rookie of the Year but still mere youngsters, would out-WAR the NL Rookie of the Year winner in a weak class.
What Really Happened: Ozuna broke out with a .338 wOBA and 3.7 fWAR. Yelich did even better, with a .362 OBP, 21 steals, and a 4.3 fWAR. At 23 and 22 years old, they were both younger and better than Rookie of the Year winner Jacob deGrom of the Mets (age: 26; fWAR: 3.0).

Prediction: The Marlins would have a surprisingly awesome rotation, with ERA champion José Fernández, Henderson Álvarez, Jacob Turner, and Andrew Heaney—although Nathan Eovaldi would blow out his arm.
What Really Happened: Fernández was the one who blew out his arm, although his 1.74 ERA in his seven healthy starts would indeed have led the NL. Álvarez had a breakout season, posting a 2.65 ERA and earning Cy Young consideration, but Turner and his 5.97 ERA were deported to Chicago while Heaney was only given seven games in which to post his 5.83 ERA. Meanwhile, Eovaldi led Miami with 33 games started and 199.2 innings pitched.

Prediction: The Dodgers would "be in the mix with the Nats and Reds for best rotation in the league," and Dan Haren would "thrive at Dodger Stadium." LA's outfield might be another story, with André Ethier and Carl Crawford playing more games than Yasiel Puig and Matt Kemp.
What Really Happened: Haren had a 3.32 ERA at home and a 4.75 ERA on the road. The Dodgers' rotation did indeed have the second-best ERA in MLB—sandwiched between the Nats at number one and the Reds at number three. Finally, in the games-played sweepstakes, I basically reversed the truth: Kemp played the most at 150, then Puig at 148, Ethier at 130, and Crawford at 105.

Prediction: The Phillies would drop to last place in the NL East—behind even the hapless Marlins!—thanks to a bottom-five offense and the worst defense in baseball.
What Really Happened: Philadelphia didn't lose the 102 games I predicted, but they hit bottom in pretty much every other respect. The Phils' .295 team wOBA was third-worst in MLB, and while they didn't have the worst defense in all of baseball, their –39 team DRS was the lowest figure in the Senior Circuit.

Prediction: Grady Sizemore would reinjure himself in April, ending his season and possibly his career.
What Really Happened: One prediction I'm glad I got wrong. Sizemore got 381 plate appearances, his most since 2009. Although he was a below-average player, he already has a guaranteed contract for 2015.

Prediction: Miguel González's number of starts would match his ERA: six.
What Really Happened: So close! González made six appearances—no starts at all—and ended with a 6.75 ERA.

Prediction: Lots of people would sleep on the Angels, especially their potent offense. Albert Pujols and Josh Hamilton would combine for 10 wins above replacement, and Kole Calhoun would prove more valuable than the traded-away Mark Trumbo and his sub-.300 OBP.
What Really Happened: Pujols and Hamilton weren't quite that good, but Pujols did register a 3.9 rWAR, and Hamilton was worth 1.5 rWAR in half a season's worth of games. As for Trumbo, he was miserable for the Diamondbacks, with a .293 OBP, a mere 14 home runs, and a –1.1 rWAR. Calhoun was worth a full 5.2 wins more, at 4.1 rWAR. All these things helped the Angels to a far better record (98–64) than even I pegged them for.

Prediction: The 2014 season would be a breakout for Marco Estrada, who would pair his typical 4.0 K/BB ratio with a lowered home-run rate to become one of the NL's elite pitchers.
What Really Happened: Home runs had always been a problem for Estrada, but they spiraled out of control in 2014. He gave up a stunning 1.73 HR/9, although his elevated 13.2% HR/FB ratio suggests some of that was bad luck. His K/BB ratio also fell to 2.89, his lowest since 2010, and he lost his job in the starting rotation. Meanwhile, the two Brewers starters whom I was lukewarm on, Yovani Gallardo and Wily Peralta, posted ERAs around 3.50.

Prediction: The Cubs' offense would be stagnant—until September, when a set of callups would lead to their best hitting month. Darwin Barney's great glovework would be worth more than any batsman's offense, even top OPS-er Junior Lake (.755).
What Really Happened: Anthony Rizzo happened. Other than his monster .286/.386/.527 year (with 32 home runs), only Chris Coghlan and Luis Valbuena produced more offensive runs above average than Barney's defensive runs above average (7.8, including his time with the Dodgers). Junior Lake's .597 OPS was second-lowest among Cubs with enough at-bats to qualify. And September was actually Chicago's worst offensive month, although many of the "callups" (like Javier Báez) had made it there by August, which was their scoringest month.

Prediction: The Astros would take a step forward this year—with George Springer hitting 10 homers and stealing 10 bases in limited time—despite the majors' worst starters' ERA.
What Really Happened: Houston improved by 12 games more than I thought it would. Springer was indeed excellent; although he stole half the bases I expected, he hit double the dingers. The main difference was that 11 teams had a worse starters' ERA than the Astros' 3.82, including the Tigers and Red Sox.

Prediction: Derek Jeter would have an injury-marred and subpar final season, as the Yankee infield's best player would turn out to be Brendan Ryan. Jacoby Ellsbury, Mark Teixeira, and Brian Roberts would all spend long stretches on the DL.
What Really Happened: Jeter was healthy all year long, but his .617 OPS was his worst ever, apart from his injury-shortened 2013. Ellsbury, too, stayed healthy, but Teixeira and Roberts were more battered than Yankee Stadium fried dough. New York's most valuable infielders turned out to be two guys who only played half a season each in the city (Chase Headley and Martín Prado), so although I was wrong on Ryan (–0.7 rWAR), I got the gist of how badly things would go.

Prediction: Brandon McCarthy would return to his prior effectiveness, and Chris Owings would play well enough to drive Didi Gregorius out of Arizona in a trade.
What Really Happened: It took the Yankees to make both sides of this prediction come true. McCarthy had a 2.89 ERA for New York after the Diamondbacks traded him and his 5.01 ERA at midseason. And just a few days ago, my Gregorian prophecy also came true, as Didi was shipped to the Bronx to be Jeter's replacement.

Prediction: "As a team, the O's will slug at the second-highest rate in the AL—but get on base at the second-lowest rate," while the Ubaldo Jiménez signing would pay off thanks to his dangerous new slider. Rookies Jonathan Schoop and Henry Urrutia would play key roles, but Kevin Gausman would fall flat.
What Really Happened: At .422, the Orioles did indeed have the second-highest slugging percentage in the league, but they were only fifth from the bottom in OBP at .311. However, Jiménez proved to be the free-agent class's biggest bust—and according to FanGraphs data, his slider was among his weakest pitches. Instead, Gausman stepped up to be a reliable starter with a 3.57 ERA in 20 starts. Schoop was an out machine with his .244 OBP, and Urrutia didn't play a single game all year.

Prediction: The Padres would have an excellent rotation thanks to a healthy Josh Johnson, a return to form by Ian Kennedy, and a full season of Tyson Ross, who would register a strikeout per inning. However, Jedd Gyorko, Carlos Quentin, and Chase Headley would all take big steps backward.
What Really Happened: Johnson missed the whole season after needing a second Tommy John surgery, but Kennedy did return to form in a big way. His strikeout rate of 9.3 was a career high, and he lowered his walk rate from his aberrant 2013, resulting in a 3.63 ERA. Ross started 31 games and pitched 195.2 innings—with 195 strikeouts. And maybe it was predictable in Petco Park, but Gyorko (.210 average) and Quentin (.177 average) dropped off badly, while Headley was hitting .229 when he was traded at midseason.
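A note on that "195.2 innings" figure: in baseball box-score notation, the digit after the decimal counts thirds of an inning, so 195.2 really means 195⅔ innings. A minimal sketch of converting that notation and computing per-nine rates (the helper names here are my own, and the stat line is hardcoded for illustration):

```python
def ip_to_innings(ip):
    """Convert box-score IP notation (x.1 = one extra out, x.2 = two)
    into a true innings count as a float."""
    whole, outs = divmod(round(ip * 10), 10)
    return whole + outs / 3.0

def per_nine(events, ip):
    """Rate of a counting stat (K, BB, HR) per nine innings pitched."""
    return events * 9 / ip_to_innings(ip)

# Tyson Ross, 2014: 195 strikeouts in 195.2 IP (i.e., 195 2/3 innings)
k9 = per_nine(195, 195.2)
print(f"{k9:.2f} K/9")  # prints "8.97 K/9"
```

This is why Ross's 195 strikeouts in 195.2 innings falls a shade short of the predicted strikeout per inning: the denominator is really 195⅔, not 195.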

Prediction: A halving of Jayson Werth's WAR would epitomize the general blahness of the Nationals offense. However, DC's top four starting pitchers would all get Cy Young votes, while Drew Storen would finally step out from Rafael Soriano's shadow.
What Really Happened: Per FanGraphs, Werth's WAR actually improved from his phenomenal 2013—up to 4.8 from 4.6. But that pitching staff really was as special as advertised: Jordan Zimmermann, Stephen Strasburg, and Doug Fister all got well-deserved awards consideration (and heck, Tanner Roark probably deserved a throwaway vote too). And, by pretty much any available metric, Storen (1.12 ERA, 4.18 K/BB, 2.6 rWAR) did indeed outpitch Soriano (3.19 ERA, 3.11 K/BB, 0.8 rWAR), even though Soriano was still better than he appeared.

Prediction: Cleveland would be held back from contention with down years from Carlos Santana and Jason Kipnis, while Santana would contribute to a porous left side of the infield defensively. The AL's worst bullpen would turn the team's barely winning Pythagorean record into a sub-.500 one in real life.
What Really Happened: Indeed, Santana's average dropped to .235, although he remained a steady source of power and walks. Kipnis looked lost at the plate, turning in a .240/.310/.330 line. The Indians also clocked in at –19 DRS at third base and –5 DRS at shortstop. The bullpen, however, was a revelation—at 3.12, it sported the AL's fourth-best ERA, and the team did even better than its 83–79 Pythagorean record.

Prediction: "If the Indians don't have the league's worst bullpen, the Mariners will," and Seattle would boast the majors' worst record in either one-run or extra-inning games.
What Really Happened: The Mariners had the AL's best bullpen. At a 2.60 ERA, they were a full 0.31 runs better than the runner-up. This was a big part of why Seattle was the team I underestimated the most going into 2014. They won 18 more games than I expected. Bizarrely, though, they still struggled in one-run games, to the tune of an 18–27 record.

Prediction: The Braves would stumble out of the gate but have a hot September to just barely snag a Wild Card berth. Following this pattern would be Mike Minor and Julio Teheran, who would improve as the summer wore on.
What Really Happened: The Braves instead epically collapsed—or would have, if they had had a lead to protect going into August. The team's best month was April (17–8), and by their 7–18 September they just looked like they didn't care anymore. Minor had a 3.07 ERA entering June 10 but a 5.51 after that point. Teheran was great all year long, but he was at his best in April (1.47 ERA) and May (2.21).

Prediction: The Rockies' key players would again succumb to injury. Jorge De La Rosa, Brett Anderson, and Jhoulys Chacín would combine for 300 innings. The one Rockie whom everyone agreed wouldn't last long, closer LaTroy Hawkins, actually wouldn't get replaced by Rex Brothers, despite everyone assuming he would be.
What Really Happened: Sure enough, Hawkins saved 23 games, and Brothers could save nothing with his 5.59 ERA. Troy Tulowitzki and Carlos González played a full season—between them (161 games total). The three pitchers combined for 291 innings, most of which were De La Rosa's.

Prediction: Phil Hughes would take to his new home in Minnesota, posting a career year with a 3.99 ERA that would make him the Twins' best starter.
What Really Happened: Phil Hughes took to his new home in Minnesota, posting a career year with a 3.52 ERA that made him the Twins' best starter. The part I would've found unbelievable in March: Hughes set the all-time major-league record for single-season strikeout-to-walk ratio.

Prediction: With a full season of Tony Cingrani, the Reds would set a franchise record for strikeouts for the third consecutive year. However, only Jay Bruce and Joey Votto would be above-average hitters for them. A silver lining would be Billy Hamilton's leading the league in steals—by double digits.
What Really Happened: With a full season of Cingrani (who only started 11 mediocre games), the Reds would have broken that strikeout record. As it was, their 1,290 strikeouts were just six shy of the franchise record, set in 2013. The offense was even worse than I envisioned; Votto may have been above-average (127 OPS+), but he only had 272 plate appearances. Bruce had a terrible year with an 84 OPS+, and Hamilton finished second in the majors with 56 steals. (Maybe if he hadn't been caught 23 times...) Instead, Todd Frazier (123 OPS+) and Devin Mesoraco (149 OPS+) carried the Cincinnati offense (such as it was).

Prediction: Prince Fielder (40 home runs) and Geovany Soto (.280/.370/.490) would be the Rangers' MVPs, with the latter leading the club to a division title after his return from the DL.
What Really Happened: The virtual opposite. Fielder and Soto were among the biggest victims of Texas's injury bug in 2014. Fielder hit just three home runs, and Soto slashed .237/.237/.368 before getting shipped out in a trade to Oakland. In a way, though, they were among the most important players... to the team's last-place finish.

Prediction: The Royals offense would surprise, with four of Alex Gordon, Eric Hosmer, Mike Moustakas, Salvador Pérez, and Billy Butler hitting 20 home runs. Kansas City would suffer for its overreliance on the poor performances of Jeremy Guthrie and Jason Vargas, who would combine for more starts than the superior young trio of Danny Duffy, Yordano Ventura, and Kyle Zimmer.
What Really Happened: Yes, Guthrie and Vargas combined for 62 starts and a 3.93 ERA, while Duffy and Ventura (Zimmer was injured) combined for 55 starts and a 2.90 ERA. No, the Royals offense did not roar to life—in fact, none of the five reached the 20-homer plateau. The Royals nevertheless finished with three more wins than I predicted—and they came within two runs of winning the World Series. As a friend of mine predicted, it was just the Royals' year.

Prediction: The St. Louis Cardinals would be 2014 world champions, thanks to a much-improved defense and the postseason heroics of Shelby Miller, who would start three World Series games and win series MVP honors.
What Really Happened: The Redbirds did improve their defense by a remarkable +103 DRS from 2013 to 2014, but Miller only got them as far as the NLCS (and in fact Miller's poor Game 4 start played a role in St. Louis's loss in that series).

Prediction: Sergio Romo would cough up the Giants' closer's role, and Pablo Sandoval would have an outstanding season in an effort to break the bank in free agency. Madison Bumgarner would throw a no-hitter.
What Really Happened: Romo did indeed start the year with uncharacteristic awfulness, and Santiago Casilla took over as closer. Sandoval had an above-average year, although it continued his pattern of slightly declining every year since 2011. (And, of course, he did break the bank.) Bumgarner did not throw his no-hitter, instead settling for 21 innings of one-run ball spread out over three games of the World Series, which he singlehandedly won for San Francisco.

Prediction: This would be the year that Ben Revere finally hit his first career home run.
What Really Happened: He hit two. I was even there for the second one. There's video proof and everything.