Originally posted on 10/07/2017:

In 2012 I was between major health problems and had the opportunity to use the Kelly criterion to size my wagers for an entire MLB season. One of my main techniques was simulation: I would simulate each game one batter at a time to get a final score, then repeat this 5,000 times to get the probability of each team winning. I used this probability, the game odds, and the Kelly formula to size my wager.
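To make the workflow concrete, here is a minimal sketch: estimate a win probability from many simulated games, then plug it into the standard Kelly formula f* = (bp - q)/b. The coin-flip `simulate_game` is just a stand-in for a real batter-by-batter simulation, and the probability and odds shown are made-up numbers, not figures from my season.

```python
import random

def simulate_game(true_win_prob, rng):
    # Placeholder for a full batter-by-batter game simulation;
    # here a single biased coin flip decides the game.
    return rng.random() < true_win_prob

def estimate_win_prob(true_win_prob, n_sims=5000, seed=42):
    # Repeat the game simulation many times and take the win frequency.
    rng = random.Random(seed)
    wins = sum(simulate_game(true_win_prob, rng) for _ in range(n_sims))
    return wins / n_sims

def kelly_fraction(p, decimal_odds):
    # Kelly: f* = (b*p - q) / b, where b = decimal odds - 1
    # (net payout per $1 staked) and q = 1 - p. Never bet a
    # negative fraction; no edge means no bet.
    b = decimal_odds - 1.0
    return max(0.0, (b * p - (1.0 - p)) / b)

p = estimate_win_prob(0.55)             # probability from 5,000 simulations
f = kelly_fraction(p, decimal_odds=1.95)
print(f"estimated p = {p:.3f}, Kelly fraction of bankroll = {f:.3f}")
```

The wager is then the Kelly fraction times the current bankroll (or a "fractional Kelly" multiple of it, which many bettors use to reduce variance).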

I ran simulations early in the morning using typical lineups and made wagers. Later, when I had the actual lineups, I repeated the simulations and wagers. Below is a table of my actual results, along with what the results would have been with flat $100 unit wagers ($100 bet on underdogs, or to win $100 on favorites).

Morning bets     Bets   Amt Bet    Net       Ret/$   Avg Bet
Flat $100         999   $104,344   -$1,440   0.986   $104.45
Kelly             999    $94,800       $27   1.000    $94.90

Actual lineups   Bets   Amt Bet    Net       Ret/$   Avg Bet
Flat $100        1097   $117,062    $2,005   1.017   $106.71
Kelly            1097   $154,734     -$752   0.995   $141.05
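For anyone checking my arithmetic, the derived columns are simple: Ret/$ is total return per dollar wagered, (amount bet + net) / amount bet, and Avg Bet is amount bet divided by the number of bets. A quick verification against the table:

```python
# Recompute the table's derived columns from the reported figures:
# Ret/$ = (amt + net) / amt, Avg Bet = amt / bets.
rows = {
    "morning flat":  (999,  104344, -1440),
    "morning kelly": (999,   94800,    27),
    "actual flat":   (1097, 117062,  2005),
    "actual kelly":  (1097, 154734,  -752),
}
for name, (bets, amt, net) in rows.items():
    ret_per_dollar = (amt + net) / amt
    avg_bet = amt / bets
    print(f"{name:14s} Ret/$ = {ret_per_dollar:.3f}, Avg Bet = ${avg_bet:.2f}")
```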

The morning wagers performed poorly, dropping $1,440 on flat unit wagers, but with Kelly sizing they actually showed a $27 profit. Note that Kelly resulted in smaller average wagers here ($94.90 versus $104.45).

The wagers based on actual lineups lost $752 using Kelly, but would have won $2,005 on unit wagers. Here the average Kelly wager was $141 versus $107 on unit wagers.

Combined, I actually lost $725 using Kelly but would have made $565 on flat unit bets, while also wagering less in total. My combined return per dollar bet was roughly $0.997 with Kelly and would have been about $1.003 on flat bets. In other words, using Kelly turned a winning season into a losing season.

I puzzled over this and ultimately figured it out: with Kelly I had averaged larger wagers on losing bets than on winning bets. That makes sense. With Kelly your bets increase when you win and decrease when you lose. (If you are familiar with dollar cost averaging in investing, Kelly produces a kind of dollar cost averaging in reverse.)
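A toy example makes the effect visible. Suppose even-money bets sized at a fixed 5% of the current bankroll (hypothetical numbers, not my actual edges), with three wins followed by three losses: the stakes peak right before the losing run, so the losing bets average larger than the winning ones, and an even win/loss record still ends below the starting bankroll.

```python
# Sketch: proportional (Kelly-style) staking makes bets grow after wins
# and shrink after losses, so losses landing after a winning streak
# carry larger stakes. All figures here are illustrative assumptions.

def run(outcomes, fraction=0.05, bankroll=100.0):
    win_stakes, lose_stakes = [], []
    for won in outcomes:
        stake = fraction * bankroll   # stake is a fixed share of bankroll
        if won:
            bankroll += stake
            win_stakes.append(stake)
        else:
            bankroll -= stake
            lose_stakes.append(stake)
    return bankroll, win_stakes, lose_stakes

# Three wins, then three losses, all at even money.
final, wins, losses = run([True, True, True, False, False, False])
avg_win = sum(wins) / len(wins)
avg_loss = sum(losses) / len(losses)
print(f"final bankroll: {final:.2f}")                  # below the starting 100
print(f"avg winning stake: {avg_win:.2f}, avg losing stake: {avg_loss:.2f}")
```

Despite going 3-3 at even money, the bankroll ends around $99.25, because the three losses were all staked at post-streak (larger) amounts.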