Starting a new thread with hopefully more insight. In a prior thread, I used regression on 2000-2011 data to build a totals model and tested it out-of-sample on 2012 and 2013.
When the closing line was at least 2 points higher than my model's projected total (i.e., the model said the posted total was inflated):
The model was 61W - 39L - 1T for 61% (playing the under)
In these games, the closing line was closer to the actual total 37 times and the opening line was closer 50 times (14 ties)
When the opening line was at least 2 points higher than my model's projected total:
The model was 55W - 46L - 1T for 54.5% (playing the under)
In these games, the closing line was closer to the actual total 38 times and the opening line was closer 50 times (14 ties)
If I tighten that 54.5% filter vs the opening line to games where the model said the total was at least 4 points too high:
28W - 20L - 1T for 58.3%
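For anyone wanting to replicate the tally, here's a minimal sketch of how the counts above can be produced. The field names (`model_total`, `opening_line`, `closing_line`, `actual_total`) and the dict-of-games layout are my assumptions, not the original data format:

```python
# Sketch of the backtest tally (assumed field names). For a chosen line
# (opening or closing), it grades the under whenever that line sits at
# least `edge` points above the model's projected total, and also counts
# which posted line finished closer to the actual total.

def tally(games, line_key, edge=2.0):
    wins = losses = ties = 0
    close_closer = open_closer = line_ties = 0
    for g in games:
        line = g[line_key]
        if line - g["model_total"] < edge:
            continue  # model doesn't see this total as inflated enough
        # Grade the under bet against the chosen line.
        if g["actual_total"] < line:
            wins += 1
        elif g["actual_total"] > line:
            losses += 1
        else:
            ties += 1
        # Which posted line landed nearer the actual total?
        c_err = abs(g["closing_line"] - g["actual_total"])
        o_err = abs(g["opening_line"] - g["actual_total"])
        if c_err < o_err:
            close_closer += 1
        elif o_err < c_err:
            open_closer += 1
        else:
            line_ties += 1
    return (wins, losses, ties), (close_closer, open_closer, line_ties)
```

Running it once with `line_key="closing_line"` and once with `"opening_line"` reproduces the two comparisons above; bumping `edge` to 4.0 gives the tightened filter.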
Now, before you say 49 plays are not enough: that is 49 plays over two years, from weeks 9 to 17. So I played 49 / (2 * 9 * 16) = 49/288, roughly 17% of the total allowable games.
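On the sample-size question, a quick binomial check is more informative than the raw share of the schedule. A stdlib-only Python sketch, assuming standard -110 pricing (an assumption; the breakeven win rate is then 110/210, about 52.4%):

```python
from math import comb

def p_at_least(k, n, p):
    """One-sided P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 28-20 on decided bets (the one tie is excluded). Breakeven win rate
# at assumed -110 juice is 110/210 ~= 0.524.
n, k = 48, 28
breakeven = 110 / 210
print(f"P(>= {k}/{n} wins if true rate is breakeven) = {p_at_least(k, n, breakeven):.3f}")
```

If that probability comes out well above 0.05, a 28-20 run is still quite consistent with a no-edge model, which is the real force of the "49 plays" objection regardless of what fraction of the schedule was played.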
I've read that I should ALWAYS be testing my model against the opening line, because A) beating the closing line is always a good thing and B) the market is efficient. My counterargument: I have a local bookie and can only bet the lines 1 hour prior to game time. In that last hour, totals move much less than spreads; the only late movers are high winds/really bad weather rolling in, or big player inactives.
Thoughts are appreciated.