Originally posted by 70kgman
Now I'm not naive enough to think I haven't had a little luck along the way, because there have been some unders that hit that shouldn't have. The Miami game was interesting, though: I played it on the over at 186.5, and when the line was 186 I would've had the over as well, so who knows what that means, but my system seems to become more accurate using the closing line or something close to it.
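If anyone wants to sanity-check the open-vs-close thing on their own tracking sheet, here's a rough sketch of the comparison I mean. The column names and the 3-point trigger are placeholders, not my actual system numbers:

```python
import pandas as pd

# Hypothetical tracking log (made-up numbers): my projected total, the
# opening and closing market totals, and the actual final combined score.
games = pd.DataFrame({
    "model_total":  [192.0, 208.5, 199.0, 185.5],
    "open_total":   [186.0, 204.0, 201.5, 188.0],
    "close_total":  [186.5, 205.0, 200.5, 187.0],
    "actual_total": [195.0, 201.0, 204.0, 182.0],
})

EDGE = 3.0  # placeholder trigger: play only when model and market differ by 3+

for line_col in ("open_total", "close_total"):
    diff = games["model_total"] - games[line_col]
    plays = games[diff.abs() >= EDGE]
    # A play wins when the game lands on the same side of the line as the
    # model (pushes count as losses here, just to keep the sketch simple).
    won = ((plays["model_total"] > plays[line_col]) ==
           (plays["actual_total"] > plays[line_col]))
    print(f"{line_col}: {len(plays)} plays, {won.mean():.0%} hit rate")
```

Run that over a full season of logs and you'd see directly whether plays keyed off the closing number hit at a better rate than plays keyed off the opener.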
By contrast, the Thunder game was looking like an over play, but as the line moved up to 205 it became a play on the under, with no play (NP) at anything below that number. I've seen this throughout the season, so I don't think it's by chance, but I'm not convinced yet. The metrics I use for this particular system are based on some different numbers, but since this is my first season with it I haven't had a chance to really analyze the data; I've just been tracking it. I'm going to run some regressions on these metrics and try a few statistical methods I have in mind, just to see how strong the correlation actually is, because I'm starting to think it could be useful.
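Something like this is the shape of the regression step I have in mind. The "metric" here is random placeholder data standing in for whatever system input you're tracking, and the outcome is the actual total minus the closing line:

```python
import numpy as np
from scipy import stats

# Fake data standing in for a season of tracked games (~300 rows).
# "metric" = whatever system number you log per game;
# "residual" = actual total minus closing line (positive = game went over).
rng = np.random.default_rng(0)
metric = rng.normal(size=300)
residual = 2.0 * metric + rng.normal(scale=8.0, size=300)  # planted relationship

# Simple linear regression of the outcome on the metric.
fit = stats.linregress(metric, residual)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.3f}, p = {fit.pvalue:.2e}")

# r^2 is the share of over/under variance the metric explains; the p-value
# is how likely a correlation this strong would show up by pure chance.
print(f"r^2 = {fit.rvalue**2:.3f}")
```

A real pass would swap in the logged metrics and probably run them jointly in a multiple regression, but even this single-variable version tells you whether a metric carries signal or is just noise.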
One more thing that's encouraging: the z-score for this particular model has increased as the plays have increased and sits at 3.25 right now, which puts it in the roughly 1-in-1,000 rarity range at almost 300 plays. But I'm still wary of false positives and short-term-only usefulness, which is why I need to figure out a way to do more testing with previous years' data. Still, this bodes well for this variation of the model: since I've developed and used it on same-season data, I should hold it to a higher standard of significance given the validation method. Wong says you should aim for 2 standard deviations in development and 2 more in testing, so at 3.25 I'm about 3/4 of the way there, I suppose.
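For reference, this is roughly how the z-score works out against the break-even rate at standard -110 pricing. The win/loss numbers are made up to land near my z, not my actual record:

```python
from math import sqrt
from scipy.stats import norm

# Made-up record chosen to come out near a 3.25 z-score, not my real results.
wins, plays = 185, 300

# At standard -110 pricing the break-even win rate is 110/210 = 52.38%.
p0 = 110 / 210

# z-score of the record against that break-even null hypothesis.
z = (wins - plays * p0) / sqrt(plays * p0 * (1 - p0))

# One-tailed p-value: the chance of a record at least this good by pure luck.
p_value = norm.sf(z)
print(f"z = {z:.2f}, p = {p_value:.5f} (about 1 in {1 / p_value:,.0f})")
```

Running the same calculation on a fresh set of previous-years plays is exactly the out-of-sample test I still owe this model.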