Many of us use programs, spreadsheets, data, etc. to build models that predict scores for contests we are interested in wagering on. We are faced with the question: how good are the predictions? The obvious answer is, do you make money with the model? But it can take a lot of bets to get comfortable with the results, and it can be expensive if the model isn't very accurate.

The next obvious approach is to back test with previous data. This is free, but a lot of work. You must get the previous data into a usable form, then run your model through the previous games, generating the bets the model likes. This is the approach I mainly rely on. One problem with it is that the backtest only records the bets the model flags as profitable; it tells me nothing about the accuracy of the other score predictions.

Recently I've started using correlation analysis. I generate two correlations, one for each team: predicted score versus actual score. This is a particularly useful tool for comparing different predictive models. I'm currently doing CBB analysis and looking at 4 predictive models: LV (implied scores from several books, using the average spread and totals lines), Like Games (my like-game system, described on my blog and in another topic here), KenPom predictions, and predictions from a power rating system I developed years ago.

The CBB system I'm developing will rely on 3 of the 4 models (I won't be using the LV implied scores, since the betting lines are derived from them). The problem is how to weight the other three. Hopefully the correlation study will provide some guidance. I'll have the correlation results shortly.
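One simple weighting scheme the correlation results could feed into (this is just one possibility, not a claim about what the final system will use) is to weight each model in proportion to its predicted-vs-actual correlation. The model names and all the numbers below are hypothetical:

```python
# Weight each model by its correlation with actual scores, normalized
# so the weights sum to 1. Negative correlations are floored at zero.
def weights_from_correlations(corrs):
    clipped = {m: max(r, 0.0) for m, r in corrs.items()}
    total = sum(clipped.values())
    if total == 0:
        raise ValueError("no model has positive correlation")
    return {m: r / total for m, r in clipped.items()}

# Hypothetical correlation results for the three models
corrs = {"like_games": 0.62, "kenpom": 0.71, "power_rating": 0.55}
w = weights_from_correlations(corrs)

# Blended score prediction for one game (made-up predictions)
preds = {"like_games": 74.0, "kenpom": 76.5, "power_rating": 72.0}
blended = sum(w[m] * preds[m] for m in preds)
print(w, round(blended, 2))
```

The blended prediction always lands between the lowest and highest individual prediction, which keeps any one model from dragging the estimate outside the range the models agree on.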