What is the most reliable measure of a model's performance: model error versus line error, or strike rate?
The reason I'm asking is that I seem to be getting some strange results. My model error is quite close to the line error, and my model's strike rate increases as the difference between the predicted line and the actual line increases. However, if I analyse strike rates in intervals, the strike rate decreases when the interval moves from a 5-15 pt difference to a 15-25 pt difference, when in theory it should increase. Should I be worried?
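For context, here is a minimal sketch of the kind of check I have in mind, assuming the results live in a pandas DataFrame with hypothetical columns model_line, market_line, and win. The point is to attach Wilson confidence intervals to each bucket's strike rate, since a dip in the 15-25 pt bucket could just be small-sample noise if that bucket contains far fewer bets:

```python
# Sketch: strike rate by |model line - market line| bucket, with Wilson
# confidence intervals to see whether a dip is within sampling noise.
# Column names (model_line, market_line, win) are placeholders, not from the post.
import numpy as np
import pandas as pd

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = wins / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def strike_rate_by_edge(df: pd.DataFrame) -> pd.DataFrame:
    """Bucket bets by the size of the predicted edge and report strike rates."""
    edge = (df["model_line"] - df["market_line"]).abs()
    buckets = pd.cut(edge, bins=[0, 5, 15, 25, np.inf],
                     labels=["0-5", "5-15", "15-25", "25+"],
                     include_lowest=True)
    rows = []
    for label, grp in df.groupby(buckets, observed=True):
        wins, n = int(grp["win"].sum()), len(grp)
        lo, hi = wilson_interval(wins, n)
        rows.append({"bucket": label, "bets": n,
                     "strike_rate": wins / n if n else np.nan,
                     "ci_low": lo, "ci_high": hi})
    return pd.DataFrame(rows)

# Synthetic example: the true edge is monotone, but finite buckets are noisy.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"model_line": rng.normal(0, 12, n),
                   "market_line": np.zeros(n)})
true_p = 0.5 + np.clip(df["model_line"].abs() / 100, 0, 0.2)
df["win"] = rng.random(n) < true_p
print(strike_rate_by_edge(df))
```

If the 15-25 pt bucket's interval overlaps the 5-15 pt bucket's, the decrease may not be meaningful at all.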