So I'm looking at MLB totals (it doesn't matter which -- runs, runs+hits, runs+errors, etc.).
I want to know the math behind converting/comparing my prediction with the market price.
Here are my thoughts:
What I ultimately want to do is estimate the type of distribution I'm dealing with. Say I'm estimating team A's runs per game, and I decide to model it with a normal distribution (I obviously wouldn't actually use a normal here -- it's just an example). A normal distribution is completely defined by its mean and variance, so I calculate those two parameters for my prediction.
So, it seems, if I want the probability that team A scores more or less than the posted total, I simply integrate my distribution up to that value -- i.e., evaluate its CDF at the total -- to get the under probability (and one minus that for the over), and then compare those numbers to the implied probability of the price I'm getting on the over/under.
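To make that concrete, here's a minimal sketch in Python of the comparison I have in mind. All the inputs (the model's mean/std, the posted total, the -115/-105 prices) are made-up numbers for illustration, and the no-vig step is just the basic normalization of the two implied probabilities:

```python
from scipy.stats import norm

def implied_prob(american_odds):
    """Convert an American price to its implied probability (vig included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

# Hypothetical model output: team A's runs ~ Normal(mu, sigma^2)
mu, sigma = 4.6, 3.1
posted_total = 4.5

# P(under) is the CDF evaluated at the posted total; P(over) is its complement.
# (A half-point total means no push, so the two probabilities sum to 1.)
p_under = norm.cdf(posted_total, loc=mu, scale=sigma)
p_over = 1.0 - p_under

# Made-up market prices: over -115 / under -105.
raw_over, raw_under = implied_prob(-115), implied_prob(-105)

# Strip the vig by normalizing so the market probabilities sum to 1.
overround = raw_over + raw_under
fair_over, fair_under = raw_over / overround, raw_under / overround

print(f"model:  over {p_over:.3f} / under {p_under:.3f}")
print(f"market: over {fair_over:.3f} / under {fair_under:.3f}")
```

If the model's probability beats the no-vig market probability by more than your margin of error, that's the edge; everything interesting is in how you build the distribution.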
This seems pretty straightforward to me (although defining the distribution may get interesting) -- is it this simple? Is this what all the half-point calculators etc. are effectively doing?
MLB scoring has a weird distribution. In any given half-inning, a team has about a 30% chance of scoring at least one run.
I don't remember the exact distributions... but it's pretty close to this: 40% of all scoring half-innings have exactly 1 run; of the remaining 60%, 40% have exactly 2 runs, and so on.
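If you take those numbers at face value, runs per scoring half-inning is a geometric distribution, so it's easy to Monte Carlo a toy full-game distribution and look at its shape. This sketch uses the rough 30%/40% figures above, treats half-innings as independent, and ignores extras and the home team skipping the bottom of the 9th -- a toy for the shape of the distribution, not something calibrated to real league averages:

```python
import numpy as np

rng = np.random.default_rng(0)

P_SCORE = 0.30    # rough chance a team scores at all in a half-inning
P_STOP = 0.40     # given a scoring half-inning, chance it ends at the current run count
HALF_INNINGS = 9  # per team; ignores extras and walk-off situations

def sim_team_runs(n_games):
    """Simulate team runs per game under the toy half-inning model."""
    # Which half-innings produce any runs at all?
    scores = rng.random((n_games, HALF_INNINGS)) < P_SCORE
    # Given scoring, runs are geometric: 40% stop at 1 run,
    # 40% of the rest stop at 2, and so on.
    runs = rng.geometric(P_STOP, size=(n_games, HALF_INNINGS))
    return (scores * runs).sum(axis=1)

n = 200_000
game_total = sim_team_runs(n) + sim_team_runs(n)

posted_total = 8.5
print(f"P(game total > {posted_total}): {(game_total > posted_total).mean():.3f}")
```

The resulting distribution is lumpy and right-skewed, which is exactly why a normal is a poor fit for run totals.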