1. A better way to measure handicapper success

Note: This is not supposed to be a "Think Tank"-style thread but rather one that's meant to be accessible to a wider Players Talk audience. As always feel free to ask any relevant questions.

Handicapper A has a record of 12-68 for +79 units (obviously he's been betting many underdogs), while Handicapper B has a record of 63-17 for +26.5 units (obviously he's betting many favorites).

Based on these records who can we say is more likely to be a +EV handicapper?

Well, the truth is that armed with solely this information we can't say very much at all.

Even if we assume that each bettor was placing only uncorrelated bets (and so, for example, wasn't double counting a bet on a 5-inning line and a bet on a full-game line), there are at least two additional pieces of information that would need to be considered:
1. How much did each handicapper wager on each bet?
2. At what odds were each bet placed?

One concept of great use to statisticians when analyzing data is that of the standard deviation. This refers to the degree of variability within a set of random (or partially random) data that may be attributed to chance.

For example, were you to flip a fair coin 1,000 times, then on average you'd expect to see 500 heads and 500 tails. But this doesn't mean you'd always expect to see exactly that heads/tails breakdown: sometimes you'd see 520 heads and 480 tails; or 488 heads and 512 tails; or every once in a long, long, long, long while 598 heads and 402 tails. In fact you'd only expect to see exactly 500 heads and 500 tails with probability 2.523%, which, while still the single most likely outcome, is nevertheless a big dog to occur (fair odds of about +3864).
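For readers who'd rather check that 2.523% figure in Python than trust me, here's a quick sketch using the exact binomial probability (not part of the original post):

```python
from math import comb

# exact probability of exactly 500 heads in 1,000 fair coin flips:
# C(1000, 500) / 2^1000
p = comb(1000, 500) / 2**1000

# fair US odds against that single most likely outcome
fair_odds = 100 * (1 / p - 1)

print(f"P(exactly 500 heads) = {p:.5%}")  # roughly 2.52%
print(f"fair odds = +{fair_odds:.0f}")
```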

So this is where standard deviation comes into play.

Every random variable is associated with both a "mean" (which is just an "average") and a "standard deviation". The mean tells us the expected value of the random variable, while the standard deviation tells us (loosely speaking) the degree to which we expect that random value to deviate from its mean.

So let's go back to the example of flipping a coin 1,000 times. Obviously, the mean is 500 heads and 500 tails (which is what we'd "expect" to see on average). The standard deviation (and for now, don't worry about where I'm getting this figure) is 15.81 heads.

So what does this standard deviation figure really tell us?

Well, for sufficiently large data sets it allows us to estimate the probability of a given event (or a rarer event) occurring. The way we do this is by formulating what's known as a "Z-score". The formula for a Z-score is given as follows:

Z = (Actual - Expected) / (Standard_Deviation)

So let's calculate the Z-score for each of the 4 heads/tails combinations above:
1. Z(500 heads) = (500 - 500) / 15.81 = 0
2. Z(520 heads) = (520 - 500) / 15.81 ≈ 1.2649
3. Z(488 heads) = (488 - 500) / 15.81 ≈ -0.7589
4. Z(598 heads) = (598 - 500) / 15.81 ≈ 6.1981
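Those four Z-scores can be reproduced with a few lines of Python (a sketch; the exact standard deviation for 1,000 fair flips is sqrt(1000 × 0.5 × 0.5) ≈ 15.81):

```python
import math

mean = 500
sd = math.sqrt(1000 * 0.5 * 0.5)  # ≈ 15.81 heads

def z_score(actual, expected=mean, sigma=sd):
    # Z = (Actual - Expected) / Standard_Deviation
    return (actual - expected) / sigma

for heads in (500, 520, 488, 598):
    print(f"Z({heads} heads) ≈ {z_score(heads):+.4f}")
```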

OK so now we have a bunch of Z-scores. Now what?
We know from what's known as "The Central Limit Theorem" that as random data sets grow sufficiently large, the distribution of their sum approaches what's known as a "Gaussian" or simply a "Normal" distribution. The net effect of this is that we can treat Z-scores obtained from sufficiently large data sets as normally distributed random variables with a mean of 0 and a unit standard deviation.
Back when I was younger I remember our stats text books had pages of tables that converted between Z-scores and probabilities (not to mention the tables related to other commonly used distributions that are beyond the scope of this brief article). Luckily, those of us with MS Excel or OpenOffice Calc (or Google) no longer need to flip to the back of a book every time we encounter a Z-score.

Using the Excel function =NORMSDIST() (which the Excel help files explain, "Returns the standard normal cumulative distribution function. The distribution has a mean of 0 (zero) and a standard deviation of one. Use this function in place of a table of standard normal curve areas.") we can estimate the probabilities associated with each of the 4 Z-scores above:
1. P(500 or more heads) ≈ 1 - NORMSDIST(Z(500 heads)) = 1 - NORMSDIST(0) = 50%
2. P(520 or more heads) ≈ 1 - NORMSDIST(1.2649) ≈ 10.30%
3. P(488 or more heads) ≈ 1 - NORMSDIST(-0.7589) ≈ 77.61%
4. P(598 or more heads) ≈ 1 - NORMSDIST(6.1981) ≈ 0.00000002858%

Note that because the NORMSDIST() function gives us the probability of the specified number or fewer heads, we subtract the resultant value from 1 to give us the probability of the specified number or more heads.
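If you don't have Excel handy, the standard normal CDF can be built from the error function in any language's math library; this sketch reproduces the four probabilities above:

```python
import math

def normsdist(z):
    # standard normal CDF, equivalent to Excel's =NORMSDIST(z)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Z-scores from the coin-flip example above
for label, z in [("500+", 0.0), ("520+", 1.2649), ("488+", -0.7589), ("598+", 6.1981)]:
    # subtract from 1 to get "that many heads or more"
    print(f"P({label} heads) ≈ {1 - normsdist(z):.4%}")
```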
There's actually a potentially easier, if less instructive, way of obtaining the same results using the related Excel function NORMDIST() (which the Excel help files explain, "Returns the normal distribution for the specified mean and standard deviation. This function has a very wide range of applications in statistics, including hypothesis testing."). This allows us to obtain identical results as those above without having to manually calculate a Z-score. The format is =NORMDIST(actual, mean, standard_deviation, TRUE). To wit:
1. P(500 or more heads) ≈ 1 - NORMDIST(500, 500, 15.81, TRUE) = 50%
2. P(520 or more heads) ≈ 1 - NORMDIST(520, 500, 15.81, TRUE) ≈ 10.30%
3. P(488 or more heads) ≈ 1 - NORMDIST(488, 500, 15.81, TRUE) ≈ 77.61%
4. P(598 or more heads) ≈ 1 - NORMDIST(598, 500, 15.81, TRUE) ≈ 0.00000002858%
which are, of course, identical to the values obtained above.
So the next logical question for many sports bettors might be, "How would one use this to compare records between handicappers?"

Well the answer is pretty simple. We calculate a mean and standard deviation for each handicapper based upon his bets, and then using Z-scores determine the probability of obtaining such a record by chance alone.

The mean is the easy part. If we assume no juice, then the expected result for a handicapper is 0 units; in other words, over the long run we expect him to break even. This is actually a less onerous assumption than it might initially appear. Remember, we're not trying to determine whether a handicapper is able to perform slightly better than a coin flipper, but rather how likely he is to be better than a breakeven handicapper. (Of no less importance is the fact that this simplification makes the calculations much easier and allows us to concern ourselves only with the price of each bet, rather than also having to record the price of the opposing side in order to calculate the juice.)

The standard deviation is only slightly trickier. Recall from basic probability that the standard deviation is the square root of what's known as the variance (and that's really all you need to know about variance: sqrt(variance) = standard deviation, and by the same token (standard deviation)^2 = variance). The variance of a single "binary outcome" bet (meaning that the bet can only either win a certain amount or lose a certain amount; we leave pushes out of our analysis) is given by this simple formula:

variance = (bet_size)^2 * (decimal_odds - 1)

(This follows because a breakeven bet of b units at decimal odds d wins b*(d - 1) with probability 1/d and loses b otherwise. To convert from US to decimal odds you can read the refresher in this post, punch up my Odds Converter, or use the US2DEC() function in my VBA Sports Betting template for Excel.)
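For reference, the US-to-decimal conversion is simple enough to write inline. This hypothetical `us2dec` mirrors what any such converter does (it is my own sketch, not the author's US2DEC() function):

```python
def us2dec(us_odds):
    # decimal odds = total return per unit staked
    if us_odds > 0:
        return 1 + us_odds / 100     # e.g. +900 -> 10.0
    return 1 + 100 / abs(us_odds)    # e.g. -200 -> 1.5

print(us2dec(900), us2dec(-200), us2dec(-110))
```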

So let's look at a couple of examples:
1. A 1 unit bet at +400 => variance = 1^2 * (5 - 1) = 4
2. A 1 unit bet at +200 => variance = 1^2 * (3 - 1) = 2
3. A 2 unit bet at -110 => variance = 2^2 * (1.90909 - 1) ≈ 3.63636
4. A 3 unit bet at -500 => variance = 3^2 * (1.2 - 1) = 1.8

To then determine the variance across multiple bets, we simply sum up the variances of each individual bet.

Taking the square root of the sum then yields the standard deviation (which will be either in dollar or unit terms depending on how we choose to measure bet size).

So the total standard deviation of the 4 bets above would be given by:
standard deviation = sqrt(4 + 2 + 3.63636 + 1.8) ≈ 3.38177 units
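Putting the pieces together, a short sketch that reproduces the 3.38177-unit figure (the odds shown are one set consistent with the four variances above, chosen for illustration):

```python
import math

def bet_variance(size, dec_odds):
    # variance of a single binary-outcome bet under the no-juice assumption
    return size**2 * (dec_odds - 1)

# (bet size in units, decimal odds) matching the four example variances
bets = [(1, 5.0), (1, 3.0), (2, 1.90909), (3, 1.2)]

# variances are additive across uncorrelated bets; the SD is the root of the sum
total_var = sum(bet_variance(s, d) for s, d in bets)
print(math.sqrt(total_var))  # ≈ 3.38177 units
```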

So now let's return to our two original handicappers (A & B) from above, Handicapper A with his record of 12-68 for +79 units, and Handicapper B with his record of 63-17 for +26.5 units.

Let's say that the two handicappers respective results were obtained from the following 80 bets:
Code:
```	Bet	Decim.
Odds	Size	Odds	Variance
+900	9	10	729
+900	8	10	576
+900	8	10	576
+900	8	10	576
+900	8	10	576
+900	8	10	576
+900	6	10	324
+900	6	10	324
+900	6	10	324
+900	6	10	324
+900	6	10	324
+900	4	10	144
+900	1	10	9
+900	1	10	9
+900	1	10	9
+900	1	10	9
+900	1	10	9
+800	8	9	512
+800	8	9	512
+800	8	9	512
+800	8	9	512
+800	6	9	288
+800	4	9	128
+800	4	9	128
+800	4	9	128
+800	2	9	32
+800	2	9	32
+700	8	8	448
+700	4	8	112
+700	2	8	28
+700	2	8	28
+700	2	8	28
+600	8	7	384
+600	8	7	384
+600	8	7	384
+600	8	7	384
+600	6	7	216
+600	6	7	216
+600	4	7	96
+600	2	7	24
+500	8	6	320
+500	6	6	180
+500	6	6	180
+500	6	6	180
+500	6	6	180
+500	2	6	20
+400	8	5	256
+400	8	5	256
+400	6	5	144
+400	4	5	64
+400	2	5	16
+400	1	5	4
+300	8	4	192
+300	8	4	192
+300	8	4	192
+300	4	4	48
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	8	3	128
+200	6	3	72
+200	6	3	72
+200	6	3	72
+200	4	3	32
+200	2	3	8
+200	2	3	8
+200	2	3	8
+200	2	3	8
+200	2	3	8
+200	1	3	2
+200	1	3	2
+200	1	3	2
+200	1	3	2
+200	1	3	2
+200	1	3	2
Total Variance:   14,810
Standard Deviation = sqrt(14,810) ≈ 121.696 units```
Code:
```	Bet	Decim.
Odds	Size	Odds	Variance
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-900	9	1.11111	9
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-800	8	1.125	8
-700	7	1.14286	7
-700	7	1.14286	7
-700	7	1.14286	7
-700	7	1.14286	7
-700	7	1.14286	7
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-600	6	1.16667	6
-500	5	1.2	5
-500	5	1.2	5
-500	5	1.2	5
-500	5	1.2	5
-500	5	1.2	5
-500	5	1.2	5
-400	4	1.25	4
-400	4	1.25	4
-400	4	1.25	4
-400	4	1.25	4
-400	4	1.25	4
-300	3	1.33333	3
-300	3	1.33333	3
-300	3	1.33333	3
-300	3	1.33333	3
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
-200	1	1.5	0.5
+100	1	2	1
+100	1	2	1
+100	1	2	1
+100	1	2	1
+100	1	2	1
+100	1	2	1
Total Variance:      334
Standard Deviation = sqrt(334) ≈ 18.276 units```
So calculating the Z-score for handicappers A & B we have:
Z(han. A) = (79 units - 0 units) / 121.696 units ≈ 0.6492
Z(han. B) = (26.5 units - 0 units) / 18.276 units ≈ 1.4500

Converting to probabilities using Excel's NORMSDIST() function yields:
P(obtaining handicapper A's result or better purely by chance) ≈ 1 - NORMSDIST(0.6492) ≈ 25.812%
P(obtaining handicapper B's result or better purely by chance) ≈ 1 - NORMSDIST(1.4500) ≈ 7.353%
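The two-handicapper comparison can be sketched end to end in Python, using the unit totals and standard deviations computed above:

```python
import math

def normsdist(z):
    # standard normal CDF (Excel's =NORMSDIST)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# (units won, standard deviation in units) from the two tables above
records = {"A": (79.0, 121.696), "B": (26.5, 18.276)}

for name, (units, sd) in records.items():
    z = units / sd        # mean is 0 units under the no-juice assumption
    p = 1 - normsdist(z)  # chance of doing this well (or better) by luck alone
    print(f"Handicapper {name}: Z ≈ {z:.4f}, p ≈ {p:.3%}")
```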

So what we see is that a bettor placing the same bets as handicapper A would, purely by chance, obtain the same results as A (or better) about a quarter of the time.

Similarly, a bettor placing the same bets as handicapper B would, purely by chance, obtain the same results as B (or better) a bit less than one time out of every 13.

So anyway, while in the interest of simplicity I've certainly glossed over some very important points, I hope this provides a simple framework through which handicappers' records may be compared and analyzed going forward.

So remember ... next time a handicapper tells you he's 32-30 for +15 units a good response would be, "Oh yeah, what's your Z-score?"

(A couple notes of caution -- Z-scores are less reliable over small sample sizes, tending sometimes to vastly overstate actual significance. They'll also be less reliable if the odds examined span an extremely wide range, especially if there are a small number of bets at very long odds. There are certainly other ways to measure a handicapper's success, but a discussion of those would be beyond the scope of this article.)

2. This makes perfect sense and answers this topic from another thread about a week ago. Nice work.

3. that just hurt my brain.

I like my "variations" or "probability" better. "this or that team sucks and this or that team is better" or "this or that team is playing better and this or that team isn't" and also the "probability's" of players being injured or key players who are injured and/or coaching changes, management changes, turmoil within in a team.
all of those look better to me instead of some math on paper.
If i like it, i bet it.

4. Ganch,

Since handicapper B bets more favorites, it is intuitive that his results would have a smaller variance.

I understand that his result is more highly significant (less likely to be the result of randomness), but don't understand why intuitively. Especially since N = 80, my gut would have told me that capper A would have a higher Z-score.

So the conclusion is that B's numbers are less likely to be the result of randomness, but if I was considering who to tail, shouldn't I also factor in that A is making a much higher profit? I might say "B's results are statistically significant, so he's definitely beating 50%, while A's results are more likely to be the result of randomness but he really raked it in".

6. Maybe N=80 is effectively not that large for the dog bettor because of the fact that he's betting so many +10000 dogs. Even one or two lucky wins might drastically influence his stated +X units result.

7. So if I play the lottery every day for a year, and finally hit it Dec. 31, even though N = 365, it looks like I'm kicking the lottery's ass, but in reality I'm not.

The issue of "whom to tail" is a whole lot more complicated than the simple analysis above, which only seeks to answer the question of how to determine which handicapper's positive results are more likely to be attributable to chance

This "tailing" question is probably better left to the Think Tank, but just in brief, proper analysis would certainly require the utilization of Bayesian inference as well as knowledge of the bettor's (that is the bettor seeking to tail) utility function.

Just to give a very simple example: suppose you had 100% confidence that a particular bettor could successfully pick 1 bet out of a billion paying out at odds of a trillion to 1, for an astounding EV of 99900%, while you had 99% confidence that another bettor could pick games at +100 at a rate of 54% (note that I'm using the term "confidence" in a slightly different sense than above), for a much less impressive EV of 8%. Chances are (assuming any reasonable level of risk aversion) you'd still go with the latter over a handful of bets, despite the significantly higher EV and statistical significance of the former handicapper's results.

Anyway, as I said, this is a decidedly more complex issue, better suited to the Think Tank than Players Talk.

9. make lots of withdrawals. the end.

10. Ganch, I'm still waking up and rubbing my eyes. Are you talking about Z-scores?

11. Ganch,

Why doesn't the same reasoning I used for the dogs apply to the favorites?
If I randomly pick only huge favs of -1000 or more, I'm still going to hit a huge % of them, so you might need a larger sample size again. But, of course, in your example, the guy seems to be betting a lot of favorites, but nothing too chalky.

Definitely the guy who only bets totals ~ -110 and is hitting 70% with a nice +X is going to be highly significant.

12. Indeed, the question of risk neutrality becomes an important issue.

13. Originally Posted by mathdotcom
Indeed, the question of risk neutrality becomes an important issue.
Yes, and what types of drawdowns will you see with that risk. Much like picking a mutual fund manager. Do you want the best % gain? But what risk did you embrace to get those gains?

14. Ganch, don't you think lifestyle should factor into the equation?

To illustrate:
A guy can pick 60% winners ATS for NFL and NHL dogs. This doesn't take him very long at all. Let's say three or four hours a week. The number of plays will be more limited than if he spent more time, but the quality of his selections goes down as he spends more time in search of more plays.

You could say a guy who spends 80 hours a week capping is better than the guy who spends four hours, but is he really, when the latter spends his time sipping pina coladas on the beach surrounded by gorgeous chiquitas?

You're a math genius. Any chance of a betting tool to factor in lifestyle?

15. keep yourself in the game
would you risk a high amount on a stock that had a 40% chance of hitting zero by morning
http://www.freeunderdog.com/bankroll-management.html

16. Bear Stearns comes to mind

17. Originally Posted by mathdotcom
Why doesn't the same reasoning I used for the dogs apply to the favorites?
If I randomly pick only huge favs of -1000 or more, I'm still going to hit a huge % of them, so you might need a larger sample size again.
Right, that's exactly why you need a sufficiently large sample relative to the odds at play.

Remember with Z-scores we're ultimately appealing to the Central Limit Theorem.

18. Assuming the favorite capper bets were constant to-win amount and a third made at -125, a third at -150 and a third at -200 how big should a sample size be for the Z score to be some definition of reliable?

Same question for dog bettor assuming his price range as equally divided between +125, +150, +200 for the same to-win amount?

19. Originally Posted by Dark Horse
Ganch, don't you think lifestyle should factor into the equation?

To illustrate:
A guy can pick 60% winners ATS for NFL and NHL dogs. This doesn't take him very long at all. Let's say three or four hours a week. The number of plays will be more limited than if he spent more time, but the quality of his selections goes down as he spends more time in search of more plays.

You could say a guy who spends 80 hours a week capping is better than the guy who spends four hours, but is he really, when the latter spends his time sipping pina coladas on the beach surrounded by gorgeous chiquitas?

You're a math genius. Any chance of a betting tool to factor in lifestyle?
Perhaps if the goal of this exercise were to determine which of the two were the smarter person or the one with the better quality of life, then yes.

But in order to determine the significance of a betting record, then no.

20. so the answer is if you want to give yourself a chance to get lucky bet longshots

21. Originally Posted by bookie
Assuming the favorite capper bets were constant to-win amount and a third made at -125, a third at -150 and a third at -200 how big should a sample size be for the Z score to be some definition of reliable?

Same question for dog bettor assuming his price range as equally divided between +125, +150, +200 for the same to-win amount?
Picking a number out of the blue I'd say with 20×3 = 60 in each sample you'd find pretty decent results using the above (just to keep it even). But that said, with odds so close in range it's not like you'd need a balanced number of bets at each odds. 20 bets at -125, 2 bets at -150, 10 bets at -200, and 10 bets at +150 would probably work just fine with a Z-test.

Even that would probably be on the conservative side, although with less than that I'd probably recommend the just as easy to implement t-test, which ultimately takes into account fatter tails.

Really the best thing one can do is run a Monte Carlo simulation, and then compare the results to those obtained analytically. But with 60 data points at reasonable odds you should really be fine.
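To illustrate the Monte Carlo idea on a toy example (100 one-unit bets at even odds, rather than the tables above): simulate a breakeven bettor placing the same bets and count how often luck alone matches the observed profit. This is my own sketch of the approach, not code from the post:

```python
import random

def mc_pvalue(bets, observed_units, trials=50_000, seed=7):
    # bets: list of (size, decimal_odds); a breakeven bettor wins each
    # bet with probability 1/odds, so every bet has zero expected value
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        total = 0.0
        for size, odds in bets:
            if random.random() < 1 / odds:
                total += size * (odds - 1)  # win
            else:
                total -= size               # loss
        if total >= observed_units:
            hits += 1
    return hits / trials

# toy example: 100 one-unit bets at +100, observed +10 units (i.e. 55-45)
p = mc_pvalue([(1, 2.0)] * 100, 10)
print(p)  # should land near the exact binomial tail P(X >= 55)
```

Comparing the simulated value against the Z-score approximation for the same bets is exactly the sanity check described above.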

22. Good timing with this post Ganch. Some of the squabbles recently about who is and who is not a top capper were clearly getting out of hand. I think most of the posters here who regularly include their posted picks results in their signature lines etc. would be crushed if they really understood how insignificant their posted picks z-scores were.

To avoid singling people out on this forum who appear to think that impressive results from small samples mean anything, I will use my own results as an example. Personally, I have only ever made 2 posted picks before or at game time on this forum. One was a CFL under play at -114 and the other a reverse RL -1.5 on STL at +354. Both won very easily. So, betting 1 unit (which is the conventional, albeit quite useless, way on this forum) I am 2-0 +4.54 units. Both my record, the units won, and the blowout level of these posted wins mean nothing and IMO anyone giving any credit to these types of numbers (or similar numbers from other posters) should re-read Ganchrow's original post above over and over until they get it. It took me about 18 months of reading Ganchrow's advice before some of it really started to sink in and although much of it is still way over my head I could actually read and understand everything he said in the post above. 18 months ago I would not have understood any of it and would have only had a fleeting idea of what he was talking about. If I can do it, anyone can and should as I would rank the information in the above post as one of the 3 most important things I have ever learned about gambling (the other 2 being the Kelly Criterion and the public's preference for skewness).

23. Originally Posted by Ganchrow
although with less than that I'd probably recommend the just as easy to implement t-test, which ultimately takes into account fatter tails.
Could you give us an Excel formula like 1-NORMSDIST(...) for a small sample (like 1 of each event described above instead of 20 as you suggest) using the t-test method instead of z-test?

24. Originally Posted by Ganchrow
Perhaps if the goal of this exercise were to determine which of the two were the smarter person or the one with the better quality of life, then yes.

But in order to determine the significance of a betting record, then no.
I realize. Still would be nice to have some sort of calculator to determine the ideal balance between quality of life and betting record.

The main reason I brought this up is that sports betting is so easy to get drawn into that, in many cases, it becomes the lifestyle instead of the means to the lifestyle. The misleading principle is that we believe that if we spend more time we'll improve our edge, which is often not the case. Also, by spending more time a person may feel he needs to place bets in order to justify the amount of time spent; a vicious cycle. So it would be cool to be able to enter 'amount of time spent capping' with each wager. Then, at the end of the season, we would have a better idea of where we spend our time most effectively, and where it is wasted. Maybe something for MySBR?

Didn't mean to sidetrack (too much).

25. Originally Posted by VideoReview
Could you give us an Excel formula like 1-NORMSDIST(...) for a small sample (like 1 of each event described above instead of 20 as you suggest) using the t-test method instead of z-test?
A sample size of 1? No.

But for a slightly larger sample size try:

=TDIST(Z, N-1, 1)

where Z is just the Z-score > 0 as calculated as above, and N is the sample size.

And for Z-scores < 0:
=1-TDIST(-Z, N-1, 1)
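For those without Excel, the same one-tailed t probability can be computed with only Python's standard library; this sketch integrates the Student's t density numerically (for Z > 0, mirroring =TDIST(Z, N-1, 1); for Z < 0 use 1 minus the value at -Z, as with the second Excel formula):

```python
import math

def t_pdf(x, df):
    # Student's t probability density with df degrees of freedom
    # (lgamma avoids overflow for moderately large df)
    c = math.exp(math.lgamma((df + 1) / 2) - math.lgamma(df / 2))
    c /= math.sqrt(df * math.pi)
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_tail(z, df, steps=10_000):
    # one-tailed p-value for z > 0, like Excel's =TDIST(z, df, 1):
    # 0.5 minus the integral of the density from 0 to z (Simpson's rule)
    h = z / steps
    s = t_pdf(0, df) + t_pdf(z, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 - s * h / 3

# e.g. handicapper B's Z of 1.45 over 80 bets (df = 79):
# the fatter tails give a slightly larger p-value than the normal-based 7.35%
print(t_tail(1.45, 79))
```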

26. The CLT makes a handful of assumptions, but is a powerful motha****a.

The moral of the story here is give more weight to guys with records whose posted picks are spreads or totals.

27. Originally Posted by mathdotcom
The moral of the story here is give more weight to guys with records whose posted picks are spreads or totals.
No, I'd say the moral of the story as told here is to give more weight to those whose posted picks have high z-scores.

The issue on which I suspect you may be trying to touch is really one of a lack of smoothness across the outcome set.

If you have a whole bunch of picks at, say, even odds and a handful at odds of say ±10,000, then there will tend to be many possible spikes in the outcome set, thereby rendering the CLT, in general, decidedly less effective.

The point is if you have a bunch of different picks all at fairly similar odds then the Gaussian approximation will generally be "good enough".

28. Yes but people here don't know how to calculate Z-scores and even if they can they couldn't be bothered.

So a rough approximation would be to give more weight to guys who are making picks around pk.

Eg. I am 36-34 in MLB totals, +10 units
You are 10-60 in ML betting, +10 units

Your success is more likely to be the result of randomness.

29. Originally Posted by mathdotcom
Yes but people here don't know how to calculate Z-scores
Well that was the whole point of the original post.

Originally Posted by mathdotcom
and even if they can they couldn't be bothered.
Their loss.

Originally Posted by mathdotcom
So a rough approximation would be to give more weight to guys who are making picks around pk.

Eg. I am 36-34 in MLB totals, +10 units
You are 10-60 in ML betting, +10 units

Your success is more likely to be the result of randomness.
And to give even more weight to those betting at shorter odds.

JJGold is 50-20 in ML betting, for +10 units. All else being equal (and assuming his wagers are the same size as both of ours and that he's placing all his bets at roughly similar short odds without any long odds outliers), his results would be even less likely to be the results of chance.

But really the point is that in general there are just too many variables at play to draw meaningful conclusions based solely on W/L + units up/down. I'd much prefer to see handicappers quote their records as something like:

MLB Record: p-value of 11% over W picks
NBA Record: p-value of 24% over X picks
NFL Record: p-value of 48% over Y picks
Total Record: p-value of 31% over Z picks

Dare to dream, I suppose.

30. Originally Posted by mathdotcom
Yes but people here don't know how to calculate Z-scores and even if they can they couldn't be bothered.
I thought it was (wins - losses) / sqrt(sample size).

But if I have to include odds for each wager that makes things far more time-consuming, to the point where I'd wish for another Ganch-tool.

31. as long as you're making money that's the bottom line

32. Hell of a post.

33. Originally Posted by Ganchrow
Remember, we're not trying to determine if a handicapper is able to perform slightly better than a coin flipper, but rather determine how likely he is to be better than a breakeven handicapper.

...

So what we see is that a bettor placing the same bets as handicapper A would, purely by chance, obtain the same results as A (or better) about a quarter of the time.

Similarly, a bettor placing the same bets as handicapper B would, purely by chance, obtain the same results as B (or better) a bit less less than one time out of every 13.
Thanks for this post Ganchrow. Just calculated my z-score for my plays this season. Also, thanks for posting that Excel template.

Just to clarify, when you say "a bettor" and "purely by chance", above, we are talking about the chance that the measured capper would have been outperformed by a breakeven capper, not a coin flipper. So the result of the normsdist(z) is basically the likelihood that the given handicapper is a long-term, better-than-breakeven handicapper. Is this the right way to look at it?

34. Originally Posted by Sinister Cat
Just to clarify, when you say "a bettor" and "purely by chance", above, we are talking about the chance that the measured capper would have been outperformed by a breakeven capper, not a coin flipper.
Yes, correct. We're talking about a breakeven capper, i.e., one betting at zero juice.

Originally Posted by Sinister Cat
So the result of the normsdist(z) is basically the likelihood that the given handicapper is a long-term, better-than-breakeven handicapper. Is this the right way to look at it?
Exactly. This is what's typically relevant. We're generally more concerned with whether a handicapper is statistically better than breakeven, rather than whether he's just statistically better than a bettor paying full juice (although the former would obviously imply the latter).
