How many picks?

  • Bryant248
    Restricted User
    • 12-24-11
    • 2

    #1
    How many picks?
    I know there are threads for this but I can't remember the right term to search for. How many picks would be a reasonable number to have recorded before you can be sure that you're a winning capper? Any advice is appreciated.
  • goblue12
    SBR MVP
    • 02-08-09
    • 1316

    #2
    I can usually tell if someone makes or loses money based on one bet.
    • samserif
      SBR Hustler
      • 09-19-11
      • 63

      #3
      What you're looking for is the binomial proportion confidence interval. "Binomial" means you've got two outcomes, win or lose; "proportion" means that you're looking for the percentage of wins; and "confidence interval" means the range that contains your actual winning percentage (as opposed to your observed winning percentage) given a certain number of bets.

      Here is a calculator for this sort of thing. Let's say you've made 100 bets and gotten 60 wins. You enter the number of trials as 100, number of successes as 60, click, and voila, out comes a table of intervals. It's a table, not an answer, because when you asked "...before you can be sure you're a winning capper", the obvious follow-up question is "What do you mean by 'sure'?"

      When talking about confidence intervals, it's sort of a convention to assume a 95% confidence level. Why 95% and not 90% or 99%? Well, why do sports books typically offer odds at -110? Someone did it and everyone else followed. In the 60 wins out of 100 example, you'll see that the 95% confidence interval ranges from 0.496 to 0.696. This means that for "sure=95%", your actual winning percentage could be under 50% and you've just been very lucky. Or it could be nearly 70% and you've been unlucky. Again, it depends on what you mean by "sure".

      For fun, try 600 wins out of 1000 bets and see how the interval tightens up. Or, try 6 out of 10 and see how a really bad capper can look good over a small number of picks.
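
      If you'd rather compute it yourself than use the calculator, here's a minimal sketch in Python of the simple normal-approximation ("Wald") interval. It isn't necessarily the formula the calculator uses (exact methods like Clopper-Pearson give slightly different endpoints), so treat the numbers as approximate.

      import math

      def wald_interval(wins, bets, z=1.96):            # z = 1.96 for ~95% confidence
          p_hat = wins / bets                           # observed winning percentage
          se = math.sqrt(p_hat * (1 - p_hat) / bets)    # standard error of the proportion
          return p_hat - z * se, p_hat + z * se         # (lower, upper) bounds

      print(wald_interval(60, 100))    # roughly (0.504, 0.696)
      print(wald_interval(600, 1000))  # tighter: roughly (0.570, 0.630)
      print(wald_interval(6, 10))      # very wide: roughly (0.296, 0.904)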
      • tukkk
        SBR Sharp
        • 10-04-10
        • 391

        #4
        take the vig into account too!
        • Chimneyfish
          SBR MVP
          • 09-30-10
          • 1217

          #5
          Originally posted by goblue12
          I can usually tell if someone makes or loses money based on one bet.
          No you can't.
          • samserif
            SBR Hustler
            • 09-19-11
            • 63

            #6
            I just pulled out my copy of Stanford Wong's Sharp Sports Betting because I remembered that he had a section on this very subject. He offers a simpler approach to figuring out how significant your winning percentage is: divide the "excess wins" (number of wins minus number of losses) by the square root of the total number of bets. This is the number of standard errors your record sits away from a coin flip (i.e., a mean of 0.5). When you hit two standard errors, you're at the 95% confidence level. (And yep, a 95% confidence level corresponds to being about two standard errors away from the mean.)

            Here's an example. Suppose you've made 100 bets and won 60 of them. The ratio, i.e. the number of standard errors, is (60-40)/sqrt(100) = 2, which puts you right around 95% confidence. (I'm assuming that the calculator I used in the previous post has some rounding errors, which is why it came out slightly different.)

            But here's something more interesting. His next section is titled "Two Standard Errors is Too Few". This is the area where I ducked in my previous post and said "How sure is enough?" According to Stanford Wong, 95% isn't nearly enough. His advice: if you're backtesting using historical data, hold out for 99.9%. If you settle for 95%, there's still a reasonable chance you're wrong and on your way to making an expensive mistake.
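
            If it helps, here's a rough sketch of that excess-wins rule in Python. The 99.9% cutoff of about 3.29 standard errors (two-sided) is just my reading of "hold out for 99.9%", not a number quoted from either book.

            import math

            def excess_win_z(wins, losses):
                # (wins - losses) / sqrt(total bets): the number of standard
                # errors the record sits away from a 50/50 coin flip.
                return (wins - losses) / math.sqrt(wins + losses)

            z = excess_win_z(60, 40)
            print(z)          # 2.0 -> right around the usual 95% threshold (1.96)
            print(z >= 3.29)  # False -> nowhere near the 99.9% level suggested above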

            By the way, Justin7 makes a similar argument in his book, in the chapter "Handicapper Success Test", using a different approach and offering different suggestions. I highly recommend both books. If you don't find the explanations in one book intuitive, the other one might do the trick.
            • tukkk
              SBR Sharp
              • 10-04-10
              • 391

              #7
              Originally posted by samserif
              The ratio, i.e. the number of standard errors, is (60-40)/sqrt(100) = 2, which puts you right around 95% confidence
              the ratio is 2 indeed, but the confidence level is not 95% because the break-even point is not 50%
              • samserif
                SBR Hustler
                • 09-19-11
                • 63

                #8
                Originally posted by tukkk
                the ratio is 2 indeed, but the confidence level is not 95% because the break-even point is not 50%
                In the example I gave, we can be 95% confident that the true win rate behind a 60-40 record isn't 0.5. I'm just showing the stats; anyone who hopes to apply this to real-world betting needs to account for the vig, as you mentioned earlier. At that point, the math's a bit more difficult. I was just showing that a 3:2 ratio of wins to losses over 100 games isn't enough to proclaim victory, even without including the vig.
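
                Just as a rough sketch (same normal approximation, nothing taken from Wong's book), one way to fold the vig in is to test the record against the -110 break-even rate of 110/210, about 52.4%, instead of 50%:

                import math

                def z_vs_breakeven(wins, bets, lay=110):
                    p0 = lay / (lay + 100)                # break-even win rate at -110 (~0.524)
                    p_hat = wins / bets                   # observed winning percentage
                    se = math.sqrt(p0 * (1 - p0) / bets)  # standard error assuming break-even
                    return (p_hat - p0) / se              # standard errors above break-even

                print(z_vs_breakeven(60, 100))    # ~1.5 -> not even 95% sure you're beating the vig
                print(z_vs_breakeven(600, 1000))  # ~4.8 -> now it's convincing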
                • Dylan
                  SBR Rookie
                  • 12-23-10
                  • 48

                  #9
                  Originally posted by samserif
                  Here's an example. Suppose you've made 100 bets and won 60 of them. The ratio, i.e. the number of standard errors, is (60-40)/sqrt(100) = 2, which puts you right around 95% confidence.
                  I might be missing something but how do you calculate the confidence level from the standard error?
                  • samserif
                    SBR Hustler
                    • 09-19-11
                    • 63

                    #10
                    The way it works is that you start with the standard error, which is the standard deviation of your sampled mean (in this case, your current winning percentage). You get it by taking the standard deviation of the individual outcomes and dividing by the square root of the number of samples.

                    Then you decide on what confidence level you want to use; i.e., how certain do you want to be that your method really works and you haven't just been lucky or unlucky? Your confidence level can be expressed either in terms of standard deviations (s.d.'s) or as a percentage (e.g., 95%).

                    With the standard error and the confidence level, you can compute your margin of error: just multiply the standard error by the confidence level expressed in standard deviations. For example, if you've selected a confidence level of 2 standard deviations (which is a tad greater than 95%), then your margin of error is 2 times your standard error.

                    Finally, you get the confidence interval, which is:

                    (your sampled winning percentage) plus/minus (your margin of error)

                    Here's how to visualize it. Imagine drawing a normal distribution (bell curve) around your current winning percentage. The "width" of the distribution shrinks as the number of samples grows (more samples = better estimate = narrower distribution). It's possible that the real performance of your algorithm -- in other words, the true winning percentage over time -- is much different from your current winning percentage. This would happen if the true percentage were way out in one tail of the distribution. But luckily, we can calculate the probability of that happening and say something like "I know that my winning percentage has a [blah blah blah] chance of being within [blah blah blah] percentage points of my current winning record."

                    It happens that a 95% confidence level corresponds to about 1.96 standard errors. So when Stanford Wong writes that 2 standard errors aren't enough, he's saying that the traditional 95% confidence level isn't good enough to quit your day job.
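
                    If it helps to see the recipe end to end, here's a small Python sketch using the 60-of-100 example and the normal approximation:

                    import math

                    wins, bets = 60, 100
                    p_hat = wins / bets                              # sampled winning percentage (0.6)
                    std_err = math.sqrt(p_hat * (1 - p_hat) / bets)  # standard error (~0.049)
                    z = 1.96                                         # confidence level in standard deviations (~95%)
                    margin = z * std_err                             # margin of error (~0.096)
                    print(p_hat - margin, p_hat + margin)            # confidence interval: ~(0.504, 0.696)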
                    • Dylan
                      SBR Rookie
                      • 12-23-10
                      • 48

                      #11
                      Originally posted by samserif
                      The way it works is that you start with the standard error, which is the standard deviation of your sampled mean (in this case, your current winning percentage). You get it by taking the standard deviation of the individual outcomes and dividing by the square root of the number of samples.

                      Then you decide on what confidence level you want to use; i.e., how certain do you want to be that your method really works and you haven't just been lucky or unlucky? Your confidence level can be expressed either in terms of standard deviations (s.d.'s) or as a percentage (e.g., 95%).

                      With the standard error and the confidence level, you can compute your margin of error: just multiply the standard error by the confidence level expressed in standard deviations. For example, if you've selected a confidence level of 2 standard deviations (which is a tad greater than 95%), then your margin of error is 2 times your standard error.

                      Finally, you get the confidence interval, which is:

                      (your sampled winning percentage) plus/minus (your margin of error)

                      Here's how to visualize it. Imagine drawing a normal distribution (bell curve) around your current winning percentage. The "width" of the distribution shrinks as the number of samples grows (more samples = better estimate = narrower distribution). It's possible that the real performance of your algorithm -- in other words, the true winning percentage over time -- is much different from your current winning percentage. This would happen if the true percentage were way out in one tail of the distribution. But luckily, we can calculate the probability of that happening and say something like "I know that my winning percentage has a [blah blah blah] chance of being within [blah blah blah] percentage points of my current winning record."

                      It happens that a 95% confidence level corresponds to about 1.96 standard errors. So when Stanford Wong writes that 2 standard errors aren't enough, he's saying that the traditional 95% confidence level isn't good enough to quit your day job.
                      Thank you very much for the thorough explanation. It all makes sense now. Happy New Year!