I have a question about the statistical reliability of a collection of small samples that together add up to a large sample.
Suppose you have a system with several (15-20) independent categories of wagers, all based on similar theories. Taken separately, each category is a small sample (roughly 10-20 events per year, some bigger, some smaller), but added together the annual sample size totals in the hundreds, and the aggregate results are similar from year to year. For example, category 1 might underperform in year 2 while category 2 overperforms, and so on down the line; the underperformers are offset by the overperformers, so that when you add up the results of all the categories for a given year they total a 57% winning percentage, and every year comes out about the same, right around 57%.
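This offsetting behavior is exactly what pooling predicts: individual category win rates bounce around a lot on 10-20 events, while the pooled annual rate is far more stable. A minimal simulation sketch, using made-up numbers in the stated ranges (18 categories, 15 events each, a hypothetical common 57% true win probability):

```python
import random
import statistics

random.seed(42)  # reproducible illustration only

N_CATEGORIES = 18     # "15-20 independent categories" (assumed value)
EVENTS_PER_CAT = 15   # "10-20 events/yr" each (assumed value)
TRUE_P = 0.57         # hypothetical common edge shared by all categories
YEARS = 20

category_rates = []   # win rate of every individual category-year
pooled_rates = []     # one pooled win rate per year

for _ in range(YEARS):
    wins = 0
    for _ in range(N_CATEGORIES):
        w = sum(random.random() < TRUE_P for _ in range(EVENTS_PER_CAT))
        category_rates.append(w / EVENTS_PER_CAT)
        wins += w
    pooled_rates.append(wins / (N_CATEGORIES * EVENTS_PER_CAT))

# Per-category rates swing widely; the pooled annual rate clusters near 57%.
print("std dev of category-year win rates:", statistics.pstdev(category_rates))
print("std dev of pooled annual win rates:", statistics.pstdev(pooled_rates))
```

The spread of the pooled rate shrinks roughly with the square root of the total sample size, which is why the annual aggregate can look "remarkably consistent" even while each small category varies year to year.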
So in the aggregate you have remarkably consistent results, built from a collection of small, independent, but logically related categories whose individual results vary from year to year. Do you have a statistically reliable edge here?
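One way to frame the question is: how likely is a pooled record like this if there were no edge at all? A minimal sketch of an exact one-sided binomial test, using illustrative numbers (about 300 pooled bets per year at 57%, tested against a 50% coin-flip null; these specific counts are assumptions, not from the question):

```python
from math import comb

def one_sided_binom_p(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of winning at
    least k of n bets if each bet were an independent coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative season: 171 wins in 300 pooled bets (57%).
n, k = 300, 171
p_value = one_sided_binom_p(n, k)
print(f"one-sided p-value vs. a fair coin: {p_value:.4f}")
```

Two caveats worth noting. First, a 50% null is generous: at typical -110 pricing the break-even win rate is about 52.4%, so the relevant test is against your actual break-even, which weakens the result somewhat. Second, pooling is only as valid as the independence assumption; if the categories share a common theory, a flaw in that theory could hit all of them at once, so the pooled sample may overstate how much independent evidence you really have.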