
NBA Situational Bets, SDQL

#3070


Every day, for maybe a month and a half or so, I have been running a program to go through the Nash Google sheet as well as my own compilation of queries, looking for the active ones for that day. At the end of the day I mark the queries as having won or lost, and store them off in a spreadsheet.

So far I have stored off around 520 query results: around 201 of them were winners and 320 were losers. I am still looking for patterns and trying to determine the best way to weed out the bad queries from the good ones.

But what seems striking to me at this point is that 520 is a pretty large sample. The question I have started asking is: how big a sample would I need before I could be confident in keeping all the queries I have, just as they are, and then simply fading them rather than playing them? I don't think 520 is big enough to justify doing so, but what sample size would? 1,000? 5,000? 10,000?

If I stored off 1,000 of these query results, and they were still losing at a 60% clip, would that sample size justify simply fading them?
Last edited by pip2; 02-08-15 at 09:46 AM.
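As a quick sanity check on the numbers in that post: fading turns the 320 losers into 320 winners out of 520 plays, and the relevant benchmark is the break-even win rate at standard -110 juice (110/210, about 52.38%). A minimal one-sided z-test sketch in Python, assuming every bet is independent and priced at flat -110:

```python
from math import erf, sqrt

def fade_z_test(wins, n, p0):
    """One-sided z-test: is the observed win rate above p0?
    Assumes independent bets; normal approximation to the binomial."""
    p_hat = wins / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail probability
    return p_hat, z, p_value

# Fading the 520 stored results: the 320 losses become 320 wins.
p_hat, z, p = fade_z_test(320, 520, 110 / 210)
print(f"fade rate {p_hat:.1%}, z = {z:.2f}, one-sided p = {p:.2g}")
```

By this test the observed fade record is already about four standard errors above break-even, so if the 520 results were independent draws from a stable process, the sample is big enough; the catch is the "if," which is what the discussion of trends projecting into the future below is about.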
#3071

Default

Quote Originally Posted by pip2
...If I stored off 1,000 of these query results, and they were still losing at a 60% clip, would that sample size justify simply fading them?
I'm doing a somewhat similar thing. First, I'm backtesting all the trends on the spreadsheet for 4+ seasons ('14 YTD, '13, '12, '08, '06). I'm only through the first 100 trends, but that backtest shows really good results (60%+) for all the previous full seasons, while '14 is only 523-495 so far. I'm keeping a running list of trends that seem obviously weak, and I will use this growing list to filter my plays going forward. I'll be curious to see if the numbers stay flat for '14 as I work my way through the remaining 2/3 of the trends. It seems statistically unlikely, but Nash13 may be right to be very suspicious of how the trends project into the future.

In the meantime, I've been using the analyzer software to play all non-conflicting trends with small units. This should theoretically beat the raw backtest, because many conflicting plays are eliminated. Since 1/27 those trends have gone 99-114 (I was actually doing OK until the last two days, which went 11-34).

NHL has been considerably better, going 79-56, +18.5 units since 1/27. I did a three-season backtest there that was also positive, but I didn't do '14 YTD, which I will probably do once I finish with the NBA. It takes a significant time commitment to comb through each season for each trend, so I'm just plugging away as time allows.
I've liked the NHL more all along, as my overall impression of the NBA trends is that we may have too many small trends, which allows for more volatility.
Last edited by Cutler'sThumb; 02-08-15 at 10:40 AM.
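The "running list of weak trends" filter described above is easy to mechanize once each trend's per-season record is tabulated. A minimal sketch, with invented trend names and records (only the filtering logic matters; BREAK_EVEN assumes flat -110 pricing):

```python
# Hypothetical layout: per-trend (wins, losses) by season. The trend
# names and records below are made up for illustration.
BREAK_EVEN = 110 / 210  # ~52.38% needed to profit at -110

trends = {
    "NBA001": {"'12": (34, 20), "'13": (31, 22), "'14": (18, 25)},
    "NBA002": {"'12": (40, 28), "'13": (36, 24), "'14": (29, 19)},
}

def is_weak(seasons, min_rate=BREAK_EVEN):
    """Flag a trend whose win rate fell below break-even in any season."""
    return any(w / (w + l) < min_rate for w, l in seasons.values())

weak = [name for name, seasons in trends.items() if is_weak(seasons)]
print("filter out:", weak)  # NBA001 fails on its '14 record (18-25)
```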
#3072


Quote Originally Posted by Cutler'sThumb
...I'm keeping a running list of trends that seem obviously weak, and I will use this growing list to filter my plays going forward.
One impression I get from reading down my saved list of results is that on one of those days when there are, say, 10 winners and 15 losers, or maybe even 20 losers and 5 winners, certain names are frequently attached to the rare winning queries: Nash, Hiyahya, and Jmon. Their names might be attached to other winners that are simply labeled "NBAXX" as well.

It isn't that simple, because they have a lot of queries in general, so some losing queries are associated with their names too, but I wonder if I might improve my query libraries by just weeding out everything that wasn't approved/assembled by one of those guys...
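That hunch is checkable from the saved spreadsheet before committing to it: group the stored results by author and compare win rates. A sketch with an invented row format and author tags (the real attribution would have to be recovered from the sheet or the query labels):

```python
from collections import defaultdict

# Hypothetical rows from the saved results spreadsheet:
# (query label, author tag, won?). Real rows would come from the sheet.
results = [
    ("NBA12", "Nash", True),
    ("NBA47", "Hiyahya", False),
    ("NBA63", "Jmon", True),
    # ... one row per stored query result
]

record = defaultdict(lambda: [0, 0])  # author -> [wins, losses]
for _label, author, won in results:
    if won:
        record[author][0] += 1
    else:
        record[author][1] += 1

for author, (w, l) in sorted(record.items()):
    print(f"{author}: {w}-{l} ({w / (w + l):.0%})")
```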
#3073


I 100% agree with both posts above. As they would put it in other fields: "It is not about what you have done in the past, it is about what you can do in the future."
So there is no point in finding a 70% ATS trend that will go 50/50 or even worse from now on.
The major point: what can we do about it?
I see the trend sheet as a pool. Not every trend is good or has value; some are overfitted, others are just sound. I apply statistical and logical criteria, then weigh everything together with factors the queries don't show, and then I play the games.
So far I am up 40 units YTD. If it stays like this, I am OK. As long as I am not losing money, everything is fine.
#3074


Quote Originally Posted by nash13
...there is no point in finding a 70% ATS trend that will go 50/50 or even worse from now on.
I agree with you, but along with being fine if I make money, I am also perfectly fine if the queries I currently have consistently lose at a 60% rate, because for me that isn't materially different from winning 60% consistently. My problem is that I am not sure how big a sample size I need before I can declare that the 60% rate is consistent enough to bet against.
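One standard way to put a number on "big enough" is a power calculation: how many bets before a true 60% rate reliably shows up as above the ~52.38% break-even rate at -110. A minimal sketch under those assumptions (independent bets, flat -110 pricing, normal approximation):

```python
from math import sqrt

def bets_needed(p0, p1, z_alpha=1.645, z_beta=0.84):
    """Bets required for a true rate p1 to test as significantly
    above p0 (one-sided alpha = 0.05, power = 0.80, normal approx.)."""
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return (numerator / (p1 - p0)) ** 2

# Fading a 60%-losing set of queries means winning 60%;
# break-even at standard -110 juice is 110/210, about 52.38%.
print(round(bets_needed(110 / 210, 0.60)))  # roughly 262
```

Under those assumptions a few hundred independent bets are enough, so 520 already clears the statistical bar; the harder question is the one nash13 raises, whether the rate stays the same going forward at all.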
#3075


Quote Originally Posted by pip2
...My problem is that I am not sure how big a sample size I need before I can declare that the 60% rate is consistent enough to bet against.
I can't agree with you here.
If a query is good, that means the logic behind it was correct and it brings profit.
If a query is bad, it just means the logic didn't prove itself and the query shouldn't be relied on; it doesn't mean the opposite holds. So even if fading it works in the short run, I wouldn't be as sure about it in the long run as I would be about a query that is good to begin with.