Last edited by JAnthony; 03-06-14 at 11:28 AM.
I've read through the manual you guys are referring to and I'm still not seeing how to use operands. I tried using the Sum feature on points (i.e., per quarter) to do some 1st half / 2nd half comparisons, and it just errors out every time. Maybe this database simply can't provide "points scored in the 2nd half when total first-half points are greater than or equal to ###"?
Has anyone ever done anything along these lines or tried since I mentioned it earlier in the thread? I could really use some help here.
Is it possible to include such parameters as "team's average 3pt shooting %" (not in the past games, but current overall)?
The manual leaves a lot to be desired. The Google group can be a good spot to browse and learn a lot. I believe the baseball and NFL databases are more robust and have more shortcuts than the NBA one.
I haven't ever tried to do what you are attempting, but it would be very interesting to do. As has been mentioned in this thread, a lot can be learned by trial and error.
There was an SDQL posted earlier that might give clues. Let's take a look (I'm new to this too):
H and total>205 and p:HL and po:P3-p:P3 + po:P4-p:P4>14 and season>=2010
The reason I pulled this out is that it shows us the quarter operands (P1/P2/P3/P4). However, I think where you're still stuck is analyzing games as halves... the database just doesn't seem designed for that. What you might end up having to do is just look at p:P1 + P2 total > 120 (or something like that? that's not pulling anything up for me). Then pull up the team's schedule and check the prior games one by one.
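Since SDQL won't do halves directly, the manual pass could be sketched in plain Python once you've written down the quarter scores. Everything below (the records, the threshold) is a made-up illustration, not real SDQL or real data:

```python
# Toy records: (team, q1, q2, q3, q4) points scored per quarter.
games = [
    ("BOS", 31, 29, 25, 22),
    ("MIA", 35, 33, 20, 18),
    ("LAL", 22, 24, 30, 28),
]

THRESHOLD = 60  # minimum first-half points (made-up cutoff)

# Keep games whose first half (Q1 + Q2) met the threshold, and record
# what the team then scored in the second half (Q3 + Q4).
qualifying = [
    (team, q1 + q2, q3 + q4)
    for team, q1, q2, q3, q4 in games
    if q1 + q2 >= THRESHOLD
]
print(qualifying)  # -> [('BOS', 60, 47), ('MIA', 68, 38)]
```

That's exactly the "list of high-scoring first halves as a starting point" idea: SDQL (or a manual export) narrows the game list, and the second-half analysis happens outside the database.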
Oh, believe me, there's been LOTS of trial and error. That's why I'm reaching out...
Thanks for that reference. You might be right about at least looking at the first half and then manually going from there. That won't be TOO bad, because at least I'll have a list of games with first halves with high scoring, which is my starting point...
From the Google group:
AF and line < -3 and line > -10 and Average(margin@team and season) >= 3 and points > 104 and ppoints > 104 and pppoints > 104 and rest < 2 and WP > 64.1 and game number > 16
ATS: 85-29-2 (2.73, 74.6%) avg line: -6.3
Take the Thunder.
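For anyone new to reading these result lines: the win percentage is wins divided by wins plus losses, with pushes left out of the denominator. A quick Python check against the records in this thread confirms that convention:

```python
def ats_win_pct(wins, losses, pushes=0):
    """ATS win rate with pushes excluded from the denominator
    (the convention the posted records appear to use)."""
    return 100 * wins / (wins + losses)

print(round(ats_win_pct(85, 29, 2), 1))    # the 85-29-2 record above -> 74.6
print(round(ats_win_pct(196, 130, 7), 1))  # 196-130-7 -> 60.1
```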
Ben posted this on the group, nice, simple, and makes sense:
A and p:AFL and 1<= rest<=2 and season>=2005
ATS: 196-130-7 (1.71, 60.1%)
With that large a sample it's easy to use it as the basis for a wide variety of drilled down scenarios by adding filters and screwing around, etc. Filters always have to make 'sense', meaning they have to fit in with whatever logic you're basing the entire system on. If they don't, you run the risk that you're simply finding exotic win rates without those win rates carrying on in the future.
Example of just tweaking SDQL to provide a boost to the base query, no logic at play per se, but it doesn't necessarily go against the core query's concept either:
A and p:AFL and rest in [1,2,3] and 67>=game number>=21 and 48<=WP<=80 and P:ats margin<0 and season>=2005
ATS: 65-18-3 (4.91, 78.3%)
Now does any of the above filtering I've done mean that you're guaranteed to exceed the base query's 60% win rate?
Nope.
In fact all you may wind up doing by drilling down like I've presented here is eliminate more than half the games you COULD have been playing on (and winning on) at that previous 60% rate, which means you actually may 'hurt' yourself by the use of filters.
You won't lose more, but you won't win as much as you could have if my data-mined version of the query still performs at the same 60%, i.e., if my refinements don't actually improve anything within the system. In that case you've limited what could have been a much more successful system for your bankroll in the long run, compared to just sticking with the less sexy 60% base query.
And that higher volume of plays at 60% might make up for the other systems you're playing every season that have begun to fall below 52.4%, because as time passes systems erode due to changes in the game, rules, scoring, etc., and you've got to have other newer or less-obsolete/higher-volume systems carrying your overall bankroll.
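To put numbers on that volume argument, assume flat one-unit bets at standard -110 juice (the usual assumption behind the 52.4% break-even figure; the actual lines on these games may differ). Running both records above through the arithmetic shows the 78.3% drilled-down query actually banks fewer total units than the 60.1% base query, purely because it fires on far fewer games:

```python
def units_won(wins, losses):
    # Flat bets at -110: risk 1.1 units to win 1.0 per play (pushes ignored)
    return wins * 1.0 - losses * 1.1

base = units_won(196, 130)     # the 60.1% base query
drilled = units_won(65, 18)    # the 78.3% filtered version

print(round(base, 1), round(drilled, 1))  # -> 53.0 45.2

# Break-even win rate at -110: risk / (risk + win) ~= 52.4%
break_even = 1.1 / 2.1
```

So unless the filters genuinely lift the true win rate rather than just the backtested one, the base query's extra volume is worth more to the bankroll.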
But you never know, and if the logic fits then you may actually improve the performance of the core system through filters, and that's why we do it. So always make sure you tinker around with base queries you find online or develop yourself, because one, it's good practice, and two, you may find stronger versions of your original idea as you play.
Last edited by Mako-SBR; 03-07-14 at 02:45 PM.
After reading this thread, I decided to try this out. Very good stuff; it can really be useful if you can get enough queries together to have plays daily, or as often as possible.
I believe I got one for today: teams following a loss in which they scored 80 points or fewer.
SDQL: AD and p:L and points<=80 and season>=2010
SU: 11-164 (-18.68, 6.3%)
ATS: 24-146-5 (-10.09, 14.1%) avg line: 8.6
O/U: 6-168-1 (-21.51, 3.4%) avg total: 191.0
Pretty good sample size and very strong trends. Basically we are fading a team coming off a loss where they did not score 80 points because they are just 24-146-5 ATS in the next game. Additionally playing the under in those games has amazingly hit at the following rate: (168-6-1) 96.6%
Nice, that's exactly what the thread is for, introducing SBRers to SDQL, helping them learn it, and hopefully benefiting from the systems they create!
You made a mistake in that query though, Jay... you forgot the "p:" on the points<=80, which tells SDQL to look at the team's "previous" points scored. Without it, SDQL only matches games where the team finishes their current game with 80 points or fewer, and that gives a wildly false result.
Corrected query:
AD and p:L and p:points<=80 and season>=2010
SU: 42-164 (-8.57, 20.4%)
ATS: 96-107-3 (-0.88, 47.3%) avg line: 7.7
O/U: 100-101-5 (1.56, 49.8%) avg total: 191.7
Good job for your first attempt though, keep it up.
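To make that bug concrete, here's a toy Python sketch (made-up game log, not real SDQL) of the two conditions. Without the "p:", the filter uses the final score of the very game you'd be betting on, information that doesn't exist at bet time, so it only "selects" games the team already scored 80 or fewer in:

```python
# Toy game log: (prev_result, prev_points, points_this_game)
games = [
    ("L", 75, 102),  # lost last game scoring 75 -> the bettable spot
    ("L", 95, 78),   # lost last game, but scored 95 in it
    ("W", 70, 110),  # won last game, so p:L doesn't match
]

# Wrong (missing "p:"): conditions on THIS game's final score,
# a lookahead that guarantees a wildly lopsided record.
wrong = [g for g in games if g[0] == "L" and g[2] <= 80]

# Right (p:points<=80): conditions on the PREVIOUS game's score,
# which is known before tip-off.
right = [g for g in games if g[0] == "L" and g[1] <= 80]

print(wrong)  # -> [('L', 95, 78)]
print(right)  # -> [('L', 75, 102)]
```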
P:AL and op:margin>10 and p:HW and 65>WP>o:WP and -10<=line<=-5 and H
My logic in this is taking the better team in an away-loss revenge spot. Both teams are coming off wins and the home team is a decent favourite. If you get rid of the 65 cap on WP, it still hits over 60% and gives you a larger sample.
Any input is welcome, thanks.