An introduction to research

#16

I guess this sort of gets to my point. Granted, Python is good for scraping, but what does it have for me as far as modeling sports? If I'm a newbie who doesn't want to learn two languages to do my programming (one to scrape, one to organize my modeling efforts, perhaps R), which one should I use? In my world, well-vetted scientific routines are important. Reinventing statistical wheels might be fun for some, but I'd rather get straight to modeling sports.
#17

Like I said, language is secondary. Choose the one you're comfortable using, be it Python or .net or whatever. For this project, it's still going to be the output that matters. As far as I know, no language is more suited to statistical analysis than another. At least not for a project such as this.

I'm a .NET developer; I would choose C# over Python every day of the week and twice on Sundays... but for someone else the preference is probably the complete opposite. That doesn't mean I can create something that analyzes data more accurately than the other guy.
#20

Quote Originally Posted by Wrecktangle View Post
I guess this sort of gets to my point. Granted, Python is good for scraping, but what does it have for me as far as modeling sports? If I'm a newbie who doesn't want to learn two languages to do my programming (one to scrape, one to organize my modeling efforts, perhaps R), which one should I use? In my world, well-vetted scientific routines are important. Reinventing statistical wheels might be fun for some, but I'd rather get straight to modeling sports.
R is completely different from Python; they do very different things, and both are very valuable to know. Python is more versatile than R in my opinion, and if you were only going to learn one language, Python would be it. However, I don't want to waste more time arguing for Python -- it's mostly a personal choice. You can use any language you'd like; the concepts taught here should still generally apply.
#21

ljump12, I think many are interested to see your write-up. Please continue.

As far as this arguing about what language everybody prefers to code in, please start your own threads and let the man continue with his. Let's allow this to be the educational thread it was meant to be. I am sure everybody is capable of starting their own thread if they feel the need to prove that they are more intelligent and that their technique is better.

Points Awarded:

Pokerjoe gave sharpcat 1 SBR Point(s) for this post.

#22

Quote Originally Posted by MonkeyF0cker View Post
If you'd like to concentrate on only one language, I'd honestly go with C#. There are a lot of things that I do with arrays and structs in my models that would be extremely cumbersome and inefficient in Python.
I think efficiency is kind of a moot point at this stage. We're not dealing with anything in handicapping where efficiency really matters. I've written a Python baseball simulator that processes millions of rows of play-by-play data, and it has no trouble.
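Just to illustrate the kind of thing I mean, here's a rough sketch of a loop that streams a big play-by-play file one row at a time -- the file name and column names are made up for the example, but Python handles this style of processing without breaking a sweat:

Code:

import csv
from collections import defaultdict

## Hypothetical example: the file name and column names below are invented.
## Stream a large play-by-play CSV row by row and tally home runs per batter.
home_runs = defaultdict(int)

with open("mlb_play_by_play.csv", "rb") as f:
    for row in csv.DictReader(f):
        if row["event_type"] == "HR":
            home_runs[row["batter_id"]] += 1

print len(home_runs), "batters hit at least one home run"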
#24

Section D) How to scrape the internet for data

One of the most important aspects of research is the data that you have. Without data, there can't be any model. Fortunately, most data is free -- unfortunately, most data isn't immediately available in computer-parsable formats (like .csv or .xml). To get the data into formats we can use, we will need to "scrape" websites for it.

A couple of "packages" have been created that will greatly improve our ability to scrape webpages. It can certainly be done in Python without them -- but they will make your life a whole lot easier:

Mechanize - This will allow us to open webpages easily (http://wwwsearch.sourceforge.net/mechanize/)
Beautiful Soup - This will allow us to parse apart the webpages (http://www.crummy.com/software/BeautifulSoup/)

Installing Beautiful Soup is pretty easy: just put the Beautiful Soup Python file (http://www.crummy.com/software/Beaut...lSoup-3.0.0.py) in the same directory you are running your code from.

Installing Mechanize is a little tougher. On a *nix machine, cd to the directory where you downloaded it and extract it (tar -xzvf [filename]). Then cd into the extracted directory and install it by typing "sudo python setup.py install". It should install; you can post here if you have any problems. As far as Windows goes, you may be on your own -- I can't imagine it's very tough, and there's probably a tutorial somewhere online.
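If you want to double check that both installs worked, a quick test like this should run without any errors (nothing fancy, just importing the two packages):

Code:

## Quick sanity check: if both imports succeed, mechanize and Beautiful Soup are installed correctly.
import mechanize
from BeautifulSoup import BeautifulSoup

print "mechanize and Beautiful Soup are ready to go"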

Now that the installation is out of the way, it's time to get down to business. I'll give you the basics here, and you should be able to refer to the documentation for more complicated examples. I'm going to assume you have a basic familiarity with html -- if you don't, you may want to search for a quick tutorial. For our first example, let's get a list of today's injuries from statfox for MLB baseball:

PHP Code:


from BeautifulSoup import BeautifulSoup, SoupStrainer  ## This tells python to use Beautiful Soup
from mechanize import Browser   ## This tells python we want to use a browser (which is defined in mechanize)
import re   ## This tells python that we will be using some regular expressions.
            ## .. Regular expressions allow us to search for a sequence of characters
            ## .. within a larger string
import time
import datetime

## The first step is to create our browser..
br = Browser()

## Now let's open the injuries page on statfox. This one line will open and retrieve the html.
response = br.open("http://www.sbrodds.com/StoryArchivesForm.aspx?ShortNameLeague=mlb&ArticleType=injury&l=3").read()

## Now we need to tell Beautiful Soup that we would like to search through the response.
## .. This next line will tell Beautiful Soup to only return links to the individual injuries.
## .. We know that all the links to the injuries have "ShortNameLeague=mlb&ArticleType=injury"
## .. in their url, so we search for these links. Each of these links has a title that describes
## .. the injury, which we will use in the next line.
linksToInjuries = SoupStrainer('a', href=re.compile('ShortNameLeague=mlb&ArticleType=injury'))

## This will put the title of every link matched by "linksToInjuries" into a list.
## .. We then call set() on that list to turn it into a "set", which by definition has no duplicates.
injuryTitles = set([injuryPage['title'] for injuryPage in BeautifulSoup(response, parseOnlyThese=linksToInjuries)])

## Finally, let's print out all the injuries that are for today's date.
today = datetime.date.today()
# the function strftime() (string-format time) produces nice formatting
# All codes are detailed at http://www.python.org/doc/current/lib/module-time.html
date = today.strftime("%m/%d")

## Now let's print out the injuries that we have.
for title in injuryTitles:
    ## See if the date is in the title; if it is, print it.
    if re.search(date, title):
        print title
It might seem like a lot at first, but it's not much code. Take it slow and use Google when you don't know what a function does. Googling "python [some piece of code you don't understand]" will work magic. Ask here and I can further break down any slice of code.
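For example, the SoupStrainer + regular expression combo is probably the least obvious piece, so here's a tiny self-contained sketch of what it's doing. The html below is made up for the example, but the idea is exactly the same as on the real injuries page:

Code:

from BeautifulSoup import BeautifulSoup, SoupStrainer
import re

## Made-up html standing in for the real page: two injury links and one news link.
html = """
<a href="?ShortNameLeague=mlb&ArticleType=injury&id=1" title="06/15 Smith (hamstring) out">link</a>
<a href="?ShortNameLeague=mlb&ArticleType=news&id=2" title="06/15 Trade rumors">link</a>
<a href="?ShortNameLeague=mlb&ArticleType=injury&id=3" title="06/14 Jones (wrist) day-to-day">link</a>
"""

## The strainer only keeps <a> tags whose href matches the regular expression.
injuryLinksOnly = SoupStrainer('a', href=re.compile('ShortNameLeague=mlb&ArticleType=injury'))

## parseOnlyThese means the soup contains nothing but those matching links.
soup = BeautifulSoup(html, parseOnlyThese=injuryLinksOnly)

## Same pattern as above: pull the title attribute off each link.
titles = set([link['title'] for link in soup])
print titles   ## the news link is filtered out; only the two injury titles remain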

Sorry I haven't had much time -- if anyone can post an example of what kind of data they would like scraped, I will create one more example using both BeautifulSoup and Mechanize.
#27

Very nice thread, ljump12. Your elegant Python examples have convinced a Perl programmer to spend a bit more time with Python.

However, I'm conflicted. This is selfish of me, but as sports bettors we have to be selfish when it comes to this. If everyone knows about an edge, then it isn't an edge anymore. Do we really want to be giving everyone these step-by-step instructions on how to research betting trends? The information on how to program web scrapers is widely available, but putting it all down right here has made it significantly easier to learn how to apply it directly to our field.

I'm sure you thought of this before you started this thread -- I guess I'm curious what your thoughts are.
#28

Quote Originally Posted by sycoogtit View Post
Very nice thread, ljump12. Your elegant Python examples have convinced a Perl programmer to spend a bit more time with Python.

However, I'm conflicted. This is selfish of me, but as sports bettors we have to be selfish when it comes to this. If everyone knows about an edge, then it isn't an edge anymore. Do we really want to be giving everyone these step-by-step instructions on how to research betting trends? The information on how to program web scrapers is widely available, but putting it all down right here has made it significantly easier to learn how to apply it directly to our field.

I'm sure you thought of this before you started this thread -- I guess I'm curious what your thoughts are.
This is a very valid concern. Here's the thing, and it's kind of selfish on my part too. I'm not, and probably won't be, a huge sports bettor. It's not that I can't be... it's something I believe I could do well at if I put 100% effort into it, but I don't really want to. Since I'm not doing it, I figure I may as well help other people. You may feel differently about what I'm doing, and I can totally respect that. I guess the bottom line is that, even given these tools and this "tutorial" (if you could call it that), not many people are going to follow through with it, so I wouldn't be too worried.

Finally, one of my biggest hopes for this thread is that it sparks discussion. Please feel free to post on anything related.
#29

I see where both of you guys are coming from, but I tend to agree with ljump here. I don't think we're going to have 100 clowns from Players Talk see this thread and all of a sudden go from looking for the 100-unit lock of the century to setting up web scrapers, churning out databases with 20k samples, and firing away +EV plays into soft spots in the market by Friday. My guess would be that most of the people who have the patience (and intelligence) to read, understand, and apply the lessons in this thread already know how to do this type of programming, or have contacts they share with and get help from.

As long as you aren't attaching databases with +EV models to each post, I think everyone is going to be okay.