1. #1
    peacebyinches
    pull the trigger
    peacebyinches's Avatar SBR PRO
    Join Date: 02-13-10
    Posts: 1,108
    Betpoints: 7802

    handicapping mumbo jumbo

    Well, maybe I can help you out a little; here's something I've been working on and it definitely has some potential...

    Basically, this new model comes with a fuzzy-logic hyperlinked web pipeline (which I have to add here that I owe it all to my ubiquitous open-source presentation simulator!!). After assembling a personal vector-based investment decompressor into an older database I had once toyed with (but gave up on), I was able to convert what I thought was a poor model into a much stronger multi-matrix 3-dimensional algorithm that decreased computational errors by at least 32%. It really came down to a few lines of code after ostream& operators such as "Matrix append (const DiagMatrix &a) const" that expanded simple correlative processes involving money line differentials and pre-game line changes into a model that incorporated variables that analyzed the line changes as a function of time, i.e. certain line changes meant more for certain games when they occurred at certain discrete time intervals.

    If you're curious about this I can go into it some more, but I still need to tweak it a bit before I feel truly confident in what may just be a packet-switched parallel service emulator.

  2. #2
    MonkeyF0cker
    Update your status
    MonkeyF0cker's Avatar Become A Pro!
    Join Date: 06-12-07
    Posts: 12,144
    Betpoints: 1127

    Quote Originally Posted by peacebyinches View Post
    Well, maybe I can help you out a little; here's something I've been working on and it definitely has some potential...

    Basically, this new model comes with a fuzzy-logic hyperlinked web pipeline (which I have to add here that I owe it all to my ubiquitous open-source presentation simulator!!). After assembling a personal vector-based investment decompressor into an older database I had once toyed with (but gave up on), I was able to convert what I thought was a poor model into a much stronger multi-matrix 3-dimensional algorithm that decreased computational errors by at least 32%. It really came down to a few lines of code after ostream& operators such as "Matrix append (const DiagMatrix &a) const" that expanded simple correlative processes involving money line differentials and pre-game line changes into a model that incorporated variables that analyzed the line changes as a function of time, i.e. certain line changes meant more for certain games when they occurred at certain discrete time intervals.

    If you're curious about this I can go into it some more, but I still need to tweak it a bit before I feel truly confident in what may just be a packet-switched parallel service emulator.
    LOL. That's a lot of big words to sound like a moron.

  3. #3
    hutennis
    hutennis's Avatar Become A Pro!
    Join Date: 07-11-10
    Posts: 847
    Betpoints: 3253

    Quote Originally Posted by peacebyinches View Post
    Well, maybe I can help you out a little; here's something I've been working on and it definitely has some potential...

    Basically, this new model comes with a fuzzy-logic hyperlinked web pipeline (which I have to add here that I owe it all to my ubiquitous open-source presentation simulator!!). After assembling a personal vector-based investment decompressor into an older database I had once toyed with (but gave up on), I was able to convert what I thought was a poor model into a much stronger multi-matrix 3-dimensional algorithm that decreased computational errors by at least 32%. It really came down to a few lines of code after ostream& operators such as "Matrix append (const DiagMatrix &a) const" that expanded simple correlative processes involving money line differentials and pre-game line changes into a model that incorporated variables that analyzed the line changes as a function of time, i.e. certain line changes meant more for certain games when they occurred at certain discrete time intervals.

    If you're curious about this I can go into it some more, but I still need to tweak it a bit before I feel truly confident in what may just be a packet-switched parallel service emulator.
    This is good!

    Quote Originally Posted by airattackers View Post
    Your help would be greatly appreciated!!! Send me a personal message..
    Grow up!!!

    If you already have, then go see a doctor ASAP.

  4. #4
    Pot luck
    Pot luck's Avatar Become A Pro!
    Join Date: 05-05-11
    Posts: 40
    Betpoints: 788

    Quote Originally Posted by peacebyinches View Post
    Well, maybe I can help you out a little; here's something I've been working on and it definitely has some potential...

    Basically, this new model comes with a fuzzy-logic hyperlinked web pipeline (which I have to add here that I owe it all to my ubiquitous open-source presentation simulator!!). After assembling a personal vector-based investment decompressor into an older database I had once toyed with (but gave up on), I was able to convert what I thought was a poor model into a much stronger multi-matrix 3-dimensional algorithm that decreased computational errors by at least 32%. It really came down to a few lines of code after ostream& operators such as "Matrix append (const DiagMatrix &a) const" that expanded simple correlative processes involving money line differentials and pre-game line changes into a model that incorporated variables that analyzed the line changes as a function of time, i.e. certain line changes meant more for certain games when they occurred at certain discrete time intervals.

    If you're curious about this I can go into it some more, but I still need to tweak it a bit before I feel truly confident in what may just be a packet-switched parallel service emulator.
    Nice.

  5. #5
    MonkeyF0cker
    Update your status
    MonkeyF0cker's Avatar Become A Pro!
    Join Date: 06-12-07
    Posts: 12,144
    Betpoints: 1127

    Quote Originally Posted by peacebyinches View Post
    ???
    Uhh. Really? LOL.

    1. "Matrix append (const DiagMatrix &a) const" is an Octave class method. It is not an overloaded ostream& operator.
    2. Your words seem to imply (since it's mostly empty, bumbling jargon) that you're attempting to apply vector autoregression (or at least time series analysis) to find inefficiencies between moneylines and spreads. That's like building an F-16 so you can fly to your neighbor's house.
    3. 3-dimensional vectors (matrices) are trivial entities in virtually any compiled language such as C++ (see std::vector). The entire append method that you listed is a whole 6 lines of code without error checking.

    Code:
    Matrix
    Matrix::append (const DiagMatrix& a) const
    {
      octave_idx_type nr = rows ();
      octave_idx_type nc = cols ();

      if (nr != a.rows ())
        {
          (*current_liboctave_error_handler) ("row dimension mismatch for append");
          return *this;
        }

      octave_idx_type nc_insert = nc;
      Matrix retval (nr, nc + a.cols ());
      retval.insert (*this, 0, 0);
      retval.insert (a, 0, nc_insert);
      return retval;
    }
    Any more questions?

    P.S. - Do you have any idea what packet switching is? I guess I already know the answer.
    Last edited by MonkeyF0cker; 03-19-12 at 06:37 PM.
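    The std::vector point above can be sketched in a few lines; the dimension names below (games x time steps x variables) are purely illustrative, not anything from the thread:

    ```cpp
    #include <cassert>
    #include <vector>

    int main() {
        // A "3-dimensional matrix" is just a nested std::vector in C++.
        std::size_t games = 4, steps = 2, vars = 3;
        std::vector<std::vector<std::vector<double>>> cube(
            games, std::vector<std::vector<double>>(
                       steps, std::vector<double>(vars, 0.0)));
        cube[0][1][2] = -110.0;  // e.g. a line recorded at one time step
        assert(cube[0][1][2] == -110.0);
        assert(cube.size() == 4 && cube[0].size() == 2 && cube[0][0].size() == 3);
        return 0;
    }
    ```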

  6. #6
    peacebyinches
    pull the trigger
    peacebyinches's Avatar SBR PRO
    Join Date: 02-13-10
    Posts: 1,108
    Betpoints: 7802

    I think I see where some of the confusion is coming from, partly my fault as I was in a rush when I wrote the previous summary.

    By utilizing incremental latency compilers and coupling a transmutable texture-mapped workgroup visualizer (basically an extensible design encoder will work) to an unsupervised learning algorithm, I managed to qualify specific situations where locally weighted regression was necessary. This is really nothing more than a non-parametric learning algorithm with asynchronous properties. You mentioned that some of these programming methods are overkill, but the notion that computing methodologies collude with typical sensitivities will soon be a ubiquitous conception. There really is potential for introspective algorithms and multimodal epistemologies to confer with 64-bit architectures in a way that is modular, omniscient, and reliable.

    To get down to the specifics, by modeling continuous dimensional variables with typical matrix notation (and eigenvector ordinary differential equations):
    G^{-1}_{ij}(n) = [n/\log n]
    X'' = -\frac{\omega^2}{c^2} X and T'' = -\omega^2 T
    It makes it easier to distinguish the random error (noise) that seems to plague every handicapping formula. Creating a workable version of this kind of probabilistic algorithm has been far from easy but as I continue to debug my code I see great potential for this and similar syntheses of game-theoretic modalities.
    And yes, one of the programs I was using was Octave (basically MATLAB).

  7. #7
    Jayvegas420
    Vegas Baby!
    Jayvegas420's Avatar SBR PRO
    Join Date: 03-09-11
    Posts: 28,154
    Betpoints: 14709

    So does this mean you weight the line changes vs. the amount (or duration) of time in the line change,
    then weight those results against the outcome of the game?

  8. #8
    hutennis
    hutennis's Avatar Become A Pro!
    Join Date: 07-11-10
    Posts: 847
    Betpoints: 3253

    Quote Originally Posted by peacebyinches View Post
    I think I see where some of the confusion is coming from, partly my fault as I was in a rush when I wrote the previous summary.

    By utilizing incremental latency compilers and coupling a transmutable texture-mapped workgroup visualizer (basically an extensible design encoder will work) to an unsupervised learning algorithm, I managed to qualify specific situations where locally weighted regression was necessary. This is really nothing more than a non-parametric learning algorithm with asynchronous properties. You mentioned that some of these programming methods are overkill, but the notion that computing methodologies collude with typical sensitivities will soon be a ubiquitous conception. There really is potential for introspective algorithms and multimodal epistemologies to confer with 64-bit architectures in a way that is modular, omniscient, and reliable.

    To get down to the specifics, by modeling continuous dimensional variables with typical matrix notation (and eigenvector ordinary differential equations):
    G^{-1}_{ij}(n) = [n/\log n]
    X'' = -\frac{\omega^2}{c^2} X and T'' = -\omega^2 T
    It makes it easier to distinguish the random error (noise) that seems to plague every handicapping formula. Creating a workable version of this kind of probabilistic algorithm has been far from easy but as I continue to debug my code I see great potential for this and similar syntheses of game-theoretic modalities.
    And yes, one of the programs I was using was Octave (basically MATLAB).
    This is even better. On so many LEVELS

  9. #9
    MonkeyF0cker
    Update your status
    MonkeyF0cker's Avatar Become A Pro!
    Join Date: 06-12-07
    Posts: 12,144
    Betpoints: 1127

    Quote Originally Posted by peacebyinches View Post
    I think I see where some of the confusion is coming from, partly my fault as I was in a rush when I wrote the previous summary.

    By utilizing incremental latency compilers and coupling a transmutable texture-mapped workgroup visualizer (basically an extensible design encoder will work) to an unsupervised learning algorithm, I managed to qualify specific situations where locally weighted regression was necessary. This is really nothing more than a non-parametric learning algorithm with asynchronous properties. You mentioned that some of these programming methods are overkill, but the notion that computing methodologies collude with typical sensitivities will soon be a ubiquitous conception. There really is potential for introspective algorithms and multimodal epistemologies to confer with 64-bit architectures in a way that is modular, omniscient, and reliable.

    To get down to the specifics, by modeling continuous dimensional variables with typical matrix notation (and eigenvector ordinary differential equations):
    G^{-1}_{ij}(n) = [n/\log n]
    X'' = -\frac{\omega^2}{c^2} X and T'' = -\omega^2 T
    It makes it easier to distinguish the random error (noise) that seems to plague every handicapping formula. Creating a workable version of this kind of probabilistic algorithm has been far from easy but as I continue to debug my code I see great potential for this and similar syntheses of game-theoretic modalities.
    And yes, one of the programs I was using was Octave (basically MATLAB).
    How in the world were you using the Octave binary application while also using ostream (C++) operators? LOL.

    Let me give you a hint. You couldn't unless you were using the Octave C++ classes. You gonna change your mind now?

    64-bit architecture allows for OMNISCIENT learning algorithms? Ok. LOL. Try again.

    What in the world does texture mapping (especially visualization) have to do with sports statistics eigenvector determination?

    Funny that you go straight to matrices and eigenvectors and paste some worthless formulas into a post, yet you leave out something incredibly important - like what your supposed algorithm is actually attempting to learn.

  10. #10
    peacebyinches
    pull the trigger
    peacebyinches's Avatar SBR PRO
    Join Date: 02-13-10
    Posts: 1,108
    Betpoints: 7802

    I apologize if I am not explaining the particular outcome-based experiential processes of this proposed system properly; I can be a bit scatterbrained at times! I tend to visualize these kinds of problems in broad paradigms.

    Perhaps my explanation encompassed some extensible conjectures; I am just reinforcing why it is that effective supervised learning algorithms must be able to recontextualize cross-curricular networks. Implementing high-capacity scriptable interfaces in a manner that distinctly operationalizes each variable is necessary in formulating sensible, value based models. Simply deploying research-based convergence protocols whose sole purpose is to disaggregate competency-based paradigms only expedites developmentally inappropriate system-intelligences.

    These processes can elucidate the actual value of specific moneylines when they network distributed relevance discrepancies across the dimensions of vigorish, value, and the CHANGE of value over the added dimension of time. Adding extra parameters requires matrix interface functionalities to deliver Bayesian queries that ultimately yield +EV across the multi-nodal networks.
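    Stripped of the jargon, the one concrete idea in there - moneyline value net of vigorish - is simple arithmetic. A minimal C++ sketch, where the -150/+130 market is hypothetical and proportional devigging is just one of several possible methods:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <iostream>

    // Implied probability of an American moneyline, vig included.
    double implied_prob(double ml) {
        return (ml < 0) ? -ml / (-ml + 100.0) : 100.0 / (ml + 100.0);
    }

    int main() {
        // Hypothetical two-way market: -150 / +130.
        double p1 = implied_prob(-150.0);   // 0.60
        double p2 = implied_prob(130.0);    // ~0.4348
        double overround = p1 + p2;         // > 1.0; the excess is the vig
        double q1 = p1 / overround;         // no-vig probabilities
        double q2 = p2 / overround;
        std::cout << "no-vig: " << q1 << " / " << q2 << "\n";
        assert(std::fabs(q1 + q2 - 1.0) < 1e-12);
        assert(overround > 1.0);
        return 0;
    }
    ```

    Comparing the no-vig probabilities over time against closing numbers is the unglamorous version of tracking "the CHANGE of value over the added dimension of time."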

  11. #11
    MonkeyF0cker
    Update your status
    MonkeyF0cker's Avatar Become A Pro!
    Join Date: 06-12-07
    Posts: 12,144
    Betpoints: 1127

    Quote Originally Posted by peacebyinches View Post
    I apologize if I am not explaining the particular outcome-based experiential processes of this proposed system properly; I can be a bit scatterbrained at times! I tend to visualize these kinds of problems in broad paradigms.

    Perhaps my explanation encompassed some extensible conjectures; I am just reinforcing why it is that effective supervised learning algorithms must be able to recontextualize cross-curricular networks. Implementing high-capacity scriptable interfaces in a manner that distinctly operationalizes each variable is necessary in formulating sensible, value based models. Simply deploying research-based convergence protocols whose sole purpose is to disaggregate competency-based paradigms only expedites developmentally inappropriate system-intelligences.

    These processes can elucidate the actual value of specific moneylines when they network distributed relevance discrepancies across the dimensions of vigorish, value, and the CHANGE of value over the added dimension of time. Adding extra parameters requires matrix interface functionalities to deliver Bayesian queries that ultimately yield +EV across the multi-nodal networks.
    More babbling about nothing. You'd think you were paid by the syllable with your ridiculous posts. Are we using supervised learning algorithms now or unsupervised? Are you confusing yourself with your own posts?

    In summary, see my first post. Bayesian time series analysis of spread/ML markets. Have fun wasting your time with that.
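    For anyone following along, the "time series analysis" being mocked here can start far simpler than the thread suggests - for example, a least-squares fit of an AR(1) model x[t] = phi * x[t-1] + noise. A toy C++ sketch; the data is invented and fit_ar1 is an illustrative name, not anything from the thread:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <vector>

    // Least-squares estimate of phi in the AR(1) model x[t] = phi * x[t-1] + e.
    double fit_ar1(const std::vector<double>& x) {
        double num = 0.0, den = 0.0;
        for (std::size_t t = 1; t < x.size(); ++t) {
            num += x[t] * x[t - 1];      // sum of x[t] * x[t-1]
            den += x[t - 1] * x[t - 1];  // sum of x[t-1]^2
        }
        return num / den;
    }

    int main() {
        // Invented series of line moves that decays by exactly half each step.
        std::vector<double> line_moves = {1.0, 0.5, 0.25, 0.125};
        double phi = fit_ar1(line_moves);
        assert(std::fabs(phi - 0.5) < 1e-9);
        return 0;
    }
    ```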

  12. #12
    Jayvegas420
    Vegas Baby!
    Jayvegas420's Avatar SBR PRO
    Join Date: 03-09-11
    Posts: 28,154
    Betpoints: 14709

    So it's not just the line movements and the time it takes them to move; in addition to that, you still need to account for variables such as player performances and team percentages within the game itself?
    Am I even close here?
    When you talk about variables I sort of lose you, because as I understand it, variables can be finite if you choose them, or they could be infinite if you are trying to incorporate all variables into your system or learning algorithm.
    I guess what I'm politely trying to say is that your posts are fairly useless if they can't be properly interpreted by the masses.
    I tend to babble a lot myself, but I feel I can get my point across most of the time.

    Maybe the question to be asked here is: regardless of the formula or algorithm or regression method you are executing, what are you trying to learn?

    Maybe you could explain that more easily, because as far as I can tell, you are trying to predict the likeliest outcome for a game in which the line moves in a certain direction, at a certain time, during a certain time frame?
    I don't know, throw me a bone here.
