OT -- Monty Hall-type problem

  • Ganchrow
    SBR Hall of Famer
    • 08-28-05
    • 5011

    #1
    OT -- Monty Hall-type problem
    This is an old one. Interesting nevertheless:

    A person has just won a prize on a game show. He's allowed to select one of two identical boxes and will win whatever's inside. He's told that both boxes contain cash, but that one box contains twice as much cash as the other.

    He opens a box, and discovers it contains $9,000. He is given the opportunity to pay $1,000 and switch boxes (this same option would exist no matter what was in the box). Should he do so?
  • SBR_John
    SBR Posting Legend
    • 07-12-05
    • 16471

    #2
    No. He takes the cash and hammers the Spurs on the ML. Heck, that was easy.
    Comment
    • Ganchrow
      SBR Hall of Famer
      • 08-28-05
      • 5011

      #3
      Originally posted by SBR_John
      No. He takes the cash and hammers the Spurs on the ML. Heck, that was easy.
      Let's assume he actually likes money.
      Comment
      • Razz
        SBR Hall of Famer
        • 08-22-05
        • 5632

        #4
        Yes, of course. Assuming there is a 50% chance the other box holds $18K and a 50% chance the other box holds 4.5K, his possible outcomes are +8K and -5.5K. Even non-gamblers would be insane to pass that up.
        Comment
        • SBR_John
          SBR Posting Legend
          • 07-12-05
          • 16471

          #5
          Originally posted by Ganchrow
          Let's assume he actually likes money.
          haha! good one.
          Comment
          • LT Profits
            SBR Aristocracy
            • 10-27-06
            • 90963

            #6
            The YES seems obvious as you are getting about 7/5 on an even money prop. Am I missing something?
            Comment
            • Lucas
              SBR MVP
              • 12-20-05
              • 1062

              #7
              Originally posted by Razz
              Yes, of course. Assuming there is a 50% chance the other box holds $18K and a 50% chance the other box holds 4.5K, his possible outcomes are +8K and -5.5K. Even non-gamblers would be insane to pass that up.
              it depends on his utility function...
              if that guy needs let say 6K because he must pay surgery unless he dies then rational is for sure no switch
              Comment
              • Korchnoi
                SBR Sharp
                • 10-20-06
                • 406

                #8
                I vote No, in principle.

                In reality, I'd want to know what the average value of a prize is for this gameshow so I can make a guess at the possible distribution. If the show was on network TV, I'd pay a grand to switch. If the show were like non-prime-time on the game show network, I'd probably keep it.
                Comment
                • Ganchrow
                  SBR Hall of Famer
                  • 08-28-05
                  • 5011

                  #9
                  Originally posted by Korchnoi
                  In reality, I'd want to know what the average value of a prize is for this gameshow so I can make a guess at the possible distribution.
                  You have no a priori knowledge of the distribution whatsoever, but you do know that the only prize quantities that could be offered would both be non-negative and real.
                  Comment
                  • LT Profits
                    SBR Aristocracy
                    • 10-27-06
                    • 90963

                    #10
Korchnoi just opened my eyes a bit!

I didn't factor in how often the show has given away 18K in the past as opposed to 4.5K. If the split has historically been about 50/50, then I would obviously still say YES as my even money prop holds true. However, if the smaller prize is given away at least 61.5% of the time (8 out of 13), then I say NO because I am still getting 7/5 on a prop that historically should be at least 8/5.
                    Comment
                    • Ganchrow
                      SBR Hall of Famer
                      • 08-28-05
                      • 5011

                      #11
                      Originally posted by LT Profits
                      I didn't factor in how often the show has given away 18K in the past as opposed to 4.5K.
                      This is the first time the show has ever done this (or anything like it), so there's no historical data to examine.
                      Comment
                      • sportsfanatic
                        SBR MVP
                        • 03-10-07
                        • 3967

                        #12
                        Originally posted by Ganchrow
                        This is an old one. Interesting nevertheless:

                        A person has just won a prize on a game show. He's allowed to select one of two identical boxes and will win whatever's inside. He's told that both boxes contain cash, but that one box contains twice as much cash as the other.

                        He opens a box, and discovers it contains $9,000. He is given the opportunity to pay $1,000 and switch boxes (this same option would exist no matter what was in the box). Should he do so?
                        No. He should just stay with whatever he first chooses. You get a 50/50 chance to land on whichever amount is double the other on your first pick. Switching after the fact is a waste of time. Switching boxes would be just like choosing the other box on your first pick. Meaning the odds were and always will be 50/50 that you pick the box with the greater amount.
                        Comment
                        • tacomax
                          SBR Hall of Famer
                          • 08-10-05
                          • 9619

                          #13
                          Originally posted by sportsfanatic
                          No. He should just stay with whatever he first chooses. You get a 50/50 chance to land on whichever amount is double the other on your first pick. Switching after the fact is a waste of time. Switching boxes would be just like choosing the other box on your first pick. Meaning the odds were and always will be 50/50 that you pick the box with the greater amount.
                          I'm plumping for that option. But I'm having a hard time trying to prove it mathematically.
                          Comment
                          • tacomax
                            SBR Hall of Famer
                            • 08-10-05
                            • 9619

                            #14
                            Now I'm moving the other way. I'll sit down and do it later.
                            Comment
                            • TLD
                              SBR Wise Guy
                              • 12-10-05
                              • 671

                              #15
                              [I talk about the Wikipedia article on this problem below, so if that constitutes a “spoiler” and you prefer to continue working on the problem, you may want to skip this post.]

                              I’m a “no-changer” myself.

                              I think of it as one of those problems where math and logic pull in different directions, and almost everyone assumes that math “wins” in such a case.

                              The math side would be that as long as the amount in the chosen box, call it Box A, is high enough—which it certainly is if we stipulate that it contains $9,000—then:

                              {[(.5 * 2A) + (.5 * .5A)] - $1,000} > A

That is, there is a 50% chance you’ll improve your position by the A amount (in this example, $9,000) and a 50% chance you’ll worsen your position by half-A (in this case $4,500), so on average you are gaining $2,250 by making the switch, which is more than the $1,000 it costs.
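(A quick numeric sketch of that calculation — the figures are just the $9,000 example from the problem, and the 50/50 split is the very assumption the rest of this post questions:)

Code:
# Naive expected-value check of the "switch" argument, assuming a 50/50
# chance that the other box holds double or half the observed amount.
A = 9000        # amount found in the chosen box
fee = 1000      # cost of switching

ev_other = 0.5 * (2 * A) + 0.5 * (A / 2)   # 11250.0
gross_gain = ev_other - A                  # 2250.0
net_gain = gross_gain - fee                # 1250.0
print(ev_other, gross_gain, net_gain)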

                              For the logic side, imagine there are two contestants, it is stipulated that each box contains a minimum of $9,000, they are each assigned a box by the flip of a coin, and then, before opening either, each is given the option to receive the amount of money in their box, or to receive the amount of money in the other box, with, again, a fee of $1,000 to switch.

                              Well, as above, for any A in the specified range:

                              {[(.5 * 2A) + (.5 * .5A)] - $1,000} > A

                              But equally for any B in the specified range:

                              {[(.5 * 2B) + (.5 * .5B)] - $1,000} > B

                              Which would mean that from the standpoint of the first contestant, B is the better box to take, and from the standpoint of the second contestant, A is the better box to take. But logically that is a contradiction, as it cannot be the case that the grass is greener on both sides of the fence. All they’d be doing by switching is pointlessly rebating $2,000 in prize money back to the show.
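(A small simulation of that two-contestant version — a sketch only, using the $9,000 stipulation and $1,000 fee from above, with the pair of boxes fixed before the coin-flip assignment:)

Code:
import random

# Two contestants, one fixed pair of boxes (X and 2X), assigned by coin flip.
# Both pay the $1,000 fee and take each other's box.
def trial(x=9000, fee=1000):
    boxes = [x, 2 * x]
    random.shuffle(boxes)
    a, b = boxes
    return (b - a - fee), (a - b - fee)   # each contestant's gain from switching

n = 200000
gains_a, gains_b = zip(*(trial() for _ in range(n)))
print(sum(gains_a) / n, sum(gains_b) / n)  # both come out near -1000: together they
                                           # just rebate $2,000 back to the show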

                              So there has to be a problem with one analysis or the other, either with the math that says switching gives you greater expected value, or with the logic that says whatever makes it wise or unwise to switch from A to B would apply equally to switching from B to A and both switches cannot be advantageous.

                              Maybe just because I’m more a logic guy than a math guy, I find the second analysis more compelling, and thus approach the problem with the assumption that the first analysis is somehow flawed.

                              And I label it an assumption, because it is not the case that I first discovered a flaw and then made my choice. Instead, for me it is analogous to seeing a magician saw a lady in half on stage. I don’t disbelieve the apparent event (that he really sawed her in half) as a result of spotting the “trick;” I assume there’s a trick because I already disbelieve the apparent event.

                              Or like Zeno’s paradoxes. If he “disproves” motion, for example, the challenge becomes figuring out the trick or error in his reasoning, since I (and I assume most people) will reject his conclusion even before discovering any such trick or error, rather than as a result of discovering it.

                              So I thought through various possibilities for rejecting the math. Interestingly, I probably spent more time on the one that was later posted here—namely, that there isn’t really a 50-50 chance of the other box being double or half, since the amount in the first box and your knowledge of game shows in general or this game show in particular or whatever constitutes empirical evidence that needs to be taken into account—than any other. But I’m fairly sure some methodology can be specified for determining the amount in the boxes—e.g., write down every number from $20,000 to $50,000 in increments of $1,000, pick one from a hat, put that amount in one box, flip a coin, if heads put double that amount in the other box and if tails put half that amount in the other box, present boxes to contestant, etc.—such that all those empirical considerations are eliminated. And even if not, I would sense it’s still “cheating” in a sense to go this route, as you are evading the paradox rather than solving it.
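(As an aside, here is a rough simulation of that hat-draw procedure — the $20,000–$50,000 range, $1,000 increments, and coin flip are just the example described above. It only checks the blind, always-switch policy, which gains nothing on average before any fee:)

Code:
import random

# Hat-draw procedure: base amount uniform over $20,000..$50,000 in $1,000 steps,
# a coin flip decides whether the second box holds double or half, and the
# player takes a box at random and always switches.
def switch_gain():
    base = random.randrange(20000, 51000, 1000)
    other = base * 2 if random.random() < 0.5 else base / 2
    boxes = [base, other]
    random.shuffle(boxes)
    mine, theirs = boxes
    return theirs - mine

n = 500000
print(sum(switch_gain() for _ in range(n)) / n)   # hovers around 0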

                              Anyway, I thought of various things like that, and none of them unambiguously “worked.” I ended up with several “maybe” solutions that seemed to have at least some small promise, but I didn’t want to devote even more time to truly thinking them through, and I suspected I might need to know more math to properly assess them anyway.

                              So I decided to cheat and see if I could find online what the “correct” answer was, and if it turned out to be a more properly developed version of anything I had speculated thus far.

                              I managed to find a version of this riddle on Wikipedia, and found the discussion quite interesting and surprising. As I understand it (I know I’m oversimplifying here, but I hope I’m not also misrepresenting the article, as some of the math and such I kind of brushed over because I was too lazy to try to wrestle with every detail), the consensus is that:

                              1. You’re not really helping yourself by switching, and

                              2. No one’s been able to formally articulate and prove #1.

                              Several attempted solutions are discussed—which overlapped considerably with my speculations—but for each it was explained how the problem could be plausibly reformulated to close such loopholes.

                              One radical approach that’s noted, but not explained or assessed, is to reject the whole theory of mathematical probability the initial analysis is based on—the “Bayesian” approach—in favor of a “frequency” model. I gather the “frequency” model would leave the probabilities in this problem somehow undefined or incalculable and thus there is no paradox to be overcome. But it didn’t go into why, and I’m not well versed enough in mathematics and probability theory to understand why that would be the case, or to have an opinion about whether a Bayesian or frequency interpretation of probability is more justified in general.

                              So I guess I’m glad I didn’t think it through even more, because it looks like those that did really didn’t advance much beyond my position of “I’m convinced that it can’t be advantageous to switch and thus there’s a problem with the math, but damned if I can explain why.”

                              (As a final note, I will say I remain open-minded to the “switch” strategy. I say this in part because I was in a similar position as regards the better known “Monty Hall” problem—the one where there are three doors, one good and two bad, you make your choice, Monty reveals one of the other doors instead, which is a bad one, and gives you the option to now switch to the other unopened door—and I finally grasped why you’re supposed to switch. For the longest time, my assessment was “I see why mathematically you should switch—just like I see that 1.25A > A in the current problem—but since it makes no sense to me logically that switching could help you, I’m going to stick to my guns and assume there’s some problem with the math.” But after browsing further in Wikipedia last night and reading about the three door problem, I think I finally get it. Now instead of math pulling me in one direction and logic in the other, I see how they both favor switching and so there is no real paradox.

                              So it’s certainly possible I’ll eventually see this one the same way. But for now I remain a “no-changer.”)
                              Comment
                              • PeterWellington
                                SBR Rookie
                                • 12-20-06
                                • 49

                                #16
                                I wouldn't switch.

                                You're essentially paying a $1K fee for something you could have selected in the first place. The fact that you know a specific amount in one box is irrelevant because it doesn't give you any new information (other box is still either half or double).
                                Comment
                                • Ganchrow
                                  SBR Hall of Famer
                                  • 08-28-05
                                  • 5011

                                  #17
                                  Very, very nicely explained, TLD.

                                  In reality this isn't so much a problem but rather more of a paradox. As you've obviously discovered this is famously known as the "Two Envelopes" paradox. Like most paradoxes, of course, it's the result of a misunderstanding. On the one hand, we have the standard notion of expectations (the expected value of switching envelopes is positive in so far as 2x * 50% + x/2 * 50% > x for all x > 0), contrasted with the reductio ad absurdum made apparent by bringing the problem to its logical conclusion (if one envelope went to you and another went to your twin brother, the apparent logic would imply you both switch, yet it’s both logically and mathematically impossible for you both to have positive expectation from exchanging envelopes).

                                  But anyway, this isn’t a conflict of math and logic (and indeed historically speaking, the only apparent clashes between the two inevitably prove themselves the results of lapses in understanding), but rather of informed math and naïve math. Razz’s approach above is probably the same as that undertaken by most with an understanding of expectation. It’s a simple enough solution that it lends itself to quickly answering and then not giving a second thought. It just seems so obvious. But if you really think about it, the logic quickly falls apart. Think about it a bit longer and it eventually becomes apparent that there is no way that switching can have positive expected value. None.

TLD has done a rather good job at explaining why this is so. I’d also recommend that those interested read the Wikipedia article on the two envelope paradox. In broad terms it’s easy to see that the expected value of switching must be precisely zero for the simple fact that no new information is revealed when the first envelope is opened.

                                  I very much like TLD’s epistemological approach, and thought the comparison to the magician sawing the assistant in half quite apt (I find the same technique quite useful with approaching psychics or other such charlatans). So the question then becomes, “What is the flaw in Bayesian reasoning that might lead us to believe that switching holds value?”

Before addressing the fallacy of the argument, it probably makes sense to spell out the fallacious argument a little more explicitly. So here goes: (You’ll notice I’ve discarded the notion of the $1,000 price tag for switching … I had initially inserted it as syntactic sugar, but it really only serves to obfuscate the issue.)

                                  Because there are only two envelopes, the logic runs, you have the same chance of picking the larger amount as you do the smaller amount (50% each). Hence, after determining that the amount in your box is $X, there’s a 50% chance the other box contains $2X and a 50% chance that it contains $X/2. This means that the expected value from giving up the current amount and switching to the other is 50% * 2X + 50% * X/2 – X = X/4. Because we assume X > 0 (as well as implicitly assuming risk neutrality – assuming otherwise could really obfuscate the issue) it will always make sense to switch.

                                  So where’s the error in our math/logic? We know this is wrong but why is it so?

One implicit assumption with this approach is that we possess no knowledge of the nature of the distribution from which the initial quantities were drawn. If we did, then we could obviously make a much more informed switching decision. For example, let’s say that these prizes were being given out by SBR, and you learn that the average prize paid out by SBR has historically been $1,000. If your prize were only $500, it might very well make sense to switch to the other envelope given the facts.

The real meat of the paradox, however, is when we assume that opening the envelope reveals absolutely nothing. One way this would be true is if the dollar values were drawn from a distribution such as ( … $1/16, $1/8, $1/4, $½, $1, $2, $4, $8, $16, … ) with any value in the infinite sequence being equally likely. This way, after opening the first envelope, we still have no idea whether that envelope represents a “large” quantity (implying a greater likelihood of the other being “small”), or whether it represents a “small” quantity (implying a greater likelihood of the other being “large”).

A very similar way to consider it (and we ultimately give up no generality by doing so) would be to assume three possible envelopes with values of $X, $2X, or $4X (where X is any real positive number). One envelope is chosen by the game operator at random, and the other is chosen (randomly, if necessary) such that one envelope contains twice the other. Before selecting an envelope (assuming no option to switch later on) what would the player’s expected value be? If you weren’t careful you might say it would be ($X+$2X+$4X)/3 = $7X/3 ≈ $2.33X.

                                  But this is wrong. Once you understand why it’s wrong, the root cause of the paradox becomes readily apparent.

We know that of the two envelopes, one contains twice the other. Hence, the only possible combinations of envelopes are ($X, $2X) and ($2X, $4X), each one equally probable. This means that the player’s expected value is actually given by ($X + 2*$2X + $4X)/4 = $9X/4 = $2.25X, which is smaller than we’d expect were all 3 envelopes equally likely to have been chosen.
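(A small simulation of that three-value setup, under the selection process just described — one of the two equally likely pairs, then a random envelope; X = $1,000 is purely illustrative:)

Code:
import random

# Pair is (X, 2X) or (2X, 4X), each equally likely; the player then takes
# one envelope at random.  X = 1000 is just an illustrative choice.
X = 1000
def trial():
    pair = random.choice([(X, 2 * X), (2 * X, 4 * X)])
    mine, other = random.sample(pair, 2)
    return mine, other - mine              # (value held, gain from switching)

n = 400000
results = [trial() for _ in range(n)]
print(sum(v for v, _ in results) / (n * X))   # comes out near 2.25, not 2.33
print(sum(g for _, g in results) / n)         # comes out near 0: switching gains nothing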

                                  So hopefully this should completely illuminate the issue. Here’s what we have:
                                  • Pick one envelope and open it. Call the value $Y.
• There is a 25% chance that $Y is the lowest value ($X), a 25% chance it’s the highest value ($4X), and a 50% chance it’s the middle value ($2X). Hence, the current expected value of our position is (as above) 25%*$X + 50%*$2X + 25%*$4X = $2.25X.
                                  • If our current quantity is $X (25% probability) and we switch our excess gain will be $2X - $X = $X.
                                  • If our current quantity is $4X (25% probability) and we switch our excess loss will be $4X - $2X = $2X.
• If our current quantity is $2X (50% probability) and we switch, then there’s a further 50% probability of an excess gain of $4X - $2X = $2X, and a 50% probability of an excess loss of $2X - $X = $X.
                                  • Summing the expectations we get 25%*$X + 25%*-$2X + 50%*(50%*$2X + 50%*-$X) = $0.
                                  • Therefore, switching holds no excess value whatsoever.
                                  • Analogous mathematical reasoning holds when considering any number of possible boxes from which the initial two were drawn.
                                  QED

                                  Here’s another way of looking at it:
                                  • A pair of values is selected by the game designer. These have values of $X and $2X.
• There’s a 50% probability of the player selecting $X and a 50% probability of the player selecting $2X.
                                  • Hence, his expected gain from switching is 50%*($X-$2X) + 50%*($2X-$X) = $0.
                                  • Therefore, switching holds no excess value whatsoever.
                                  QED

                                  So that’s really it. That’s the answer. It ceases to be much of a paradox once you consider it as above. For some, the apparent paradox may result from implicitly assuming that the game organizer is selecting two independent random dollar values. He’s not. He’s selecting one value at random (the actual value doesn’t even matter … just arbitrarily call it $Y), and then randomly selecting whether the other will be twice that or half that. He then presents the player with both envelopes -- one with “$X” and one with “$2X”. Then by switching, half the time he’d lose $X, and half the time he’d gain $X (whether that value happened to be called $Y or $Y/2).

                                  Simple, right?
                                  Comment
                                  • NeedProtection
                                    SBR High Roller
                                    • 02-25-07
                                    • 113

                                    #18
                                    Are you people idiots? This problem can be solved in 5 lines. Then the thread should be locked and sent to parents as an example of how dumb your kids will be if they let them bet on sports.

                                    Assume 2 envelopes, $10 and $20 in each.

                                    Pick an envelope 10 times and never switch. Avg result = $150

                                    switch every time
                                    10 becomes 20 half the time +100
                                    20 becomes 10 half the time +50

                                    = $150
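In code, that experiment (a sketch assuming the fixed $10/$20 pair described, scaled up so the averages settle) looks like:

Code:
import random

# Fixed $10/$20 pair; compare never switching with always switching.
def pick(switch):
    mine, other = random.sample([10, 20], 2)
    return other if switch else mine

n = 100000
print(sum(pick(False) for _ in range(n)) / n)  # near 15 per pick, i.e. $150 per 10 picks
print(sum(pick(True) for _ in range(n)) / n)   # near 15 as well: switching changes nothing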

                                    Wow that was tough.

                                    Did someone in this thread really advocate paying to switch?
                                    Comment
                                    • Ganchrow
                                      SBR Hall of Famer
                                      • 08-28-05
                                      • 5011

                                      #19
The problem's actually not quite as straightforward as you naïvely assert. Certainly your line of reasoning holds true in the degenerate case of two envelopes each of value known ex-ante -- but that quite misses the point in the context of this discussion. If the distribution is known a priori then there is no distinction between the Bayesian and frequentist approaches, and as such there is no apparent paradox; indeed all we have is the elementary exercise in expected value that you so aptly demonstrated.

In other words, the difficulty with your line of reasoning is that with only two envelopes of predetermined value there is no paradox, and as such, by ignoring any substantive consideration of the underlying distribution of dollar amounts, it clearly begs the question. Sure, it does get you to the right answer ... but in a manner somewhat akin to explaining away Zeno's paradox with the contention that Achilles never actually lived.

                                      Now either you already know this and are using it as a straw man argument to fuel your self-righteous puerility, or you simply lack sufficient understanding of the underlying discussion to appreciate the subtleties of the issue at hand. Either way, if you decide to lay down your angry stick long enough to have a civilized discussion, I'll be waiting.
                                      Comment
                                      • Dark Horse
                                        SBR Posting Legend
                                        • 12-14-05
                                        • 13764

                                        #20
                                        Started from scratch, without reading the long explanations.

                                        A) +$9000
                                        B) -$1000 +$4500 = +$3500
                                        C) -$1000+$18000 = +$17000

                                        So the person who switches agrees to bet $5500 to make $8000 at even odds. Obviously, if I could make that same bet 100 or 1000 times, that would be the way to go. But if I can make it only once, that may change things.

                                        How do I see money? Do I see it as a way to make more money? What is the value of the $9000 in relation to my net worth? What is the value of the $9000 in the real world?

                                        Can I change the numbers by adding three zeroes? Personally, I would take the $9 million, because of its true value in the world. For the same reason, I would accept the bet at $9000.

                                        (Now I'm going to (try to) read the explanations).
                                        Comment
                                        • SBR_John
                                          SBR Posting Legend
                                          • 07-12-05
                                          • 16471

                                          #21
                                          Man there are some smart fuks in this thread! Way to go guys!

                                          I think I will wait for the Thomas the Tank Engine question thread.
                                          Comment
                                          • PeterWellington
                                            SBR Rookie
                                            • 12-20-06
                                            • 49

                                            #22
                                            Originally posted by Ganchrow
The problem's actually not quite as straightforward as you naïvely assert. Certainly your line of reasoning holds true in the degenerate case of two envelopes of value known ex-ante -- but that quite misses the point in the context of this discussion. If the distribution is known a priori then there is no distinction between the Bayesian and frequentist approaches, and as such there is no apparent paradox; indeed all we have is the elementary exercise in expected value that you so aptly demonstrated.

In other words, the difficulty with your line of reasoning is that with only two envelopes of predetermined value there is no paradox, and as such, by ignoring any substantive consideration of the underlying distribution of dollar amounts, it clearly begs the question. Sure, it does get you to the right answer ... but in a manner somewhat akin to explaining away Zeno's paradox with the contention that Achilles never actually lived.

                                            Now either you already know this and are using it as a straw man argument to fuel your self-righteous puerility, or you simply lack sufficient understanding of the underlying discussion to appreciate the subtleties of the issue at hand. Either way, if you decide to lay down your angry stick long enough to have a civilized discussion, I'll be waiting.
                                            He was harsh but I think it *is* that easy for some people to make sense of this question. I would actually be willing to bet that if you rounded up 100 math "flunkies" and asked them this question most would get it correct immediately. I think the right answer and the intuitive answer are one and the same for most people. I think the problem is a lot of others have just enough math and probability knowledge to overthink things.
                                            Comment
                                            • Ganchrow
                                              SBR Hall of Famer
                                              • 08-28-05
                                              • 5011

                                              #23
                                              Originally posted by PeterWellington
                                              He was harsh but I think it *is* that easy for some people to make sense of this question. I would actually be willing to bet that if you rounded up 100 math "flunkies" and asked them this question most would get it correct immediately. I think the right answer and the intuitive answer are one and the same for most people. I think the problem is a lot of others have just enough math and probability knowledge to overthink things.
To put it in the manner in which I should have responded initially, I quote from the Wikipedia entry on the two envelope paradox: "The puzzle is to find the flaw, the erroneous step, in the switching argument [of the problem]. This includes determining exactly why and under what conditions that step is not correct, so we can be absolutely sure we don't make this mistake in a more complicated situation where the fact that we do something wrong isn't this obvious. Put shortly, the problem is to solve the paradox." Therefore, claiming the paradox doesn't really exist simply begs the question, no matter how many math 'flunkies' (is that actually the word for which you were looking? If so then perhaps I misunderstood your meaning) are able to get the superficial problem correct immediately.

                                              Analogously, when one learns about Zeno's paradox in algebra class, it's not for the purpose of demonstrating that a sufficiently fast runner really could beat a tortoise in a race, but rather to demonstrate that the sum of an infinite series can in fact be finite. Simply declaring a priori that Zeno's paradox is false completely fails at illuminating the underlying problem. In other words, the real challenge is to find the flaw.
                                              Comment
                                              • PeterWellington
                                                SBR Rookie
                                                • 12-20-06
                                                • 49

                                                #24
                                                Originally posted by Ganchrow
To put it in the manner in which I should have responded initially, I quote from the Wikipedia entry on the two envelope paradox: "The puzzle is to find the flaw, the erroneous step, in the switching argument [of the problem]. This includes determining exactly why and under what conditions that step is not correct, so we can be absolutely sure we don't make this mistake in a more complicated situation where the fact that we do something wrong isn't this obvious. Put shortly, the problem is to solve the paradox." Therefore, claiming the paradox doesn't really exist simply begs the question, no matter how many math 'flunkies' (is that actually the word for which you were looking? If so then perhaps I misunderstood your meaning) are able to get the superficial problem correct immediately.
                                                I totally understand what you're saying. What I'm saying is most good brain teasers make your intuitive answer wrong, or make it hard to even come up with something you think is right. I think most math "flunkies" (failed out of basic math class) would come up with the correct answer right away and be able to explain it in a way that makes sense. Here's an analogy:

                                                Let's say there's a magic show with 100 kids in the crowd and one adult. The adult happens to be a magician by trade but is just watching the performance. The stage magician holds up the Ace of Spades to the crowd, places it back on top of the deck, then hands it to a volunteer face down. If at that point you were to ask the crowd what card the volunteer is holding most would say the Ace of Spades, but the magician would most likely suspect a switch (although he didn't actually see it). Now let's say the performing magician asks the volunteer to turn the card over and it is in fact the Ace of Spades.

                                                If you were to ask the audience why they thought it was the Ace, they would tell you it was because he plainly showed the card then handed it to the volunteer. If you were to ask the magician in the crowd why he didn't think it was the Ace he would tell you that what the stage magician did is typically associated with a card switch. In other words, he overthought the issue based on his experience whereas the crowd didn't have that option.

                                                And I think this puzzle is very similar. Most people who don't know how to calculate expectation would say "Why would I pay a thousand bucks to switch when I could have just picked the other box in the first place?" and they would actually be right on. What I'm saying is that some people don't see the apparent paradox, and they don't really need to. It's kind of like asking those kids in the magic show to explain why the Ace of Spades the magician handed the volunteer is still the Ace of Spades. There's no need to explain the sleight of hand that didn't happen.
                                                Comment
                                                • Ganchrow
                                                  SBR Hall of Famer
                                                  • 08-28-05
                                                  • 5011

                                                  #25
                                                  Originally posted by PeterWellington
                                                  I totally understand what you're saying. What I'm saying is most good brain teasers make your intuitive answer wrong, or make it hard to even come up with something you think is right. I think most math "flunkies" (failed out of basic math class) would come up with the correct answer right away and be able to explain it in a way that makes sense. Here's an analogy:
                                                  -snip-
To be perfectly honest, Peter, I'm not sure I exactly understand what you're getting at here. If your contention is that the paradox presupposes some understanding of elementary concepts of expectation, you're quite obviously correct. If a given individual can't see where the apparent paradox lies, I'd argue that a discussion of the paradox's resolution is probably not appropriate for him. (Just as a demonstration of acutely self-aware sleight-of-hand may not be appreciated by children, either.)

If OTOH, you think this is just a silly paradox not worthy of serious consideration, that's certainly your prerogative. I'd point out, however, that there are at least some people out there who apparently disagree with you in this regard. If you're looking for a more traditional type of brain teaser, I'll direct you to the last puzzle I posted. It still remains unsolved.

                                                  Originally posted by PeterWellington
                                                  Most people who don't know how to calculate expectation would say "Why would I pay a thousand bucks to switch when I could have just picked the other box in the first place?"
                                                  With a priori knowledge of the distribution from which the original dollar amounts are drawn, this line of reasoning is in general not correct.
                                                  Comment
                                                  • TLD
                                                    SBR Wise Guy
                                                    • 12-10-05
                                                    • 671

                                                    #26
                                                    I think I see what Peter’s saying. A paradox isn’t a paradox for people who are either 1) Sophisticated enough to spot the temptation placed in their path and understand why it is to be avoided, or 2) Unsophisticated enough that they’re not even aware of the temptation placed in their path.

                                                    Though in the latter instance, as you state Ganchrow, rather than say such a person has solved the paradox, it makes more sense to say “If a given individual can't see where the apparent paradox lies, I'd argue that a discussion of the paradox's resolution is probably not appropriate for him.”

                                                    But I did want to go back and ask a little more about your solution.

                                                    First off, of course I agree that my contrast of math and logic wasn’t a particularly felicitous one. I started to say “math” (or at least a plausible approach stated primarily in mathematical terms) versus “common sense,” but the latter term seemed insufficient to capture the strength of a reductio ad absurdum, plus really both plausible approaches would have some appeal to most people’s “common sense.” Anyway, in spite of my too casual use of the terms, I agree that both approaches can be described in terms of both math and logic, and that the conflict will ultimately turn out to be between plausible but mistaken math/logic and correct math/logic.

                                                    Beyond that, I can follow your solution (I’m pretty sure), but in a way I don’t feel satisfied. I’ll try to explain why.

                                                    Initially my position was:

                                                    1. There is an at least superficially plausible approach to this problem—call it the “Bayesian” one—that concludes that switching is the correct strategy.

                                                    2. There is another at least superficially plausible approach to this problem—call it the “reductio” one—that concludes that switching and not switching must be of equal value.

                                                    3. I don’t see the flaw in either, but intuitively find the reasoning of #2 to be less likely to be flawed, so tentatively I’m in the “no point in switching” camp.

                                                    Now there has been added another point:

                                                    2a. There is yet another at least superficially plausible approach to this problem—call it the “Ganchrow” one—that also concludes that switching and not switching must be of equal value.

                                                    And so now I would say:

                                                    3a. I don’t see the flaw in #1, #2, or #2a, but because I intuitively find the reasoning of #2 to be less likely to be flawed than #1, and because I’m convinced Ganchrow understands this stuff and hence his math in 2a is unlikely to turn out to be flawed, I am somewhat more confidently in the “no point in switching” camp.

                                                    So as you can see, I still haven’t really grasped how your post shows how #1 is flawed, only that it is. For me it’s still a reductio:

                                                    #1 leads to a certain conclusion.
                                                    #2a is seemingly correct and leads to a contradictory conclusion.
                                                    Therefore, #1 is wrong.

                                                    And maybe I’m asking for too much. I mean, once you’ve spelled out the correct approach to the problem, I’m not sure what else there is to do.

                                                    Perhaps if I break down the Bayesian position, you could identify at which point the error enters (again, I’m sure in effect you already have, but I’m being dense):

                                                    1. There is $9,000 in the envelope you choose.

                                                    2. There is a 50% chance this will turn out to be double what is in the other envelope and a 50% chance this will turn out to be half what is in the other envelope.

                                                    3. If the former, you will lose $4,500 by switching.

                                                    4. If the latter, you will gain $9,000 by switching.

                                                    5. A situation where you have a 50% chance of losing $4,500 and a 50% chance of gaining $9,000 is equivalent to a situation where you have a 100% chance of gaining $2,250 (at least using all the relevant assumptions noted earlier in the thread, most importantly risk neutrality).

                                                    Or perhaps on a more general level you could address—if it’s something that can be made sense of to a lay audience—just what the Bayesian theory of probability is and what are its flaws. So far, about all I could say based on this thread is that I can infer it is probably flawed due to the existence of counterexamples, wherein other approaches seem to be equally or more convincing and to lead to contradictory conclusions. But that’s different from truly grasping what the flaw is that leads a Bayesian to such false conclusions.

                                                    (The sad thing is I should understand all this and once upon a time probably did, to at least some extent. My first Philosophy professor, who was a brilliant guy and ended up a dear friend of mine, specialized in the philosophy of science and did a lot of work in probability theory, especially as it pertained to expressing irreducibly probabilistic scientific laws, such as those of quantum mechanics appear to be. I recall his lectures and readings arguing against the Bayesian theory and the frequency theory, and in favor of his “single case propensity” theory. I could sort of follow it at the time, but I think it was one of those things where I could memorize enough to get me through tests, but it was never really in there deep and I quickly forgot most of it.)
                                                    Comment
                                                    • Wild Reet
                                                      SBR High Roller
                                                      • 02-09-07
                                                      • 116

                                                      #27
                                                      Once you choose the first envelope the assumption that the other envelope has a 50% chance of being double and a 50% chance of being half is the source of error.

                                                      Under the parameters given if I gave you two envelopes one with $4500 and the other with $9000 and you select $9000 there is not a 50% probability the other envelope is $18,000. It is 0%.

                                                      Only if the amount in the second envelope was randomly generated after you were given the first (double the given envelope 50% of the time and half the given envelope 50% of the time) would it have positive value to switch.

                                                      In that case the Bayesian calculations would hold.

                                                      I think.
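(A sketch contrasting the two mechanisms distinguished here — the $4,500/$9,000/$18,000 figures are just the thread's example. Only in the second case, where the other envelope is generated after the fact, does the +25% edge from the Bayesian calculation show up:)

Code:
import random

# Mechanism 1: the pair $4,500/$9,000 is fixed in advance; you pick one at random.
# Mechanism 2: you're handed $9,000 and the other amount is only then set to
# double or half by a coin flip.
def fixed_pair():
    mine, other = random.sample([4500, 9000], 2)
    return mine, other - mine              # (amount held, gain from switching)

def generated_after(mine=9000):
    other = mine * 2 if random.random() < 0.5 else mine / 2
    return other - mine

n = 300000
trials = [fixed_pair() for _ in range(n)]
print(sum(g for _, g in trials) / n)                 # near 0 overall
print({g for v, g in trials if v == 9000})           # {-4500}: holding $9,000, you can only lose
print(sum(generated_after() for _ in range(n)) / n)  # near +2250, i.e. 25% of $9,000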
                                                      Comment
                                                      • Ganchrow
                                                        SBR Hall of Famer
                                                        • 08-28-05
                                                        • 5011

                                                        #28
                                                        Originally posted by TLD
                                                        I think I see what Peter’s saying. A paradox isn’t a paradox for people who are either 1) Sophisticated enough to spot the temptation placed in their path and understand why it is to be avoided, or 2) Unsophisticated enough that they’re not even aware of the temptation placed in their path.
I think the idea is that every interested individual in the former category should nevertheless be cognizant of said temptation, whether or not they ultimately fall victim to the fallacious mode of thinking. I, for example, understand why this isn't really a paradox and why the answer, when properly understood, violates no logical principles or axioms. Nevertheless, I still find it amusing and fully understand why the logic can be a bit tricky. By the time you finish reading this, I hope you'll feel the same way.

                                                        Originally posted by TLD
                                                        First off, of course I agree that my contrast of math and logic wasn’t a particularly felicitous one. I started to say “math” (or at least a plausible approach stated primarily in mathematical terms) versus “common sense,” but the latter term seemed insufficient to capture the strength of a reductio ad absurdum, plus really both plausible approaches would have some appeal to most people’s “common sense.” Anyway, in spite of my too casual use of the terms, I agree that both approaches can be described in terms of both math and logic, and that the conflict will ultimately turn out to be between plausible but mistaken math/logic and correct math/logic.
                                                        Precisely. Very well put.

                                                        Originally posted by TLD
                                                        Initially my position was:

                                                        1. There is an at least superficially plausible approach to this problem—call it the “Bayesian” one—that concludes that switching is the correct strategy.

                                                        2. There is another at least superficially plausible approach to this problem—call it the “reductio” one—that concludes that switching and not switching must be of equal value.

                                                        3. I don’t see the flaw in either, but intuitively find the reasoning of #2 to be less likely to be flawed, so tentatively I’m in the “no point in switching” camp.

                                                        Now there has been added another point:

                                                        2a. There is yet another at least superficially plausible approach to this problem—call it the “Ganchrow” one—that also concludes that switching and not switching must be of equal value.
                                                        And so now I would say:

                                                        3a. I don’t see the flaw in #1, #2, or #2a, but because I intuitively find the reasoning of #2 to be less likely to be flawed than #1, and because I’m convinced Ganchrow understands this stuff and hence his math in 2a is unlikely to turn out to be flawed, I am somewhat more confidently in the “no point in switching” camp.

                                                        So as you can see, I still haven’t really grasped how your post shows how #1 is flawed, only that it is. For me it’s still a reductio:

                                                        #1 leads to a certain conclusion.
                                                        #2a is seemingly correct and leads to a contradictory conclusion.
                                                        Therefore, #1 is wrong.
Sure ... OK ... I'm with you. I'll just point out that reductio ad absurdum is a perfectly reasonable mathematical approach, and insofar as situation 2 is an accurate depiction of reality, we'd sort of expect it to be fungible with 2a. I initially went with the reductio approach simply because I thought it might be more intuitively appealing than the more mathematical one. But because you asked for it, TLD, and because I really like your own reductio with respect to the "winner pays the vig" fallacy, I'm going to spell out the math in just a short while. So prepare to buckle in.

                                                        Originally posted by TLD
                                                        2. There is a 50% chance this will turn out to be double what is in the other envelope and a 50% chance this will turn out to be half what is in the other envelope.
                                                        False. The 50% figure is in general incorrect. This is the problem. We can not in general say that there's a 50% chance of the other quantity being larger and a 50% chance of the other quantity being smaller.

The reasoning is quite subtle but central to properly understanding the problem. Because we don't possess a priori knowledge of the underlying distribution, we can't say for sure that the particular realization isn't a terminal value. In other words, let's say we repeat this experiment an infinite number of times in precisely the same manner, discarding every observation where $9,000 isn't picked initially. Well, if $9,000 is a central value (meaning that both greater and smaller prizes are possible) then your statement above would apply -- 50% of the time you'd have $4,500 in the other envelope and the remaining 50% of the time you'd have $18,000 in the other envelope. Easy peasy.

But ... and this is the really important part ... if $9,000 were a terminal value (let's say it were the highest possible prize awarded), then each and every time you saw $9,000 there would always be only $4,500 in the other envelope. 100% of the time. Now of course if you knew that $9,000 represented a terminal value then you'd know not to switch if $9,000 were in your envelope and would know to switch if it weren't. But you don't know this, because the distribution is unknown to you ahead of time.

So that's how it works: if you don't have a terminal value then you will indeed average a gain of a quarter of the current value by switching. If you did have the maximum terminal value then you'd always lose half the current value by switching. And this loss makes up for 100% of the expected gains you'd have accumulated from switching at lower values. It's similar, in a way, to the Martingale. Most of the time you win a little by switching, but every once in a while ... you lose a lot.
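(Here's a sketch of that accounting on an assumed bounded prize ladder -- powers of two, adjacent pairs equally likely, the same model as the proof further down. Switching looks profitable from every value except the top one, and the top value's loss cancels the rest exactly:)

Code:
import random
from collections import defaultdict

# Values $1, $2, ..., $2^(n-1); one of the n-1 adjacent pairs is used, chosen
# uniformly, and the player takes one of its two envelopes at random.
n = 8
values = [2 ** i for i in range(n)]

def trial():
    lo = random.randrange(n - 1)
    pair = [values[lo], values[lo + 1]]
    mine, other = random.sample(pair, 2)
    return mine, other - mine

gains = defaultdict(list)
for _ in range(500000):
    mine, gain = trial()
    gains[mine].append(gain)

for v in values:
    avg = sum(gains[v]) / len(gains[v])
    print(v, round(avg, 2))   # +v at the bottom, about +v/4 in the middle, -v/2 at the top
everything = [g for lst in gains.values() for g in lst]
print("overall:", round(sum(everything) / len(everything), 3))  # near 0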

At this point you might ask: what if there were no maximum terminal value? What if any value were equally likely? (The answer is surprisingly technical and probably not well suited to this conversation.) Nevertheless, with the warning "Technical stuff follows. Ignore if you like:"
It's very tricky to apply the uniform distribution (in other words, where some given set of outcomes all occur with equal probability) over infinitely many outcomes. All we can do is look at an arbitrarily large number of outcomes, see how things behave, and then take the limit as the number of values approaches infinity. With the uniform distribution you can't simply assume convergence.

This is unlike a more well-behaved (proper, normalizable) distribution where we know that as we get further and further out along the tails (in this case, that would mean as the dollar values were to increase), those extreme values become increasingly less important and can start being ignored because they become so highly unlikely. Not so with the uniform distribution. All values are equally likely and so none can be ignored. Hence, the answer does not converge to a decision rule such as "always switch". We'd be no more willing to switch with ten outcomes than we would with three possible outcomes, no more willing to switch with a thousand than with three, no more willing with a trillion than with a thousand, and no more willing with a googolplex than with a trillion ... etc. Hence, even with "infinitely many possible outcomes" (to be precise, I should really say "infinitely many possible outcomes unbounded from above," as the series could run 1, 1/2, 1/4, 1/8, ... -- but that makes the writing even more clunky) it makes no sense to switch even though we can't truly conceptualize ever picking a maximum terminal value.

                                                        And so now as promised here's the mathematical proof as to why with all outcomes from the infinite series ($1, $2, $4, $8, $16, ... ) equally likely, switching provides exactly $0 in EV. (I'm purposely keeping the math as simple as possible so I'd ask any real mathematicians among us to excuse the inelegance).
So let's say there are n possible dollar values numbered from 1 through n, where the dollar amount of the i-th value is $2^(i-1). The two envelopes are filled with a pair of adjacent values (value #i and value #i+1), with each of the n-1 such pairs equally likely to be selected, and the player then opens one of the two envelopes at random. So if the player's value is greater than #1 and less than #n, the other envelope is equally likely to hold the next value up or the next value down; if the player's value is #1, the other envelope holds value #2; and if the player's value is #n, the other envelope holds value #n-1.

Therefore, the probability that the player's envelope contains value k is given by:

                                                        Code:
P(k) = 1/(n-1),  for 1 < k < n
P(k) = 1/(2n-2), for k = 1 or k = n
Let's suppose that the player's envelope contains value i (with either value i-1 or value i+1 in the other envelope). What is his EV from switching? I'm claiming that the EV is $0.
• The probability of the player having chosen i=1 (a value of $2^(1-1) = $1) is 1/(2n-2). His gain from switching in this case would be $2^(2-1) - $2^(1-1) = $1.
• The probability of the player having chosen i=n (a value of $2^(n-1)) is 1/(2n-2). His gain from switching in this case would be $2^(n-2) - $2^(n-1) < 0 (in other words, a loss).
• The probability of the player having chosen 1 < i < n is 1/(n-1). His gain from switching in this case would be, 1) with 50% probability, $2^i - $2^(i-1) > 0 (if the other envelope contained more); and 2) with 50% probability, $2^(i-2) - $2^(i-1) < 0 (if the other envelope contained less). Hence, his total expected gain from switching in this case would be ($2^i - $2^(i-1) + $2^(i-2) - $2^(i-1))/2 = $2^(i-3).
                                                        So taking the expected value we have:
E[gain from switching]
  = [1/(2n-2)] * $1  +  [1/(2n-2)] * ($2^(n-2) - $2^(n-1))  +  [1/(n-1)] * Σ(i=2..n-1) $2^(i-3)
  = [1/(2n-2)] * ( $1 - $2^(n-1) + $2^(n-2) + Σ(i=2..n-1) $2^(i-2) )
  = [1/(2n-2)] * ( $1 - $2^(n-1) + Σ(i=2..n) $2^(i-2) )

                                                        Now we know that for a geometric sum, in general the following equivalence holds:

Σ(k=m..n) a*r^k = a*(r^m - r^(n+1)) / (1 - r)

You'll notice that Σ(i=2..n) $2^(i-2) is a geometric sum of the above form with a = ¼, r = 2, and m = 2 (since 2^(i-2) = ¼*2^i). Hence the summation reduces to ¼*(2^2 - 2^(n+1))/(1 - 2) = 2^(n-1) - 1.

                                                        And plugging back in above we see that:
E[gain from switching] = [1/(2n-2)] * ( $1 - $2^(n-1) + $2^(n-1) - $1 ) = $0
                                                        QED
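As a quick sanity check on the algebra, here's a short exact-arithmetic sketch in Python (using fractions, so there's no rounding) of the same setup: value i is worth $2^(i-1), each of the n-1 adjacent pairs of values is equally likely to fill the envelopes, and the player's envelope is then a coin flip. The expected gain from switching comes out to exactly $0 for every n tried.

Code:
from fractions import Fraction

def ev_switch(n):
    """Expected dollar gain from switching under the setup in the proof above."""
    ev = Fraction(0)
    for i in range(1, n + 1):
        value = Fraction(2) ** (i - 1)
        if i == 1:                      # lowest value: the other envelope always holds double
            p, gain = Fraction(1, 2 * n - 2), value
        elif i == n:                    # highest value: the other envelope always holds half
            p, gain = Fraction(1, 2 * n - 2), -value / 2
        else:                           # interior value: double or half, 50/50
            p, gain = Fraction(1, n - 1), (value - value / 2) / 2
        ev += p * gain
    return ev

for n in (3, 5, 10, 50):
    print(n, ev_switch(n))              # prints 0 for every n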
                                                        Comment
                                                        • PeterWellington
                                                          SBR Rookie
                                                          • 12-20-06
                                                          • 49

                                                          #29
                                                          Originally posted by Ganchrow
To be perfectly honest, Peter, I'm not sure I exactly understand what you're getting at here. If your contention is that the paradox presupposes some understanding of elementary concepts of expectation, you're quite obviously correct. If a given individual can't see where the apparent paradox lies, I'd argue that a discussion of the paradox's resolution is probably not appropriate for him. (Just as a demonstration of acutely self-aware sleight-of-hand may not be appreciated by children, either.)
                                                          This is exactly what I'm saying, and I'm drawing a conclusion from it. Actually, I'm defending a point that NeedProtection made because I think it may have gotten lost among his insults.

                                                          You dismissed his conclusion as naive, but it's not. Naive would mean he's not considering something material, but his reasoning is air tight. The discussion of the paradox is only worthwhile for those who see it (and I think it's a good riddle for this board). The paradox is a product of a certain way of thinking about the problem, but it doesn't truly exist. It seems like you want everyone to address the paradox but they can't address it if they don't see it and it doesn't exist. It's like me asking you to see a ghost in your room and then having you explain why it doesn't exist. And you would say "What ghost?" and I would say you're being naive.
                                                          Comment
                                                          • Ganchrow
                                                            SBR Hall of Famer
                                                            • 08-28-05
                                                            • 5011

                                                            #30
                                                            Originally posted by PeterWellington
                                                            This is exactly what I'm saying, and I'm drawing a conclusion from it. Actually, I'm defending a point that NeedProtection made because I think it may have gotten lost among his insults.

                                                            You dismissed his conclusion as naive, but it's not. Naive would mean he's not considering something material, but his reasoning is air tight. The discussion of the paradox is only worthwhile for those who see it (and I think it's a good riddle for this board). The paradox is a product of a certain way of thinking about the problem, but it doesn't truly exist. It seems like you want everyone to address the paradox but they can't address it if they don't see it and it doesn't exist. It's like me asking you to see a ghost in your room and then having you explain why it doesn't exist. And you would say "What ghost?" and I would say you're being naive.
                                                            I wasn't referring to his conclusion as naïve, but rather to his perfunctorily incomplete dismissal of the problem.

                                                            But it's not just that. While his conclusion was correct, his argument was not. In fact, NP's reasoning would be considerably better described as "wrong" as opposed to "air tight". What he did was set up a simplistic straw man and then proceed to "solve" that. While he came to the right conclusion he did so for the wrong reason. If you don't see this then I'd ask you to ponder (using NP's rationale) why the player's decision might change if he were told that his case did not contain the maximum value.

I think the supposed paradox is quite readily comprehensible by anyone with a modicum of understanding of expected value who's able to at least temporarily shelve his obstinacy. If a person doesn't see the issues involved in the formation of the paradox, then if interested he should feel free to ask, or if uninterested he can choose to ignore it. But an intelligent person should be able to recognize the logical conundrum that this paradox can cause even without subscribing to the fallacious mode of thinking himself.
                                                            Comment
                                                            • TLD
                                                              SBR Wise Guy
                                                              • 12-10-05
                                                              • 671

                                                              #31
                                                              Well, I’m not going to pretend to be able to fully follow all of that, especially the equations, but I think I’m at least closer to the Eureka moment I’m looking for. I think your discussion of “terminal” values is what is mostly opening my eyes.

                                                              I’ll have to try to state it in my own words though to see if I’m getting it, and to see where my remaining gaps in understanding are.

                                                              Imagine that the game show is willing to put anywhere from $500 to $20,000 in the envelopes. One way they might randomize the decision of how much to put in each envelope could be as follows:

                                                              1. Write out amounts from $1,000 to $10,000 in $1,000 increments on ping pong balls, and choose one from a lottery style machine backstage. This amount goes in one box.

                                                              2. Flip a coin. For heads, place double the amount of the first box in the second box. For tails, place half the amount of the first box in the second box.

                                                              3. Flip another coin. For heads, put the first box in position A (say, on the left as the contestant faces the boxes) and the second box in position B (on the contestant’s right). For tails, put the boxes in the other order.

                                                              Note that they had to use a range of $1,000 to $10,000 for the first box they filled, not $500 to $20,000, because if you get too close to either end then it cannot be halved or doubled without going past their predetermined limits.

                                                              And this is why if you knew going in what the lowest and highest possible amounts were, then opening the first box could give you crucial information. (In what follows, though I’m assuming you know the range, I’m assuming you do not know the increments of the ping pong balls. They happen to have been in $1,000 increments, but you don’t have that information.)

                                                              Let’s say, for instance, the box you chose has $1,000 in it. There are three ways that could be.

                                                              1. The $1,000 ping pong ball was chosen, and the coin came up heads, so $2,000 was placed in the other box.

                                                              2. The $1,000 ping pong ball was chosen, and the coin came up tails, so $500 was placed in the other box.

3. The $2,000 ping pong ball was chosen, and the coin came up tails, so $1,000 was placed in the other box.

                                                              Obviously the one “missing” is that $500 was chosen for the initial box and doubled. The reason that’s not a possibility is that if $500 were on one of the ping pong balls, then it would open up the possibility of a $250 box, but the game show decided going in that the prizes would range from $500 to $20,000.

                                                              So rather than it being 50-50 that the box you didn’t choose would have half or double what’s in the box you did choose, two-thirds of the time there is $1,000 in your box there will be $2,000 in the other, and only one-third of the time there is $1,000 in your box will there be $500 in the other.

                                                              Let’s look at an example from the other end. Let’s say the box you choose has $16,000 in it. There is only one way that could be:

                                                              1. The $8,000 ping pong ball was chosen, and the coin came up heads, so $16,000 was placed in the other box.

                                                              Here there are three “missing” possibilities. It can’t be $16,000 halved, it can’t be $16,000 doubled, and it can’t be $32,000 halved, due to the predetermined range of the prize money they are willing to give away.

                                                              So if you picked a $16,000 box, rather than being 50-50 the other box would be half that versus double that, there is a 100% chance the other box will have just $8,000 in it.

                                                              So any time the first box contains an amount sufficiently close to the bottom or top, that takes the probability of half versus double for the other box off of the 50-50. If you think through finding $500 in your box, for instance, there is a 100% chance the other box will be double that. If you find $8,000 in your box, there is a two-thirds chance the other box will have $4,000 and only a one-third chance it will have $16,000. And so on.

                                                              Finally, let’s look at one in the “middle.” What if your box contained $4,000? That could come about in any of four ways:

                                                              1. The $2,000 ping pong ball was chosen, and the coin came up heads, so $4,000 was placed in the other box.

                                                              2. The $4,000 ping pong ball was chosen, and the coin came up heads, so $8,000 was placed in the other box.

                                                              3. The $4,000 ping pong ball was chosen, and the coin came up tails, so $2,000 was placed in the other box.

4. The $8,000 ping pong ball was chosen, and the coin came up tails, so $4,000 was placed in the other box.

                                                              Here there really is a 50-50 chance that the other box will contain double the $4,000 and a 50-50 chance it’ll contain half.

                                                              So if your box contains an amount close enough to the bottom, either you will always double that by switching, or at least you will more often double that by switching. If your box contains an amount close enough to the top, either you will always halve that by switching, or at least you will more often halve that by switching. If your box contains an amount neither close enough to the bottom nor close enough to the top, then you will equally often double that by switching as halve that by switching (so switching is better, since doubling helps you more than halving hurts you).
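Here's a small Python sketch that simply enumerates every equally likely history of the mechanism described above (ball uniform on $1,000-$10,000 in $1,000 steps, a coin flip to double or halve it for the second box, and a coin flip for which box you end up opening -- the specific range is of course just this example's assumption) and tabulates, for each amount you might see, how often the other box holds double. It reproduces the figures above: 100% at $500, 2/3 at $1,000, 1/2 at $4,000, 1/3 at $8,000, and 0% at $16,000.

Code:
from collections import defaultdict

double_ct, total_ct = defaultdict(int), defaultdict(int)

# Every (ball, coin, box) history below is equally likely (1/40),
# so simple counts give the conditional probabilities.
for ball in range(1000, 10001, 1000):
    for second in (2 * ball, ball // 2):                       # heads / tails
        for mine, other in ((ball, second), (second, ball)):   # which box you open
            total_ct[mine] += 1
            if other == 2 * mine:
                double_ct[mine] += 1

for seen in sorted(total_ct):
    print(f"see ${seen:>6}: other box is double {double_ct[seen]}/{total_ct[seen]} of the time")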

                                                              All this may seem irrelevant to the two envelope paradox, though, as we had intentionally closed the loopholes so that you can’t even guess at the range of possible prizes. When you see that $9,000 in your box, by our stipulations you do not know how close that is to the bottom or the top of the range of possible prizes.

But this is (one of the places) where I was looking at the problem incorrectly, and why, in spite of preferring the “no switch” strategy for other reasons, I was still unable to refute the “switch” strategy. Implicitly I was treating “You have no way of knowing if the $9,000 is too close to the top, too close to the bottom, or in the middle where it would equally likely be halved or doubled” as equivalent to “You know that the $9,000 is in the middle where it would equally likely be halved or doubled.”

                                                              So I thought we were trying to set up the problem so as to make the question of “Is this amount more likely at the high end or the low end of their prize range?” irrelevant to the solution—“Never mind that consideration; you don’t know that.” But that’s not the case. Because (Eureka moment?), if you really have no information about the range, then that doesn’t mean any given amount—$9,000 for instance—is just as likely to be relevantly close to the top as relevantly close to the bottom. What it means is that that amount is more likely to be relevantly close to the top than relevantly close to the bottom. So it’s more likely it can’t be doubled than that it can’t be halved.

                                                              Let’s think of it in terms of my original $500-$20,000 prize range. Note how few values there are for your box where you could infer that it is certain or probable the other box contains double that. It would have to be less than $2,000 down to the $500 minimum. Compare that to all the values there are for your box where you could infer that it is certain or probable the other box contains half that. Anything over $5,000 up to the $20,000 cap.

                                                              And this will be true (I think) for all possible ranges. There will always be a much wider area above the 50-50 area than below it.

                                                              So initially I thought if you don’t know the range going in, then the possibilities of being too close to the top or too close to the bottom could be disregarded because they effectively cancel each other out. But they do not. You don’t have to know the range to know that that $9,000 in the box you chose is more likely to be too high to double than it is to be too low to halve.

                                                              So assuming total ignorance as to the range of possible values, for any amount you find in your box, you will gain more by doubling than by halving, but you are more likely to halve than to double by switching, and mathematically those factors are in balance to where switching neither adds nor subtracts expected value.
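To make that "in balance" claim concrete for the example mechanism above (again, the $500-$20,000 range is just this post's assumption), here's a short Python sketch that computes the expected gain from switching conditional on each amount you might see, and then the overall expected gain. The conditional figures swing both ways -- positive near the bottom, negative near the top -- but the probability-weighted total comes out to exactly $0.

Code:
from fractions import Fraction
from collections import defaultdict

balls = range(1000, 10001, 1000)
joint = defaultdict(lambda: defaultdict(Fraction))  # amount seen -> {amount in other box: probability}

for ball in balls:
    for second in (2 * ball, ball // 2):                       # coin flip: double or halve
        for mine, other in ((ball, second), (second, ball)):   # coin flip: which box you open
            joint[mine][other] += Fraction(1, len(balls) * 4)

overall = Fraction(0)
for seen, others in sorted(joint.items()):
    p_seen = sum(others.values())
    cond_gain = sum(p * (other - seen) for other, p in others.items()) / p_seen
    overall += p_seen * cond_gain
    print(f"see ${seen:>6}: E[gain from switching] = {float(cond_gain):>9.2f}")

print("unconditional E[gain from switching] =", overall)       # exactly 0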

                                                              I hope I have shown I’m at least part of the way to “getting it.” One scary part, though, is your talk of an “infinite” series, to understand which I might need a better grasp of your math than I have. My discussion is based on the assumption that there is some finite low point and finite high point for what the game show is willing to put in the boxes, where the key is your box is more likely to be near the top than near the bottom. But if we’re talking instead about a case of an infinite range where there is no top and no bottom to the possibilities, then intuitively it doesn’t seem like the argument I’ve sketched would apply.

                                                              It probably still does apply, and I suspect your formulas show that. But for me, if I can’t say it in English it doesn’t “count.” For now I’ve advanced to where I understand “Even if I don’t know the minimum or maximum, I see why switching doesn’t mean a 50-50 chance of doubling versus halving,” but I’m not to where I understand “If there is no minimum and no maximum, I see why switching doesn’t mean a 50-50 chance of doubling versus halving.”
                                                              Comment
                                                              • PeterWellington
                                                                SBR Rookie
                                                                • 12-20-06
                                                                • 49

                                                                #32
                                                                Originally posted by Ganchrow
                                                                But it's not just that. While his conclusion was correct, his argument was not. In fact, NP's reasoning would be considerably better described as "wrong" as opposed to "air tight". What he did was set up a simplistic straw man and then proceed to "solve" that. While he came to the right conclusion he did so for the wrong reason. If you don't see this then I'd ask you to ponder (using NP's rationale) why the player's decision might change if he were told that his case did not contain the maximum value.
I still don't see where his argument was wrong. You're saying that if you changed one of the assumptions of the problem then his logic would be off, but that wasn't the puzzle that was presented. According to your original assumptions I don't see any flaws in his logic. From reading your posts on this forum, you're obviously a bright guy and well versed in this area, but I've only heard general reasons why you say his argument is faulty ("straw man", "naive", etc.). Could you give some specific reasons using your original assumptions?
                                                                Comment
                                                                • Ganchrow
                                                                  SBR Hall of Famer
                                                                  • 08-28-05
                                                                  • 5011

                                                                  #33
                                                                  Originally posted by PeterWellington
                                                                  Could you give some specific reasons using your original assumptions?
                                                                  He writes: "Assume 2 envelopes, $10 and $20 in each." This assumption fundamentally alters the problem by presupposing a distribution of starting values. That's the whole point of the paradox ... because you don't know the distribution you don't switch, if you knew the distribution you might switch. Beyond that I can only suggest you reread the other posts in this thread.

                                                                  And again, I'll ask, if NP's argument were accurate why wouldn't the logic also apply to the situation where he were told that his case did not contain the maximum value? In such a situation why couldn't he also "assume 2 envelopes, $10 and $20 in each" and then utilize the same expected value calculation? All I'm doing with this is attempting to demonstrate that precisely the same fallacious argument used by NP earlier could just as readily be used in alternate circumstances -- only in those circumstances he wouldn't be so lucky with his conclusion.

                                                                  Anyway, that's beside the point. It's clear why NP's "proof" is faulty and I don't really know what more to say about it. If you still don't believe me and have access to a university economics or applied math department you might want to check in with them and see what they have to say about the issue. Just a thought.
                                                                  Comment
                                                                  • NeedProtection
                                                                    SBR High Roller
                                                                    • 02-25-07
                                                                    • 113

                                                                    #34
                                                                    He's told that both boxes contain cash, but that one box contains twice as much cash as the other.
                                                                    Why does the amount I chose matter? It is X and 2X. I chose 10 and 20 to make it easier to understand. Listen, I am not really sure how else to put it, but I'll try to in a language we can all understand:

                                                                    If anyone out there wants to try this out and pay me to switch envelopes (assuming one envelope, box, whatever has 2x as much as the other) then you can back up the truck.

We can have a trusted third party help administer this and let's just do it for a few hours, days, weeks, months or years and see who has the most money and who quits first.

                                                                    Only stipulation: 2 envelopes, one with 2x the other, and you have to pay me to switch and you have to switch since the whole premise is you don't know which envelope is more. You can choose the amounts.

                                                                    Good luck.

Oh and I realize that you say that by choosing the amounts it somehow flaws the experiment or something ridiculous.

                                                                    Pay me to switch and see how much money you end up with.

                                                                    This whole discussion is ridiculous.
                                                                    Comment
                                                                    • NeedProtection
                                                                      SBR High Roller
                                                                      • 02-25-07
                                                                      • 113

                                                                      #35
                                                                      Originally posted by Ganchrow
                                                                      He writes: "Assume 2 envelopes, $10 and $20 in each." This assumption fundamentally alters the problem by presupposing a distribution of starting values. That's the whole point of the paradox ... because you don't know the distribution you don't switch, if you knew the distribution you might switch.
The original poster said "He's told that both boxes contain cash, but that one box contains twice as much cash as the other." Therefore, the distribution is known. Give me a break. Of course if we don't know how much is in each envelope then none of this matters.

I mean seriously - are you serious? -- "Well there are 2 envelopes of unknown amounts in unknown quantities and distributions. You choose one. Should you pay to switch envelopes?"

Is this your version of the problem? Because the OP is pretty clear on his premises and there is no paradox unless we don't know the distributions of the envelopes, in which case of course we don't know what to do. LOL.

                                                                      What a gay thread.
                                                                      Comment