Math Trivia: Auctions

Here are two common mechanisms for auctions:

1) Everyone privately bids, and whoever bids the highest wins and pays what they bid.

2) Everyone privately bids, and whoever bids the highest wins and pays the bid of whoever bids the second highest.

In the second mechanism, the optimal strategy for each bidder is to bid however much they actually value what they are bidding for. In the first mechanism, the optimal strategy is more complicated, and players should bid less than their true value.

Let's assume that there is a fixed number of bidders. The value each bidder assigns to the object being auctioned is generated independently at random from a fixed continuous probability distribution that everyone knows. If everyone follows an optimal strategy, then the amount the object sells for will follow some probability distribution.

Not surprisingly, the distribution of the price the object sells for depends on which auction mechanism the auction house uses, and the two mechanisms do in fact produce different distributions.

Which mechanism do you think will produce a better expected value for the auction house? In the first, people bid less, but in the second, the highest bid is ignored. Amazingly, in spite of producing different distributions, these two auction mechanisms produce the same expected value for the auction house!

This is actually part of a more general revenue equivalence theorem, which shows that the auction house expects the same revenue from any of a large class of auction mechanisms. This class even includes the auction where all players have to pay whatever they bid, even though only the highest bidder wins.
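
If you want to see the equivalence numerically, here is a minimal Monte Carlo sketch. It assumes values drawn independently from Uniform(0, 1), in which case the symmetric first-price equilibrium is to bid (n−1)/n times your value; the function name and parameters are just illustrative.

```python
import random

def expected_revenues(n_bidders=5, trials=200_000, seed=0):
    """Monte Carlo estimate of the auction house's expected revenue under the
    first-price and second-price mechanisms, with values i.i.d. Uniform(0, 1)
    and everyone playing the standard equilibrium strategies."""
    rng = random.Random(seed)
    first_total = 0.0
    second_total = 0.0
    shade = (n_bidders - 1) / n_bidders   # first-price bidders bid shade * value
    for _ in range(trials):
        values = sorted(rng.random() for _ in range(n_bidders))
        first_total += shade * values[-1]   # highest-value bidder wins, pays her shaded bid
        second_total += values[-2]          # winner pays the second-highest value
    return first_total / trials, second_total / trials

print(expected_revenues())   # both close to (n - 1) / (n + 1) = 2/3 for n = 5
```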

Logic Puzzle: Poker Interrupted

Imagine you and four friends deal out a hand of poker and look at your hands. Each person is dealt 5 cards, and the remaining 27 cards stay in the deck. Unfortunately, the game is interrupted before it has a chance to start. You decide to memorize your hands and play the hand tomorrow. The next day, you want everyone to somehow get the same 5 cards they had yesterday out of a shuffled deck, without gaining any information about what cards the other players have. All players are assumed to be honest and will not cheat in any way.

This puzzle is my own design. I will eventually post the solution in the comments, but if you solve it before then, show off and post your own solution.

Logical and Indexical Uncertainty

Imagine I shot a photon at a half-silvered mirror, which reflects the photon with “probability” 1/2 and lets the photon pass through with “probability” 1/2.

Now, imagine I calculated the trillionth decimal digit of pi and checked whether it was even or odd. As a Bayesian, you use the term “probability” in this situation too, and to you, the “probability” that the digit is odd is 1/2.

What is the difference between these two situations? Assuming the many-worlds interpretation of quantum mechanics, the first probability comes from indexical uncertainty, while the second comes from logical uncertainty. With indexical uncertainty, both possibilities are true in different parts of whatever your multiverse model is, but you are unsure which part of that multiverse you are in. With logical uncertainty, only one of the possibilities is true, but you do not have information about which one. It may seem at first like this should not change our decision theory, but I believe there are good reasons why we should care about what type of uncertainty we are talking about.

I present here seven reasons why we might care about the two different types of uncertainty. I do not agree with all of these ideas, but I present them anyway, because it seems reasonable that some people might argue for them. Is there anything I have missed?

1) Anthropics

Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. Beauty wakes up in the experiment and is asked, “With what subjective probability do you believe that the coin landed tails?”

People argue about whether the “correct answer” to this question should be 2/3 or 1/2. Some say that the question is malformed and needs to be rewritten as a decision theory question. Another view is that the answer actually depends on the coin flip:

If the coin flip is an indexical coin flip, then there are effectively 3 copies of Sleeping Beauty, and in 2 of those copies the coin came up tails, so you should say 2/3. On the other hand, if it is a logical coin flip, then you cannot compare the two copies of you waking up in one possible world with the one copy of you waking up in the other possible world. Only one of the worlds is logically consistent. The trillionth digit of pi is not changed by you waking up, and you will wake up regardless of the state of the trillionth digit of pi.
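
As a minimal sketch of the “count the copies” reading, here is a small simulation that just tallies awakenings across many repetitions of the experiment with an ordinary (indexical) coin; the function name and trial count are arbitrary.

```python
import random

def tails_fraction_of_awakenings(trials=100_000, seed=0):
    """Run the experiment many times with an ordinary coin and report the
    fraction of awakenings in which the coin came up tails."""
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        tails = rng.random() < 0.5
        awakenings = 2 if tails else 1   # tails: Monday and Tuesday; heads: Monday only
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings

print(tails_fraction_of_awakenings())   # close to 2/3
```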

2) Risk Aversion

Imagine that I were to build a doomsday device. The device flips a coin, and if the coin comes up heads, it destroys the Earth and everything on it. If the coin comes up tails, it does nothing. Would you prefer the coin flip to be a logical coin flip or an indexical coin flip?

You probably prefer the indexical coin flip. It feels safer to have the world continue on in half of the universes than to risk destroying the world in all universes. I do not think this feeling arises from biased thinking, but instead from a true difference in preferences. To me, destroying the world in all of the universes is actually much more than twice as bad as destroying the world in half of the universes.

3) Preferences vs Beliefs

In updateless decision theory, you want to choose the output of your decision procedure. If there are multiple copies of yourself in the universe, you do not ask which copy you are, but instead just choose the output which maximizes your utility of the universe in which all of your copies output that value. The “expected” utility comes from your logical uncertainty about what the universe is like. There is not much room in this theory for indexical uncertainty. Instead, the indexical uncertainty is encoded into your utility function. The fact that you prefer being given a reward with indexical probability 99% to being given a reward with indexical probability 1% should instead be viewed as you preferring the universe in which 99% of the copies of you receive the reward to the universe in which 1% of the copies of you receive it.

In this view, it seems that indexical uncertainty should be treated as a preference, while logical uncertainty should be treated as a belief. It is important to note that this all adds up to normality: if we are trying to maximize our expected utility, the only thing we do with preferences and beliefs is multiply them together, so for the most part it doesn’t change much to think of something as a preference as opposed to a belief.
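
Here is a toy sketch of that last point, with made-up names and numbers: whether the 99%/1% indexical split is treated as a probability or folded into the utility of a single, logically certain universe, the expected-utility ranking of the actions comes out the same.

```python
def best_action(actions, beliefs, utility):
    """Pick the action maximizing the sum over worlds of P(world) * U(action, world)."""
    return max(actions, key=lambda a: sum(p * utility(a, w) for w, p in beliefs.items()))

actions = ["accept", "decline"]

# Option 1: the indexical split is a belief over which copy you are.
beliefs = {"rewarded copy": 0.99, "unrewarded copy": 0.01}
def utility(action, world):
    return 1.0 if (action == "accept" and world == "rewarded copy") else 0.0

# Option 2: there is one logically certain universe, and the utility of that
# universe is the fraction of your copies that end up rewarded.
certain = {"the universe": 1.0}
def utility_folded(action, world):
    return 0.99 if action == "accept" else 0.0

print(best_action(actions, beliefs, utility))          # accept
print(best_action(actions, certain, utility_folded))   # accept
```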

4) Altruism

In Subjective Altruism, I asked whether, when being altruistic towards someone else, you should try to maximize their expected utility relative to your probability function or relative to their probability function. If your answer was to choose the option which maximizes your expectation of their utility, then it is actually very important whether indexical uncertainty is a belief or a preference.

5) Sufficient Reflection

In theory, given enough time, you can settle logical uncertainties just by thinking about them. However, given enough time, you can settle indexical uncertainties by making observations. It seems to me that there is not a meaningful difference between observations that take place entirely within your mind and observations about the outside world. I therefore do not think this difference means very much.

6) Consistency

Logical uncertainty seems like it is harder to model, since it means you are assigning probabilities to possibly inconsistent theories, and all inconsistent theories are logically equivalent. You might want some measure of equivalence of your various theories, and it would have to be different from logical equivalence. Indexical uncertainty does not appear to have the same issues, at least not in an obvious way. However, I think this issue only comes from looking at the problem in the wrong way. I believe that probabilities should only be assigned to logical statements, not to entire theories. Then, since everything is finite, you can treat sentences as equivalent only after you have proven them equivalent.

7) Counterfactual Mugging

Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your $100. But Omega also tells you that if the coin came up heads instead of tails, it’d give you $10000, but only if you’d agree to give it $100 if the coin came up tails.

It seems reasonable to me that people might feel very differently about this question based on whether the coin is logical or indexical. To me, it makes sense to give up the $100 either way (an agent who precommits to paying expects 1/2 × $10000 − 1/2 × $100 = $4950, versus $0 for one who refuses), but it seems possible to change the question in such a way that the type of coin flip might matter.

Logic Puzzle: Make 15

Consider the following two player game:

Two players take turns naming numbers 1 through 9. Players may not name a number that has already been named. If at any point three of the numbers that one of the players has named add up to exactly 15, that player wins. After all 9 numbers have been named, if neither player has already won, the game is declared a tie.

Is this game a first player win, a second player win, or a tie? This problem shouldn’t be too difficult to solve on a computer, but you get bonus points if you can convince a friend of the correct answer in two minutes without a computer. Solution in comments.
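
If you do want to let a computer do it, here is a minimal brute-force sketch (it prints the answer at the end, so don’t run it if you’d rather solve the puzzle yourself); the function names are just illustrative.

```python
from functools import lru_cache
from itertools import combinations

def has_15(numbers):
    """True if any three of the given numbers sum to exactly 15."""
    return any(a + b + c == 15 for a, b, c in combinations(numbers, 3))

@lru_cache(maxsize=None)
def value(to_move, waiting):
    """Game value for the player about to move, who has already named the
    numbers in `to_move`; the opponent has named `waiting`.
    Returns 1 for a forced win, 0 for a tie, -1 for a forced loss."""
    remaining = set(range(1, 10)) - set(to_move) - set(waiting)
    if not remaining:
        return 0
    best = -1
    for n in remaining:
        mine = frozenset(to_move | {n})
        if has_15(mine):
            return 1
        best = max(best, -value(waiting, mine))
        if best == 1:
            break
    return best

outcome = value(frozenset(), frozenset())
print({1: "first player win", 0: "tie", -1: "second player win"}[outcome])
```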

Math Trivia: Interlocked Cubes

Here is a math fact that may surprise you:

There exists a configuration in three dimensional space of a finite number of unit cubes such that no cube can be removed without moving the others. That is to say, it is not possible to hold all but one of the cubes fixed in space and move the remaining cube arbitrarily far away from the others.

This should surprise you. Your intuition is probably saying “Why don’t you just take the highest cube and move it straight up?” That intuition would be correct if we were talking about spheres, but not with cubes. A cube that is on average below another cube can still be partially above that cube.

The configuration is actually not all that difficult to imagine. First notice that if you balance a cube on one of its corners and take a horizontal cross section right in the middle, that cross section is a regular hexagon. You can take a tiling of the infinite plane by hexagons and extend that tiling to cubes. If we tile the xy plane with these hexagons and extend it to cubes, all of those cubes will have the same z coordinate. Each cube will be touching 6 other cubes, and even though they all have the same height, 3 of those are positioned “above” the cube to stop it from going up, while 3 of them are positioned “below” the cube to stop it from going down. If this is hard for you to imagine, or you want to see a proof, you can see it in this paper. (Look at figure 6)

Now that you believe that you can tile a plane, instead tile a very large square with cubes, so that no cube can be moved up or down. Make all the cubes very slightly smaller, so that you have some freedom to bend this large square. Wrap it around to make a large cylinder, then wrap it around in the other direction to make a large torus. When you wrap it around, you will glue edges of the square to other edges of the square, by interlocking the cubes on the boundaries.

The final configuration is a large torus, which locally looks like a plane tiled by the hexagonal cross sections of the cubes, but with the cubes slightly shrunken.

Thought Crimes

In my morals, at least up until recently, one of the most obvious universal rights was freedom of thought. Agents should be allowed to think whatever they want, and should not be discouraged for doing so. This feels like a terminal value to me, but it is also instrumentally useful. Freedom of thought encourages agents to be rational and search for the truth. If you are punished for believing something true, you might not want to search for truth. This could slow science and hurt everyone. On the other hand, religions often discourage freedom of thought, and this is a major reason for my moral problems with religions. It is not just that religions are wrong; everyone is wrong about lots of stuff. It is that many religious beliefs restrict freedom of thought by punishing doubters with ostracism or eternal suffering. I recognize that there are some “religions” which do not exhibit this flaw (as much).

Recently, my tune has changed. There are two things which have caused me to question the universality of the virtue of freedom of thought:

1) Some truths can hurt society

Topics like unfriendly artificial intelligence make me question the assumption that I always want intellectual progress in all areas. If modern society were to pick one topic where restricting thought might be very useful, UFAI seems like a good choice. Maybe freedom of thought on this issue is a necessary casualty to avoid a much worse outcome.

2) Simulations

This is the main point I want to talk about. If we get to the point where minds can simulate other minds, then we run into major issues. Should one mind be allowed to simulate another mind and torture it? It seems like the answer should be no, but this rule seems very hard to enforce without sacrificing not only free thought, but what would seem like the most basic right to privacy. Even today, people can have preferences over the thoughts of other people, but our intuition tells us that the one who is doing the thinking should get the final say. If a mind is simulating another mind, shouldn’t the simulated mind also have rights? What makes advanced minds simulating torture so much worse than a human today thinking about torture? (Or even worse, thinking about 3^^^^3 people with dust specks in their eyes. (That was a joke, I know we can’t actually think about 3^^^^3 people.))

The first thing seems like a possible practical concern, but it does not bother me nearly as much as the second one. The first seems like it is just an example of the basic right of freedom of thought contradicting another basic right, the right to safety. However, the second thing confuses me. It makes me wonder whether I should treat freedom of thought as a virtue as much as I currently do. I am also genuinely not sure whether I believe that advanced minds should not be free to do whatever they want to simulations in their own minds. I think they should not, but I am not sure about this, and I do not know if this restriction should be extended to humans.

What do you think? What is your view on the morality of drawing the line between the rights of a simulator and the rights of a simulatee? Do simulations within human minds have any rights at all? What conditions (if any) would make you think rights should be given to simulations within human minds?

Math Trivia: Concentration

You probably know about the game of Concentration.

Some number of pairs of matching cards (often 27 pairs, but we’ll just assume it is an odd number and at least 5) are placed face down on a table. Players take turns flipping over two cards and checking whether they match. The cards are chosen and flipped one at a time. If the cards match, they are removed, and the player who flipped them gets a point and gets another turn. When all the cards are removed, the player who has the most points wins.

What you probably don’t know is that if the first player plays optimally, he will lose with probability at most 45%. What you probably won’t believe is that if the second player plays optimally, he will lose with probability at most 45%.

What about the other 10% of the games? They never end! Both players just continue picking up cards that they already know about, to avoid giving any information to the other player. I will now give a sub-optimal, but simple strategy that is similar to the optimal strategy. It will win most games you play if the person you are playing against is naively flipping over cards and giving away information.

If there are two or more completely hidden pairs, flip a new card. If it matches a card you’ve seen before, take the match; if not, flip a card you already know (if possible).

If there is only one completely hidden pair, never flip any new cards.

If there are no completely hidden pairs, flip a new card, then flip the card that matches it.

I haven’t checked that this particular strategy works for small numbers of cards, but a strategy that loses only 45% of the time exists as long as there are at least 5 pairs. For large n, notice that you win roughly half of the pairs claimed before the second-to-last hidden pair is revealed. After that point, if your opponent wants the game to end, he has to keep flipping new cards. He will win a pair for each new card he flips until he flips the first card of the last hidden pair. Then, assuming he cannot successfully match it, you win all the remaining pairs. You therefore win on average about 2/3 of the pairs claimed after the second-to-last pair is revealed.
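
Here is a quick Monte Carlo sketch of just that endgame claim. It assumes the table has reached a state with one completely hidden pair plus some number of unseen cards whose partners are already known; the function name and parameters are made up for illustration.

```python
import random

def waiting_players_share(known_partner_cards=30, trials=100_000, seed=0):
    """Estimate the fraction of the remaining pairs won by the player who
    refuses to flip new cards, once only one completely hidden pair is left.
    The opponent flips unseen cards and wins a pair for each one until he hits
    a card of the last hidden pair; everything after that (including that
    pair) goes to the waiting player."""
    rng = random.Random(seed)
    total_pairs = known_partner_cards + 1
    share = 0.0
    for _ in range(trials):
        # 0 = unseen card whose partner is known, 1 = card of the last hidden pair
        order = [0] * known_partner_cards + [1, 1]
        rng.shuffle(order)
        opponent_pairs = order.index(1)   # new cards matched before hitting the hidden pair
        share += (total_pairs - opponent_pairs) / total_pairs
    return share / trials

print(waiting_players_share())   # roughly 2/3
```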

Notice that if both players play this strategy (or the optimal one), games with lots of pairs often do not finish. The actual optimal strategy is complicated, because you are trying to maximize probability of winning and not just the expected number of points, and because this is a discrete problem with lots of special cases, but hopefully this gives you a rough idea.

If you want to rigorously check my claim, you can easily write a program to recursively compute the probability of winning in different states of the game. There aren’t that many equivalence classes of states.