Less Wrong

Now that I am getting more traffic from sources such as Reddit, Facebook, and Twitter, some of my viewers might be unaware of the existence of lesswrong.com. Less Wrong is “a community blog devoted to refining the art of human rationality.” If you are unfortunate enough to be unfamiliar with Less Wrong, you should stop wasting your time reading my blog, and instead discover all the amazing content Less Wrong has to offer.

I recommend browsing what looks interesting from the sequences for a little while, until you manage to convince yourself that it is worth your time to read everything that Eliezer Yudkowsky has to offer, at which point you should just read all of his posts in chronological order. You should then make an account, and participate in some of the amazing rationality discussions. If you enjoy the Less Wrong community, then you should also take a look to see if there is a Less Wrong meetup near you.

Much of my content here has been crossposted on Less Wrong. My username is Coscott, and you can see a list of my discussion posts here.

Logic Puzzle: Upside Down Cake

Imagine you have a circular cake that is frosted on top. You cut a d-degree slice out of it, and then put it back, but rotated so that it is upside down. Now, d degrees of the cake have frosting on the bottom, while 360 - d degrees have frosting on the top. Rotate the cake d degrees, take the next slice, and put it back upside down. Now, assuming d is less than 180, 2d degrees of the cake will have frosting on the bottom.

If d is 60 degrees, then after you repeat this flip-and-rotate procedure 6 times, all the frosting will be on the bottom. If you repeat the procedure 12 times, all of the frosting will be back on the top of the cake.
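The d = 60 example can be checked by simulation. A minimal sketch, covering only the special case where d evenly divides 360, so every cut lands on a segment boundary and flipping a slice never reverses any internal structure:

```python
# Simulate the flip-and-rotate procedure when d divides 360.
# The cake is a list of 360/d segments; True = frosting on top.

def simulate(d, steps):
    n = 360 // d
    cake = [True] * n
    for _ in range(steps):
        cake[0] = not cake[0]           # flip the d-degree slice upside down
        cake = cake[1:] + cake[:1]      # rotate the cake by d degrees
    return cake

print(all(not up for up in simulate(60, 6)))   # True: all frosting on the bottom
print(all(simulate(60, 12)))                   # True: all frosting back on top
```

When d does not divide 360, cuts split previously flipped pieces and flipping reverses the order of the sub-arcs inside the slice, so this simple model no longer applies; that is where the puzzle gets interesting.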

For what values of d does the cake eventually get back to having all the frosting on the top?

The solution will eventually be posted in the comments, but if you solve it before then, show off and post your own solution.

Deadly Rooms of Death

I would like to make you aware that Deadly Rooms of Death is the best video game there is. It is also the most difficult puzzle game I have ever seen. Most games I like have inspired me to take out a pencil and paper, a calculator, or a spreadsheet to analyze them in one way or another. DROD has done this many times. However, for DROD, I have had to do actual math (e.g. “Oh, I can prove it is possible to solve a general puzzle of this form if and only if this graph I can draw quickly by looking at the puzzle has a perfect matching,” and then I actually use that fact when solving more difficult puzzles). In addition to stimulating the mathematician in me, this game has also stimulated the scientist in me. I have had to make predictions about how the game works, and run experiments to test those predictions. The game is 100% a puzzle game: it is completely deterministic, and no quick reflexes are needed.

Here is an article from the Mathematical Association of America about how amazing DROD is.

There are currently 5 DROD games out, as well as 13 official DLC holds, and lots of user-made holds. The sixth and final game is due to come out this year. You should start by playing King Dugan’s Dungeon. There are five ways to do this:

1) (Recommended) You can buy it for 10 dollars here, and it comes with the 2nd and 3rd game in the series. (You will probably want to buy the 2nd and 3rd game later anyway, and you can’t beat this price.)

2) You can buy it for 10 dollars here. It comes with a DLC pack and a month of Caravel membership.

3) You can download the demo for Journey to the Rooted Hold here, download the level pack for Architects’ Edition here, import the level pack, and play for free. (Architects’ Edition is the old, now-free version that was later improved into King Dugan’s Dungeon. You will miss out on most of the hardest secret rooms this way.)

4) You can play the Flash remake of the first part of King Dugan’s Dungeon here. (Only choose this if you are not sure if you want to play DROD yet. If you choose this, and want to continue playing, you will end up having to repeat a lot of puzzles you have already solved, and might see some hints that spoil some of the fun.)

5) If you know me personally, you can ask me for it. I bought extra copies of the game when it was on sale. I am willing to trade them for your agreement to keep me updated on your progress, because I love talking about DROD.


Terminal and Instrumental Beliefs

As you may know from my past posts, I believe that probabilities should not be viewed as uncertainty, but instead as weights on how much you care about different possible universes. This is a very subjective view of reality. In particular, it seems to imply that when other people have different beliefs from mine, there is no sense in which they can be wrong. They just care about the possible futures with different weights than I do. I will now try to argue that this is not a necessary conclusion.

First, let’s be clear about what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which gives heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coin comes up tails. Notice that this does not mean I want the coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.
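The preference in that last sentence can be made concrete with a toy calculation. This is only an illustration of the 9:1 weighting from the example above, with dollar payoffs valued linearly:

```python
# Weights on the two possible futures, normalized to sum to 1
# (i.e., the 90% heads coin from the example).
weights = {"heads": 0.9, "tails": 0.1}

def value(payoffs):
    """Weighted sum of dollar payoffs across possible futures."""
    return sum(weights[w] * payoffs[w] for w in weights)

dollar_if_heads = {"heads": 1, "tails": 0}
dollar_if_tails = {"heads": 0, "tails": 1}

print(value(dollar_if_heads))  # 0.9
print(value(dollar_if_tails))  # 0.1
```

Preferring the first bet reflects the weights on futures, not a desire for heads as such: the payoff in each future is the same one dollar.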

Now, imagine that you are unaware of the fact that it is an unfair coin. By default, you believe that the coin comes up heads with probability 50%. How can we express the fact that I have a correct belief, and you have an incorrect belief in the language of values?

We will take advantage of the language of terminal and instrumental values. A terminal value is something that you try to get because you want it. An instrumental value is something that you try to get because you believe it will help you get something else that you want.

If you believe a statement S, that means that you care more about the worlds in which S is true. If you terminally assign a higher value to worlds in which S is true, we will call this belief a terminal belief. On the other hand, if you believe S because you think that S is logically implied by some other terminal belief, T, we will call your belief in S an instrumental belief. 

Instrumental values can be wrong, if you are factually wrong about whether the instrumental value will help achieve your terminal values. Similarly, an instrumental belief can be wrong if you are factually wrong about whether it is implied by your terminal belief.

Your belief that the coin will come up heads with probability 50% is an instrumental belief. You have a terminal belief in some form of Occam’s razor. This causes you to believe that coins are likely to behave similarly to how coins have behaved in the past. In this case, that inference was not valid, because you did not take into consideration the fact that I chose the coin for the purpose of this thought experiment. Your instrumental belief is, in this case, wrong. If your belief in Occam’s razor is terminal, then it would not be possible for Occam’s razor to be wrong.

This is probably a distinction that you are already familiar with. I am talking about the difference between an axiomatic belief and a deduced belief. So why am I viewing it like this? I am trying to strengthen my understanding of the analogy between beliefs and values. To me, they appear to be two different sides of the same coin, and building up this analogy might allow us to translate some intuitions or results from one view into the other view.

Logic Puzzle: Pandigital e

If you can construct a number using the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, each exactly once, together with the operations of addition, subtraction, multiplication, division, exponentiation, and digit concatenation, in any order, we will call that number pandigital. Notice that there are only finitely many pandigital numbers.

How close can you get to the mathematical constant e = \sum_{n=0}^{\infty} \frac{1}{n!} with a pandigital number?

There are two parts to this logic puzzle. First, try to get the best approximation of e that you can. Then, try to come up with an estimate of how close you can get if you checked all possible pandigital approximations. I will post one particularly good approximation in the comments, but I suggest you think about it for a while before looking.



Math Trivia: Triangular Billiards

Imagine you are playing billiards on a triangular table. The ball travels in a straight line with no friction, and when it hits an edge, it reflects in the standard way. One might ask whether or not there exists a way to hit the ball so that it follows a closed periodic path (i.e., the ball eventually returns to its starting point, moving in the same direction in which it started).

One way to achieve this is by drawing the three altitudes of the triangle. For each edge, draw a line perpendicular to that edge, passing through the third vertex not on that edge. Then, take the three points where the altitudes intersect the edges perpendicularly, and connect them up. You can check that this simple triangular path, bouncing off of each edge once, is a closed periodic path for the ball.
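The construction above can be verified numerically. A sketch for one arbitrarily chosen acute triangle: compute the foot of each altitude, then check at each foot that the incoming direction, reflected across the edge, is parallel to the outgoing direction:

```python
# Verify the reflection law along the altitude-feet path
# for one arbitrarily chosen acute triangle.

def foot(p, a, b):
    """Foot of the perpendicular from point p onto line a-b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / (dx * dx + dy * dy)
    return (a[0] + t * dx, a[1] + t * dy)

def reflect(v, d):
    """Reflect direction v across a line with direction d."""
    k = 2 * (v[0] * d[0] + v[1] * d[1]) / (d[0] * d[0] + d[1] * d[1])
    return (k * d[0] - v[0], k * d[1] - v[1])

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)        # an acute triangle
P = [foot(C, A, B), foot(A, B, C), foot(B, C, A)]   # altitude feet
edges = [(A, B), (B, C), (C, A)]                    # edge each foot lies on

for i in range(3):
    prev, cur, nxt = P[i - 1], P[i], P[(i + 1) % 3]
    a, b = edges[i]
    edge_dir = (b[0] - a[0], b[1] - a[1])
    incoming = (cur[0] - prev[0], cur[1] - prev[1])
    outgoing = (nxt[0] - cur[0], nxt[1] - cur[1])
    r = reflect(incoming, edge_dir)
    cross = r[0] * outgoing[1] - r[1] * outgoing[0]
    assert abs(cross) < 1e-9  # reflected incoming is parallel to outgoing
print("reflection law holds at all three altitude feet")
```

Reflecting across the edge direction keeps the component along the wall and flips the normal component, which is exactly the standard billiard bounce, so a zero cross product at all three feet confirms the path is a valid closed orbit.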

There is one problem. In order for the altitudes to intersect the edges, the triangle must be acute. What happens when the triangle is not acute? This is actually a 200-year-old open problem!

It can be shown that a different method works for all right triangles. This paper gave a computer-assisted, but still rigorous, proof in 2006 that there is a closed periodic path for all triangles in which every angle is less than 100 degrees. However, if one of the angles can be greater than 100 degrees, the question is, shockingly, still open!

Preferences without Existence

My current beliefs say that there is a Tegmark 4 (or larger) multiverse, but there is no meaningful “reality fluid” or “probability” measure on it. We are all in this infinite multiverse, but there is no sense in which some parts of it exist more or are more likely than any other part. I have tried to illustrate these beliefs as an imaginary conversation between two people. My goal is either to share this belief, or, more likely, to get help from you in understanding why it is completely wrong.

A: Do you know what the game of life is?

B: Yes, of course, it is a cellular automaton. You start with a configuration of cells, and they update following a simple deterministic rule. It is a simple kind of simulated universe.

A: Did you know that when you run the game of life on an initial condition of a 2791 by 2791 square of live cells, and run it for long enough, creatures start to evolve? (Not true.)

B: No. That’s amazing!

A: Yeah, these creatures have developed language and civilization. Time step 1,578,891,000,000,000 seems like it is a very important era for them. They have developed much technology, and someone has developed the theory of a doomsday device that will kill everyone in their universe and replace the entire thing with emptiness, but at the same time, many people are working hard on developing a way to stop him.

B: How do you know all this?

A: We have been simulating them on our computers. We have simulated up to that crucial time.

B: Wow, let me know what happens. I hope they find a way to stop him.

A: Actually, the whole project is top secret now. The simulation will still be run, but nobody will ever know what happens.

B: That’s too bad. I was curious, but I still hope the creatures live long, happy, interesting lives.

A: What? Why do you hope that? It will never have any effect on you.

B: My utility function includes preferences between different universes even if I never get to know the result.

A: Oh, wait, I was wrong. It says here the whole project is canceled, and they have stopped simulating.

B: That is too bad, but I still hope they survive.

A: They won’t survive, we are not simulating them any more.

B: No, I am not talking about the simulation; I am talking about the simple set of mathematical laws that determine their world. I hope that those mathematical laws, if run long enough, do interesting things.

A: Even though you will never know, and it will never even be run in the real universe?

B: Yeah. It would still be beautiful if it never gets run and no one ever sees it.

A: Oh, wait. I missed something. It is not actually the game of life. It is a different cellular automaton they used. It says here that it is like the game of life, but the actual rules are really complicated, and take millions of bits to describe.

B: That is too bad. I still hope they survive, but not nearly as much.

A: Why not?

B: I think information theoretically simpler things are more important and more beautiful. It is a personal preference. It is much more desirable to me to have a complex interesting world come from simple initial conditions.

A: What if I told you I lied, and none of these simulations were run at all and never would be run. Would you have a preference over whether the simple configuration or the complex configuration had the life?

B: Yes, I would prefer the simple configuration to have the life.

A: Is this some sort of Solomonoff probability measure thing?

B: No, actually. It is independent of that. If the only existing thing were this universe, I would still want the laws of math to have creatures with long, happy, interesting lives arise from simple initial conditions.

A: Hmm, I guess I want that too. However, that is negligible compared to my preferences about things that really do exist.

B: That statement doesn’t mean much to me, because I don’t think this existence you are talking about is a real thing.

A: What? That doesn’t make any sense.

B: Actually, it all adds up to normality.

A: I see why you can still have preferences without existence, but what about beliefs?

B: What do you mean?

A: Without a concept of existence, you cannot have Solomonoff induction to tell you how likely different worlds are to exist.

B: I do not need it. I said I care more about simple universes than complicated ones, so I already make my decisions to maximize utility weighted by simplicity. It comes out exactly the same: I do not need to believe simple things exist more, because I already believe simple things matter more.

A: But then you don’t actually anticipate that you will observe simple things rather than complicated things.

B: I care about my actions more in the cases where I observe simple things, so I prepare for simple things to happen. What is the difference between that and anticipation?

A: I feel like there is something different, but I can’t quite put my finger on it. Do you care more about this world than that game of life world?

B: Well, I am not sure which one is simpler, so I don’t know, but it doesn’t matter. It is a lot easier for me to change our world than it is for me to change the game of life world. I will therefore make choices that roughly maximize my preferences about the future of this world in the simplest models.

A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?

B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.

A: Okay, it seems plausible, but kind of depressing to think that we do not exist.

B: Oh, I disagree! I am still a mind with free will, and I have the power to use that will to change my own little piece of mathematics: the output of my decision procedure. To me that feels incredibly beautiful, eternal, and important.