The post Logic Puzzle: More Simulated Dice appeared first on By Way Of Contradiction.

Imagine you have k coins, which are not necessarily fair. You get to choose the bias for each of the coins. (i.e. You choose the probability that each coin comes up heads.) You may then use whatever procedure you like to simulate a fair n-sided die. You are allowed to flip the same coin more than once. The catch is that your procedure must end in some bounded number of coin flips.

For example, if you have a coin that comes up heads with probability 1/3, and another coin that comes up heads with probability 1/2, then you can simulate a 3-sided die by flipping the first coin, returning 1 if you get heads, and otherwise flipping the second coin and returning 2 or 3 depending on the result.
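This two-coin procedure can be sketched in Python (the function names are illustrative, not from the post):

```python
import random

def biased_flip(p):
    """One flip of a coin that comes up heads with probability p."""
    return random.random() < p

def roll_d3():
    """Simulate a fair 3-sided die with a 1/3-biased coin and a fair coin,
    using at most two flips, as in the example above."""
    if biased_flip(1 / 3):
        return 1
    return 2 if biased_flip(1 / 2) else 3
```

Each outcome has probability 1/3: outcome 1 directly, and outcomes 2 and 3 each with probability (2/3)·(1/2).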

As a function of n, what is the minimum number k of coins that you need? (i.e. Which dice can be simulated with 1 coin? 2 coins? 3 coins? etc.)

The solution will eventually be posted in the comments. If you have been working on this puzzle for a while, and are starting to lose interest, please make sure you view this minor hint before you give up. I advise anyone who has been working on this puzzle for more than an hour to just view the hint. It is more interesting than it is helpful. (Written in rot13, decode here.)

Gurer rkvfgf na vagrtre x fhpu gung rirel qvr pna or fvzhyngrq hfvat ng zbfg x pbvaf.
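If you would rather not paste the hint into a website, ROT13 is easy to decode locally; a minimal sketch using Python's standard library:

```python
import codecs

def decode_hint(ciphertext):
    """ROT13 is its own inverse, so the same call encodes and decodes."""
    return codecs.decode(ciphertext, "rot_13")
```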

The post Maximize Worst Case Bayes Score appeared first on By Way Of Contradiction.

Given a consistent but incomplete theory, how should one choose a random model of that theory?

My proposal is rather simple. Just assign probabilities to sentences in such a way that if an adversary were to choose a model, your Worst Case Bayes Score would be maximized. This assignment of probabilities represents a probability distribution on models; choose randomly from that distribution. However, it will take some work to show that what I just described even makes sense. We need to show that the Worst Case Bayes Score can be maximized, that the maximum is unique, and that the resulting assignment of probabilities to sentences represents an actual probability distribution. This post gives the necessary definitions, and proves these three facts.

Finally, I will show that a given probability assignment is coherent if and only if it is impossible to change the assignment in a way that simultaneously improves the Bayes Score by an amount bounded away from 0 in all models. This is nice because it gives us a measure of how far a probability assignment is from being coherent. Namely, we can define the “incoherence” of a probability assignment to be the supremum amount by which you can simultaneously improve the Bayes Score in all models. This could be a useful notion, since we usually cannot compute a coherent probability assignment, so in practice we need to work with incoherent probability assignments which approach a coherent one.

Now, let’s move on to the formal definitions and proofs.

Fix some language L, for example the language of first order set theory. Fix a consistent theory T of L, for example ZFC. Fix a nowhere zero probability measure μ on the sentences of L, for example μ(φ) = 2^(−ℓ(φ)), where ℓ(φ) is the number of bits necessary to encode φ.

A probability assignment of L is any function P from the sentences of L to the interval [0, 1]. Note that this can be any function, and does not have to represent a probability distribution. Given a probability assignment P of L, and a model M of T, we can define the Bayes Score of P with respect to M by

B_M(P) = Σ_{M ⊨ φ} μ(φ) ln(P(φ)) + Σ_{M ⊭ φ} μ(φ) ln(1 − P(φ)).

We define the Worst Case Bayes Score WCB(P) to be the infimum of B_M(P) over all models M of T.
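For a finite collection of weighted sentences and models, these two quantities can be computed directly (a sketch assuming the standard log-scoring convention with natural logarithms: true sentences contribute the log of their assigned probability, false ones the log of its complement, weighted by the measure):

```python
import math

def bayes_score(mu, P, model):
    """Bayes Score of assignment P against `model`, where `model` maps
    each sentence to its truth value and `mu` gives the sentence weights."""
    return sum(
        weight * math.log(P[phi] if model[phi] else 1 - P[phi])
        for phi, weight in mu.items()
    )

def worst_case_bayes_score(mu, P, models):
    """Infimum (here: minimum) of the Bayes Score over a set of models."""
    return min(bayes_score(mu, P, m) for m in models)
```

For instance, the identically-1/2 assignment scores −ln 2 against every model.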

Let P* denote the probability assignment that maximizes the function WCB. We will show that this maximum exists and is unique, so P* is well defined.

In fact, P* is also coherent, meaning that there exists a probability distribution on the set of all models of T such that P*(φ) is exactly the probability that a randomly chosen model satisfies φ. Since the natural definition of a measurable subset of models comes from unions and intersections of the sets of all models satisfying a given sentence, we can think of P* as an actual probability distribution on the set of all models of T.

First, we must show that there exists a probability assignment which maximizes WCB.

Note that B_M(P) either diverges to −∞, or converges to a non-positive real number. If P is the identically 1/2 function, then B_M(P) = −ln 2 for every model M, so there is at least one P for which WCB(P) is finite. This means that when maximizing WCB, we need only consider P for which B_M(P) converges to a number between −ln 2 and 0 for all M.

Assume by way of contradiction that there is no P which maximizes WCB. Then there must be some supremum value s such that WCB(P) can get arbitrarily close to s, but never equals or surpasses s. Consider an infinite sequence of probability assignments P_1, P_2, … such that lim WCB(P_i) = s. By passing to a subsequence, we may assume that P_i(φ) converges for every sentence φ. Let P be such that P(φ) = lim P_i(φ) for all φ.

By assumption, WCB(P) must be less than s. Take any model M for which B_M(P) < s. Then, since every term in the sum is non-positive, there exists a finite set S of sentences such that B_M^S(P) < s, where

B_M^S(P) = Σ_{φ ∈ S, M ⊨ φ} μ(φ) ln(P(φ)) + Σ_{φ ∈ S, M ⊭ φ} μ(φ) ln(1 − P(φ)).

Note that in order to keep the Bayes Score at least −ln 2, any P must satisfy P(φ) ≥ 2^(−1/μ(φ)) if M ⊨ φ, and P(φ) ≤ 1 − 2^(−1/μ(φ)) if M ⊭ φ. Consider the space of all functions from S to [0, 1] satisfying these inequalities. Since there are only finitely many values, each restricted to a closed and bounded interval, this space is compact. Further, B_M^S is a continuous function of these finitely many values, defined everywhere on this compact set. Therefore,

B_M^S(P) = lim B_M^S(P_i).

However, clearly B_M(P_i) ≤ B_M^S(P_i) for every i, since removing non-positive terms can only increase the sum, so

lim WCB(P_i) ≤ lim B_M^S(P_i) = B_M^S(P) < s,

contradicting our assumption that WCB(P_i) converges to s.

Next, we will show that there is a unique probability assignment which maximizes WCB. Assume by way of contradiction that there were two distinct probability assignments, P and Q, which maximize WCB. Consider the probability assignment R, given by

R(φ) = (P(φ) + Q(φ))/2.

It is quick to check that this definition satisfies both

ln(R(φ)) ≥ (ln(P(φ)) + ln(Q(φ)))/2

and

ln(1 − R(φ)) ≥ (ln(1 − P(φ)) + ln(1 − Q(φ)))/2,

and in both cases equality holds only when P(φ) = Q(φ).

Therefore, we get that for any fixed model M,

B_M(R) ≥ (B_M(P) + B_M(Q))/2.

By looking at the improvement coming from a single sentence φ with P(φ) ≠ Q(φ), we see that

B_M(R) − (B_M(P) + B_M(Q))/2

is actually bounded away from 0, uniformly in M, which means that

WCB(R) > WCB(P) = WCB(Q),

which contradicts the fact that P and Q maximize WCB.
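The inequality driving this uniqueness argument is the strict concavity of the logarithm: averaging two probabilities never lowers either log-score term, and strictly raises both wherever the two assignments differ. A quick numerical check (a sketch; the function name is illustrative):

```python
import math

def averaging_gains(p, q):
    """Gain in each log-score term from replacing p and q by their average."""
    r = (p + q) / 2
    gain_if_true = math.log(r) - (math.log(p) + math.log(q)) / 2
    gain_if_false = math.log(1 - r) - (math.log(1 - p) + math.log(1 - q)) / 2
    return gain_if_true, gain_if_false
```

Both gains are strictly positive whenever p ≠ q, and exactly zero when p = q.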

This means that there is a unique probability assignment, P*, which maximizes WCB, but we still need to show that P* is coherent. For this, we will use the alternate definition of coherence given in Theorem 1 here. Namely, P* is coherent if and only if it assigns probability 0 to every contradiction, probability 1 to every tautology, and satisfies P*(φ) = P*(φ ∧ ψ) + P*(φ ∧ ¬ψ) for all sentences φ and ψ.

Clearly P* assigns probability 0 to every contradiction, since otherwise we could increase the Bayes Score in all models by the same amount by updating that probability to 0. Similarly, P* clearly assigns probability 1 to every tautology.

If P*(φ) ≠ P*(φ ∧ ψ) + P*(φ ∧ ¬ψ), then we update all three probabilities as follows:

and

where c is the unique real number such that the three new probabilities satisfy the condition P(φ) = P(φ ∧ ψ) + P(φ ∧ ¬ψ). This correction increases the Bayes Score by the same amount in all models, and therefore increases WCB, contradicting the maximality of P*. Therefore P* is coherent, as desired.

Finally, we show that any given probability assignment P is coherent if and only if it is impossible to simultaneously improve the Bayes Score by an amount bounded away from 0 in all models. The above proof that P* is coherent actually shows one direction, since the only fact it used about P* is that one cannot simultaneously improve its Bayes Score by an amount bounded away from 0 in all models. For the other direction, assume by way of contradiction that P is coherent, and that there exists an assignment Q and an ε > 0 such that B_M(Q) − B_M(P) ≥ ε for all models M.

In particular, since P is coherent, it represents a probability distribution on models, so we can choose a random model M from the distribution P. If we do so, we must have that

E[B_M(Q)] ≥ E[B_M(P)] + ε.

However, this contradicts the well known fact that the expectation of the Bayes Score is maximized by reporting the honest probabilities corresponding to the actual distribution the model is chosen from.
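That well known fact is that the logarithmic score is a proper scoring rule: if a sentence is true with probability p, the expected score p·ln(q) + (1 − p)·ln(1 − q) is maximized by reporting q = p. A small numerical sketch (the grid search is illustrative):

```python
import math

def expected_log_score(p, q):
    """Expected log score for reporting q when the true probability is p."""
    return p * math.log(q) + (1 - p) * math.log(1 - q)

def best_report(p, grid=999):
    """Grid-search the report q in (0, 1) maximizing the expected score."""
    candidates = [(i + 1) / (grid + 1) for i in range(grid)]
    return max(candidates, key=lambda q: expected_log_score(p, q))
```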

I would be very grateful if anyone can come up with a proof that this probability distribution P* which maximizes the Worst Case Bayes Score has the property that its Bayes Score is independent of the choice of model used to judge it. In other words, show that B_M(P*) is independent of M. I believe it is true, but have not yet found a proof.

The post Logic Puzzle: Count the Flags appeared first on By Way Of Contradiction.

Your robot may interact with the world in the following ways:

1) Check which of the 4 adjacent edges contain walls.

2) Move to one of the 4 adjacent squares (provided there is no wall in the way).

3) Check if there is a flag on your square.

4) Pick up a flag (provided there is a flag on your square and the robot is not already holding a flag).

5) Put down a flag (provided the robot is holding a flag and there is not already a flag on your square).

6) Generate a random bit.

7) Output a number.

Your robot will be placed in a maze. The maze will contain some number of flags (from 100 to 1000). All flags will be reachable from the robot’s starting position. Your robot is tasked with determining the number of flags. The robot may take as long as it needs, but may only output one number and must output the correct answer eventually, with probability 1.

The catch is that your robot is not Turing complete. It only has a finite amount of memory. You can give your robot as much memory as you need, but the amount is fixed in advance, and the robot must succeed on arbitrarily large mazes.
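For experimenting with candidate strategies, the seven operations can be wrapped in a small simulator. This is a hypothetical harness; the class and method names and the wall/flag representations are my own assumptions, not part of the puzzle:

```python
import random

class MazeWorld:
    """A grid maze exposing only the seven robot operations. Walls are
    unordered pairs of adjacent cells; flags are a set of cells."""

    DIRS = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}

    def __init__(self, width, height, walls, flags, start):
        self.width, self.height = width, height
        self.walls = {frozenset(w) for w in walls}
        self.flags = set(flags)
        self.pos = start
        self.holding = False
        self.answer = None

    def _blocked(self, d):
        dx, dy = self.DIRS[d]
        x, y = self.pos
        nx, ny = x + dx, y + dy
        if not (0 <= nx < self.width and 0 <= ny < self.height):
            return True                      # maze boundary counts as a wall
        return frozenset(((x, y), (nx, ny))) in self.walls

    def check_walls(self):                   # operation 1
        return {d: self._blocked(d) for d in self.DIRS}

    def move(self, d):                       # operation 2
        if not self._blocked(d):
            dx, dy = self.DIRS[d]
            self.pos = (self.pos[0] + dx, self.pos[1] + dy)

    def flag_here(self):                     # operation 3
        return self.pos in self.flags

    def pick_up(self):                       # operation 4
        if self.flag_here() and not self.holding:
            self.flags.remove(self.pos)
            self.holding = True

    def put_down(self):                      # operation 5
        if self.holding and self.pos not in self.flags:
            self.flags.add(self.pos)
            self.holding = False

    def random_bit(self):                    # operation 6
        return random.randint(0, 1)

    def output(self, n):                     # operation 7
        self.answer = n
```

A solution, of course, must drive this interface with only finitely many internal states.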

As always, the solution will eventually be posted in the comments, but you are encouraged to show off by posting your solution first.

The post Hanabi appeared first on By Way Of Contradiction.

The basic idea is that 2 to 5 players each have a hand of 4 to 5 cards that they hold backwards. Each player can see all the cards in the other players’ hands, but not the cards in their own hand. Players take turns playing cards, discarding cards, or giving hints about other players’ cards. If you attempt to play an invalid card, you get a strike. In the end, everyone’s score is the total number of valid cards played.

The game plays well for 2 to 5 players, but is rather difficult for 2 players. My wife and I got a perfect game on one of the easier difficulty levels, but have not yet done so on the hardest difficulty level. I am convinced that a sufficiently well designed convention can win almost always. So far the game has been a hit with everyone I have introduced it to, and a couple people decided to buy it after playing their first game.

Enjoy!

The post Logic Puzzle: 5, 5, 7, 7 = 181 appeared first on By Way Of Contradiction.

Using the numbers 5 and 7, each twice, and combining them using addition, subtraction, multiplication, division, exponentiation, square root, factorial, unary negation, and/or parentheses, but no base 10 shenanigans like digit concatenation, come up with an expression which evaluates to 181.
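One way to explore puzzles like this is a brute-force search over sub-multisets of the numbers, sketched here with bitmask dynamic programming. The magnitude cap, the single round of unary operations, and the rounding used for deduplication are my own simplifications, so this search can miss valid expressions (deeply nested square roots or factorials, for example):

```python
import math

LIMIT = 1e4       # discard intermediate values with absurd magnitude
FACT_MAX = 10     # only take factorials of small non-negative integers

def unary(v):
    """v together with one round of negation, square root, and factorial."""
    out = {v, -v}
    if v > 0:
        out.add(math.sqrt(v))
    r = round(v)
    if 0 <= r <= FACT_MAX and abs(v - r) < 1e-9:
        out.add(float(math.factorial(r)))
    return {round(x, 6) for x in out if abs(x) <= LIMIT}

def combine(a, b):
    """All binary combinations of a and b allowed by the puzzle."""
    yield a + b
    yield a - b
    yield b - a
    yield a * b
    if abs(b) > 1e-9:
        yield a / b
    if abs(a) > 1e-9:
        yield b / a
    for base, exp in ((a, b), (b, a)):
        if 1e-9 < abs(base) and abs(exp) <= 10:
            try:
                r = base ** exp
                if isinstance(r, float):     # skip complex results
                    yield r
            except (OverflowError, ValueError, ZeroDivisionError):
                pass

def reachable(numbers):
    """Values reachable from the multiset `numbers` (bitmask DP)."""
    n = len(numbers)
    best = {1 << i: unary(float(v)) for i, v in enumerate(numbers)}
    for mask in range(1, 1 << n):
        if mask in best:
            continue
        vals = set()
        sub = (mask - 1) & mask
        while sub:
            other = mask ^ sub
            if sub <= other:                 # each unordered split once
                for a in best[sub]:
                    for b in best[other]:
                        for c in combine(a, b):
                            if math.isfinite(c) and abs(c) <= LIMIT:
                                vals |= unary(c)
            sub = (sub - 1) & mask
        best[mask] = vals
    return best[(1 << n) - 1]
```

Checking whether 181.0 lands in reachable([5, 5, 7, 7]) is left to the reader; the full search takes noticeably longer than small cases.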

The solution will eventually be posted in the comments, but if you solve it before then, feel free to show off.

The post Logic Puzzle: One, Two, Three appeared first on By Way Of Contradiction.

The solution will eventually be posted in the comments, but if you solve it before then, feel free to show off (even with partial progress).

The post Less Wrong appeared first on By Way Of Contradiction.

I recommend browsing what looks interesting from the sequences for a little while, until you manage to convince yourself that it is worth your time to read everything that Eliezer Yudkowsky has to offer, at which point you should just read all of his posts in chronological order. You should then make an account and participate in some of the amazing rationality discussions. If you enjoy the Less Wrong community, you should also take a look to see if there is a Less Wrong meetup near you.

Much of my content here has been crossposted on Less Wrong. My username is Coscott, and you can see a list of my discussion posts here.

The post Logic Puzzle: Upside Down Cake appeared first on By Way Of Contradiction.

If d is 60 degrees, then after you repeat this procedure (flip the slice in front of you, then rotate the cake by d) 6 times, all the frosting will be on the bottom. If you repeat the procedure 12 times, all of the frosting will be back on the top of the cake.
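When d divides 360 evenly, the slices stay aligned and the procedure reduces to flipping one of n = 360/d slices per step. A sketch of that special case (it deliberately ignores the key subtlety that a flip also mirrors the slice, which matters when d does not divide 360):

```python
def flips_until_restored(n):
    """Cake cut into n equal slices; each step flips the current slice,
    then the next slice comes under the knife. Returns the number of
    steps until all frosting is back on top."""
    cake = [True] * n        # True = frosting on top
    i, steps = 0, 0
    while True:
        cake[i] = not cake[i]
        i = (i + 1) % n
        steps += 1
        if all(cake):
            return steps
```

With n = 6 (d = 60 degrees) this returns 12, matching the description above: 6 steps to put all the frosting on the bottom, 6 more to restore it.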

For what values of d does the cake eventually get back to having all the frosting on the top?

The solution will eventually be posted in the comments, but if you solve it before then, show off and post your own solution.

The post Deadly Rooms of Death appeared first on By Way Of Contradiction.

Here is an article from the Mathematical Association of America about how amazing DROD is.

There are currently 5 DROD games out, as well as 13 official DLC holds, and lots of user made holds. The sixth and final game is due to come out this year. You should start by playing King Dugan’s Dungeon. There are five ways to do this:

1) (Recommended) You can buy it for 10 dollars here, and it comes with the 2nd and 3rd game in the series. (You will probably want to buy the 2nd and 3rd game later anyway, and you can’t beat this price.)

2) You can buy it for 10 dollars here. It comes with a DLC pack and a month of Caravel membership.

3) You can download the demo for Journey to the Rooted Hold, here, download the level pack for Architects’ Edition here, import the level pack, and play for free. (Architects’ Edition is the old, and now free, version that was improved into King Dugan’s Dungeon. You will miss out on most of the hardest secret rooms this way.)

4) You can play the Flash remake of the first part of King Dugan’s Dungeon here. (Only choose this if you are not sure if you want to play DROD yet. If you choose this, and want to continue playing, you will end up having to repeat a lot of puzzles you have already solved, and might see some hints that spoil some of the fun.)

5) If you know me personally, you can ask me for it. I bought extra copies of the game when it was on sale. I am willing to trade them for your agreement to keep me updated on your progress, because I love talking about DROD.

The post Terminal and Instrumental Beliefs appeared first on By Way Of Contradiction.

First, let’s be clear what we mean by saying that probabilities are weights on values. Imagine I have an unfair coin which comes up heads with probability 90%. I care 9 times as much about the possible futures in which the coin comes up heads as I do about the possible futures in which the coin comes up tails. Notice that this does not mean I want the coin to come up heads. What it means is that I would prefer getting a dollar if the coin comes up heads to getting a dollar if the coin comes up tails.
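This “weights on futures” reading can be made concrete with a toy valuation (an illustrative sketch; the names are mine, not from the post): a bet is valued by weighting its payoff in each possible future by how much you care about that future.

```python
def bet_value(weights, payoffs):
    """Value of a bet: sum over futures of (care weight) * (payoff there)."""
    return sum(weights[future] * payoffs.get(future, 0) for future in weights)

# Caring 9 times as much about heads-futures as tails-futures:
weights = {"heads": 0.9, "tails": 0.1}
dollar_if_heads = {"heads": 1}
dollar_if_tails = {"tails": 1}
```

Under these weights a dollar-if-heads bet is valued at 0.9 and a dollar-if-tails bet at 0.1, which is exactly the stated preference, without ever saying you want heads.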

Now, imagine that you are unaware of the fact that it is an unfair coin. By default, you believe that the coin comes up heads with probability 50%. How can we express the fact that I have a correct belief, and you have an incorrect belief in the language of values?

We will take advantage of the language of terminal and instrumental values. A terminal value is something that you try to get because you want it. An instrumental value is something that you try to get because you believe it will help you get something else that you want.

If you believe a statement S, that means that you care more about the worlds in which S is true. If you terminally assign a higher value to worlds in which S is true, we will call this belief a terminal belief. On the other hand, if you believe S because you think that S is logically implied by some other terminal belief, T, we will call your belief in S an instrumental belief.

Instrumental values can be wrong, if you are factually wrong about whether the instrumental value will help achieve your terminal values. Similarly, an instrumental belief can be wrong if you are factually wrong about whether it is implied by your terminal belief.

Your belief that the coin will come up heads with probability 50% is an instrumental belief. You have a terminal belief in some form of Occam’s razor. This causes you to believe that coins are likely to behave similarly to how coins have behaved in the past. In this case, that inference was not valid, because you did not take into consideration the fact that I chose the coin for the purpose of this thought experiment. Your instrumental belief is in this case wrong. If your belief in Occam’s razor is terminal, then it would not be possible for Occam’s razor to be wrong.

This is probably a distinction that you are already familiar with. I am talking about the difference between an axiomatic belief and a deduced belief. So why am I viewing it like this? I am trying to strengthen my understanding of the analogy between beliefs and values. To me, they appear to be two different sides of the same coin, and building up this analogy might allow us to translate some intuitions or results from one view into the other view.
