A chicken farmer can make 6kg, 9kg and 20kg bags of chicken. However, he can't make anything else, so for example if you asked him for 7kg of chicken he couldn't do it. On the other hand, if you asked him for 15kg he would be able to do that by selling you a 6kg bag and a 9kg bag.

A natural question to ask is which amounts he can make and which he can't; a sub-question is to ask what the largest integer amount he cannot make is. It should be noted that he can make as many bags of each weight as he wants.

The solution (stop here if you want to play with it yourself for a few minutes, because spoilers are ahead) starts by noticing that if you don't use any 20kg bags you can make a lot of multiples of 3. In particular, you can make 0, 6, 9, 12, 15, 18 and so on: every non-negative multiple of 3 except 3 itself.

The next thing to notice is that if you don't use a 20kg bag, that's all you can do! Adding 6s and 9s is never going to give you something which isn't a multiple of 3.

Now what if we use exactly one 20kg bag? Then we can make 20, 26, 29, 32, 35, ... . In other words, we can make most positive integers which leave a remainder of 2 when divided by 3 (in more formal phrasing, numbers which are 2 mod 3) and nothing else. The exceptions are the 2 mod 3 numbers less than 20 (namely 2, 5, 8, 11, 14 and 17) and 23.

That only leaves the weights which are 1 mod 3. We need at least two 20kg bags to get anything which is 1 mod 3, and using exactly two 20s we can make 40, 46, 49 and so on. This means the largest integer weight we can't make is 43kg.
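This is easy to check by brute force. Here's a quick sketch (names are my own) that marks every weight up to 100 as makeable or not from bags of 6, 9 and 20:

```python
# Brute-force check: which weights up to a limit can be made from 6, 9 and 20kg bags?
LIMIT = 100
bags = (6, 9, 20)

makeable = [False] * (LIMIT + 1)
makeable[0] = True  # zero bags makes 0kg
for w in range(1, LIMIT + 1):
    # w is makeable if removing a single bag leaves a makeable weight
    makeable[w] = any(w >= b and makeable[w - b] for b in bags)

unmakeable = [w for w in range(LIMIT + 1) if not makeable[w]]
print(max(unmakeable))  # → 43
```

Once six consecutive weights (44 through 49) are all makeable, everything above them is too, since you can just keep adding 6kg bags; so checking up to 100 is more than enough.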

Now the trick here was to notice that 6 and 9 share a common factor of 3. That's all well and good, but what if the farmer makes bags of 3 arbitrary positive integer sizes a, b and c? It turns out that finding a general formula for this is an open problem, but I'll leave it for another post.
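To be clear, the open part is a closed-form formula; for any particular bag sizes you can still compute the answer numerically with the same brute-force idea. A sketch (my own naming, and the search limit is an assumption that's generous for small bag sizes):

```python
from functools import reduce
from math import gcd

def largest_unmakeable(sizes, limit=10000):
    """Largest amount not representable as a non-negative combination of sizes.

    Only meaningful when gcd(sizes) == 1; otherwise infinitely many
    amounts are unmakeable, and we return None to signal that.
    """
    if reduce(gcd, sizes) != 1:
        return None
    makeable = [False] * (limit + 1)
    makeable[0] = True
    for w in range(1, limit + 1):
        makeable[w] = any(w >= s and makeable[w - s] for s in sizes)
    unmade = [w for w in range(limit + 1) if not makeable[w]]
    return max(unmade) if unmade else -1  # -1: every positive amount is makeable

print(largest_unmakeable((6, 9, 20)))  # → 43
```

For two coprime sizes a and b there is a known closed form, ab - a - b; it's three or more sizes where no general formula is known.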

## Wednesday, May 31, 2017

## Friday, May 19, 2017

### Frequentist and blogging frequency

So I haven't used this blog in a while, but I'm back-ish. Which means I'm planning to start blogging again, but may or may not actually get around to it this time.

In my first ever post I discussed the idea of a hypothesis test. However, I didn't get into any of the technical details in that post. For example, suppose my null hypothesis is that my coin comes up heads 50% of the time, and for data I've flipped it 30 times and gotten 20 heads and 10 tails.

Does this mean that I need to give up the idea of a coin which produces heads exactly half the time? In the last post we spoke about the idea of rejecting a hypothesis which is inconsistent with the data: if we had 29 heads out of 30 we'd chuck the idea of a 50-50 coin, and if we had 16 heads (again out of 30) we'd keep it. For 20, however, it's less obvious. Where is the cutoff?

More generally, where should the cutoff be? The usual answer is to ask "How likely would I be to see data this strongly or more strongly against the null hypothesis if the null hypothesis were indeed true?". We then reject the null hypothesis if this probability, which we call a p-value, is "small" and fail to reject it otherwise.

By tradition, "small" is usually taken to mean less than 0.05. Sometimes this tradition is broken, some fields have different traditions, and some statisticians absolve themselves of the choice by simply reporting "the p-value is ...".

Going back to our original question of 20 heads in 30 flips: how likely would I be to see data this strongly or more strongly against the null hypothesis if the null hypothesis were indeed true? The surprising answer is: "It depends". A fuller version of this answer is "It depends on what you mean by more strongly against".

The probability of getting 20 or more heads from 30 flips works out to about 0.04937, just under the traditional 5% (0.05). However, the probability of getting 20 or more heads OR 10 or fewer heads is twice that. Would 3 heads out of 30 be stronger evidence that our null hypothesis is wrong than getting 20? Before you say "yes of course", what if we had exactly 15 heads but they alternated? i.e. HTHTHTHTHTHTHTHTHTHTHTHTHTHTHT.
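These numbers are easy to check directly from the binomial distribution. A quick sketch (the helper name is my own):

```python
from math import comb

def binom_upper_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, 1/2), i.e. n fair coin flips."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

one_sided = binom_upper_tail(30, 20)  # P(20 or more heads)
two_sided = 2 * one_sided             # by symmetry this adds P(10 or fewer heads)

print(round(one_sided, 5))  # → 0.04937
print(round(two_sided, 5))  # → 0.09874
```

So the same data sits on opposite sides of the 0.05 line depending on which tail (or tails) you count.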

Again the answer here seems like it should be "yes, of course that's different", but this dependency isn't something we were thinking about beforehand. Worse, it's pretty easy to find patterns in a lot of sequences, so we could in principle keep adding in things more surprising than our string of 20 heads in 30 flips.

For this reason we need to specify in advance what counts as stronger evidence against our null hypothesis. This is called an alternative hypothesis. Sometimes it's appropriate to make the alternative hypothesis "p > 1/2" (or "p < 1/2"), sometimes it's appropriate to say p is not 1/2, and sometimes it's appropriate to say p depends on the last flip or two in some way.

Of course this isn't the only way to do things. There are others which perform better in some situations and worse in others. I'll (maybe) discuss these in a later post.
