My solution was to generate random sentences in my head. I then iterated through the letters of each sentence; if the letter was M or later in the alphabet, I selected right, and if it was before M, I selected left.
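As a quick sketch of that mapping (letters from M onward go right, earlier letters go left):

```python
def sentence_to_moves(sentence):
    """Map each letter to 'R' if it is M or later in the alphabet, else 'L'.
    Non-letter characters are skipped."""
    return ['R' if c.upper() >= 'M' else 'L'
            for c in sentence if c.isalpha()]

# e.g. "the": T -> right, H -> left, E -> left
print(sentence_to_moves("the"))  # ['R', 'L', 'L']
```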
According to English letter frequencies [1] that gives a right probability of
3.0129+6.6544+7.1635+3.1671+0.1962+7.5809+5.7351+6.9509+3.6308+1.0074+1.2899+0.2902+1.7779+0.2722
= 48.7294 %
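For anyone who wants to check the arithmetic, the quoted frequencies sum as claimed:

```python
# frequencies for M through Z as quoted from [1], in percent
freqs = [3.0129, 6.6544, 7.1635, 3.1671, 0.1962, 7.5809, 5.7351,
         6.9509, 3.6308, 1.0074, 1.2899, 0.2902, 1.7779, 0.2722]
print(round(sum(freqs), 4))  # 48.7294
```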
Worth noting that you summed 14 of the 26 letters; a 13-13 split (moving M to the left half) would lower the right probability to 45.7165%.
I wonder if the patterns of letters in English language usage would still yield predictable patterns of left-and-right. Like naively 'the' is more common than 'zzz', so right-left-left may be more common than right-right-right.
That's likely to introduce bias because the probabilities of the two halves aren't equal... My initial instinct would be that the latter half would be less likely (since it has Q, X, Z), but maybe not - as a rule of thumb the most common letters are ETAOIN SHRDLU - so 6 from the second half, and 6 from the first.
I mentally labeled my pulse as left... right... left... right... and whenever I blinked I took that direction. The tricky part is not thinking about what should be next. It's also a slow way to play, but I was winning until I clicked randomize!
This was a nice way to generate random numbers fast! My go-to method is the second hand on the clock, but that's of course highly autocorrelated if you need numbers quickly.
If you want to hack it, give it a binary de Bruijn sequence with alphabet size k=2. Since it looks to follow whichever pattern already exists, and de Bruijn sequences minimize repeated patterns, it always beats the game.
> Since it looks to follow whichever pattern already exists, and de Bruijn sequences minimize repeated patterns, it always beats the game.
I'm pretty sure this isn't true though. A de Bruijn sequence doesn't guarantee that the order of the n-patterns is random, only that every n-length pattern appears exactly once.
Indeed, the algorithm you mention puts n-subsequences with many 0's near the front and subsequences with many 1's near the back. Sure, every n-length subsequence appears only once, but because the order of the subsequences follows a predictable pattern, your total sequence is still pretty predictable.
This disparity isn't noticeable when n is small (you chose n=6), so you can comfortably beat the game. But pick a large n and your sequence becomes rather predictable.
Try n=15. In that case, the generated sequence S has length |S| = 2^15 = 32,768. In the first half there are 9,098 0's and 7,286 1's; in the second half these numbers are exactly reversed. Throw this sequence into the game, and you end up with a prediction accuracy of 50%. No worse than random, but you didn't beat the game either.
I picked n=6 in particular because the code checks the past 5 numbers, so n=6 means it can't do its frequency analysis, while also avoiding the problem you mention. And yes, it isn't random; rather it enforces a unique sequence. Because the code looks for repetition one number at a time, a de Bruijn sequence should be close to optimal, since it avoids repetition by construction.
Also this sequence seems to work better:
0000001001000101010011010000110010110110001110101110011110111111
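For reference, a sketch of the standard recursive construction of a binary de Bruijn sequence; for n=6 it yields a 64-bit string in which every 6-bit window (read cyclically) appears exactly once. The exact bits depend on the construction, so they may differ from the sequence above.

```python
def de_bruijn_binary(n):
    """Generate a de Bruijn sequence B(2, n): every n-bit pattern
    appears exactly once when the sequence is read cyclically.
    Standard Lyndon-word construction."""
    a = [0] * (2 * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return ''.join(map(str, seq))

s = de_bruijn_binary(6)
print(len(s))  # 64
```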
Ah yeah. My monitor cut off the text under the graph so I didn't know they just gave away their predictive algorithm. n=6 obviously works, cool approach!
Intrigued too. It worked very well for the first 100 iterations. Starting around iteration 180, I entered a very deterministic loop (6 correct, 1 wrong, repeat), making me a money loser.
150 presses and 51% here, just trying with my fingers on my smartphone :)
And now to 200 with 49%. Feeling not so subtly smug.
And now to 400 with 50%. Admit I'm getting bored now.
Now the question I have to answer: is that because of my professional experience and history, something inherent in my mind, or was it... just random?
edit: just consciously trying to be random, not using any deliberate or external tricks.
Second question: what percentage of the population behaves like me and is it qualitatively different from the population that doesn't?
I don't believe I actually am random, but whatever it is i'm doing the results are somewhat statistically unlikely. Is knowledge/experience with randomness itself sufficient to defeat this method?
The prediction is deterministic, so you can adapt to it and "beat" it every time. Though intuitively, and without looking at the implementation, I am obviously not a good adversary: the lowest I reached was 43% after ~50 inputs, and I stopped at 47% after 103.
With just tapping "randomly", it was looking good until I hit 52% at 250 inputs. From there on it went steeply downhill: 59% at 500, though back to 57% at 1,000 (I changed how I tapped at the 500 mark; otherwise it would have gotten even worse).
At the heart of Bell's inequality in physics is the assumption of free will: that the experimentalist is "free" to choose a detector setting.
Yet when faced with the task of actually generating random numbers, humans fail miserably. Somehow this failure to generate random numbers isn't seen as a lack of free will by anyone.
But in the context of quantum physics, the experimenter's "freedom" to do something most humans can't actually do is taken as a given, and denying it is "superdeterminism" and anti-science.
It's funny that physicists take the freedom of their random choices as a given when a simple experiment shows they don't have such freedom.
There is an example in Mathematica [1] that illustrates a similar point using the Rock-Paper-Scissors game. It uses a very simple strategy to predict the human opponent [2], but it appears to work well. (Although at the moment the demonstration does not seem to be working at all; I tried it in Firefox and Chrome.)
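I couldn't check the demonstration, so the actual strategy in [2] may differ, but a minimal predictor in the same spirit just counts which move tends to follow the opponent's last move and plays the counter to the prediction:

```python
from collections import defaultdict

BEATS = {'R': 'P', 'P': 'S', 'S': 'R'}  # what beats each move

class RPSPredictor:
    """Predict the opponent's next move from first-order transition
    counts (last move -> next move), then play the counter to it."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def play(self):
        if self.last is None or not self.counts[self.last]:
            return 'R'  # arbitrary opening move
        follows = self.counts[self.last]
        predicted = max(follows, key=follows.get)
        return BEATS[predicted]

    def observe(self, move):
        if self.last is not None:
            self.counts[self.last][move] += 1
        self.last = move

# Against someone who always cycles R, P, S it locks on quickly:
bot = RPSPredictor()
wins = 0
for human in 'RPS' * 20:
    if bot.play() == BEATS[human]:
        wins += 1
    bot.observe(human)
print(wins)  # 57 wins out of 60
```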
It appears that the guessing output is deterministic, using your inputs as its inputs, and you could figure out as many consecutive inputs as you like to produce a specific outcome. For instance, 6R 1L 3R forces the game to “guess” wrong each time. It’s not guessing. This isn’t a random input, but it’s perfectly within a reasonable random distribution. Equally random is 10L, which the guesser will guess right each time.
Losing to the guesser doesn’t indicate a lack of random input, nor does guessing the opposite of the guesser indicate randomness.
We can’t really generate randomness, only outcomes consistent with some distribution. It’s more of a philosophical point: if you made it, it’s not random.
This one doesn't really "beat" this, but here is a simple PRNG that you can trivially compute using mental arithmetics proposed by George Marsaglia [1]:
1. Select an initial random number (seed) between 1 and 58.
(This is accomplished by mentally collecting entropy from the environment, e.g. counting a group of objects you didn't know the count of beforehand.)
2. Multiply the least significant digit by 6, add the most significant digit to the result, and use the new result as the next seed/state.
3. The second (units) digit of the new state is your generated pseudorandom digit.
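The steps above amount to the map x → 6x (mod 59), since 6·(10a + b) = 60a + 6b ≡ a + 6b (mod 59), and that map cycles through all 58 nonzero states before repeating. A quick sketch (the seed below is just an example):

```python
def marsaglia_step(x):
    """One step of Marsaglia's mental PRNG on a state in 1..58:
    multiply the units digit by 6 and add the tens digit.
    Equivalent to x -> 6*x mod 59, which has full period 58."""
    tens, units = divmod(x, 10)
    return 6 * units + tens

# the generated pseudorandom digit is the units digit of each state
state = 23  # hypothetical seed; gather yours from the environment
digits = []
for _ in range(10):
    state = marsaglia_step(state)
    digits.append(state % 10)
print(digits)
```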
Ahh, George Marsaglia was one of the great OG's of random ..
For any who like such things and haven't yet seen them, his Ziggurat algorithm family for sampling from target random distributions has roots in the 1960s and was written up around 2000; the classic is the Ziggurat algorithm for the normal distribution.
It's a good approach for rapid bulk generation of large amounts of distributed random values.
There's a deterministic sequence that will beat it every time, which you can calculate as follows:
- It assumes your first selection was preceded by 5 consecutive 'left's.
- Each press, look at the last 5 bits you selected (for your first 5 selections, include those 'virtual' 5 lefts at the beginning). If you have not entered that sequence of 5 before, select 'right'. If you have entered that sequence before, pick the opposite of whatever you selected last time as your next press.
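Assuming the predictor behaves as described, that recipe can be generated mechanically; a sketch:

```python
def beat_sequence(n):
    """Generate n presses per the recipe above: track your own last-5
    history (seeded with five virtual 'L's); on a new 5-gram press 'R',
    on a repeated 5-gram press the opposite of what you pressed after
    it last time."""
    history = ('L',) * 5
    seen = {}  # 5-gram -> what we pressed after it last time
    opposite = {'L': 'R', 'R': 'L'}
    presses = []
    for _ in range(n):
        move = opposite[seen[history]] if history in seen else 'R'
        seen[history] = move
        presses.append(move)
        history = history[1:] + (move,)
    return presses

print(''.join(beat_sequence(12)))  # RRRRRRLRRRRL
```

Whether each press actually wins depends on the game's exact implementation, but the sequence itself is fully deterministic.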
Yeah, it was only 44% right on my 100 presses, but I was trying to beat it, not be random. You can sometimes get surprisingly long runs of wrong guesses out of it by repeating your press after it guesses wrong.
https://www.nature.com/articles/436150a