Take a coin thought to be loaded, i.e., one on which Heads occurs with a different probability than Tails. (All physical coins are thought to be this way, if only by a very small amount.) Flip it twice in succession, noting the results.
If the sequence was HT, call it Event Zero. If the sequence was TH, call it Event One. If anything else occurred (including the coin landing on its edge), discard the results and flip twice again.
Despite the coin being biased, Events 0 and 1 each occur with 50% probability: if Heads comes up with probability p and the flips are independent, then P(HT) = p(1-p) = (1-p)p = P(TH), so whenever one of the two events does occur, each is equally likely. This is von Neumann's classic debiasing trick. (If the coin is highly biased, it's interesting to actually experience the process, as the expectations after the first flip are very different.)
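Here is a minimal Python sketch of the pairing procedure; the helper names and the bias value are assumptions for illustration, not part of the recipe above:

```python
import random

def biased_flip(p_heads=0.6):
    """One flip of a biased coin; the bias p_heads is a made-up value."""
    return 'H' if random.random() < p_heads else 'T'

def von_neumann_bit(flip=biased_flip):
    """Flip twice: HT -> Event 0, TH -> Event 1, anything else retries.
    If P(H) = p, then P(HT) = p*(1-p) = P(TH), so 0 and 1 are equally likely."""
    while True:
        a, b = flip(), flip()
        if (a, b) == ('H', 'T'):
            return 0
        if (a, b) == ('T', 'H'):
            return 1
        # HH or TT: discard the pair and flip twice again

if __name__ == '__main__':
    bits = [von_neumann_bit() for _ in range(10_000)]
    print(sum(bits) / len(bits))   # prints a value close to 0.5
```

Running this with any bias between 0 and 1 should yield a mean near 0.5; only the number of discarded pairs changes.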
Next, assume each flip depends only on the previous flip, i.e., the Markov chain looks back one state. This is inspired by the flipping mechanism being slightly deterministic: if the previous flip was Heads, for example, you are slightly more likely to place the coin back on your thumb heads-up, and that initial position slightly affects the flip's outcome.
Flip the coin until you get Heads, then flip one more time and note the result. Essentially we are controlling for the previous state: the noted flip always follows a Heads. This entire operation counts as one flip for the HT-vs-TH algorithm above. Repeat until the first algorithm succeeds.
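Continuing the sketch above, a hypothetical one-step Markov coin and the controlled flip might look like this; the transition probabilities are invented for illustration, and `von_neumann_bit` is reused from the previous snippet:

```python
import random

def make_markov_coin(p_after_h=0.65, p_after_t=0.55):
    """Hypothetical one-step Markov coin: P(Heads) depends on the
    previous outcome. Both parameters are illustrative assumptions."""
    prev = 'H'                     # arbitrary starting state
    def flip():
        nonlocal prev
        p = p_after_h if prev == 'H' else p_after_t
        prev = 'H' if random.random() < p else 'T'
        return prev
    return flip

raw_flip = make_markov_coin()

def controlled_flip():
    """Flip until Heads, then flip once more and return that result.
    The returned flip always has Heads as its previous state, so its
    distribution is fixed (though still biased), restoring the
    independence the HT/TH pairing needs."""
    while raw_flip() != 'H':
        pass
    return raw_flip()

# Feed controlled flips into the pairing step from the previous sketch:
bit = von_neumann_bit(flip=controlled_flip)
```

The point of `controlled_flip` is that every flip it returns is drawn from the same conditional distribution, so the HT/TH argument goes through unchanged.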
Create an app, or tables, for converting a sequence of biased flips into a uniform random sample of one of N events, using rejection sampling if N is not a power of 2. To lessen the effect of a flipper biasing results because he or she knows how the algorithm works, the app could pseudorandomly select, but temporarily hide, three choices: which state is the controlled one, the mapping of HT/TH to Events 0 and 1, and the mapping from binary to base N. Revealing a cryptographic signature with a nonce up front allows later verification that the hidden choices were not changed after the fact.
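A rough sketch of the two mechanical pieces: rejection sampling down to N outcomes, and a simple hash-based commitment standing in for the signature-with-nonce idea. The function names and the commitment scheme are assumptions, not a specification:

```python
import hashlib
import secrets

def uniform_index(n, random_bit):
    """Turn unbiased bits into a uniform draw from {0, ..., n-1}.
    Draw ceil(log2(n)) bits; if the value lands at or above n, reject
    and redraw (the rejection-sampling step for non-powers of 2)."""
    k = max(1, (n - 1).bit_length())
    while True:
        x = 0
        for _ in range(k):
            x = (x << 1) | random_bit()
        if x < n:
            return x

def commit(hidden_choices):
    """Commit to the hidden choices (controlled state, event mapping,
    binary-to-base-N mapping) by publishing a hash now and revealing
    the description plus nonce later."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + hidden_choices).encode()).hexdigest()
    return digest, nonce

def verify(digest, hidden_choices, nonce):
    """Anyone can recompute the hash after the reveal."""
    return digest == hashlib.sha256((nonce + hidden_choices).encode()).hexdigest()
```

For example, `uniform_index(6, lambda: von_neumann_bit(flip=controlled_flip))` would simulate a fair six-sided die from the biased, state-dependent coin, and the app would publish `digest` before any flipping begins.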