### General remark

My personal approach to the correctness of probability-using algorithms: if you know how to *prove* it's correct, then it's probably correct; if you don't, it's certainly wrong.

Said differently, it's generally hopeless to try to analyse every algorithm you could come up with: you have to keep looking for an algorithm until you find one that you *can* prove correct.

### Analysing a random algorithm by computing the distribution

I know of one way to "automatically" analyse a shuffle (or, more generally, a random-using algorithm) that is stronger than the simple "throw lots of tests at it and check for uniformity". You can mechanically compute the distribution associated with each input of your algorithm.

The general idea is that a random-using algorithm explores a part of a world of possibilities. Each time your algorithm asks for a random element of a set ({`true`, `false`} when flipping a coin), there are two possible outcomes for your algorithm, and one of them is chosen. You can change your algorithm so that, instead of returning one of the possible outcomes, it explores *all* solutions in parallel and returns all possible outcomes with their associated distributions.

In general, that would require rewriting your algorithm in depth. If your language supports delimited continuations, you don't have to: you can implement the "exploration of all possible outcomes" inside the function asking for a random element (the idea is that the random generator, instead of returning a result, captures the continuation associated with your program and runs it with all the different results). For an example of this approach, see oleg's [HANSEI](http://okmij.org/ftp/kakuritu/index.html).

An intermediate, and probably less arcane, solution is to represent this "world of possible outcomes" as a monad, and use a language such as Haskell with facilities for monadic programming. Here is an example implementation of a variant¹ of your algorithm, in Haskell, using the probability monad of the [probability](http://hackage.haskell.org/package/probability) package:

```haskell
import Numeric.Probability.Distribution

shuffleM :: (Num prob, Fractional prob) => [a] -> T prob [a]
shuffleM [] = return []
shuffleM [x] = return [x]
shuffleM (pivot:li) = do
    (left, right) <- partition li
    sleft <- shuffleM left
    sright <- shuffleM right
    return (sleft ++ [pivot] ++ sright)
  where
    partition [] = return ([], [])
    partition (x:xs) = do
        (left, right) <- partition xs
        uniform [(x:left, right), (left, x:right)]
```

You can run it for a given input and get the output distribution:

```
*Main> shuffleM [1,2]
fromFreqs [([1,2],0.5),([2,1],0.5)]
*Main> shuffleM [1,2,3]
fromFreqs [([2,1,3],0.25),([3,1,2],0.25),([1,2,3],0.125),
           ([1,3,2],0.125),([2,3,1],0.125),([3,2,1],0.125)]
```

You can see that this algorithm is uniform on inputs of size 2, but non-uniform on inputs of size 3.

The difference from the test-based approach is that we can gain absolute certainty in a finite number of steps: the number of steps can be quite big, as it amounts to an exhaustive exploration of the world of possibilities (but generally smaller than 2^N, as similar outcomes are factored together), but if it returns a non-uniform distribution we know for sure that the algorithm is wrong. Of course, if it returns a uniform distribution for `[1..N]` and `1 <= N <= 100`, you only know that your algorithm is uniform up to lists of size 100; it may still be wrong.

¹: this algorithm is a variant of your Erlang implementation, because of the specific pivot handling. If I use no pivot, as in your case, the input size no longer decreases at each step: the algorithm also considers the case where all inputs end up in the left list (or the right list), and gets lost in an infinite loop. This is a weakness of the probability monad implementation (if an algorithm has probability 0 of non-termination, the distribution computation may still diverge), and I don't yet know how to fix it.
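If you are curious what the probability monad is doing under the hood, a minimal version fits in a few lines. The following is my own standalone sketch for illustration, not the actual implementation of the probability package:

```haskell
import qualified Data.Map as Map

-- A distribution is a weighted list of outcomes. Weights along one
-- execution path multiply; alternative paths are concatenated.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- Explore all elements of a (nonempty) list, each with equal weight.
uniform :: [a] -> Dist a
uniform xs = Dist [ (x, 1 / fromIntegral (length xs)) | x <- xs ]

-- Merge duplicate outcomes to read off the final distribution.
-- e.g. distribution (uniform "ab") == [('a',1 % 2),('b',1 % 2)]
distribution :: Ord a => Dist a -> [(a, Rational)]
distribution (Dist xs) = Map.toList (Map.fromListWith (+) xs)
```

With this, the `shuffleM` above runs unchanged against `Dist` (modulo its type signature), and `distribution` gives you the exhaustive exploration shown in the GHCi session.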
### Sort-based shuffles

Here is a simple algorithm that I feel confident I could prove correct:

1. Pick a random key for each element in your collection.
2. If the keys are not all distinct, restart from step 1.
3. Sort the collection by these random keys.

You can omit step 2 if you know the probability of a collision (two random keys being equal) is sufficiently low, but without it the shuffle is not perfectly uniform.

If you pick your keys in `[1..N]`, where N is the length of your collection, you'll have lots of collisions ([birthday problem](http://en.wikipedia.org/wiki/Birthday_attack)). If you pick your keys as 32-bit integers, the probability of a collision is low in practice, but still subject to the birthday problem.

If you use infinite (lazily evaluated) bitstrings as keys, rather than finite-length keys, the probability of a collision becomes 0, and checking for distinctness is no longer necessary.

Here is a shuffle implementation in OCaml, using lazy real numbers as infinite bitstrings:

```ocaml
type 'a stream = Cons of 'a * 'a stream lazy_t

let rec real_number () =
  Cons (Random.bool (), lazy (real_number ()))

let rec compare_real a b = match a, b with
  | Cons (true, _), Cons (false, _) -> 1
  | Cons (false, _), Cons (true, _) -> -1
  | Cons (_, lazy a'), Cons (_, lazy b') -> compare_real a' b'

let shuffle list =
  List.map snd
    (List.sort (fun (ra, _) (rb, _) -> compare_real ra rb)
       (List.map (fun x -> real_number (), x) list))
```

There are other approaches to "pure shuffling". A nice one is apfelmus's [mergesort-based solution](http://apfelmus.nfshost.com/articles/random-permutations.html).

Algorithmic considerations: the complexity of the previous algorithm depends on the probability that all keys are distinct. If you pick them as 32-bit integers, a given key has roughly a one in four billion probability of colliding with another given key. Sorting by these keys is O(n log n), assuming picking a random number is O(1).

If you use infinite bitstrings, you never have to restart the picking, but the complexity is then related to how many elements of the streams are evaluated on average. I conjecture it is O(log n) on average (hence still O(n log n) in total), but have no proof.
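For comparison, here is a sketch of the finite-key variant (steps 1 to 3 above, including the restart in step 2) in Haskell; the function name `keyShuffle` is mine:

```haskell
import Data.List (sortOn, nub)
import Data.Word (Word32)
import System.Random (randomIO)

-- Steps 1-3: draw one 32-bit key per element, restart on any
-- collision, then sort by the keys. Termination is probabilistic,
-- but a collision among 32-bit keys is rare for small lists.
keyShuffle :: [a] -> IO [a]
keyShuffle xs = do
  keys <- mapM (const randomIO) xs :: IO [Word32]
  if length (nub keys) /= length keys
    then keyShuffle xs          -- step 2: collision, restart
    else return (map snd (sortOn fst (zip keys xs)))
```

With the restart in place, each accepted key vector is uniform among all vectors of distinct keys, which is what makes the sort a uniform shuffle.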
### ... and I think your algorithm works

After more reflection, I think (like douplep) that your implementation is correct. Here is an informal explanation.

Each element in your list is *tested* by several `random:uniform() < 0.5` tests. To an element, you can associate the list of outcomes of those tests, as a list of booleans, or of {`0`, `1`}. At the beginning of the algorithm, you don't know the list associated with any element. After the first `partition` call, you know the first element of each list, and so on. When your algorithm returns, the lists of tests are completely known and the elements are *sorted* according to those lists (sorted in lexicographic order, or considered as binary representations of real numbers).

So your algorithm is equivalent to sorting by infinite bitstring keys. The action of partitioning the list, reminiscent of quicksort's partition over a pivot element, is actually a way of separating, for a given position in the bitstring, the elements with valuation `0` from the elements with valuation `1`.

The sort is uniform because the bitstrings are all distinct. Indeed, two elements with real numbers equal up to the `n`-th bit are on the same side of a partition occurring during a recursive `shuffle` call of depth `n`. The algorithm only terminates when all the lists resulting from partitions are empty or singletons: all elements have been separated by at least one test, and therefore have distinct binary expansions.

### Probabilistic termination

A subtle point about your algorithm (and my equivalent sort-based method) is that the termination condition is *probabilistic*. Fisher-Yates always terminates after a known number of steps (the number of elements in the array). With your algorithm, termination depends on the output of the random number generator.

There are possible outputs that would make your algorithm *diverge*, not terminate. For example, if the random number generator always outputs `0`, each `partition` call will return the input list unchanged, on which you recursively call the shuffle: you will loop indefinitely.

However, this is not an issue if you're confident that your random number generator is fair: it does not cheat and always returns independent, uniformly distributed results. In that case, the probability that the test `random:uniform() < 0.5` always returns `true` (or always `false`) is exactly 0:

- the probability that the first N calls return `true` is 2^{-N};
- the probability that all calls return `true` is the probability of the infinite intersection, over all N, of the events that the first N calls return `true`; it is the infimum limit² of the 2^{-N}, which is 0.

²: for the mathematical details, see http://en.wikipedia.org/wiki/Measure_(mathematics)#Measures_of_infinite_intersections_of_measurable_sets

More generally, the algorithm does not terminate if and only if at least two of the elements get associated with the same boolean stream. But the probability that two random boolean streams are equal is again 0: the probability that the digits at position K are equal is 1/2, so the probability that the first N digits are equal is 2^{-N}, and the same analysis applies.

Therefore, you know that your algorithm *terminates with probability 1*. This is a slightly weaker guarantee than the Fisher-Yates algorithm, which *always terminates*. In particular, you're vulnerable to an attack by a malicious adversary that controls your random number generator.
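To make the termination issue concrete, here is my own Haskell sketch of the pivot-free, partition-based shuffle (a transcription of the idea, not your actual Erlang code), parameterized by the coin so that both behaviours can be observed:

```haskell
import System.Random (randomIO)

-- The pivot-free, partition-based shuffle, parameterized by a
-- monadic coin. With a fair coin it terminates with probability 1;
-- with a constant coin it diverges, as described above.
shuffleWith :: Monad m => m Bool -> [a] -> m [a]
shuffleWith _    []  = return []
shuffleWith _    [x] = return [x]
shuffleWith coin xs  = do
    (left, right) <- partitionM xs
    sleft  <- shuffleWith coin left
    sright <- shuffleWith coin right
    return (sleft ++ sright)
  where
    partitionM []     = return ([], [])
    partitionM (y:ys) = do
        (l, r) <- partitionM ys
        b <- coin
        return (if b then (y : l, r) else (l, y : r))

-- shuffleWith randomIO [1..10]        -- fair coin: terminates
-- shuffleWith (return False) [1,2,3]  -- rigged coin: loops forever
```

The rigged coin sends every element to the same side of each partition, reproducing exactly the divergent run described above.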
With more probability theory, you could also compute the distribution of running times of your algorithm for a given input length. This is beyond my technical abilities, but I assume it's good: I suppose that you only need to look at the first O(log N) digits on average to check that all N lazy streams are distinct, and that the probability of much higher running times decreases exponentially.
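That conjecture is easy to probe empirically by counting coin flips. Here is a small experiment of my own devising, reusing the `shuffleWith` sketch above:

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')
import System.Random (randomIO)

-- Count the coin flips consumed by one run of the shuffle, as a
-- stand-in for "how many stream digits were evaluated".
countFlips :: Int -> IO Int
countFlips n = do
    counter <- newIORef 0
    let coin = modifyIORef' counter (+ 1) >> randomIO
    _ <- shuffleWith coin [1 .. n]
    readIORef counter

-- Averaging countFlips over many runs, for several values of n,
-- should grow roughly like n * log n if the conjecture holds.
```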