Today, I am going to look into generating random permutations. The requirement is that the permutations I generate be uniformly random: if I generate a large enough collection of them, every possible permutation should appear about equally often.
The algorithm is easy enough, and the fact that it produces uniformly random permutations is almost built into it. Imagine you have a set of \(n\) distinct items. Pick one of them uniformly at random, set it aside as the next element of the permutation, and repeat with the items that remain until nothing is left.
Why is this uniformly random? The verification is by induction. With two distinct objects, choosing one of them first gives one of two possible sequences, and each sequence has probability \(\frac{1}{2}\), since the first object is picked uniformly at random. Assume, by induction, that any particular sequence of choices of length \(n\) is produced with probability \(\frac{1}{n!}\). Now take a set of size \(n+1\). The first element is picked with probability \(\frac{1}{n+1}\), and by the induction hypothesis the rest of the sequence is produced with probability \(\frac{1}{n!}\), so the probability of getting this particular sequence is \(\frac{1}{(n+1)!}\).
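Equivalently, unrolling the recursion gives the probability of any particular permutation as a product over the individual picks: \(\frac{1}{n}\cdot\frac{1}{n-1}\cdots\frac{1}{2}\cdot\frac{1}{1}=\frac{1}{n!}\).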
This is the original algorithm proposed by Fisher and Yates in 1938.
Here is a tail-recursive implementation in Lisp.
(defun random-permutation (a &optional b)
  ;; Pick an element X of A uniformly at random, remove it from A,
  ;; and push it onto the accumulator B; recurse until A is empty.
  (if a
      (let ((x (elt a (random (length a)))))
        (random-permutation (remove x a) (cons x b)))
      b))
RANDOM-PERMUTATION
and we test it:
(let ((init (loop for i from 1 to 6 collect i)))
  (loop repeat 24 collect (random-permutation init)))
((3 6 4 1 2 5) (4 1 2 6 5 3) (4 5 2 6 1 3) (5 1 6 4 2 3) (5 6 4 2 3 1)
(1 6 3 5 4 2) (3 5 6 2 1 4) (2 4 5 3 6 1) (2 1 5 4 6 3) (1 2 6 3 4 5)
(6 5 3 1 4 2) (1 4 3 2 6 5) (1 4 5 6 2 3) (3 4 5 2 6 1) (5 1 2 3 6 4)
(5 4 2 6 3 1) (6 3 2 5 1 4) (4 1 6 5 3 2) (3 4 6 2 5 1) (4 5 3 2 6 1)
(1 5 2 4 6 3) (5 1 3 6 2 4) (6 5 3 1 4 2) (2 6 1 5 3 4))
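Twenty-four samples out of the \(6! = 720\) possible permutations cannot say much about uniformity, though. A quick sanity check is to tally the permutations of a small list over many trials; the sketch below (the choice of a three-element list and 100000 trials is arbitrary) should report each of the \(3! = 6\) permutations roughly \(100000/6 \approx 16667\) times.
(let ((counts (make-hash-table :test #'equal)))
  ;; Tally how often each permutation of (1 2 3) appears over 100000 runs.
  (loop repeat 100000
        do (incf (gethash (random-permutation '(1 2 3)) counts 0)))
  ;; Return (permutation count) pairs for inspection.
  (loop for p being the hash-keys of counts using (hash-value n)
        collect (list p n)))
Note that this list-based, tail-recursive version takes \(O(n^2)\) time, since elt and remove each walk the list; for long sequences the algorithm is usually implemented in place on a vector by swapping elements, which brings the cost down to \(O(n)\).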