Here’s a relatively simple problem: Given three n × n matrices A, B, and C, check that C is the correct product of A and B.
So, simple, yeah? Just compute AB and check the result against C. The multiplication takes O(n³) operations and the check is O(n²), assuming A, B, and C are n × n matrices. It’s polynomial time and guaranteed to be correct. However, what if our primary goal was speed? Perhaps this check needs to be done many times as part of some complex gameplay and, given the nature of the game, the matrices are very large and 99% correctness is quite acceptable.
Then consider an n × 1 matrix, r, with entries randomly selected to be 1 or 0. It would take only O(n²) time to calculate A(Br) (calculated in that order) and O(n²) again to find Cr, for O(n²) overall.
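One round of this idea can be sketched in plain Python with nothing but nested lists (the function name and helper are my own, not from any library):

```python
import random

def freivalds_once(A, B, C):
    """One round of the randomized check for n x n matrices:
    pick a random 0/1 vector r, then compare A(Br) against Cr.
    Each matrix-vector product costs O(n^2), so the round is O(n^2)."""
    n = len(A)
    r = [random.randint(0, 1) for _ in range(n)]
    # Matrix-vector product, O(n^2) per call.
    def mat_vec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    # Compute A(Br) -- two O(n^2) products -- rather than (AB)r, which
    # would require the full O(n^3) multiplication we are trying to avoid.
    return mat_vec(A, mat_vec(B, r)) == mat_vec(C, r)
```

With a correct C this always returns True; with an incorrect C it can still return True, which is where the probability analysis below comes in.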
When AB = C, then ABr = Cr. When AB ≠ C, then ABr ≠ Cr at least half of the time. This is because multiplying an element that differs between AB and C by a 0 in r effectively hides that inequality, and every element of r has a probability of 1/2 of being 0. Therefore each difference between AB and C has a probability of at least 1/2 of being detected, and further differences only increase the likelihood of correct detection.
Being right 50% of the time isn’t much to get excited about, but, as we mentioned, being right 99% of the time can be useful. Since this approximation runs asymptotically faster than the full calculation (O(n²) vs O(n³)), we can repeat it a constant number of times and still be faster than the traditional check. If each run uses a new, randomly chosen r, then each result has a statistically independent chance of being incorrect, with probability at most 1/2. So, we can simply repeat the check 7 times and have probability at most (1/2)⁷ = 1/128 of being wrong, even better than 99% certainty.
That’s pretty cool.