Example: whether a graph has a perfect matching or not. The language would start with a universe, all syntactically correct descriptions of graphs, in some chosen encoding. Given an encoding of a graph, the algorithm solving the problem interprets the string as a graph and checks whether it is bipartite and has a perfect matching. If it does, the algorithm accepts the string; otherwise it rejects.
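As an illustration only, here is a minimal sketch of such a decision procedure, assuming a hypothetical encoding in which a bipartite graph is given as two vertex lists and an edge set. The brute-force matching check is exponential, not the efficient algorithm; it is only meant to make the accept/reject behavior concrete.

    from itertools import permutations

    def has_perfect_matching(left, right, edges):
        """Brute-force check: does some pairing of `left` onto `right`
        use only edges of the bipartite graph?  Exponential time --
        purely illustrative, not the polynomial-time algorithm."""
        if len(left) != len(right):
            return False
        for perm in permutations(right):
            if all((u, v) in edges for u, v in zip(left, perm)):
                return True
        return False

    # Hypothetical encoding: two vertex lists and an edge set.
    left, right = ["a", "b"], ["x", "y"]
    edges = {("a", "x"), ("a", "y"), ("b", "x")}
    print(has_perfect_matching(left, right, edges))  # True: a-y, b-x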
The definition of algorithm is based on the program of a Turing Machine. As we are concerned with the efficiency of algorithms only up to polynomial factors, this is sufficient: a Turing Machine and most other models of computation, more practical or less, have running times related by at most a polynomial factor.
The typical classification of languages (problems) is into those accepted in time polynomial in the length of the input, and those that cannot be accepted so speedily. The class of all languages acceptable in polynomial time by a (deterministic) Turing Machine is called P.
The next stage beyond P consists of those languages which are not known to be accepted in polynomial time, but where each string in the language can be verified to be in the language in polynomial time if the Turing Machine is given an additional hint string. Formally,
x ∈ L iff ∃ y s.t. f(x,y) accepts

where f is computable in polynomial time. These are the NP languages. In general it is hard to find the hint string y; searching for it without method is time consuming. Any P language is an NP language: one ignores the hint and computes f(x) without aid. It is not known whether all NP languages are P languages.
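To make the role of the hint concrete, here is a minimal sketch of a verifier f(x,y) for the matching example above, assuming a hypothetical encoding in which the hint y is a proposed list of matching edges:

    def verify_perfect_matching(graph_edges, vertices, hint):
        """f(x, y): x is the graph, y (the hint) is a proposed matching.
        Runs in polynomial time: it only checks the hint, never searches."""
        matched = set()
        for u, v in hint:
            if (u, v) not in graph_edges and (v, u) not in graph_edges:
                return False          # hint uses a non-edge
            if u in matched or v in matched:
                return False          # a vertex is matched twice
            matched.update([u, v])
        return matched == set(vertices)   # every vertex is covered

    vertices = ["a", "b", "x", "y"]
    edges = {("a", "x"), ("a", "y"), ("b", "x")}
    print(verify_perfect_matching(edges, vertices, [("a", "y"), ("b", "x")]))  # True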
Consider a vast tree of all possible computation paths. That is, the computation f(x,y) for all possible y (there are finitely many if f is to run in time polynomial in the length of x). NP means that if x is in the language, at least one path accepts; if x is not in the language, no path accepts.
However, the other paths do not reject; they are inconclusive. If every path either accepts or rejects, the language is in P, because you can fix a y once and for all, build it into the function f if you wish, and run the function. It will either accept or reject conclusively in polynomial time.
The dual of NP is co-NP. A language is co-NP if, whenever x is not in the language, there is at least one y for which f(x,y) rejects the string. If x is in the language, there must be no such y.
x ∉ L iff ∃ y s.t. f(x,y) rejects

A computation path which does not reject is, for co-NP, inconclusive. One can also consider languages which are both NP and co-NP. For any x there is at least one y for which f(x,y) correctly decides membership of x, and there is never any y for which f(x,y) is in error.
Other important complexity classes manipulate the size of the accepting (or rejecting) set. RP is the class of languages for which, if x is in the language, at least a fraction ε>0 of the y accept; if x is not in the language, no y accepts. co-RP is the dual: if x is not in the language, at least a fraction ε>0 of the strings y reject; if x is in the language, no y rejects. ZPP is the intersection of RP and co-RP: at least a fraction ε>0 of the y correctly decide membership of x, and no y leads to an error in the classification of x.
Finally, the class BPP labels every computation path as either accepting or rejecting, and f(x,y) might err in deciding membership of x in the language. However, if x is in the language, at least 2/3 of the y accept; if x is not in the language, at least 2/3 of the y reject.
The classes RP, co-RP, ZPP and BPP are the basis of randomized complexity. An RP language can use randomness to confirm membership of strings in the language by picking random y until an accepting computation path is found. A ZPP language can decide membership, in or out, by picking random y. This is a feasible strategy for the randomized complexity classes because there are many conclusive paths. It cannot be done for NP: there may be only one conclusive path among the exponentially many possibilities, and it is exponentially unlikely that the correct y will be chosen.
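A minimal sketch of this strategy for a ZPP language, assuming a hypothetical routine run(x, y) that evaluates one computation path and returns "accept", "reject", or "inconclusive", and that at least a fraction ε of the hints y are conclusive:

    import random

    def decide_zpp(x, run, hint_length):
        """Las Vegas loop: sample random hints until some path is conclusive.
        Correct whenever it answers; expected number of trials is about 1/ε."""
        while True:
            y = [random.randint(0, 1) for _ in range(hint_length)]
            answer = run(x, y)            # "accept", "reject", or "inconclusive"
            if answer != "inconclusive":
                return answer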
RP, co-RP and ZPP decide membership probabilistically without error. If they determine a string to be in or out of the language, it is indeed in or out of the language. With ever decreasing likelihood as more computation paths are attempted, they might, however, remain inconclusive. In fact, RP (co-RP) is as useless for rejecting (accepting) a string as is NP: whereas only about 1/ε paths need be tried, on average, before an accept (reject), fully a fraction (1-ε) of the paths must be tried before one can infer rejection (acceptance).
BPP decides membership with an arbitrarily small probability of error. However, it always has an opinion. By repeating a BPP computation for various random y and taking the majority result, one can reduce the probability of misclassification.
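A minimal sketch of this majority vote, again assuming a hypothetical routine run(x, y) that returns "accept" or "reject" and is correct for at least 2/3 of the hints y:

    import random

    def decide_bpp(x, run, hint_length, trials=101):
        """Majority vote over independent random hints.  By the Chernoff bound,
        the probability of misclassification drops exponentially in `trials`."""
        accepts = 0
        for _ in range(trials):
            y = [random.randint(0, 1) for _ in range(hint_length)]
            if run(x, y) == "accept":
                accepts += 1
        return "accept" if accepts > trials // 2 else "reject"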
Complexity Relationships
We establish the relationships,
P ⊆ ZPP ⊆ RP ⊆ BPP, NP
P ⊆ ZPP ⊆ co-RP ⊆ BPP, co-NP

Since P always halts correctly in polynomial time, P ⊆ ZPP. Since ZPP is the intersection of RP and co-RP, ZPP ⊆ RP and ZPP ⊆ co-RP. Since NP requires only one accepting path, and RP requires many, RP ⊆ NP. Likewise co-RP ⊆ co-NP.
Finally, let L be an RP language. We show L is in BPP. Let M be the machine accepting L and construct M' from M. M' runs M multiple times and accepts the input if M ever accepts. The number of repetitions is chosen to give an accepting probability for M' of at least 2/3. If M never accepts, M' rejects.
M' is a BPP machine. If l ∈ L, then by the repeated calls to M, l is accepted with good probability; that is, M' accepts with probability at least 2/3. If l ∉ L, then M never accepts, so M' rejects; that is, M' rejects l with probability at least 2/3 (in fact, with probability 1).
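A minimal sketch of M', assuming the RP machine M is available as a hypothetical routine run_M(x, y): when x ∈ L, at least a fraction ε of hints lead to acceptance, so choosing k with (1-ε)^k ≤ 1/3, that is k ≈ ln 3 / ε, gives the required number of repetitions.

    import math
    import random

    def m_prime(x, run_M, hint_length, epsilon):
        """Repeat the RP machine M; accept if any run accepts.
        If x is in L, P[all k runs miss] <= (1 - epsilon)^k <= 1/3.
        If x is not in L, no run ever accepts, so M' rejects without error."""
        k = math.ceil(math.log(3) / epsilon)
        for _ in range(k):
            y = [random.randint(0, 1) for _ in range(hint_length)]
            if run_M(x, y) == "accept":
                return "accept"
        return "reject"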
A similar construction gives co-RP ⊆ BPP.
The relationship between NP and BPP is not known. Remember, it could be that everything here is equal to P.
Randomized Expected Time
There is another manner in which an algorithm can be randomized polynomial time. The above notions gave a runtime guaranteed polynomial in the input size, but the result may be incomplete or incorrect. There is a randomized polynomial time class which requires a correct answer and an expected runtime polynomial in the input size. It is possible that for some coin flips (or input choices) the algorithm runs for, say, an exponential number of steps, but such runs must be "exponentially rare" so that they do not adversely affect the average runtime.
An algorithm which runs in expected polynomial time can be modified so that it stops itself after a polynomial number of steps. If the algorithm is so terminated, it returns an incomplete result. Using Markov's inequality, one can bound the probability that the algorithm has been prematurely stopped, and thus obtain an RP, co-RP or ZPP algorithm.
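As a worked example of the Markov bound (the function name and numbers are illustrative): if the expected runtime is E[T] steps and the algorithm is cut off after c·E[T] steps, it is stopped prematurely with probability at most 1/c.

    def truncation_error_bound(expected_steps, budget):
        """Markov's inequality: P[T >= budget] <= E[T] / budget.
        With budget = c * expected_steps, the run is cut off (and reports
        an incomplete result) with probability at most 1/c."""
        return expected_steps / budget

    # Example: expected runtime n^2 steps on inputs of length n = 100;
    # a budget of 3 * n^2 steps cuts the run off with probability at most 1/3.
    print(truncation_error_bound(100**2, 3 * 100**2))  # 0.333...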
In an equally simple manner, a ZPP algorithm can be turned into an expected-runtime algorithm by repeating it with different coin flips until it definitively accepts or rejects. A calculation of probability shows that the expected number of repetitions is polynomial: each repetition is conclusive with probability at least ε, so on average about 1/ε repetitions suffice.
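A small numerical check of that calculation (purely illustrative): if each repetition is conclusive with probability ε, the average number of repetitions comes out to about 1/ε.

    import random

    def average_repetitions(epsilon, samples=100_000):
        """Empirical check: repeating until a conclusive run occurs
        (probability epsilon each time) takes about 1/epsilon tries on average."""
        total = 0
        for _ in range(samples):
            tries = 1
            while random.random() >= epsilon:
                tries += 1
            total += tries
        return total / samples

    print(average_repetitions(0.25))  # close to 4 = 1/0.25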
The situation for BPP, RP and co-RP algorithms is left as a matter for discussion.
Author: Burton Rosenberg
Last Update: 19 Sept 2003