The Derandomization Study Group
Yesterday we started the derandomization study group. Thanks to Srikanth for starting this group. We will meet every Saturday and study the field. The members are Srikanth, Nutan, Somnath, Raghu, Partha and myself. Srikanth gave a nice introduction to the field. The following is roughly what he talked about:
Disclaimer: the summaries below are from my notes; they may not be exactly what Srikanth said and may not be 100% accurate. Corrections are welcome.
Derandomization is a subfield of theoretical computer science whose main goal is reducing the randomness used by randomized algorithms, possibly to zero. If we want to reduce the amount of randomness, we might have to sacrifice some other resource like time or space. The goal of `derandomizing time’ is to reduce the randomness used by an algorithm without increasing its running time too much. Similarly, the goal of `derandomizing space’ is to reduce the randomness used by an algorithm without significantly increasing the space it uses. The ‘holy grail’ of derandomizing time would be the $BPP = P$ question, where one asks if we can remove the randomness altogether from algorithms that solve a problem in polynomial time using polynomially many random bits. I feel that settling this question would be not only a great achievement in computer science but also a great philosophical achievement. But very little progress has been made in this direction; we do not have any good unconditional result. Lots of progress has been made, however, in the case of derandomizing space. The recent result by Omer Reingold, showing that undirected connectivity can be solved in deterministic logspace, is a big stride in this direction. While the $BPP = P$ and $RL = L$ questions seem a hard nut to crack, a less ambitious goal would be to reduce the error probability without using many extra random bits, or to use weak random sources.
The important combinatorial tools for understanding this field are:
1) Expanders
2) Extractors
3) Pseudorandom generators and
4) Codes
Srikanth mentioned some results in the direction of error
amplification, weak random sources, derandomization of space and
derandomization of time. We will be covering all of these
results. Here I will mention only two of them:
1) Suppose an algorithm runs in time $t$, uses $r$ random bits, and gives the correct answer with probability at least $2/3$. We want to improve the error probability. The naive approach is to repeat the algorithm $O(k)$ times independently and take the majority answer; this gives error probability less than $2^{-k}$ but uses $O(kr)$ random bits. But Impagliazzo and Zuckerman showed that we can do much better: we can use only $O(r + k)$ random bits and still have error probability less than $2^{-k}$.
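To make the naive approach concrete, here is a small Python sketch (my own illustration, not from the talk): a toy procedure that is correct with probability about $2/3$ is repeated independently and a majority vote is taken. The procedure and its parameters are made up for illustration.

```python
import random

def noisy_equals_zero(x, r_bits=8):
    """A toy randomized decision procedure: decides whether x == 0,
    using r_bits random bits, and is correct with probability ~2/3."""
    coin = random.getrandbits(r_bits)          # consume r_bits random bits
    correct_answer = (x == 0)
    # Be wrong on roughly a 1/3 fraction of the random strings.
    if coin % 3 == 0:
        return not correct_answer
    return correct_answer

def amplify_by_majority(x, k, r_bits=8):
    """Naive amplification: run the procedure 2k+1 times independently
    and take the majority vote.  Uses (2k+1)*r_bits random bits and,
    by a Chernoff bound, errs with probability at most 2^{-Omega(k)}."""
    votes = sum(noisy_equals_zero(x, r_bits) for _ in range(2 * k + 1))
    bits_used = (2 * k + 1) * r_bits
    return votes > k, bits_used

if __name__ == "__main__":
    trials = 10_000
    for k in (1, 5, 15):
        errors = sum(1 for _ in range(trials)
                     if amplify_by_majority(0, k)[0] is not True)
        print(f"k={k:2d}: empirical error {errors / trials:.4f}, "
              f"random bits per run {(2 * k + 1) * 8}")
```

Running this shows the empirical error dropping rapidly with $k$, while the number of random bits grows like $kr$; the point of the Impagliazzo–Zuckerman technique is to get a comparable error bound while recycling random bits instead of drawing fresh ones for every repetition, so only $O(r + k)$ bits are consumed in total.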
2) Impagliazzo and Wigderson’s result: $BPP = P$ unless every language in $E$ has subexponential-size circuits. Here $P$ is the class of problems that can be solved in deterministic polynomial time, $BPP$ is the class of problems that can be solved by algorithms that take polynomial time and use polynomially many random bits, and $E = \bigcup_{c} DTIME(2^{cn})$. We don’t know whether there is any language $L$ in $E$ that requires circuits of size $2^{\epsilon n}$ for some constant $\epsilon > 0$, but there certainly might be such a language. In that case $BPP = P$.
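To keep the statement handy for later meetings, here is one standard formal formulation of the Impagliazzo–Wigderson theorem (my paraphrase, not necessarily the exact form Srikanth stated):

```latex
\textbf{Theorem (Impagliazzo--Wigderson, 1997).}
Let $E = \bigcup_{c \ge 1} \mathrm{DTIME}\!\left(2^{cn}\right)$.
Suppose there exist a language $L \in E$ and a constant $\epsilon > 0$ such that,
for all sufficiently large $n$, every Boolean circuit deciding
$L \cap \{0,1\}^n$ has size at least $2^{\epsilon n}$.
Then $\mathrm{BPP} = \mathrm{P}$.
```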