The Derandomization Study Group

Yesterday we started the derandomization study group. Thanks to Srikanth for starting this group. We will meet every Saturday and study the field. The members are Srikanth, Nutan, Somnath, Raghu, Partha and myself. Srikanth gave a nice introduction to the field. The following is roughly what he talked about:

Disclaimer: the summaries below are from my notes; they may not be
exactly what Srikanth said and may not be 100% accurate. Corrections
are welcome.

Derandomization is a subfield of theoretical computer science whose
main goal is reducing the randomness used by randomized algorithms,
possibly to zero. If we want to reduce the amount of randomness we
might have to sacrifice some other resource, like time or space. The
goal of 'derandomizing time' is to reduce the randomness used by an
algorithm without increasing its running time too much. Similarly,
the goal of 'derandomizing space' is to reduce the randomness used by
an algorithm without significantly increasing the space it uses. The
'holy grail' of derandomizing time is the P =? BPP question, where
one asks whether we can remove the randomness altogether from
algorithms that solve a problem in polynomial time using polynomially
many random bits. I feel that settling this question would not only
be a great achievement in computer science but also a great
philosophical achievement. But very little progress has been made in
this direction; we do not have any good unconditional result. In
contrast, a lot of progress has been made on derandomizing space, and
the recent result of Omer Reingold is a big stride in this direction.
While the L =? RL and P =? BPP questions seem to be hard nuts to
crack, a less ambitious goal is to reduce the error probability of an
algorithm without using many extra random bits, or to run randomized
algorithms using only weak random sources.

Some of the important combinatorial tools for understanding this field are:

1) Expanders
2) Extractors
3) Pseudorandom generators
4) Codes

Srikanth mentioned some results on error amplification, weak random
sources, derandomization of space and derandomization of time. We
will be covering all of these results. Here I will mention only two
of them:

1) Suppose an algorithm runs in time poly(n), uses r random bits, and
gives the correct answer with probability \frac{2}{3}. We want to
reduce the error probability. The naive approach is to repeat the
algorithm O(k) times with fresh random bits and take the majority
answer; this brings the error probability below \frac{1}{2^k} but
uses O(kr) random bits. Impagliazzo and Zuckerman showed that we can
do much better: we can use only O(r+k) random bits and still achieve
error probability less than \frac{1}{2^k}. (A small sketch of the
naive method appears after this list.)

2) Impagliazzo and Wigderson's result: P = BPP unless E has
subexponential-size circuits. P is the class of problems that can be
solved in polynomial time. BPP is the class of problems that can be
solved by algorithms that take polynomial time and use polynomially
many random bits. E = DTIME(2^{O(n)}). We do not know whether there
is a language L in E that cannot be computed by circuits of size
2^{\delta n} for some \delta > 0, but there might well be such a
language, and in that case P = BPP. (A sketch of the seed-enumeration
idea used in such derandomizations appears below.)
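To make the comparison in result 1) concrete, here is a minimal Python sketch of the naive amplification by independent repetition and majority vote. The function `algo(x, coins)` is a hypothetical stand-in for a two-sided-error randomized algorithm that reads its r random bits from `coins`; nothing below is from the Impagliazzo-Zuckerman construction itself, which instead generates the coins for all the runs from a single walk on an expander using only O(r+k) truly random bits.

```python
import random
from collections import Counter

def amplify_by_majority(algo, x, r, k):
    """Naive error amplification for a two-sided-error algorithm.

    Runs `algo` O(k) times with fresh random bits and returns the
    majority answer.  By a Chernoff bound, if each run is correct with
    probability 2/3, about 18k runs push the error below 2^{-k} -- but
    this uses O(k*r) random bits, versus O(r+k) for Impagliazzo-Zuckerman.
    """
    runs = 18 * k + 1  # odd, so the majority answer is well defined
    votes = Counter()
    for _ in range(runs):
        coins = [random.getrandbits(1) for _ in range(r)]  # r fresh random bits per run
        votes[algo(x, coins)] += 1
    return votes.most_common(1)[0][0]
```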
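And here is a hedged sketch of why a strong enough pseudorandom generator gives P = BPP, as in result 2): if a generator stretches an O(log n)-bit seed into enough pseudorandom bits to fool the algorithm, we can deterministically enumerate all poly(n) seeds and take the majority answer. The names `algo`, `prg` and `seed_len` below are hypothetical placeholders; constructing such a generator from a hard function in E is the actual content of the Impagliazzo-Wigderson theorem.

```python
from collections import Counter

def derandomize_with_prg(algo, x, prg, seed_len):
    """Deterministic simulation by seed enumeration.

    If seed_len = O(log n), this loop makes 2**seed_len = poly(n) runs,
    so the whole simulation takes deterministic polynomial time whenever
    `prg` fools `algo` on inputs of length n.
    """
    votes = Counter()
    for s in range(2 ** seed_len):
        seed_bits = [(s >> i) & 1 for i in range(seed_len)]  # the seed, as a list of bits
        votes[algo(x, prg(seed_bits))] += 1  # run algo on the pseudorandom string
    return votes.most_common(1)[0][0]
```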

~ by bireswar on May 12, 2007.
