Avoiding Proof By Contradiction

•January 21, 2011 • Leave a Comment

In this post Terence Tao explains how to avoid proof by contradiction. It gives another way to present Cantor’s proof that the real numbers are uncountable. The comment section is very interesting: he addresses a doubt about the proof by setting up an imaginary conversation between two people.

Interesting Functions

•January 6, 2011 • Leave a Comment

I like the following class of functions very much:

f: \mathcal{Z}\longrightarrow \mathcal{Z}, where \mathcal{Z} is the set of integers, and f satisfies the following conditions (a concrete instance is sketched below):

1) f is a bijection.
2) f(i) \geq i for all i.
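
One concrete member of this class (my own illustration, not from the original post) is the shift map f(i) = i + 1: it is a bijection of the integers and clearly satisfies f(i) \geq i. Here is a minimal Python sketch that checks both conditions on a finite window of integers; of course bijectivity on all of \mathcal{Z} cannot be verified by sampling and needs a proof (here the inverse is g(j) = j - 1).

def f(i):
    # Hypothetical example function: the shift map f(i) = i + 1.
    return i + 1

# Condition (2): f(i) >= i, checked on a finite window of integers.
window = range(-1000, 1001)
assert all(f(i) >= i for i in window)

# A finite proxy for condition (1): f is injective on the window.
images = [f(i) for i in window]
assert len(images) == len(set(images))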

In a later post here I will explain why I like these functions.

A BBC Documentary

•January 5, 2011 • Leave a Comment

“Dangerous Knowledge” is a BBC documentary about Georg Cantor, Ludwig Boltzmann, Kurt Gödel and Alan Turing. This is the YouTube link for the documentary. It features the mathematician Gregory Chaitin and the physicist Roger Penrose, among many others. Though it is somewhat overdramatized, the documentary gives a lot of interesting information.

Coding and Information Theory— S. Roman

•April 18, 2010 • 1 Comment

I thought I would write a review of the book “Coding and Information Theory” by Steven Roman, but to write a proper book review I think I would need to read the whole book. Anyway, this is one of the books I followed while teaching a course on Information and Coding Theory. The students attending this course have a pure-mathematics background, and for such an audience this book is great. The presentation is lucid yet very rigorous. If you like Lemma-Theorem-Corollary style exposition (I do), then you will probably like this book. Like most coding theory books, it is also outdated. If you want to know about the algorithmic aspects of coding theory, then this is not a good book (for algorithmic aspects, the best resource is Madhu Sudan’s 2001 lecture notes). There are several typos and a few mistakes in the book: for example, the proof of Kraft’s theorem contains a fixable bug (Ashish pointed it out), and a matrix is written incorrectly in the Mattson–Solomon polynomial section. But overall I like the book for its clear, concise presentation.

The Derandomization Study Group

•May 12, 2007 • Leave a Comment

Yesterday we started the derandomization study group. Thanks to Srikanth for starting it. We will meet every Saturday and study the field. The members are Srikanth, Nutan, Somnath, Raghu, Partha and myself. Srikanth gave a nice introduction to the field. The following is roughly what he talked about:

Disclaimer: the summaries below are from my notes; they may not be exactly what Srikanth said and may not be 100% accurate. I would welcome corrections.

Derandomization is a subfield of theoretical computer science whose main goal is to reduce the randomness used by randomized algorithms, possibly to zero. If we want to reduce the amount of randomness, we might have to sacrifice some other resource such as time or space. The goal of ‘derandomizing time’ is to reduce the randomness used by an algorithm without increasing its running time too much. Similarly, the goal of ‘derandomizing space’ is to reduce the randomness used by an algorithm without significantly increasing its space usage. The ‘holy grail’ of derandomizing time is the P \stackrel{?}{=} BPP question, where one asks whether we can remove randomness altogether from algorithms that solve a problem in polynomial time using polynomially many random bits. I feel that settling this question would not only be a great achievement in computer science but also a great philosophical achievement. But very little progress has been made in this direction; we do not have any good unconditional result. Much more progress has been made on derandomizing space, and the recent result of Omer Reingold is a big stride in that direction. While the L \stackrel{?}{=} RL and P \stackrel{?}{=} BPP questions seem to be hard nuts to crack, a less ambitious goal is to reduce the error probability of an algorithm without using many extra random bits, or to make do with weak random sources.
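
To make the time-versus-randomness trade-off concrete, here is a minimal Python sketch (my own illustration, not part of Srikanth’s talk) of the trivial derandomization: enumerate all 2^r random strings and take the majority answer. The procedure algo is a hypothetical placeholder. This removes randomness completely, but the running time blows up by a factor of 2^r, which is why the naive approach says nothing about P \stackrel{?}{=} BPP.

from itertools import product

def derandomize_by_enumeration(algo, x, r):
    # algo(x, bits) is a hypothetical randomized procedure that uses r random
    # bits and answers correctly with probability at least 2/3 over those bits.
    # Enumerating all 2**r bit strings and taking a majority vote gives a
    # deterministic answer, at the cost of a 2**r factor in running time.
    votes = sum(1 for bits in product((0, 1), repeat=r) if algo(x, bits))
    return votes * 2 > 2 ** r  # strict majority of the 2**r runs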

The important combinatorial tools for understanding this field are:

1) Expanders
2) Extractors
3) Pseudorandom generators (a toy example is sketched after this list) and
4) Codes
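
As a toy illustration of how such a tool can save random bits (again my own example, not from the talk): a seed of k truly random bits can be stretched into 2^k - 1 bits that are uniform and pairwise independent, by outputting the parity of every nonempty subset of the seed bits. Pairwise independence is much weaker than full independence, but it already suffices for several derandomization arguments.

import random

def pairwise_independent_bits(seed_bits):
    # seed_bits: a list of k truly random bits.
    # Returns 2**k - 1 output bits, one per nonempty subset of seed positions,
    # each being the XOR (parity) of the chosen seed bits. Every output bit is
    # uniform, and any two distinct output bits are independent.
    k = len(seed_bits)
    out = []
    for mask in range(1, 2 ** k):  # every nonempty subset of {0, ..., k-1}
        parity = 0
        for i in range(k):
            if (mask >> i) & 1:
                parity ^= seed_bits[i]
        out.append(parity)
    return out

# Example: stretch 10 truly random bits into 1023 pairwise-independent bits.
seed = [random.randrange(2) for _ in range(10)]
bits = pairwise_independent_bits(seed)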

Srikanth mentioned some results on error amplification, weak random sources, derandomization of space and derandomization of time. We will be covering all of these results; here I will mention only two of them:

1) Suppose an algorithm runs in time poly(n), uses r random bits, and gives the correct answer with probability at least \frac{2}{3}. We want to reduce the error probability. The naive approach is to repeat the algorithm O(k) times with fresh random bits and take a majority vote, which uses O(rk) random bits and brings the error probability below \frac{1}{2^k} (a sketch of this naive approach appears after this list). But Impagliazzo and Zuckerman showed that we can do much better: we can use only O(r+k) random bits and still have error probability less than \frac{1}{2^k}.

2) Impagliazzo and Wigderson’s result: P = BPP unless E has subexponential-size circuits. P is the class of problems that can be solved in polynomial time. BPP is the class of problems that can be solved by algorithms that take polynomial time, use polynomially many random bits, and give the correct answer with probability at least \frac{2}{3}. E = DTIME(2^{O(n)}). We do not know whether there is a language L in E that cannot be computed by circuits of size 2^{\delta n} for some \delta > 0, but such a language may well exist; if it does, then P = BPP.
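
As a concrete illustration of the naive repetition in result 1) (again my own sketch, with algo and t as hypothetical placeholders): run a two-sided-error procedure t = O(k) times with fresh random bits and output the majority answer; a Chernoff bound then pushes the error probability below \frac{1}{2^k}, but at the cost of roughly r \cdot t random bits, which is exactly what Impagliazzo and Zuckerman improve upon.

import random

def amplify_by_repetition(algo, x, r, t):
    # algo(x, bits) is a hypothetical randomized procedure that uses r random
    # bits and is correct with probability at least 2/3 on input x.
    # Repeating it t times with independent fresh randomness and taking a
    # majority vote drives the error below 2^{-Omega(t)}, but consumes
    # r * t random bits in total.
    votes = 0
    for _ in range(t):
        bits = [random.randrange(2) for _ in range(r)]  # fresh random bits
        if algo(x, bits):
            votes += 1
    return votes * 2 > t  # strict majority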

Test

•April 6, 2007 • 1 Comment

This looks like a good blogging site. I can \LaTeX:


\phi:\mathcal{G}\longrightarrow\mathcal{H}