Abstract (Michael Fellows)

Perhaps the most fundamental fact that has been uncovered by the first few decades of Computer Science is the ubiquitous nature of computational intractability. As the bearers of principally bad news, complexity theorists are routinely reviled as scientists of the impractical or otherwise admonished to change their ways (for example, by giving up the Big O).

The talk will describe a computational complexity theory that attempts to provide a means of systematically exploring what makes a problem hard, and of finding ways of dealing with intractable problems by allowing the Devil to make an exponential contribution to overall complexity that is restricted to a small (but still useful) parameter.
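The flavor of this idea can be sketched (an illustrative example, not taken from the talk) with the classic bounded search tree for k-Vertex Cover: the running time is roughly O(2^k · m), so the exponential blow-up is confined entirely to the parameter k while the dependence on the graph size stays polynomial.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None if none exists.

    Bounded search tree: every edge must have an endpoint in the
    cover, so branch on the two endpoints of any uncovered edge.
    Depth of recursion is at most k, giving O(2^k * m) time --
    the Devil's exponential contribution touches only k.
    """
    if not edges:
        return set()          # nothing left to cover
    if k == 0:
        return None           # edges remain but budget exhausted
    u, v = next(iter(edges))  # pick any uncovered edge
    for chosen in (u, v):     # one of its endpoints must be in the cover
        rest = [e for e in edges if chosen not in e]
        cover = vertex_cover(rest, k - 1)
        if cover is not None:
            return cover | {chosen}
    return None
```

For a triangle, `vertex_cover([(1, 2), (2, 3), (1, 3)], 2)` finds a cover of size two, while with budget `k = 1` it correctly reports failure.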

There are three interesting things about this theory. First of all, it is different from, and even "orthogonal" to, classical complexity theory. It is not an elaboration of the familiar classes NP, PSPACE, etc. Secondly, it is a very concretely oriented and applicable theory. Hundreds of natural problems have proved to be open to exploration in this framework. The positive toolkit of parameterized tractability includes distinctive general techniques, and many problems "dismissed" as NP-complete or worse have been shown to be tractable for small parameter ranges. Thirdly, there seems to be a very small number of natural parameterized complexity degrees. As a window on the universe of natural computational problems, the theory has thus revealed some surprising unities. It even provides a way to give up the Big O.


Last modified: October 25, 1996.
Kim Skak Larsen (kslarsen@imada.sdu.dk)