*Kernelization* is an influential paradigm for attacking hard computational problems. Kernelization algorithms
efficiently preprocess their input to “shrink” it to a smaller,
equivalent one. This can be a valuable first step before applying
exhaustive-search techniques.
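To make the idea concrete (this example is not from the abstract itself): a classic kernelization is Buss's rule for Vertex Cover parameterized by the solution size k. Any vertex of degree greater than k must be in every size-≤k cover, so it can be removed and k decremented; afterwards, a yes-instance can have at most k² edges. A minimal sketch, with illustrative function and variable names:

```python
def buss_kernel(edges, k):
    """Kernelize a Vertex Cover instance (edges, k).

    Returns ('no', None) if no vertex cover of size <= k exists,
    or ('kernel', (reduced_edges, k2)) for an equivalent instance
    with at most k2**2 edges.
    """
    edges = {frozenset(e) for e in edges}
    while True:
        # Compute vertex degrees in the current graph.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v in deg if deg[v] > k]
        if not high:
            break
        # A vertex of degree > k must be in any cover of size <= k
        # (otherwise all of its > k neighbors would be needed).
        v = high[0]
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return ('no', None)
    # Now every vertex has degree <= k, so k cover vertices can
    # cover at most k*k edges; more edges means a no-instance.
    if len(edges) > k * k:
        return ('no', None)
    return ('kernel', (edges, k))
```

For example, a triangle with k = 1 is rejected outright, while a star with center of degree 5 and k = 1 shrinks to an empty kernel (a trivial yes-instance).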

After presenting the basic framework of kernelization, I’ll describe my
contribution to a line of work showing *limits* to the achievable
performance of kernelization algorithms for many NP-hard problems (under
complexity-theoretic assumptions).

Based on:

[A. Drucker, New Limits to Classical and Quantum Instance Compression. FOCS’12]