Combinatorial Optimization Seminar
LAMSADE, Université Paris-Dauphine
Past talks from 2011
Tuesday, January 4, 2011, 2pm, room A707
Title: An algorithmic decomposition of claw-free graphs leading to an O(n^3)-algorithm for the weighted stable set problem
Abstract: We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3) time, drastically improving the previous best known complexity bound. The algorithm is based on a novel decomposition theorem for claw-free graphs, which we also introduce. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e., it is coupled with an O(n^3)-time procedure that actually produces the decomposition. We also believe that our algorithmic decomposition result is interesting in its own right and might be useful for solving other kinds of problems on claw-free graphs. (co-authors: Yuri Faenza, Gianpaolo Oriolo)
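As a point of reference (not part of the talk itself): a claw is an induced K_{1,3}, and a minimal brute-force check for claw-freeness, assuming an adjacency-set representation, could look like this sketch.

```python
from itertools import combinations

def is_claw_free(adj):
    """Check that no vertex has three pairwise non-adjacent neighbors.

    adj: dict mapping each vertex to the set of its neighbors.
    A claw is an induced K_{1,3}: a center adjacent to three vertices
    that are pairwise non-adjacent. Naive running time is O(n * d^3),
    far from the decomposition-based machinery of the talk.
    """
    for center, nbrs in adj.items():
        for a, b, c in combinations(nbrs, 3):
            if b not in adj[a] and c not in adj[a] and c not in adj[b]:
                return False  # (center; a, b, c) induces a claw
    return True

# The claw itself: center 0 with leaves 1, 2, 3.
claw = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(is_claw_free(claw))  # False
```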
Monday, January 31, 2011, 3pm, room A709
Title: Approximation algorithms and mechanism design aspects for the Minimax solution in approval voting
Abstract: We consider approval voting elections in which each voter
votes for a set of candidates and the outcome
consists of a set of k candidates for some fixed k, e.g.,
committee elections. We are interested in the minimax approval
voting rule in which the outcome represents a compromise
among the preferences of the voters, in the sense that the
maximum (Hamming) distance between the preference of any voter and
the outcome is as small as possible. This voting rule has two
main drawbacks. First, computing an outcome that minimizes
the maximum distance is computationally hard. Furthermore,
any algorithm that always returns such an outcome is not strategyproof,
i.e., it provides incentives to voters to misreport their true preferences.
In order to circumvent these drawbacks, we consider approximation
algorithms. We present a polynomial-time 2-approximation algorithm
that uses a natural linear programming relaxation for
the underlying optimization problem. We are furthermore
interested in approximation algorithms that are resistant
to manipulation by (coalitions of) voters. We complement
previous results in the literature with new upper and
lower bounds on strategyproof and group-strategyproof algorithms.
(The talk is based on joint work with Rob LeGrand, Aranyak Mehta, Ioannis Caragiannis and Dimitris Kalaitzis.)
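To make the minimax objective concrete (an illustrative brute force, not the 2-approximation from the talk), here is an exhaustive computation of the optimal committee, with ballots as 0/1 vectors.

```python
from itertools import combinations

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def minimax_committee(ballots, m, k):
    """Exhaustively find a size-k committee minimizing the maximum
    Hamming distance to any ballot. Ballots and outcomes are 0/1
    vectors of length m (1 = approved candidate). Exponential in m;
    the talk's point is that the exact problem is hard, hence the
    interest in a polynomial-time 2-approximation.
    """
    best, best_dist = None, None
    for chosen in combinations(range(m), k):
        outcome = [1 if i in chosen else 0 for i in range(m)]
        d = max(hamming(outcome, b) for b in ballots)
        if best_dist is None or d < best_dist:
            best, best_dist = outcome, d
    return best, best_dist

ballots = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 1, 0, 1]]
print(minimax_committee(ballots, m=4, k=2))
```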
Monday, February 7, 2011, 2pm, room A707
Title: t-perfection and claws
Abstract: Berge's celebrated conjecture states that a graph is perfect if and only if it contains neither an odd hole nor the complement of one. In the 1970s, the polyhedral approach was developed to attack this conjecture. Even though the approach did not lead to a proof of Berge's conjecture, it turned out to be quite fruitful, yielding, for example, a polyhedral characterization of perfect graphs. Motivated by this characterization of perfect graphs, Chvátal defined t-perfect graphs, which share some properties with their siblings, the perfect graphs. In this talk I will discuss the characterization of t-perfect graphs.
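For context (a standard statement, not taken from the abstract): a graph G=(V,E) is t-perfect when its stable set polytope coincides with the relaxation given by nonnegativity, edge, and odd-cycle inequalities:

```latex
\mathrm{TSTAB}(G) = \Bigl\{\, x \in \mathbb{R}^{V}_{\geq 0} \;:\;
  x_u + x_v \le 1 \ \ \forall\, uv \in E, \quad
  \sum_{v \in C} x_v \le \tfrac{|C|-1}{2} \ \ \text{for every odd cycle } C
\,\Bigr\}
```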
Friday, February 11, 2011, 2pm, room A707
Title: Operations research and optimization for the testable design of complex integrated circuits
Abstract: This research work lies at the interface of operations research and microelectronics. Several optimization and decision-aid problems arise from microelectronics. Most of this work deals with combinatorial optimization problems for the placement and routing of Integrated Circuits (ICs). Our research is located at an earlier design level: DFT (Design For Test).
At present, the scan path and the memory BIST are the methods most widely used in this industry. They consist of a set of electronic components and connections added to the IC in order to ease the testing stage. The goal of our work is to develop algorithms that generate optimal solutions for inserting the scan path and the memory BIST into an IC while respecting all the electronic constraints of the problem.
This talk is divided into two parts. In the first part, we address the problem of optimizing the insertion of scan chains. This problem has been modeled as the search for shortest paths in a weighted graph. The solution methods used are based on finding Hamiltonian paths of minimum length. The second part deals with the problem of sharing BIST blocks for memory testing. We solved this problem using genetic algorithms for multi-objective optimization.
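As a rough illustration of the first part (a generic baseline sketch, not the authors' method): ordering scan cells into a chain of small total wire length is a minimum-length Hamiltonian path problem, for which nearest-neighbor is the simplest heuristic.

```python
def nearest_neighbor_chain(dist, start=0):
    """Greedy ordering of scan cells into one chain.

    dist: symmetric matrix of wire-length estimates between cells.
    Returns a Hamiltonian path visiting every cell once, built by
    repeatedly hopping to the closest unvisited cell. A baseline
    heuristic only; it carries no approximation guarantee in general.
    """
    n = len(dist)
    order, seen = [start], {start}
    while len(order) < n:
        last = order[-1]
        nxt = min((v for v in range(n) if v not in seen),
                  key=lambda v: dist[last][v])
        order.append(nxt)
        seen.add(nxt)
    return order

cells = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
print(nearest_neighbor_chain(cells))  # [0, 1, 2]
```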
Monday, February 28, 2011, 2pm, room A707
Title: Extended formulations for the Survivable Network Design with Hop Constraints Problem
Abstract: Given an undirected graph G and a set D of pairs of vertices, the SNDH problem consists in finding a minimum-cost subgraph of G containing, for each d in D, K edge-disjoint paths of length at most H linking the pair of vertices in d. The parameter K controls the desired level of survivability, while the parameter H is related to Quality of Service requirements.
A review of formulations and valid inequalities on the space of edge variables is provided. Then, we present hop-level-MCF, a new extended formulation over a large number of variables. The linear relaxation of hop-level-MCF yields very good lower bounds on a number of particular cases of the SNDH, allowing the solution of open instances from the literature. Joint work with A. Ridha Mahjoub, L. Simonetti.
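To illustrate the hop constraint in isolation (a feasibility check for a single pair with K=1, not the hop-level-MCF formulation): deciding whether a pair can be linked by a path of length at most H reduces to a depth-limited BFS.

```python
from collections import deque

def within_hops(adj, s, t, H):
    """Return True if some s-t path uses at most H edges.

    adj: dict mapping each vertex to its neighbor set. BFS computes
    hop distances, so it suffices to compare dist(s, t) with H.
    (Requiring K edge-disjoint hop-bounded paths, as in SNDH, is the
    genuinely hard part that the extended formulation addresses.)
    """
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        if dist[u] == H:
            continue  # do not extend paths beyond H hops
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return False

g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(within_hops(g, 0, 3, H=3), within_hops(g, 0, 3, H=2))  # True False
```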
Tuesday, March 1, 2011, 2pm, room A707
Title: Online scheduling of bounded length jobs to maximize throughput
Abstract: We consider an online scheduling problem, motivated by issues arising at the junctions of networks using ATM and TCP/IP. Namely, IP packets have to be broken down into small ATM cells and sent out before their deadlines, but cells corresponding to different packets can be interleaved.
More formally, we consider the online scheduling problem with
preemptions, where each job j is revealed at release time r_j,
has processing time p_j, deadline d_j and weight w_j.
A preempted job can be resumed at any time.
The goal is to maximize the total weight of all jobs completed on time.
Our main results are as follows: we prove that if all jobs have processing
time exactly k, the deterministic competitive ratio is between
2.598 and 5, and when the processing times are at most k,
the deterministic competitive ratio is \Theta(k/\log k). (co-authors: Christoph Durr, Lukasz Jez)
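For intuition about the model (a naive greedy simulation in unit time slots, not the competitive algorithm from the talk): at each step, run one unit of the heaviest released job that can still finish on time; preemption is implicit since the choice is re-made each slot.

```python
def greedy_throughput(jobs, horizon):
    """Simulate a simple preemptive online policy in unit time slots.

    jobs: list of dicts with keys r (release), p (processing time),
    d (deadline), w (weight). A job occupying slots t..t+rem-1
    finishes at t+rem, which must be at most d to count.
    Returns the total weight of jobs completed on time.
    """
    remaining = {i: j["p"] for i, j in enumerate(jobs)}
    gained = 0
    for t in range(horizon):
        live = [i for i, j in enumerate(jobs)
                if j["r"] <= t and remaining[i] > 0
                and t + remaining[i] <= j["d"]]  # can still make it
        if not live:
            continue
        i = max(live, key=lambda i: jobs[i]["w"])
        remaining[i] -= 1
        if remaining[i] == 0:
            gained += jobs[i]["w"]
    return gained

jobs = [{"r": 0, "p": 2, "d": 2, "w": 1}, {"r": 0, "p": 1, "d": 3, "w": 5}]
print(greedy_throughput(jobs, horizon=4))  # 5, but the optimum is 6:
# greedy by weight sacrifices job 0, illustrating why the problem is hard.
```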
Friday, March 11, 2011, 2pm, room A711
Title: Faster Parameterized Algorithms for Constraint Satisfaction Problems
Abstract: Take any boolean Max-CSP with at most $c$ variables per constraint
such that a random assignment satisfies a constraint with probability
$p$. There is an algorithm such that for every instance of the problem
with $m$ constraints, the algorithm decides whether at least $pm+k$
constraints can be satisfied in $O(2^{(c(c+1)/2) k} m)$ time. This
improves on results of [Alon et al., SODA 2010] who gave a
$2^{O(k^2)}+m^{O(1)}$ algorithm, and [Crowston et al., SWAT 2010] who
gave a $2^{O(k \log k)} + m^{O(1)}$ algorithm. We observe that an
$O(2^{\epsilon k + \epsilon m})$ time algorithm for every $\epsilon > 0$ would
imply that 3SAT is in subexponential time, so it seems unlikely that
our runtime dependence on $k$ can be significantly improved. Our proof
also shows that every Max-$c$-CSP has a linear kernel (of at most
$c(c+1)k/2$ variables) under this parameterization, and that every
Max-$c$-CSP admits a nontrivial hybrid algorithm.
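To unpack the parameterization (an illustrative brute force, not the paper's algorithm): for Max-Cut viewed as a Max-2-CSP, a random assignment satisfies each constraint x_u != x_v with probability p = 1/2, so the question is whether m/2 + k edges can be cut.

```python
from itertools import product

def above_average_maxcut(edges, n, k):
    """Decide whether some cut of the n-vertex graph cuts at least
    m/2 + k edges, by brute force over all 2^n assignments (for
    illustration only; the algorithm in the talk runs in O(2^{3k} m)
    for c = 2, since c(c+1)/2 = 3).
    """
    m = len(edges)
    target = m / 2 + k
    for x in product([0, 1], repeat=n):
        cut = sum(1 for u, v in edges if x[u] != x[v])
        if cut >= target:
            return True
    return False

# Triangle: m = 3, best cut = 2, so m/2 + k is reachable only for k <= 0.5.
triangle = [(0, 1), (1, 2), (0, 2)]
print(above_average_maxcut(triangle, 3, k=0))  # True  (2 >= 1.5)
print(above_average_maxcut(triangle, 3, k=1))  # False (2 <  2.5)
```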
Tuesday, March 22, 2011, 2pm, room A707
Title: Interactive systems for solving complex problems. Applications in combinatorial optimization, planning and scheduling
Abstract: This presentation deals with the use of interaction with an oracle to build algorithms with guaranteed performance.
The oracle is assumed to be able to answer any question instantly and correctly. Informally, we are interested in the trade-off between the performance of the algorithm, the cost of the algorithm, and the amount of information provided by the oracle. Although such trade-offs have been studied in various fields, such as distributed algorithms or online algorithms, we restrict ourselves to the setting of offline optimization problems (with an emphasis on scheduling problems) and approximation theory.
We therefore study the trade-off between approximation ratio, running time, and the amount of information provided by the oracle.
We want to design an interactive algorithm $A_{int}$ which, given an instance, asks the oracle a question and, using the answer, produces a solution with a guarantee. Such an algorithm can then be turned into a classical algorithm $A_{clq}$ (without an oracle), either by replacing the oracle with another algorithm $B$, or by re-running $A_{int}$ for every possible answer to the question asked.
It turns out that this interactive formalism has strong links with standard techniques for designing approximation schemes.
The first goal of the presentation is therefore to position this formalism with respect to existing techniques (examining the possibilities it offers), and to understand what the "good" questions to ask the oracle are in order to obtain a lot of information from a "small" answer. The discussion will be based on very simple examples, such as the Knapsack problem or scheduling independent jobs on identical machines.
The second goal is to give a complete application example on the Multiple Strip Packing (MSP) problem. MSP consists in packing rectangles into a fixed number of boxes (with different widths and infinite heights) so as to minimize the maximum height reached. This problem cannot be approximated in polynomial time within a factor better than 2 unless P=NP. I will present an interactive algorithm with an approximation ratio of 5/2.
Monday, March 28, 2011, 2pm, room A707
Title: Bounding planar graphs with homomorphism properties
Abstract: In this talk we consider the problem of bounding planar graphs of odd-girth $2k+1$ with a graph of odd-girth $2k+1$. The existence of such bounds is a special case of a theorem of P. Ossona De Mendez and J. Nesetril. We are more interested in a smallest possible bound. We introduce a conjecture in this regard and consider some other variations of the problem. We prove the conjecture for the restricted family of series-parallel graphs and give an algorithm to find such a homomorphism for a given series-parallel graph.
Another consequence of this work is to show that, while deciding whether a planar graph admits a homomorphism to $C_5$ (or to the Petersen graph) is an NP-complete problem, deciding whether a planar graph admits a homomorphism to the Clebsch graph can be done in polynomial time: one simply needs to decide whether the input planar graph has a triangle, since it admits a homomorphism to the Clebsch graph exactly when it is triangle-free. Given a triangle-free planar graph $G$, finding a homomorphism of $G$ to the Clebsch graph is also expected to be possible in polynomial time, but this has not been proved yet.
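The polynomial-time decision just mentioned thus reduces to triangle detection; a simple cubic sketch of that reduction (it decides existence, it does not construct the homomorphism) is below.

```python
from itertools import combinations

def maps_to_clebsch(planar_adj):
    """Decide whether a *planar* graph admits a homomorphism to the
    Clebsch graph. By the result discussed above, this holds exactly
    when the graph is triangle-free (the Clebsch graph is itself
    triangle-free, so any triangle rules a homomorphism out).
    planar_adj: dict vertex -> neighbor set; planarity is assumed,
    not checked. Naive O(n^3) triangle test.
    """
    for u, v, w in combinations(planar_adj, 3):
        if v in planar_adj[u] and w in planar_adj[u] and w in planar_adj[v]:
            return False  # triangle found
    return True

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}  # 5-cycle
print(maps_to_clebsch(c5))  # True: triangle-free
```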
Tuesday, March 29, 2011, 2pm, room A709
Thomas McCormick, Sauder School of Business, University of British Columbia, and Britta Peis, TU Berlin
Title: A Combinatorial Polynomial Algorithm for Weighted Abstract Cut Packing
Abstract: Hoffman and Schwartz developed the Lattice Polyhedron
model and proved that it is totally dual integral (TDI), and so has
integral optimal solutions. The model generalizes many important
combinatorial optimization problems such as polymatroid
intersection, cut covering polyhedra, min cost arborescences, etc.,
but has lacked a combinatorial algorithm. The problem can be seen
as the blocking dual of Hoffman's Weighted Abstract Flow (WAF) model, or as an abstraction of ordinary Shortest Path and
its cut packing dual, so we call it Weighted Abstract Cut Packing
(WACP). We develop the first combinatorial algorithm for WACP,
based on the Primal-Dual Algorithm framework. The framework is
similar to that used by Martens and McCormick for WAF, in that both algorithms
depend on a relaxation by a scalar parameter, and then need to solve
an unweighted ``restricted'' subproblem. The subroutine to solve
WACP's restricted subproblem is quite different from the
corresponding WAF subroutine. The WACP subroutine uses an oracle to
solve a restricted abstract cut packing/shortest path subproblem using greedy cut packing, breadth-first search, and an update that
achieves complementary slackness. This plus a standard scaling
technique yields a polynomial combinatorial algorithm.
Thursday, March 31, 2011, 10am, room A709
Title: Complexity and approximation for data acquisition by an underwater torpedo
Abstract: In recent years, autonomous underwater vehicles have been booming, and underwater exploration torpedoes are a perfect example. Such a torpedo has to carry out two types of tasks: acquisition tasks and processing tasks. The acquisition tasks are similar to coupled tasks, while the processing tasks are classical tasks. The torpedo uses various sensors to perform the acquisitions; some sensors cannot be used at the same time because of interference. We therefore introduce a compatibility graph to represent the acquisition tasks whose executions may overlap. The torpedo has an embedded single processor on which all tasks are executed.
I will present the theoretical aspects of this scheduling problem: the classification of the problems in the sense of classical complexity (polynomial reductions, design of efficient polynomial algorithms) and in the sense of classical approximation (design of approximation algorithms with performance guarantees), according to the fundamental parameters. The limits of these approaches will also be presented.
I will continue with a presentation of work in progress and the possible perspectives of this work from the point of view of parameterized complexity and differential approximation.
Tuesday, May 17, 2011, 2pm, room A707
Title: Recognizing some subclasses of vertex intersection graphs of 0-bend
paths in a grid
Abstract: We investigate graphs that can be represented as vertex
intersections of horizontal and vertical paths in a grid, known as B_0-VPG
graphs. Recognizing these graphs is an NP-hard problem. In light of this, we
focus on their subclasses. In the paper, we describe polynomial time
algorithms for recognizing chordal B_0-VPG graphs, and for recognizing
B_0-VPG graphs that have a representation on a grid with 2 rows.
(joint work with Steven Chaplick and Elad Cohen)
Friday, May 27, 2011, 11am, room A305
Title: Optimizing the power collection network of a wind farm
Abstract: A wind farm consists of a set of wind turbines spread over a territory so as to maximize energy production. The turbines are connected to a transformer substation through a power collection system made of underground cables and overhead lines. The energy delivered to the substation is then redirected to high-voltage transmission lines.
The power collection system is a significant cost factor in the construction of a wind farm. Designing a power collection network is a complex task subject to several constraints of various kinds, such as the need to obtain access-road permits, the need to follow existing roads, the importance of avoiding crossing rivers and railways, accounting for the topography and the nature of the terrain, etc.
We consider a setting in which the positions of the turbines are fixed and the goal is to minimize the installation cost of the power collection network. We present several formulations of the problem, one being a generalization of unsplittable flow models, and another a generalization of the Steiner tree problem with capacity constraints. An integer linear programming model is also proposed, and some preliminary numerical results are presented.
(co-authors: Asma Mdimagh, Odile Marcotte, Michel Carreau, François Welt)
Monday, May 30, 2011, 2pm, room A707
Cheng WAN, Equipe Combinatoire et Optimisation, Université Paris 6
Title: Coalitions in nonatomic network congestion games
Abstract: The work focuses on coalitions in nonatomic network congestion games. Suppose that a finite number of disjoint coalitions are formed in a network congestion game played by nonatomic individuals.
After establishing the existence and the uniqueness of the equilibrium both in the nonatomic game without coalitions and in the mixed game played by coalitions and non-coalitional individuals, the work first shows that the presence of coalitions benefits everyone. More precisely, in the equilibrium of the mixed game, the average payoff of each coalition and the individuals' common payoff are all higher than the equilibrium payoff in the nonatomic game. Moreover, the individuals' payoff is higher than the average payoff of any coalition, and the average payoff of a smaller coalition is higher than that of a larger one.
In the case of a unique coalition, both the average payoff of the coalition and the individuals' payoff increase with the size of the coalition.
Asymptotic behaviors are studied for a sequence of mixed games, where a finite number of coalitions are fixed, while the maximum size of the remaining coalitions tends to zero. It is proved that the sequence of equilibria of these games converges to the equilibrium of a mixed game played by the same fixed coalitions and the remaining individuals.
Tuesday, June 7, 2011, 2pm, room A305
Title: Approximation Hardness of Combinatorial Optimization Problems
Abstract: In recent years, tight bounds on the efficient approximability of several optimization problems have been achieved using the theory of Probabilistically Checkable Proofs (PCPs). However, the current state of the PCP technique hardly allows one to obtain similar results for restricted instances of combinatorial optimization problems (e.g., in bounded-degree graphs or in intersection graphs of some geometric objects). In this talk we present some techniques which were used to obtain the currently best inapproximability results for the Maximum Independent Set problem and other graph optimization problems in intersection graphs of 3-dimensional boxes.
Monday, June 27, 2011, 2pm, room A707
Title: Combinatorial Approaches to the Peg Solitaire game
Abstract: The classical Peg Solitaire was already popular by the time of Louis XIV and was described by Leibniz in 1710.
An authoritative account with an annotated bibliography can be found in the comprehensive book of Beasley 1985.
The book mentions an engraving of Berey, dated 1697, of a lady with a Solitaire board.
Apparently the first published theoretical study of the game was carried out in 1841 by Suremain de Missery.
The modern mathematical study of the game dates to the 1960s, when the solitaire cone was first described by
Boardman and Conway. We present old and more recent results, most of which can be found in the seminal book
of Berlekamp, Conway and Guy 1982, and highlight combinatorial and geometric interpretations of these results.
Joint work with David Avis, McGill, and Shmuel Onn, Technion.
Tuesday, September 27, 2011, 2pm, room A711
Martin Charles Golumbic, Caesarea Rothschild Institute and Department of Computer Science,
University of Haifa, Israel
Title: Co-TT Graphs and a Characterization of Split Intersection Co-TT Graphs
Abstract: In this paper, we present a new characterization of complement Threshold Tolerance
graphs (co-TT for short) and find a recognition algorithm for the subclass of
split co-TT graphs running in O(n^2) time. Currently, the best
recognition algorithms for co-TT graphs and for split co-TT graphs run
in O(n^4) time [Hammer1981, Monma1988]. Joint work with Vincent Limouzy, LIMOS-Université Blaise Pascal and Nirit Lefel Weingarten, University of Haifa.
Monday, November 7, 2011, 2pm, room A711
Title: On graph coloring problems with local constraints
Abstract: A coloring of a graph is an assignment of natural numbers to its vertices in
such a way that adjacent vertices receive different colors. This well-known
problem is a basic model for frequency assignment and resource allocation
problems. In order to take into account particular constraints arising in
practical settings, more elaborate models of vertex coloring have been
defined in the literature. One such generalized model is the
list-coloring problem, which considers a prespecified set of available
colors for each vertex. The list-coloring problem is NP-complete even for
split graphs, interval graphs, and bipartite graphs. We consider some
particular cases of the list-coloring problem: the \mu-coloring problem (upper
bounds for the color on each vertex), the precoloring extension problem (a
subset of vertices colored beforehand), and a problem generalizing both of
them, the (\gamma,\mu)-coloring problem (lower and upper bounds for the
color on each vertex). In this talk, we will characterize the complexity of
all those problems on several graph classes. In particular, we will present
some polynomial time algorithms to solve these problems on classes where
list-coloring is NP-complete.
Joint work(s) with: Mariano Cecowski, Guillermo Durán, Yuri Faenza, Javier
Marenco and Gianpaolo Oriolo
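To fix the definitions (a generic exponential-time sketch; the talk is about polynomial-time algorithms on special classes): list-coloring asks for a proper coloring where each vertex takes a color from its own list, and \mu-coloring is the special case with lists {1, ..., \mu(v)}.

```python
def list_coloring(adj, lists, order=None, assign=None):
    """Backtracking search for a proper coloring where the color of v
    is drawn from lists[v]. Exponential in general (the problem is
    NP-complete even on interval graphs); returns a dict or None.
    A mu-coloring instance is obtained via lists[v] = range(1, mu[v]+1).
    """
    if order is None:
        order, assign = list(adj), {}
    if not order:
        return dict(assign)
    v, rest = order[0], order[1:]
    for c in lists[v]:
        if all(assign.get(u) != c for u in adj[v]):
            assign[v] = c
            result = list_coloring(adj, lists, rest, assign)
            if result is not None:
                return result
            del assign[v]  # backtrack
    return None

path = {0: {1}, 1: {0, 2}, 2: {1}}
print(list_coloring(path, {0: [1], 1: [1, 2], 2: [1]}))  # {0: 1, 1: 2, 2: 1}
```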
Monday, November 21, 2011, 2pm, room A707
Title: Lower Bounds on the Complexity of MSO Model-Checking
Abstract: One of the most important algorithmic meta-theorems is the famous result by Courcelle
which states that any graph problem definable in monadic second-order (MSO_2) logic
is decidable in linear time on any class of graphs of bounded tree-width.
In the parlance of parameterized complexity, this means that MSO_2 model-checking is
fixed-parameter tractable with respect to tree-width as the parameter.
In a recent, technically-involved paper, Kreutzer and Tazari
have given a sort of corresponding complexity lower-bound---that
for graph classes that are subgraph-closed, and whose tree-width
is poly-logarithmically unbounded, MSO_2 model-checking is not tractable
unless all problems in the Polynomial-Time Hierarchy can be solved in
sub-exponential time (in particular, the exponential-time hypothesis, ETH, fails).
We improve upon their result by showing that even MSO model-checking with
vertex colours (labels) is not tractable for graph classes which are
subgraph-closed and whose tree-width is poly-logarithmically unbounded
(unless nonuniform ETH fails). We also get rid of the
constructability requirement of Kreutzer and Tazari, and give somewhat simpler
and cleaner arguments to prove the result. In fact, by making a stronger
assumption on our label set, we show that:
MSO model-checking with vertex labels is not tractable for
such a graph class unless every problem in the Polynomial-Time
Hierarchy is in DTIME(2^{o(n)})/SubEXP.
Furthermore, our result has an interesting consequence in the realm of digraph width
measures: strengthening a recent result of Ganian et al., we get that no
subdigraph-monotone measure can be algorithmically useful, unless it is
within a poly-logarithmic factor of the ordinary (undirected) tree-width.
Tuesday, November 22, 2011, 2pm, room
Title: Multistage Energy Planning - Risk Neutral and Risk Averse Approaches
Abstract: The Stochastic Dual Dynamic Programming (SDDP) algorithm is a popular approach to tackling large-scale linear multistage stochastic problems such as those considered in multistage energy planning. In the risk-neutral approach, the objective is to minimize the expected value of the total cost over the planning period, subject to feasibility constraints. In this talk we discuss two possible risk-averse approaches: regular and adaptive. In the adaptive case we discuss approaches based on the average value-at-risk and mean-upper-semideviation risk measures. We show their contributions compared to the risk-neutral method.
This is joint work with Alexander Shapiro, Joari Paulo da Costa and Murilo Pereira Soares.
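For reference (standard definitions under the usual conventions; the talk may use a different normalization, so treat these as assumptions), the two risk measures mentioned above are, for a random loss Z and level alpha in (0,1):

```latex
\mathrm{AV@R}_{\alpha}(Z) \;=\; \inf_{t \in \mathbb{R}}
  \Bigl\{\, t + \tfrac{1}{\alpha}\, \mathbb{E}\bigl[(Z - t)_{+}\bigr] \Bigr\},
\qquad
\rho_{c}(Z) \;=\; \mathbb{E}[Z] + c\, \mathbb{E}\bigl[(Z - \mathbb{E}[Z])_{+}\bigr],
\quad c \in [0,1],
```

where (z)_+ = max(z, 0); the first is the average value-at-risk and the second the (first-order) mean-upper-semideviation.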
Wednesday, November 23, 2011, 2pm, room A711
Title: Polynomial Kernels for Some Graph Modification Problems in H-Topological-Minor-Free Graphs
Abstract: We study polynomial kernels for the following graph modification problems: Interval Vertex Deletion and Chordal Vertex Deletion, in H-topological-minor-free graphs. Using the recent protrusion machinery developed by Bodlaender et al. [Meta Kernelization, Bidimensionality and Kernels], we show that Interval Vertex Deletion has a linear kernel and that Chordal Vertex Deletion has a quadratic kernel for H-topological-minor-free graphs. Note that both interval and chordal graphs are hereditary classes with an infinite set of forbidden induced subgraphs. We are working to improve these results and also trying to generalize them to other graph modification problems on hereditary classes with an infinite forbidden set. Work in progress.
Monday, November 28, 2011, 2pm, room A711
Title: Finding good decompositions for dynamic programming on dense graphs
Abstract: It is well-known that for graphs with high edge density the tree-width is always high while the clique-width can be low. Boolean-width is a new parameter that is never higher than tree-width or clique-width and can in fact be as small as logarithmic in clique-width. Boolean-width is defined using a decomposition tree by evaluating the number of neighborhoods across the resulting cuts of the graph. Several NP-hard problems can be solved efficiently by dynamic programming when given a decomposition of boolean-width k.
In this talk we report on various results related to boolean-width, including a heuristic algorithm that finds a reasonably good decomposition. Experiments with this heuristic show that boolean-width is competitive with tree-width, at least for non-sparse graphs and problems like Independent Set and Dominating Set.
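To make "number of neighborhoods across a cut" concrete (a direct computation on a toy cut, not the heuristic from the talk): for a cut (A, B), one counts the distinct sets N(S) ∩ B over all S ⊆ A; the log base 2 of this count is the boolean dimension of the cut, and boolean-width is the maximum over the cuts of a decomposition tree. The unions can be generated incrementally.

```python
from math import log2

def cut_bool_dim(adj, A, B):
    """Number of distinct neighborhood unions N(S) ∩ B over S ⊆ A.

    adj: dict vertex -> neighbor set. Starts from the empty union and
    repeatedly adds each vertex's neighborhood restricted to B,
    closing under union; this enumerates exactly the unions arising
    from subsets of A without iterating over all 2^|A| subsets.
    """
    unions = {frozenset()}
    for v in A:
        nv = frozenset(adj[v] & B)
        unions |= {u | nv for u in unions}
    return len(unions)

# A 4-cycle 0-1-2-3-0, cut into A = {0, 1} and B = {2, 3}.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k = cut_bool_dim(c4, {0, 1}, {2, 3})
print(k, log2(k))  # 4 distinct unions -> boolean dimension 2 for this cut
```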
Wednesday, November 30, 2011, 2pm, room A707
Title: Graph Minors and Parameterized Algorithms
Abstract: The main mathematical achievement of the Graph Minors Theory, developed by Robertson
and Seymour, was the proof of Wagner's conjecture, now known as the Robertson & Seymour
Theorem, stating that graphs are well quasi ordered under the minor containment relation.
Besides its purely theoretical importance, GMT induced a series of powerful algorithmic
results and techniques that had a deep influence on theoretical computer science. GMT
offered the theoretical base for the understanding and the resolution of some of the most
prominent graph-algorithmic problems in the design of parameterized algorithms. In this
talk we give a brief presentation of the main results and techniques in this direction, and
we comment on their potential and their limitations.
Monday, December 5, 2011, 3:30pm, room A711
Title: (In)Compressibility of NP-hard problems
Abstract: Kernelization is a mathematical framework that allows a rigorous
analysis of the ubiquitous technique of preprocessing (data-reduction).
Roughly speaking, a kernelization algorithm compresses an instance of a
given problem to an equivalent instance (the kernel) whose size is bounded
by a function depending only on a prespecified parameter (typically the
solution size). In this talk we survey some recent results on lower bounds
for kernel sizes, and discuss some future directions for research that
arise from these results.
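As a concrete example of such a preprocessing routine (the classical Buss kernel for Vertex Cover, a textbook illustration rather than anything specific to this talk): vertices of degree greater than k must belong to any size-k cover, and once they are removed, an instance with more than k^2 edges is a definite "no", leaving a kernel with at most k^2 edges.

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover parameterized by k.

    Rule 1: a vertex of degree > k must be in every size-k cover;
    take it, delete its edges, and decrease k. Rule 2: a reduced
    instance with more than k^2 edges has no size-k cover, since each
    remaining vertex covers at most k edges. Returns (kernel_edges, k')
    with at most k'^2 edges, or None for a definite 'no'-instance.
    """
    edges = set(map(frozenset, edges))
    while k >= 0:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        v = high[0]
        edges = {e for e in edges if v not in e}  # v goes in the cover
        k -= 1
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

star_plus = [(0, i) for i in range(1, 5)] + [(1, 2)]
print(buss_kernel(star_plus, k=2))  # vertex 0 (degree 4 > 2) is forced
```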