Optimization by Integer Programming

Nemhauser

Last summer, at the EURO/INFORMS conference in Rome, Professor George Nemhauser gave a glowing tribute to the last 50 years of integer programming. Since 2000, this powerful mathematical theory has been applied by 53% of Franz Edelman prize finalists! In other words, over half of the most celebrated applications of mathematics to real-world problems involve integer programming, which arguably makes it one of the great pinnacles of applied mathematics! Thus, to the question is mathematics even useful?, most answers probably should include integer programming…

I’ve often heard the question, but never have I heard of integer programming! How come?

This is something I propose to fix now!

Some Applications

Before actually talking about integer programming, let me get you excited about it by reviewing three important classes of problems integer programming solves, and a fraction of their countless applications to our everyday world!

To our everyday world? Really?

Yes! Today’s technologies are filled with optimization processes, many of which involve the cleverness of integer programming. Here’s an awesome video that explains that:

Find out more about operations research by checking out the other videos of the YouTube channel LearnAboutOR! Let’s go a bit more into the details…

Set Partitioning Problem

The most important class of problems is probably the class of set partitioning problems. A key application of these is the scheduling problem. This problem consists in defining a plan to carry out a set of tasks. Typically, these tasks will have to be assigned to a certain number of workers. In other words, the tasks must be partitioned among the workers, as in the example below:

Set Partitioning

Finding such a partitioning isn’t much of a problem, but finding the optimal one is. And this is particularly the case when you have additional constraints! For instance, some workers may be more qualified than others for certain tasks, some tasks might have to be done one after the other (for instance, the packaging must be done before the delivery of the goods) and some may be incompatible with one another (one can’t be at the warehouse and delivering goods at the other end of the city at the same time)… More importantly, there might be so many workers (like 5, 6 or 152!) that it would no longer be possible for you to keep track of what each and every one of them is doing. At this point, using integer programming will provide you with a much more efficient management of employees and tasks!

Set partitioning is also typically applied to planning the shifts of nurses in hospitals or of professors in universities! It’s actually involved in my own research!
So, set partitioning is the ideal modeling tool for managers!

Yes! But not only for managers! Set partitioning has plenty of other applications. In particular, it is widely used by airline companies to design the routes of their planes. Each destination may then be considered as a task, and the airline companies will then have to partition destinations among their planes. Other examples include vehicle routing for pick-up and/or delivery, data mining used for instance in marketing to characterize classes of customers, scheduling baseball or soccer games for the season, designing wedding dinner tables… More amusingly, my professor used to apply set partitioning to sort out who voted what, given the result of a secret company board vote!

Technically, the set partitioning problem is written as $\min c^Tx$ subject to $x \in \{0,1\}^n$ and $Ax=e$. The variable $x_i$ is a binary variable which equals 1 if the subset $i$ of tasks is selected. $A$ is a matrix made only of $0$s and $1$s, and $e$ is the column vector of all $1$s. Additional constraints may be added. Overall, it’s such a rich problem that it deserves its own Science4All article. If you can, please write about it!
Also, I should mention three important variants. First is the set covering problem, where the constraint $Ax=e$ is replaced by $Ax\geq e$. Second is the assignment problem, on which you can find out more by reading my article on the marriage problem. Third is the knapsack problem, where the partitioning constraints are replaced by a single capacity constraint $a^Tx \leq b$. This last one corresponds to the smart robber example we’ll study in this article.
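For tiny instances, the structure $Ax = e$ can be made concrete by sheer enumeration of all binary vectors $x$. Here’s a minimal Python sketch with made-up columns and costs (real solvers rely on the techniques discussed later in this article, not brute force):

```python
from itertools import product

# Toy set partitioning: 4 tasks, 5 candidate subsets (the columns of A).
# All data below is made up for illustration.
tasks = 4
# Each column lists which tasks the subset covers (a 0/1 vector), with a cost.
columns = [((1, 1, 0, 0), 3),   # covers tasks 0 and 1
           ((0, 0, 1, 1), 4),   # covers tasks 2 and 3
           ((1, 0, 0, 1), 6),
           ((0, 1, 1, 0), 2),
           ((1, 1, 1, 1), 8)]   # covers everything

best_x, best_cost = None, float("inf")
for x in product((0, 1), repeat=len(columns)):      # all binary vectors x
    # The constraint Ax = e: every task must be covered exactly once.
    coverage = [sum(x[i] * columns[i][0][t] for i in range(len(columns)))
                for t in range(tasks)]
    if all(c == 1 for c in coverage):
        cost = sum(x[i] * columns[i][1] for i in range(len(columns)))
        if cost < best_cost:
            best_x, best_cost = x, cost

print(best_x, best_cost)  # the cheapest exact partition of the tasks
```

Here the first two columns together partition the four tasks at minimal cost. Of course, enumeration has cost $2^n$, which is exactly why the methods below exist.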

Facility Location Problem

Location Problem

Another important class of problems that integer programming efficiently tackles is the facility location problem. Imagine you were a manager at Nike and wanted to open several new stores in Paris. What you can do is gather a few candidate locations for your stores. But then, you need to choose only a subset of them to actually be built! The chosen locations must fit the population of the city, but also, and less obviously, they must be well selected with regard to one another! You don’t want them to all be grouped in a single location!

On the right is a map of Paris with the already existing Nike stores in red. Green stars represent the possible locations for new stores one might come up with. The facility location problem consists in determining at which few of these green stars to build your new stores!
Do we really need mathematics to solve this problem?

You can do much better with mathematics than with your brain alone! That’s because of the combinatorial interdependence of the locations of your stores, which is very hard to analyze! For instance, in our very simple example, there are already over a thousand ways to choose the locations of three new stores!

OK… And I guess that choosing the location of warehouses or hospitals can be done similarly, right?

Yes! As you can imagine, any problem of choosing the location for a new facility can be solved similarly. For instance, Google and Facebook recently had to solve facility location problems, as they were searching for where to build their latest data centers in Europe. But the facility location problem isn’t restricted to geographical location! A friend of mine is finishing his PhD, in which he studied where to locate the virtual machines of applications running on the computing cloud! More generally, in telecommunications, the facility location problem often comes along with a network design problem, which requires the full potential of integer programming to be solved efficiently!

The facility location problem usually includes binary variables $x_i$ which equal 1 if location $i$ is chosen, and other variables $y_i$ which must be nil if location $i$ is not chosen. We then get the problem $\min c^Tx + d^Ty$, with constraints $y_i \leq Mx_i$ for $M$ large enough not to be restrictive when $x_i = 1$.
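To make the trade-off between opening costs and service costs concrete, here is a brute-force sketch with made-up numbers (a real formulation would use the binary variables and big-$M$ constraints above, handed to a solver):

```python
from itertools import combinations

# Tiny facility location instance; all costs are made up for illustration.
open_cost = [4, 3, 5]                 # cost of opening each candidate store
serve_cost = [[1, 6, 7],              # serve_cost[customer][site]
              [5, 2, 6],
              [8, 7, 1],
              [3, 4, 9]]

best_sites, best_total = None, float("inf")
for k in range(1, len(open_cost) + 1):
    for sites in combinations(range(len(open_cost)), k):
        # Total = opening costs + each customer served by its cheapest open site.
        total = sum(open_cost[s] for s in sites)
        total += sum(min(row[s] for s in sites) for row in serve_cost)
        if total < best_total:
            best_sites, best_total = sites, total

print(best_sites, best_total)  # cheapest subset of sites to open
```

Notice the interdependence at work: opening a store changes which store best serves every customer, which is exactly why the choice is combinatorial.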

Ordering Problem

The traveling salesman problem is an iconic problem of applied mathematics which belongs to the family of ordering problems. Here’s a short presentation of this problem I gave in my More Hiking in Modern Math World talk:

Basically, the traveling salesman problem consists in finding the cheapest itinerary that goes through a set of customers. Now, this can be equivalently restated as the problem of finding the right order in which to visit the customers. This problem is particularly important for theoretical reasons, as it is known to be NP-hard (its decision version is NP-complete).

Is it that hard a problem?

Yes! And that’s because the number of orderings is just spectacularly huge! For only 60 customers, there are more orderings than there are particles in our whole universe! The awesomeness of integer programming is that it efficiently addresses problems as complicated as the traveling salesman problem.

For better insight into truly big numbers, check out my talk A Trek through 20th Century Mathematics! Also, an important variant of the traveling salesman problem with a huge number of applications is the vehicle routing problem.
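The factorial explosion is easy to witness for yourself: brute force works for a handful of customers, but the count of orderings quickly dwarfs any physical quantity. The distances below are made up for illustration:

```python
from itertools import permutations
from math import factorial

# Brute-force TSP on 5 cities (made-up symmetric distance matrix).
dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]

n = len(dist)
best_tour, best_len = None, float("inf")
for perm in permutations(range(1, n)):          # fix city 0 as the start
    tour = (0,) + perm + (0,)
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    if length < best_len:
        best_tour, best_len = tour, length

print(best_tour, best_len)

# The number of tours explodes factorially: 60! is about 8.3 * 10^81,
# more than the estimated number of particles in the observable universe.
print(factorial(60) > 10**80)
```

With 5 cities there are only 24 tours to check; with 60 cities, no computer that will ever be built could enumerate them all.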
Are there other applications of ordering problems?

Sure! For instance, they come up in supply chain management. More surprisingly, they are also applied to DNA sequencing! And I’m sure that we have only made the first steps into the endless range of applications ordering problems can have! As more and more complex systems are modeled with more and more powerful mathematical frameworks, I strongly believe that the most astonishing applications of integer programming have yet to be discovered!

One key class I haven’t mentioned is the class of assignment problems. Find out more with my article on marriage problems.

Linear Relaxation

From now on, my article on linear programming is a prerequisite. My articles on duality and simplex methods aren’t prerequisites, but reading them is highly recommended for greater insight into integer programming.

Bill and Gold

To present the techniques of integer programming, let’s take an example. I’ll use a slight variant of the smart robber problem I used to introduce linear programming, duality and the simplex methods. In this problem, a robber can either steal gold or bills, but he is limited by the volume of his bag and by the weight he can carry.

So what makes it an integer program?

We’ll now assume that each piece of gold comes as a heavy gold bar. Similarly, bills are given in huge stacks.

What does that change?

It means that we can no longer have any amount of gold we want, nor any amount of bills we want. We can only work with an integer number of gold bars, and with an integer number of bill stacks. As a result, only the integer points of the feasible region are actually feasible. This is what’s illustrated below:

Feasible Set

Below is the formulation corresponding to the figure above, where constraints are color-matched. Formally, the integer program is obtained simply by adding the purple integrality constraint:

Smart Robber Problem
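Since the figure’s exact coefficients aren’t reproduced here, here is a hedged Python sketch of such an integer program, with made-up values, volumes and weights, solved by brute-force enumeration of the integer points (which is fine here only because there are so few of them):

```python
# Smart robber sketch with illustrative, made-up coefficients:
# maximize the value of stolen gold bars and bill stacks, subject to
# a volume limit and a weight limit, with integer quantities only.
value = {"gold": 10, "bills": 4}     # value per unit
volume = {"gold": 1, "bills": 3}     # bag volume used per unit
weight = {"gold": 5, "bills": 1}     # weight per unit
max_volume, max_weight = 10, 20

best = (0, 0, 0)  # (objective, gold bars, bill stacks)
for gold in range(max_volume // volume["gold"] + 1):
    for bills in range(max_volume // volume["bills"] + 1):
        if (gold * volume["gold"] + bills * volume["bills"] <= max_volume and
                gold * weight["gold"] + bills * weight["bills"] <= max_weight):
            obj = gold * value["gold"] + bills * value["bills"]
            best = max(best, (obj, gold, bills))

print(best)  # (objective, gold bars, bill stacks)
```

With these toy numbers, the robber’s best move is to take 4 gold bars and no bill stacks.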

Doesn’t that make the problem easier, now that there is a finite number of feasible solutions?

Surprisingly, it’s quite the opposite! The integrality constraint breaks the beautiful linear structure which made the problem efficiently solvable. As it turns out, the strongest asset we have to face integer programming is precisely its similarity with linear programming! In fact, an integer program worthy of that name is an optimization problem whose variables are integers and which has a linear structure. The linear program obtained by removing the integrality constraint is then called the linear relaxation.

In complexity theory terms, linear programming is P-complete, while integer programming is NP-complete, which means that the latter is believed to be much harder than the former (unless P = NP).
But what’s the point of the linear relaxation?

One point is to provide an upper bound on the optimal value of the integer program. Indeed, because every feasible solution of the integer program is also feasible for the linear relaxation, the optimal value of the linear relaxation is always at least as good as the optimal value of the integer program itself. The difference between these two values is known as the integrality gap, and it sort of represents how much harder the integer program is, compared to its linear relaxation.

In our case, the linear relaxation yields an upper bound because the smart robber problem is a maximization problem. However, operations researchers more often write minimization problems, in which case the linear relaxation provides a lower bound. In both cases, the optimal value of the linear relaxation is “better”.
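Here is a sketch of both values on a toy maximization problem with made-up coefficients: the linear relaxation is solved by enumerating the vertices of the 2D polygon (every vertex is the intersection of two constraints), and the integer optimum by brute force. The difference between the two is the integrality gap:

```python
from itertools import combinations

# Toy relaxation (illustrative numbers):
# maximize 10*g + 4*b  subject to  g + 3b <= 10,  5g + b <= 20,  g, b >= 0.
# Constraints stored as (a1, a2, r) meaning a1*g + a2*b <= r.
cons = [(1, 3, 10), (5, 1, 20), (-1, 0, 0), (0, -1, 0)]

def lp_max(c, cons):
    """2D LP by vertex enumeration: intersect every constraint pair, keep the
    best feasible intersection. Only practical in dimension 2!"""
    best, best_pt = None, None
    for (a1, a2, r1), (b1, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel constraints: no vertex
        x = (r1 * b2 - r2 * a2) / det     # Cramer's rule
        y = (a1 * r2 - b1 * r1) / det
        if all(p * x + q * y <= r + 1e-9 for p, q, r in cons):
            val = c[0] * x + c[1] * y
            if best is None or val > best:
                best, best_pt = val, (x, y)
    return best, best_pt

upper, frac_opt = lp_max((10, 4), cons)   # upper bound from the relaxation

# Integer optimum by brute force: any such solution gives a lower bound.
lower = max(10 * g + 4 * b
            for g in range(11) for b in range(11)
            if g + 3 * b <= 10 and 5 * g + b <= 20)

print(round(upper, 3), frac_opt, lower)
```

With these numbers the relaxation peaks at a fractional vertex with value $310/7 \approx 44.3$, while the best integer point only reaches 40: that difference is the integrality gap.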
So, we can have an upper bound… But this doesn’t say that much about the optimal value of the integer program, does it?

Lower and Upper Bounds

No, indeed. However, any feasible solution of the integer program yields a value of the objective function which is a lower bound. And what’s usually done in integer programming is to increase the lower bound and decrease the upper bound, so as to pin down the actual optimal value of the integer program. In particular, if these two bounds end up meeting, then their common value is the optimal value of the integer program!

But how can we increase the lower bound or decrease the upper bound?

There are two major techniques to do so, known as cutting planes and branch and bound. In practice, a mix of the two is used to achieve the best performance. Let’s discuss each in detail!

Cutting Planes

The method I find the most mathematically beautiful is the cutting plane method.

What’s it about?

Before getting to the cutting plane method itself, notice that, by adding the right linear constraints, any integer program can be made equivalent to its linear relaxation. Such a formulation is called an ideal formulation. Below, the dark constraints are replaced by the light colored ones, making the formulation ideal:

Cutting Planes

The idea of the cutting plane method is to search for linear constraints which get us closer to the ideal formulation. Any such linear constraint is then called a cutting plane, as it cuts the feasible polyhedron along a hyperplane (although, in our 2-dimensional case, they are simply cutting lines…)!

Can we always add enough cutting planes to reach an ideal formulation?

Convex Hull

Yes! Here’s why. Since the integer program has a linear structure, its feasible points are exactly the integer points of the polyhedron $P_{initial}$ of the initial linear relaxation. Now, let’s consider the convex hull of these feasible integer points. We’ll show that this convex hull defines the ideal formulation of the integer program.

Wait. What’s the convex hull?

The convex hull is the set of all points which lie in-between the feasible integer points. It forms a convex polyhedron $P_{final}$ inside the initial convex polyhedron. Since it is a convex polyhedron, it corresponds to a linear program.

Formally, the convex hull of a set $S$ can be defined equivalently as the set of convex combinations of points of $S$, or as the smallest convex set containing $S$. By definition, it is obviously convex. Plus, if $S$ is finite, then its convex hull is a convex polytope. There’s so much more to say about convexity. If you can, please write about it!
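In two dimensions, this convex hull can be computed directly. Below is a sketch using Andrew’s monotone chain algorithm on the integer points of a made-up polyhedron (illustrative coefficients, not the article’s figure); the returned vertices describe the ideal formulation $P_{final}$:

```python
# Convex hull of the feasible integer points (Andrew's monotone chain).
# Made-up constraints for illustration: g + 3b <= 10, 5g + b <= 20, g, b >= 0.
pts = sorted((g, b) for g in range(11) for b in range(11)
             if g + 3 * b <= 10 and 5 * g + b <= 20)

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (positive for a left turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Counter-clockwise list of hull vertices of a sorted point list."""
    lower, upper = [], []
    for p in points:                      # build the lower boundary
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):            # build the upper boundary
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]        # drop duplicated endpoints

print(hull(pts))  # the extreme points of P_final, all of them integer
```

Every vertex of the hull is integer, so a simplex method applied to this smaller polyhedron must return an integer optimum, which is exactly the ideal formulation argument below.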
OK… But is the linear program corresponding to the convex hull equivalent to the initial integer program?

Yes! Here’s why. First, since the convex polyhedron $P_{final}$ is included in the initial convex polyhedron $P_{initial}$, all integer points of $P_{final}$ are obviously integer points of $P_{initial}$ too. What’s more, by construction, $P_{final}$ contains all feasible points of the integer program. Thus, the integer points of $P_{initial}$ and $P_{final}$ coincide! Second, still by construction, all extreme points of $P_{final}$ are integer points. Thus, any optimal solution returned by a simplex method will be a feasible integer solution! That’s why this linear relaxation is equivalent to the integer program!

Wow! That’s awesome! So any integer program can be solved by a linear program, right?

Yes! At least, theoretically…

Wait… there’s a catch? Is it hard to construct the equivalent linear program?

Actually, no. The Gomory-Chvátal cutting plane method yields a systematic way to generate valid cutting planes. In practice, for many problems, there’s even a direct way to compute all the required cutting planes. Thus, in general, the problem isn’t our ability to construct the equivalent linear program…
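Here is the rounding idea at the heart of these Chvátal-Gomory cuts, on a made-up two-variable program: a nonnegative multiple of a valid constraint remains valid, and since the variables are nonnegative integers, both the coefficients and the right-hand side can be rounded down. The sketch below derives such a cut and checks it by brute force:

```python
from math import floor

# Toy program (illustrative coefficients):
#   g + 3b <= 10  and  5g + b <= 20,  with  g, b >= 0 integer.
# Multiply the first constraint by 1/3:  (1/3)g + b <= 10/3.
# For integer g, b >= 0, rounding the coefficients down keeps it valid:
#   floor(1/3)*g + floor(1)*b <= 10/3,  i.e.  b <= 10/3.
# The left side is now an integer, so the right side rounds down too:
cut = (0, 1, floor(10 / 3))   # the cutting plane  b <= 3

# Check that the cut is valid for every feasible integer point...
for g in range(11):
    for b in range(11):
        if g + 3 * b <= 10 and 5 * g + b <= 20:
            assert cut[0] * g + cut[1] * b <= cut[2]

# ...yet cuts off the fractional vertex (0, 10/3) of the relaxation.
print(cut[0] * 0 + cut[1] * (10 / 3) > cut[2])  # the vertex violates the cut
```

The cut $b \leq 3$ removes a fractional corner of the relaxed polyhedron without excluding a single feasible integer point, which is exactly what a cutting plane must do.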

So what’s the catch?

The problem is usually that the number of cutting planes required to construct the equivalent linear program is exponential! This means that the list of constraints may not even fit in the memory of the computer; and even if it did, the simplex method would then be extremely slow…

Arg… So, are cutting planes useless?

What’s interesting is that listing all the required cutting planes isn’t actually necessary to solve the integer program! In practice, we first solve the linear relaxation. If the optimum found is integer, then it’s optimal for the integer program too! Otherwise, it’s fractional. We then add some linear constraints which cut this fractional solution off the feasible polyhedron. This yields a better linear relaxation to work on! We can then repeat this cutting plane method on the improved linear relaxation until an integer optimum is found! This is the case illustrated below:

Cutting Plane Method

By cutting off the fractional optimum, we make it infeasible. But since our current basic solution was this fractional optimum, we can no longer apply the primal simplex method, which requires starting from a feasible basic solution. Fortunately, applying the dual simplex method quickly finds the optimum of the improved linear relaxation!

A key remark to be made here is that cutting planes are often only locally interesting, so a cutting plane may later become useless. Thus, in general, proper management of these cutting planes is key to solving the integer program efficiently.

More generally, the cutting plane method can also be applied to any linear program which has too many constraints. This is particularly relevant in stochastic programming, which exploits the Benders decomposition and yields the so-called L-shaped method. Amusingly, this L-shaped method can be shown to be equivalent to column generation in the dual problem.

However, in most cases, the best strategy is still to couple cutting plane methods with the powerful branch and bound method!

Branch and Bound

Branch and bound is sort of a divide and conquer method.

What do you mean?

It consists in dividing the integer program into two simpler ones. For instance, in our case, we can work separately on the case where the number of gold bars stolen is at most 1, and the case where it’s at least 2! Each case simply corresponds to adding a linear constraint, either $n_{gold} \leq 1$ or $n_{gold} \geq 2$, and defines a new integer program corresponding to one of the polyhedra $P_1$ and $P_2$ depicted below:

Branch and Bound

Technically, we say that we are branching into two simpler integer programs.

A relevant way of branching consists in making the fractional optimal solution infeasible in both $P_1$ and $P_2$. Typically, if the optimum had $n_{gold} = 1.6$, then we would branch with the added constraints $n_{gold} \leq 1$ and $n_{gold} \geq 2$.
Does this help?

Yes! The key point of branch and bound is that it deletes the middle grey band from the optimization problem, which makes each of the two resulting integer programs much easier to solve than the initial one. The optimal value of the initial integer program then equals the better of the optimal values of the two branched integer programs.

But each of the branched integer programs is still not equivalent to its linear relaxation…

You’re right! More often than not, one branching is not enough. Each branched integer program then gets branched itself, until the optima of the branched linear relaxations are integer solutions… or until we find out that a branched integer program has no feasible point! This is represented in the figure below, which depicts what we call the branching tree:

Branching Tree

Each linear relaxation can either have an integer optimum, a fractional optimum, or no feasible solution. These cases are depicted by green, blue and red crosses.

What do we do in each of these cases?

In the green case, when the optimal solution found is integer, we have found a feasible integer solution of the initial program. Thus, its value is a lower bound on the optimal value of the initial integer program! Plus, since the green solution is the optimum of its branch, there’s no need for any further exploration of that branch.

What about the other cases?

Let me now address the red case which is simpler than the blue one. In the red case, there’s no feasible solution to the linear relaxation. Thus, obviously, there’s no solution to the integer program either. Thus, the branch explored is completely useless and we can just drop it.

What about the blue case now?

That’s the bad-news case, as it means that the corresponding integer program is not equivalent to its linear relaxation! The only chance of good news is if the optimal value of the linear relaxation, which is an upper bound for the corresponding integer program, is lower than the best lower bound we have found with green solutions. Then, we can deduce that there’s no need to explore the subbranches, as they won’t provide a feasible integer solution better than the best one we have found so far! This remark corresponds to bounding, and it greatly speeds up the branch and bound method. It’s what happened to the branch on the far right.

But what if the linear relaxation has a value better than the lower bound?

Then its branches still might have a better integer solution than the best one found so far… So we need to keep branching!

In practice, this is where we can, instead of branching, search for cutting planes to bring the branch closer to an ideal formulation. Another essential technique used in practice is constraint programming. It basically consists in combining constraints to deduce stronger ones which cut the linear relaxation efficiently. Constraint programming is a vast and essential area of applied mathematics. If you can, you should write about it!

Let me finish this section with one last major remark. In branch and bound methods, there is freedom in choosing which branch to focus on next. There are two extreme possibilities. On one hand, we can explore the first branches we generate in depth. The major advantage is that this quickly yields a feasible integer solution, which can be returned even if the algorithm is stopped halfway. The drawback, though, is the quality of the lower and upper bounds: searching in depth means focusing on a particular kind of solution, ruling out most other possibilities for a while. Conversely, if you want to find the actual optimum of the integer program as soon as possible, you’ll probably be better off exploring in breadth, that is, level by level.
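To tie the pieces together, here is a minimal depth-first branch and bound in Python for a 0/1 knapsack with made-up data. The upper bound at each node is its linear relaxation, which for a single capacity constraint can be solved greedily by value-to-weight ratio (this is a simplification for illustration, not what industrial solvers actually implement):

```python
# Depth-first branch and bound for a 0/1 knapsack (illustrative data).
values = [10, 13, 7, 8, 4]
weights = [5, 8, 4, 6, 3]
capacity = 12

# Consider items by decreasing value/weight ratio.
order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
               reverse=True)

def lp_bound(k, value, room):
    """Upper bound at a node: the LP relaxation over the remaining items,
    solved greedily (the last item may be taken fractionally)."""
    for i in order[k:]:
        take = min(1, room / weights[i])
        value += take * values[i]
        room -= take * weights[i]
        if room <= 0:
            break
    return value

best = 0  # incumbent: best integer value found so far (the lower bound)

def branch(k, value, room):
    global best
    if room < 0:
        return                      # infeasible branch: prune (the "red" case)
    if k == len(order):
        best = max(best, value)     # integer leaf: update the lower bound
        return
    if lp_bound(k, value, room) <= best:
        return                      # bound can't beat the incumbent: prune
    i = order[k]
    branch(k + 1, value + values[i], room - weights[i])  # branch: take item i
    branch(k + 1, value, room)                           # branch: skip item i

branch(0, 0, capacity)
print(best)  # optimal knapsack value
```

The three pruning rules mirror the three cases above: infeasible branches are dropped, integer leaves update the lower bound, and branches whose relaxation bound cannot beat the incumbent are never explored.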

Let’s Conclude

To conclude, I want to stress the extraordinary accomplishments there have been in integer programming over the last decades. Within 20 years, the speed of integer programming solvers has increased by an astonishing factor of 250,000! Can you believe that? And this doesn’t even include the increasing computational power of the computers! In other words, a problem which a 20-year-old computer needed about 3 days to solve can now be solved with this same old computer in just 1 second! In comparison, the hardware speedup predicted by Moore’s law over the same period is only about a factor of 10,000! These figures should be shown to anyone who doesn’t value research enough!

Wow! That’s spectacular indeed!

The most astonishing part is that this improvement in our algorithmic techniques doesn’t seem to be slowing down! It seems impossible right now to even imagine what will be possible within the next decades! Too often, people forget that we live at an unbelievable time where new technologies improve at an exponential rate. As I try to make sense of that, many political debates seem outdated to me… or about to be!

So what are the upcoming improvements in integer programming?

Hard to say… A lot of mathematicians are currently working on stochastic programming and column generation, which are clever techniques based on the idea that solving an integer program doesn’t require it to be fully stated. By working on increasingly faithful approximations of the integer program, the actual optimum can then be found much quicker! Yet, the problems these techniques address still suffer from the curse of dimensionality. That’s why stochastic programming, when applied to complex problems, tends to be replaced by robust linear programming, which restricts itself to searching for the best solution under worst-case scenarios. Finally, two hot topics in integer programming, as in any field of applied mathematics, are parallelization and machine learning. It’s particularly hard to even guess what will come out of these two topics…

These are all amazing topics which deserve their own Science4All articles. If you can, please write about them!

More on Science4All

Duality in Linear Programming
By Lê Nguyên Hoang | Updated:2016-01 | Prerequisites: Optimization by Linear Programming, Linear Algebra and Higher Dimensions | Views: 22894
Duality in linear programming yields plenty of amazing results that help understand and improve resolution algorithms. This article shows the construction of the dual and its interpretation, as well as major results. In particular, the matching of primal and dual bases will be dealt with, before presenting the issue of degeneracy and its dual interpretation.

Primal and Dual Simplex Methods
By Lê Nguyên Hoang | Updated:2016-02 | Prerequisites: Optimization by Linear Programming, Duality in Linear Programming | Views: 27205
The simplex method is one of the major algorithms of the 20th century, as it enables the resolution of linear problems with millions of variables. An intuitive approach is given. But that's not all. We present an important variant called the dual simplex. Finally, we'll explain its main weakness, that is, what happens when facing degeneracy.

Marriage Assignment Problem and Variants
By Lê Nguyên Hoang | Updated:2016-02 | Views: 7327
Marriage problems consist in matching boys and girls. They are a very interesting class of problems that include assignment problems, which have a very large range of applications. Additional results have been found for variants which include the introduction of boys and girls' preferences, and cases where people cannot be categorized into two groups.

P versus NP: A Crucial Open Problem
By Lê Nguyên Hoang | Updated:2016-01 | Views: 7408
P=NP is probably today's most crucial open problem. Not only is it a very theoretical question in computer science and mathematics, but it also has major implications to real world. Its resolution could revolutionize the world. This article gives the definition and major results of P=NP.

Column Generation and Dantzig-Wolfe Decomposition
By Lê Nguyên Hoang | Updated:2016-02 | Prerequisites: Linear Algebra and Higher Dimensions, Optimization by Linear Programming | Views: 14365
Column generation and the Dantzig-Wolfe decomposition are powerful tricks which have revolutionized optimization applied to industrial problems, and generated millions and millions of dollars. My PhD supervisor took great advantage of these tricks and founded companies with them. This article explains the tricks.

Santa Routing and Heuristics in Operations Research
By Lê Nguyên Hoang | Updated:2016-01 | Views: 8423
Designing routes to visit customers has become one of applied mathematicians' favorite optimization problems, as companies offer millions to solve them! This article discusses the clever techniques they have come up with, and uses them to help Santa deliver toys to kids all over the world!
