## 2.3 Classification of Optimization Problems and Types of Algorithms

The computational difficulty of an optimization problem depends on the properties of its domain set, its constraints, and its objective function.

### 2.3.1 Classification

Problems without constraints are said to be unconstrained. For example, least-squares minimization in statistics can be formulated as an unconstrained problem, and so can $\begin{array}{rl} \min & x^2 - 3x \\ \mbox{s.t.} & x \in \mathbb{R}. \end{array}$
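As an illustration, unconstrained problems of this kind can be attacked by gradient descent. The sketch below applies it to the example above; the step size and iteration count are arbitrary illustrative choices.

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    # Unconstrained minimization: repeatedly step against the gradient.
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

# f(x) = x^2 - 3x has gradient 2x - 3 and unique minimizer x = 1.5.
x_star = gradient_descent(lambda x: 2 * x - 3, x0=0.0)
```

Because there are no constraints, each step is free to move anywhere in $\mathbb{R}$; constrained methods would additionally have to project or restrict the step.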

Problems with linear constraints $$g_i$$ (i.e., linear inequalities or equalities) and a linear objective function $$f$$ form an important class of problems known as linear programming.

Linear programming problems are by far the easiest to solve, in the sense that efficient algorithms exist both in theory and in practice. Linear programming is also the backbone for solving more complex models [20].

Convex problems are problems with a convex domain set $$\mathcal{D}$$, which is to say a set such that $t\mathbf{x}_1+(1-t)\mathbf{x}_2\in \mathcal{D}$ for all $$\mathbf{x}_1,\mathbf{x}_2\in \mathcal{D}$$ and all $$t\in [0,1]$$, together with convex constraints $$g_i$$ and a convex objective function $$f$$, which is to say, $h(t\mathbf{x}_1+(1-t)\mathbf{x}_2)\leq th(\mathbf{x}_1)+(1-t)h(\mathbf{x}_2)$ for all $$\mathbf{x}_1,\mathbf{x}_2\in \mathcal{D}$$, all $$t\in [0,1]$$, and each $$h\in \{f,g_i\}$$.
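The convexity inequality above can be spot-checked numerically. The sketch below samples it on a grid for a one-dimensional function; the function name, grid size, and tolerance are arbitrary illustrative choices (a passing check is evidence, not a proof, of convexity).

```python
def convex_on_grid(h, lo, hi, n=50):
    """Spot-check the convexity inequality
    h(t*x1 + (1-t)*x2) <= t*h(x1) + (1-t)*h(x2)
    for grid points x1, x2 in [lo, hi] and t in {0, 0.1, ..., 1}."""
    pts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    ts = [j / 10 for j in range(11)]
    for x1 in pts:
        for x2 in pts:
            for t in ts:
                lhs = h(t * x1 + (1 - t) * x2)
                rhs = t * h(x1) + (1 - t) * h(x2)
                if lhs > rhs + 1e-9:   # inequality violated: not convex
                    return False
    return True
```

For example, `convex_on_grid(lambda x: x * x, -2, 2)` holds, while the concave function $-x^2$ violates the inequality at almost any pair of distinct points.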

Convex optimization problems have the property that every local optimum is also a global optimum. This property permits the development of effective algorithms that also work well in practice. Linear programming is a special case of convex optimization.
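Since a bounded linear program attains its optimum at a vertex of the feasible polyhedron, tiny two-dimensional instances can even be solved by enumerating candidate vertices. This is a sketch for intuition only (the function name and instance are illustrative; real solvers use the simplex or interior-point methods, not enumeration).

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c.x over {x : A x <= b} in the plane by enumerating
    intersection points of pairs of constraint boundaries (the candidate
    vertices) and keeping the best feasible one."""
    best_val, best_pt = None, None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:          # parallel boundaries: no vertex
            continue
        # Solve the 2x2 system a1.x = b1, a2.x = b2 by Cramer's rule.
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best_val is None or val > best_val:
                best_val, best_pt = val, (x, y)
    return best_val, best_pt

# max 3x + 2y  s.t.  x + y <= 4, x <= 2, x >= 0, y >= 0.
A = [[1, 1], [1, 0], [-1, 0], [0, -1]]
b = [4, 2, 0, 0]
opt_val, opt_pt = solve_lp_2d([3, 2], A, b)
```

On this instance the optimum is attained at the vertex $(2, 2)$ with objective value $10$.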

Nonconvex problems (such as problems involving integer variables and/or nonlinear constraints that are not convex) are the hardest to solve; in general, they are NP-hard. Such problems often arise in scheduling and engineering applications. In the rest of the report, we will focus primarily on linear programming and on nonconvex problems whose constraints $$g_i$$ and objective function $$f$$ are linear but whose domain set is $$\mathcal{D}\subseteq \mathbb{R}^k \times \mathbb{Z}_+^{n-k}$$. These problems cover a large number of applications in operations research, which are often discrete in nature. We will not discuss optimization problems from statistical learning and engineering applications that are modeled as nonconvex continuous models, since they require a different set of techniques and methods; more information is available in [21].
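The discrete nature of such mixed-integer domains shows up already on toy instances: the continuous relaxation and the integer problem can have different optima. A brute-force sketch (the function name and instance are illustrative; practical solvers use branch-and-bound, not enumeration):

```python
def integer_optimum(c, A, b, bound):
    """Maximize c.x over nonnegative integer pairs (x, y) with A x <= b,
    by brute-force enumeration up to `bound` in each coordinate."""
    best_val, best_pt = None, None
    for x in range(bound + 1):
        for y in range(bound + 1):
            if all(a[0] * x + a[1] * y <= bi for a, bi in zip(A, b)):
                val = c[0] * x + c[1] * y
                if best_val is None or val > best_val:
                    best_val, best_pt = val, (x, y)
    return best_val, best_pt

# max x + y  s.t.  2x + 2y <= 5, x, y nonnegative integers.
# The continuous (LP) relaxation attains 2.5; the integer optimum is 2.
opt_val, opt_pt = integer_optimum([1, 1], [[2, 2]], [5], bound=3)
```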

### 2.3.2 Algorithms

We will not go into the details of algorithms for solving the problems discussed, since consultants and analysts are expected to use off-the-shelf solvers for these tasks. It is nevertheless useful for analysts to know the main types of algorithms and methods that exist for solving optimization problems.

Algorithms fall into three families: heuristic, exact, and approximate methods.

• Heuristics are normally quick to execute but provide no guarantee of optimality. For example, the greedy heuristic for the Knapsack Problem is very fast but does not always return an optimal solution. In fact, there is in general no guarantee of the feasibility (“validity”) of the returned solution either.
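As an illustration, here is a sketch of the ratio-based greedy heuristic next to an exhaustive search, on a hypothetical instance where the greedy choice is suboptimal (the function names and data are illustrative):

```python
from itertools import combinations

def greedy_knapsack(items, capacity):
    # Take items in decreasing value/weight order while they still fit.
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    used = value = 0
    for w, v in order:
        if used + w <= capacity:
            used += w
            value += v
    return value

def knapsack_optimum(items, capacity):
    # Exhaustive search over all subsets: exponential, tiny instances only.
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for w, _ in subset) <= capacity:
                best = max(best, sum(v for _, v in subset))
    return best

# (weight, value) pairs: the high-ratio item (6, 30) blocks the better
# pair of lighter items once taken greedily.
items = [(6, 30), (5, 24), (5, 24)]
```

Here the greedy heuristic returns $30$, while the true optimum is $48$.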

Other heuristic methods include ant colony, particle swarm, and evolutionary algorithms, to name a few. There are also heuristics that are stochastic in nature and come with a proof of convergence to an optimal solution; simulated annealing and multiple random starts are examples.

Unfortunately, for such heuristics there is no guarantee on the running time needed to reach an optimum, and no way to recognize that an optimum has been reached.
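A minimal sketch of simulated annealing on a toy one-dimensional function follows; the cooling schedule, step size, and iteration count are arbitrary illustrative choices, and there is no way to tell from the output alone whether the optimum has been reached.

```python
import math
import random

def simulated_annealing(f, x0, steps=2000, t0=1.0, seed=0):
    # Metropolis-style search: always accept improving moves, accept
    # worsening moves with probability exp(-delta / temperature).
    rng = random.Random(seed)
    x = x0
    best_x, best_val = x0, f(x0)
    for k in range(steps):
        temp = t0 / (1 + k)                 # simple cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)   # random neighbor
        delta = f(cand) - f(x)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = cand
            if f(x) < best_val:
                best_x, best_val = x, f(x)
    return best_x, best_val

x_best, f_best = simulated_annealing(lambda x: x * x, 5.0)
```

On this convex toy function the search descends toward $x = 0$; on multimodal functions the occasional acceptance of worsening moves is what lets it escape local optima.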

• Exact methods return a global optimum after a finite run time.

However, most exact methods only guarantee that the constraints are approximately satisfied (the potential violations fall below some pre-specified tolerance). It is therefore possible for the returned solution to be infeasible for the actual problem.

There also exist exact methods that fully control the error. When using such a method, the optimum is usually returned as a box guaranteed to contain an optimal solution, rather than as a single element.

Returning a box rather than a single element is helpful, for example, when the optimum cannot be expressed exactly as a vector of floating-point numbers.

Such exact methods are used mostly in academic research and in areas such as medicine and avionics where the tolerance for errors is practically zero.
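For intuition, box-returning methods can be sketched in one dimension: for a unimodal function, interval shrinking keeps the minimizer bracketed at every step, so the final interval is guaranteed (in exact arithmetic) to contain the optimum. The function and tolerance below are illustrative; industrial-grade methods rely on rigorous interval arithmetic rather than this simple comparison.

```python
def bracket_minimum(f, lo, hi, tol=1e-6):
    """Shrink [lo, hi] while keeping the minimizer of a unimodal f
    inside; the result is a small interval (a box in one dimension)
    containing the optimum."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2   # the minimizer cannot lie in (m2, hi]
        else:
            lo = m1   # the minimizer cannot lie in [lo, m1)
    return lo, hi

# The minimizer of x^2 - 3x on [0, 3] is x = 1.5; the returned box
# brackets it to within the tolerance.
lo, hi = bracket_minimum(lambda x: x * x - 3 * x, 0.0, 3.0)
```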

• Approximate methods eventually zoom in on sub-optimal solutions while providing a guarantee: the returned solution is at most $$\varepsilon$$ away from an optimal solution, say.

In other words, approximate methods also provide a proof of solution quality.
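A classical example of such a quality proof is the factor-2 approximation for the 0/1 Knapsack Problem: taking the better of the ratio-greedy solution and the single most valuable item is guaranteed to be at least half the optimum, assuming every item fits the knapsack on its own. A sketch on a hypothetical instance (the function name and data are illustrative):

```python
def knapsack_half_approx(items, capacity):
    """Return max(greedy value, best single item): provably at least
    half the optimum when every item fits the knapsack on its own."""
    order = sorted(items, key=lambda it: it[1] / it[0], reverse=True)
    used = value = 0
    for w, v in order:
        if used + w <= capacity:
            used += w
            value += v
    best_single = max(v for w, v in items if w <= capacity)
    return max(value, best_single)

# A small (weight, value) instance whose optimum is 48, so the method
# is guaranteed to return at least 24.
items = [(6, 30), (5, 24), (5, 24)]
approx = knapsack_half_approx(items, 10)
```

Unlike a plain heuristic, this method comes with a certificate: whatever instance it is given, its answer is within a known factor of the optimum.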

### References

[20] D. Bertsimas and J. Tsitsiklis, *Introduction to Linear Optimization*, 1st ed. Athena Scientific, 1997.

[21] D. P. Bertsekas, *Nonlinear Programming*. Athena Scientific, 1999.