This site comprises a complete but compact introduction to the major topics in optimization. It is intended as a gentle introduction, assuming no more preparation than high school mathematics. Most concepts are developed from scratch.
Chapter 1: Introduction. An introduction to the process of optimization and an overview of the major topics covered in the course. Last revision: December 2020.
Chapter 2: Introduction to Linear Programming. The basic notions of linear programming and the simplex method. The simplex method is the easiest way to provide a beginner with a solid understanding of linear programming. Last revision: December 2020.
Chapter 7: Linear Programming in Practice. Covers other solution methods, such as the revised simplex method and interior point methods, and advanced techniques used in practice, such as advanced and crash starts, infeasibility analysis, and modelling systems. Last revision: December 2020.
Chapter 10: Network Flow Programming. A surprising range of problems can be solved using minimum cost network flow programming, including shortest route, maximum flow, and minimum cut problems. Variations such as generalized and processing networks are also briefly introduced. Last revision: December 2020.
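As a taste of the shortest route problem mentioned above, here is a minimal sketch using Dijkstra's algorithm (one standard method for networks with nonnegative arc costs). The small network `net` is a made-up example, not one from the chapter:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a network given as
    {node: {neighbor: arc_cost}}, assuming nonnegative costs."""
    dist = {source: 0}
    pq = [(0, source)]                     # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry, skip it
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical 4-node network: source "s", sink "t"
net = {"s": {"a": 2, "b": 5}, "a": {"b": 1, "t": 6}, "b": {"t": 2}}
print(dijkstra(net, "s"))   # shortest s-to-t route is s -> a -> b -> t, cost 5
```

Note that shortest route is just one of the problems that fits the minimum cost network flow framework; specialized network simplex codes handle the general case.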
Chapter 11: PERT for Project Planning and Scheduling. PERT is a network-based aid for project planning and scheduling. Many optimization problems involve some aspect of the timing of activities that may run sequentially or in parallel, or the timing of resource use. PERT diagrams help you to understand and formulate such problems. Last revision: December 2020.
Chapter 13: Binary and Mixed-Integer Programming. These problems are solved by specialized versions of branch and bound. A binary program has only binary variables (restricted to the values 0 or 1). A mixed-integer program looks like a linear program, except that some or all of the variables must take integer (or binary) values, while the rest may be real-valued. Last revision: December 2020.
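To make the branch and bound idea concrete, here is a toy sketch for a tiny binary program (a 0/1 knapsack), using a deliberately crude optimistic bound. This is an illustration of the general scheme only, not the chapter's own bounding method:

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for a binary knapsack: branch on each item
    (take it or not), and prune a branch when an optimistic bound
    (current value plus all remaining item values) cannot beat the
    best solution found so far."""
    n = len(values)
    best = 0

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value                     # new incumbent solution
        if i == n or value + sum(values[i:]) <= best:
            return                           # leaf, or bound prunes this branch
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # x_i = 1
        branch(i + 1, value, room)                               # x_i = 0

    branch(0, 0, capacity)
    return best

# Hypothetical instance: three items, capacity 5.
print(knapsack_bb([10, 7, 5], [4, 3, 2], 5))   # best value 12 (items 2 and 3)
```

Real solvers use much stronger bounds, usually obtained by solving the linear programming relaxation at each node.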
Chapter 14: Heuristics for Discrete Search: Genetic Algorithms and Simulated Annealing. Some problems are just too big for branch and bound, in which case you must abandon the guarantee of finding the optimum solution and instead opt for heuristic methods, which offer no guarantee of optimality but usually do fairly well. Genetic Algorithms and Simulated Annealing are two popular heuristic methods for use on very large problems. Last revision: December 2020.
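A minimal sketch of simulated annealing on a toy one-dimensional objective (the function, cooling schedule, and step size are all made-up choices for illustration):

```python
import math
import random

def simulated_annealing(f, x0, steps=5000, temp0=1.0, seed=0):
    """Minimize f by simulated annealing: propose a random nearby move,
    always accept improvements, and accept uphill moves with a
    probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9     # linear cooling schedule
        x_new = x + rng.uniform(-0.5, 0.5)
        f_new = f(x_new)
        # exponent is <= 0 here, so no overflow in exp()
        if f_new < fx or rng.random() < math.exp((fx - f_new) / temp):
            x, fx = x_new, f_new
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy objective with two local minima, near x = -1 and x = +1
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
x_best, f_best = simulated_annealing(f, x0=3.0)
print(x_best, f_best)
```

The willingness to accept occasional uphill moves early on is what lets the method escape poor local optima that a pure descent method would get stuck in.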
Chapter 15: Dynamic Programming. This optimization technique builds towards a solution by first solving a small part of the whole problem, and then gradually incrementing the size in a series of stages until the whole problem is solved. Efficiency results from combining the local solution for a stage with the optimum found for a previous stage. We look at the simplest deterministic discrete cases. Last revision: December 2020.
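A small deterministic discrete example of this staged build-up is minimum-coin change (a made-up instance, not from the chapter): the optimum for each amount is assembled from the optima already computed for smaller amounts.

```python
def min_coins(coins, amount):
    """Dynamic programming: the optimum for amount a is built from the
    optima already found for the smaller amounts a - c."""
    INF = float("inf")
    best = [0] + [INF] * amount          # best[a] = fewest coins making a
    for a in range(1, amount + 1):       # stages: amounts 1 .. amount
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

print(min_coins([1, 5, 12], 15))   # 3 coins (5+5+5) beats greedy 12+1+1+1
```

Note how each stage reuses earlier optima instead of re-exploring all combinations; that reuse is the source of dynamic programming's efficiency.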
Chapter 16: Introduction to Nonlinear Programming (NLP). NLP is a lot harder than linear programming. We start by looking at the reasons for this. Next we look at the simplest method for solving the simplest type of NLP: unconstrained problems that consist only of a nonlinear objective function. The method of steepest ascent/descent is described. Last revision: December 2020.
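A minimal sketch of steepest descent on a toy unconstrained problem, assuming the gradient is available analytically (the objective, fixed step size, and iteration count are illustrative choices; practical codes choose the step by a line search):

```python
def steepest_descent(grad, x, step=0.1, iters=200):
    """Method of steepest descent for minimization: repeatedly take a
    small step in the direction opposite the gradient."""
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy unconstrained objective f(x, y) = x**2 + 3*y**2, minimum at (0, 0)
grad = lambda v: [2 * v[0], 6 * v[1]]
print(steepest_descent(grad, [4.0, -2.0]))   # converges toward [0, 0]
```

For maximization (steepest ascent), the step is taken in the direction of the gradient instead.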
Chapter 17: Pattern Search for Unconstrained NLP. What do you do if you don’t have access to gradient information? In that case you can use pattern search techniques (also known as derivative-free, direct search, or black box methods). We look at the classic Hooke and Jeeves pattern search method. Last revision: December 2020.
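To show the derivative-free flavour, here is a simplified sketch of the exploratory phase of Hooke and Jeeves (without the pattern move that the full method adds): probe a step in each coordinate direction, and halve the step when nothing improves. The objective is a made-up black box:

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Simplified exploratory search in the spirit of Hooke and Jeeves:
    probe +/- step along each coordinate; if no probe improves, halve
    the step; stop when the step is tiny. No derivatives needed."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break                  # keep the improving probe
        if not improved:
            step /= 2                      # refine the search pattern
    return x, fx

# Black-box objective: f(x, y) = (x - 3)**2 + (y + 1)**2, minimum at (3, -1)
f = lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2
x_opt, f_opt = pattern_search(f, [0.0, 0.0])
print(x_opt, f_opt)
```

The full Hooke and Jeeves method accelerates this by also taking a "pattern move" that extrapolates along the direction of recent progress.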
Chapter 18: Constrained Nonlinear Programming. Now that we have some idea of how to solve unconstrained NLPs, how do we deal with constrained NLPs? The first idea is to turn them into unconstrained NLPs, of course! This is done by using penalty and barrier methods, which modify the original objective function so that infeasible points (penalty methods) or points approaching the constraint boundary (barrier methods) become unattractive in the resulting unconstrained problem. Last revision: December 2020.
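A minimal sketch of the quadratic penalty idea on a made-up problem: minimize x² + y² subject to x + y = 1 (true optimum at (0.5, 0.5)) by minimizing the penalized unconstrained objective with plain gradient descent. The problem, penalty weight, and step size are all illustrative assumptions:

```python
def penalty_minimize(mu, iters=500):
    """Quadratic penalty method for: minimize x^2 + y^2 s.t. x + y = 1.
    Minimizes the unconstrained penalized objective
        x^2 + y^2 + mu * (x + y - 1)^2
    with plain gradient descent."""
    x = y = 0.0
    step = 1.0 / (2 + 4 * mu)        # step small enough for stability
    for _ in range(iters):
        viol = x + y - 1             # constraint violation
        gx = 2 * x + 2 * mu * viol
        gy = 2 * y + 2 * mu * viol
        x, y = x - step * gx, y - step * gy
    return x, y

# As the penalty weight mu grows, the unconstrained minimizer
# approaches the true constrained optimum (0.5, 0.5).
for mu in (1, 10, 1000):
    print(mu, penalty_minimize(mu))
```

In practice the penalty weight is increased gradually over a sequence of unconstrained solves, since a huge weight from the start makes the unconstrained problem badly conditioned.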
Chapter 19: Handling Equality Constraints in NLP. Equality constraints are the hardest to handle in nonlinear programming. We look at two ways of dealing with them: (i) the method of Lagrange, and (ii) the Generalized Reduced Gradient (GRG) method. And we take a look at making linear approximations to nonlinear functions because we need that for the GRG method. Last revision: December 2020.
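A tiny worked instance of the method of Lagrange (a classic textbook example, not one from the chapter): maximize f(x, y) = xy subject to x + y = 10. Stationarity of the Lagrangian L = xy − λ(x + y − 10) gives ∂L/∂x = y − λ = 0 and ∂L/∂y = x − λ = 0, so x = y = λ, and the constraint then forces x = y = 5:

```python
def solve_lagrange_example():
    """Solve the Lagrange conditions for: maximize x*y s.t. x + y = 10.
    Stationarity gives x = y = lam; substituting into the constraint
    x + y = 10 pins down lam directly."""
    lam = 10 / 2          # from x + y = 10 with x = y = lam
    x = y = lam
    return x, y, x * y    # optimal point and objective value

print(solve_lagrange_example())   # (5.0, 5.0, 25.0)
```

The GRG method generalizes this substitution idea: it uses the equality constraints to eliminate some variables (via local linear approximations), then searches in the reduced space of the remaining ones.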
Different nonlinear local solvers can give quite different solution trajectories, i.e. the sequence of intermediate solutions reached before the final solution. You can see this in the next few animations: NLP3, NLP4, NLP5
Resources: free online solvers, modelling systems, tutorials, textbooks, problem sets, etc.
Study Guide: how to study for an exam in optimization or operations research.