Learning Optimization: from beginner to advanced level (article 2)
Anusuya Ghosh
Industry Research | OR | MATHEMATICS | DS | Product Management | IITKharagpur | IITBombay
Well, friends, I am back with another Optimization article for those who want to build a career in Optimization, a major part of Operations Research today. Operations Research is a branch of applied Mathematics, so by now you must have realized that without Mathematics, Optimization does not even exist.
So we start with the "Theorem of Weierstrass". The theorem states that a continuous real-valued function attains its minimum (and maximum) on a compact set (I hope by now you know what a Compact Set is!). Optimization, in the same spirit, is about achieving the maximum or minimum over a certain set, or a set with certain properties. Hence, an optimization problem has a definite structure: an objective function, constraints, and restrictions. The objective function is the quantity to be maximized or minimized over the achievable decisions, constraints are all the conditions that must hold in the process of reaching the best decision, and restrictions (you may also call them a part of the constraints!) specify the allowed range of values for the decision variables.
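For reference, here is a minimal LaTeX statement of the theorem; the symbols K and f are the usual generic ones, not tied to any specific problem:

\begin{theorem}[Weierstrass]
Let $K \subset \mathbb{R}^n$ be a nonempty compact set and let
$f : K \to \mathbb{R}$ be continuous. Then $f$ attains both its
minimum and its maximum on $K$; that is, there exist
$x^{*}, x^{**} \in K$ such that
$f(x^{*}) \le f(x) \le f(x^{**})$ for all $x \in K$.
\end{theorem}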
Decision variable, objective function, constraint, and restriction are the basic terms you will come across while learning this subject. I suggest you go through any standard textbook or read the relevant Wikipedia pages; that will suffice.
Well, so the mathematical structure of an optimization problem looks like this:
Minimize / Maximize (objective function: a function of the decision variables)
Subject to
Constraint 1: condition 1 on the decision variables,
Constraint 2: condition 2 on the decision variables,
... and so on.
Restriction: decision variable >= 0, or
0 <= decision variable <= 1.
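To make this concrete, here is a small sketch in Python using scipy.optimize. The problem itself (minimize x1^2 + x2^2 subject to x1 + x2 >= 1, with non-negative variables) is just an illustrative example I made up to match the template above, not from any particular application:

# A minimal constrained optimization sketch with SciPy.
# Problem (illustrative): minimize x1^2 + x2^2
#             subject to  x1 + x2 >= 1   (constraint)
#                         x1, x2 >= 0    (restriction)
from scipy.optimize import minimize

def objective(x):
    # Objective function: sum of squares of the decision variables.
    return x[0]**2 + x[1]**2

# SciPy expects inequality constraints in the form g(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1}]
bounds = [(0, None), (0, None)]  # the restriction x1, x2 >= 0

result = minimize(objective, x0=[1.0, 0.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x)  # expected minimizer: approximately [0.5, 0.5]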
Now let us come to the solution part of an optimization problem. There are broadly two types of optimization problems: Constrained Optimization and Unconstrained Optimization. Today I will start with Constrained Optimization problems.
A Constrained Optimization problem is one written as: minimize f(x) subject to x belonging to a set G. We wish to minimize the real-valued function f, where f maps R^n to R. Here f is the objective function (or cost function). The vector x is an n-tuple of independent variables: x1, x2, ..., xn. The variables x1, x2, ..., xn are known as decision variables. The set G is a subset of R^n and is called the feasible set.
The above problem can be viewed as a decision problem that involves finding the "best" vector x over all possibilities in G. The "best" vector means the one that gives the smallest value of the objective function f(x) among all feasible choices. This vector is known as the minimizer of f over the feasible set G. There may also be a maximization problem, but maximizing f is equivalent to minimizing (-f). So, without loss of generality, we discuss only minimization problems.
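In symbols, this standard equivalence reads (written here in LaTeX for clarity):

\max_{x \in G} f(x) = -\min_{x \in G} \big(-f(x)\big),
\qquad
\arg\max_{x \in G} f(x) = \arg\min_{x \in G} \big(-f(x)\big).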
If we take the feasible set G to be the whole space R^n, then the above problem becomes an Unconstrained Optimization problem. For an unconstrained problem, we distinguish two kinds of minimizers: local and global. Strict versions also exist: the strict local minimizer and the strict global minimizer, which simply require the defining inequality to be strict at every other point.
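A minimal set of definitions, written in LaTeX; $x^*$ and the neighbourhood radius $\varepsilon$ are the usual generic symbols:

\begin{itemize}
  \item $x^*$ is a \emph{global minimizer} of $f$ if $f(x^*) \le f(x)$ for all $x \in \mathbb{R}^n$.
  \item $x^*$ is a \emph{local minimizer} of $f$ if there exists $\varepsilon > 0$ such that $f(x^*) \le f(x)$ whenever $\|x - x^*\| < \varepsilon$.
  \item The \emph{strict} versions replace $\le$ by $<$ for every $x \ne x^*$.
\end{itemize}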
Unconstrained Optimization involves many topics: conditions for local minimizers (FONC, SONC, SOSC), one-dimensional search methods (golden section search, Fibonacci search, Newton's method, the secant method), gradient methods (the method of steepest descent; a small sketch follows below), Newton's method, conjugate direction methods, quasi-Newton methods, the Recursive Least-Squares algorithm, and Kaczmarz's algorithm. Then we will move on to Neural Networks and unconstrained optimization. The final topic will be the Genetic Algorithm with unconstrained optimization!
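As a taste of what is coming, here is a minimal sketch of the method of steepest descent with a fixed step size, applied to a simple quadratic I chose purely for illustration (the objective, step size, and stopping tolerance are all my assumptions, not prescriptions):

import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    """Fixed-step steepest descent: repeatedly move against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient near zero: FONC satisfied
            break
        x = x - step * g              # descend along the negative gradient
    return x

# Illustrative objective: f(x) = (x1 - 1)^2 + 2*(x2 + 2)^2,
# with gradient (2*(x1 - 1), 4*(x2 + 2)) and unique minimizer (1, -2).
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
print(steepest_descent(grad_f, x0=[0.0, 0.0]))  # approaches [1, -2]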
(Have some patience, long way to go folks !)