University of Jyvaskyla, Scientific Visualization OVi




Optimization methods

The basic idea of optimization is to seek the best (lowest) possible solution. Finding the minimum of a general objective function requires methods at different levels. For one-dimensional functions we must first find an interval on which the function is unimodal, that is, an interval containing one and only one minimum point. So-called line search methods then locate that minimum point.
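As a minimal sketch (not OVi's actual implementation), golden-section search is a classical line search method: given an interval on which the function is unimodal, it repeatedly shrinks the interval until its width falls below epsilon. The function and parameter names here are illustrative.

```python
import math

def golden_section(f, a, b, eps=1e-6):
    """Minimize a function f that is unimodal on [a, b]
    by golden-section search; eps is the final interval width."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > eps:
        if f1 < f2:               # the minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                     # the minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# (x - 2)**2 is unimodal on [0, 5] with its minimum at x = 2
print(golden_section(lambda x: (x - 2) ** 2, 0.0, 5.0))
```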
For multi-dimensional functions a search direction is determined first, and line search methods are then used to find an appropriate step size to take in that direction.
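The direction-then-step-size idea can be sketched as follows, assuming steepest descent (negative gradient direction) with a backtracking line search; the names and constants are illustrative, not OVi's interface.

```python
def steepest_descent(f, grad, x0, eps=1e-6, max_iter=1000):
    """Multi-dimensional minimization sketch: the search direction is
    the negative gradient, and a backtracking line search chooses the
    step size to take in that direction."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < eps:        # gradient-norm stopping rule
            break
        d = [-gi for gi in g]          # search direction
        t = 1.0                        # trial step size
        fx = f(x)
        # backtracking: halve t until the decrease is sufficient
        while f([xi + t * di for xi, di in zip(x, d)]) > fx - 1e-4 * t * gnorm2:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

For example, minimizing (x - 1)^2 + (y + 2)^2 from the origin converges to the point (1, -2).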
In constrained optimization methods the feasibility of the solutions has to be guaranteed, for example, by penalty techniques.
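A penalty technique can be sketched like this (a quadratic penalty for inequality constraints g_i(x) <= 0; the helper name is mine): constraint violations are added to the objective with a weight mu, so minimizing the penalized function with increasing mu drives the solution toward the feasible region.

```python
def penalized(f, constraints, mu):
    """Return a penalized objective for constraints g_i(x) <= 0:
    f(x) + mu * sum of max(0, g_i(x))**2. Feasible points are not
    penalized; infeasible points are, increasingly with mu."""
    def fp(x):
        return f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return fp

# minimize x**2 subject to x >= 1, written as g(x) = 1 - x <= 0
fp = penalized(lambda x: x[0] ** 2, [lambda x: 1 - x[0]], 10.0)
# the feasible point x = 2 is not penalized; x = 0.5 is
print(fp([2.0]), fp([0.5]))
```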
All the methods find better and better function values during the iteration process, so each new point is an improvement on the previous one.

All the methods have a final accuracy, called epsilon.

Current methods

All the methods are from the lecture notes Optimointi by Kaisa Miettinen unless otherwise stated.


Unimodal bracketing

Line Search

Multi-Dimensional

Methods for constrained problems

When using the following methods, the constraints forming the feasible region have to be given.

The Stopping Rules

All the methods have some kind of rule to stop the iteration within the tolerance epsilon. Basically, epsilon bounds the distance between two points, i.e., between the current point and the previous point.

Line search methods use epsilon as the distance between two points.
Multi-dimensional methods compute the norm of the difference between the last two points. Methods using the gradient compute the norm of the gradient.
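The two stopping rules above can be sketched as small helper functions (illustrative names, not OVi's interface): one compares the norm of the difference of the last two iterates with epsilon, the other the norm of the gradient.

```python
def step_small(x_new, x_old, eps):
    """Stopping rule: norm of the difference of the last two points."""
    return sum((a - b) ** 2 for a, b in zip(x_new, x_old)) ** 0.5 < eps

def gradient_small(g, eps):
    """Stopping rule for gradient-based methods: norm of the gradient."""
    return sum(gi * gi for gi in g) ** 0.5 < eps
```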