CHAPTER 2
Fundamentals of Unconstrained Optimization
In unconstrained optimization, we minimize an objective function that depends on real
variables, with no restrictions at all on the values of these variables. The mathematical
formulation is
    min_x f(x),

where x ∈ ℝ^n is a real vector with n ≥ 1 components and f : ℝ^n → ℝ is a smooth
function.
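To make the formulation concrete, here is a minimal sketch of unconstrained minimization: gradient descent with a fixed step length applied to a simple smooth quadratic. The particular function, step size, starting point, and tolerance are illustrative choices, not from the text.

```python
import numpy as np

def f(x):
    # Illustrative smooth objective: f(x) = (x1 - 1)^2 + 2(x2 + 0.5)^2,
    # whose unique minimizer is x* = (1, -0.5).
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad_f(x):
    # Gradient of f, computed analytically.
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

def gradient_descent(x0, step=0.1, tol=1e-8, max_iter=1000):
    # Generate iterates x0, x1, x2, ... by stepping against the gradient,
    # stopping when the gradient norm is small (an approximate stationary point).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

x_star = gradient_descent([0.0, 0.0])
```

Note how the algorithm sees only values of f and its gradient at the points it chooses to visit, exactly the local information described above; it never has a global view of f.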
[Figure: Least squares data fitting problem. Measurements y1, y2, ..., ym plotted against the sample times t1, t2, ..., tm.]
Usually, we lack a global perspective on the function f . All we know are the values
of f and maybe some of its derivatives at a set of points x0, x1, x2,.... Fortunately, our
algorithms get to choose these points, and they try to do so in a way that identifies a solution
reliably and without using too much computer time or storage. Often, the information
about f does not come cheaply, so we usually prefer algorithms that do not call for this
information unnecessarily.
❏ EXAMPLE
Suppose that we are trying to find a curve that fits some experimental data. Figure
plots measurements y1, y2,...,ym of a signal taken at times t1, t2,...,tm . From the data and
our knowledge of the application, we deduce that the signal has exponential and oscillatory
behavior of certain types, and we choose to model it by the function
    φ(t; x) = x1 + x2 e^{−(x3 − t)²/x4} + x5 cos(x6 t).
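The standard least-squares approach measures the misfit between the model and the data by summing squared residuals φ(t_j; x) − y_j over all measurements. A sketch of that objective, assuming a model of the form x1 + x2 e^{−(x3 − t)²/x4} + x5 cos(x6 t) (with made-up data and parameter values for illustration):

```python
import math

def phi(t, x):
    # Model with exponential and oscillatory terms:
    # phi(t; x) = x1 + x2*exp(-(x3 - t)^2 / x4) + x5*cos(x6 * t)
    return x[0] + x[1] * math.exp(-(x[2] - t) ** 2 / x[3]) + x[4] * math.cos(x[5] * t)

def objective(x, times, ys):
    # Least-squares objective: half the sum of squared residuals.
    return 0.5 * sum((phi(t, x) - y) ** 2 for t, y in zip(times, ys))

# Synthetic data generated from known parameters, so the objective is
# exactly zero at the generating parameter vector.
x_true = [1.0, 2.0, 3.0, 4.0, 0.5, 1.5]
times = [0.5 * j for j in range(10)]
ys = [phi(t, x_true) for t in times]

print(objective(x_true, times, ys))  # → 0.0
```

Minimizing this objective over the six parameters x1, ..., x6 is itself an unconstrained optimization problem of the form min_x f(x); any parameter vector other than the generating one yields a strictly positive misfit on this synthetic data.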