Abstract
We use the following notation. The working space is ℝⁿ, where the scalar product is denoted indifferently by (x, y), ⟨x, y⟩, or x⊤y; it will in fact be the usual dot product, \((x, y) = \sum_{i=1}^{n} x^i y^i\). The associated norm is denoted | · | or ∥ · ∥. The gradient (vector of partial derivatives) of a function f: ℝⁿ → ℝ is denoted by ∇f or f′; the Hessian (matrix of second derivatives) by ∇²f or f″. We will also use throughout the notation g(x) = f′(x).
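As a small illustration of this notation (not from the chapter itself, and assuming NumPy is available), the snippet below computes the dot product (x, y) = Σᵢ xⁱ yⁱ and approximates the gradient g(x) = f′(x) by central finite differences, checking the result on f(x) = ½(x, x), whose gradient is x:

```python
import numpy as np

# Scalar product (x, y) = sum_i x^i y^i, the usual dot product.
def dot(x, y):
    return float(np.sum(x * y))

# Central-difference approximation of the gradient g(x) = f'(x).
def grad_fd(f, x, h=1e-6):
    g = np.zeros(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Check on f(x) = 0.5 (x, x): its gradient is x itself.
f = lambda x: 0.5 * dot(x, x)
x = np.array([1.0, -2.0, 3.0])
print(np.allclose(grad_fd(f, x), x, atol=1e-5))
```

The function names `dot` and `grad_fd` are hypothetical helpers introduced here for illustration only; the book works with exact gradients, not finite-difference approximations.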
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this chapter
Bonnans, J.F., Gilbert, J.C., Lemaréchal, C., Sagastizábal, C.A. (2003). General Introduction. In: Numerical Optimization. Universitext. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-05078-1_1
DOI: https://doi.org/10.1007/978-3-662-05078-1_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-00191-1
Online ISBN: 978-3-662-05078-1