Errata to the 6th printing (9 8 7 6 on the ISBN page) of:

Jorge Nocedal and Stephen J. Wright: "Numerical Optimization". Springer-Verlag, 1999.

Miguel A. Carreira-Perpinan, 2005.

- P. 5, fig. 1.2: should be x13.
- P. 19, l. 2: *have* seen.
- P. 24, l. -9 (displayed eq.): the Hessian should be at k (not k+1), or one should add and subtract Hessian(x+p)*p in the preceding displayed equation.
- P. 24, l. -8: "within which *the Hessian* is positive definite" (not the gradient).
- P. 31, end of exercise 2.8: "...find all minimizers of the problem (2.9)". Some students think (2.9) refers to exercise 2.9 (the next line) rather than to equation (2.9) on page 19.
- P. 40, fig. 3.5: the points of tangency are slightly offset with respect to their corresponding dotted vertical lines, which obscures the mean-value-theorem argument.
- P. 117, eq. before (5.35): lambda_1 and lambda_n should be exchanged.
- P. 117, l. -3: (3.28) -> (3.29).
- P. 120, alg. 5.4: the use of = and \leftarrow is inconsistent.
- P. 123, eq. (5.46): x -> x^*.
- P. 165, l. -5: "infintesimal" -> infinitesimal.
- P. 220, exercise 8.1: (a) Strict (strong) convexity has not, I believe, been defined in the book. Also, a positive definite Hessian implies strict convexity but not vice versa (e.g. f(x) = x^4 at x = 0). Instead of using the Hessian, a more general proof can use the first-order strict convexity condition f(y) > f(x) + \nabla f(x)^T (y - x) for y \neq x (book by Boyd & Vandenberghe, p. 69ff). (b) g is not defined, and students confuse it with the gradient.
- P. 260: f_k should be r_k in l. 1 (J_k*f_k -> J_k*r_k) and in eq. (10.22).
- P. 271, l. -1 (eq. (10.43)): w^2_i -> w^2_j.
- P. 332, eq. (12.34): the arrow in "lim ... -> d" should be an equals sign: "lim ... = d".
- P. 339, def. 12.4: alpha is not necessary to define F1.
- P. 358, exercise 12.4: perhaps state explicitly that these are two different functions; the second looks like a rewriting of the infinity norm with a missing absolute value.
- P. 424, l. 9: increasing -> decreasing.
- P. 424, l. -13: "but approach *zero* as x approaches the boundary" -> approach infinity.
- P. 433: after eq. (15.22), the minimum-norm problem should be min \norm{x}_2 s.t. Ax = b rather than min \norm{Ax - b}_2.
- P. 435, eq. (15.26): A(x) has not been defined.
- P. 442-443 (portfolio optimization example): in standard statistical notation, G is the covariance matrix, while the matrix with entries \rho_{ij} is a correlation matrix.
- P. 442, displayed equation after eq. (16.2): E(R) -> E[R].
- P. 454, l. -8: "linearly dependent" -> "linearly independent".
- P. 462, algorithm 16.1:
  . The line "set \hat{W} = W_k" is unnecessary.
  . Just before "else (* p_k \neq 0 *)": "x_{k+1} = x_k" should use \leftarrow instead of the equals sign.
  . "obtain W_{k+1} by ... to W_{k+1}" -> to W_k.
- P. 465, two lines before the heading "Further remarks...": \hat{\lambda}_i should be 0.8, not 1.25.
- P. 482, l. 3: "slack vector y" -> "surplus vector y" (for consistency with eq. (13.2) on p. 365).
- Ch. 17: most figures have a nonuniform axis ratio, which distorts the contour plots.
- Algorithmic frameworks 17.1 (p. 494) and 17.2 (p. 505): one needs to choose a new tolerance \tau_{k+1} in (0, \tau_k).
- P. 497, l. 3: "quantities -c_i(x_k)/\mu_k" (needs a minus sign).
- P. 498: the reference to eq. (17.10) should be to (17.9).
- P. 511, l. 15: "(by using (17.41d) and (17.41b))" -> (17.41c) as well?
- P. 514, line after eq. (17.47): "barrier parameter" -> penalty parameter.
- P. 575, exercise 18.7: "...given in Exercise 3" -> Exercise 18.2?
- P. 591, l. 9: if the sequences are nonnegative, there is no need for absolute values (e.g. in l. 12).
- P. 618, ref. [140] (Karmarkar): Combinatorics -> Combinatorica.
- P. 619, ref. [157] (Markowitz): volume 7, not 8.
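A numeric sketch (my own example, not from the book) of the point behind the p. 220, exercise 8.1 erratum: f(x) = x^4 has a vanishing second derivative at 0, so a positive definite Hessian is sufficient but not necessary for strict convexity; the first-order condition f(y) > f(x) + f'(x)(y - x) for y != x still holds everywhere.

```python
# Illustration of the p. 220 erratum: f(x) = x^4 is strictly convex
# even though f''(0) = 0, so positive definiteness of the Hessian is
# sufficient but not necessary for strict convexity.

def f(x):
    return x ** 4

def fprime(x):
    return 4 * x ** 3

def fsecond(x):
    return 12 * x ** 2

# The "Hessian" (here just f'') vanishes at the origin...
assert fsecond(0.0) == 0.0

# ...yet the first-order strict convexity condition
#     f(y) > f(x) + f'(x) * (y - x)   for all y != x
# holds on a sample grid of points.
pts = [i / 10.0 for i in range(-20, 21)]
for x in pts:
    for y in pts:
        if y != x:
            assert f(y) > f(x) + fprime(x) * (y - x)

print("f(x) = x^4: zero second derivative at 0, yet strictly convex")
```

The same check with f(x) = x^2 would of course also pass; the point of x^4 is that strict convexity survives even where the second derivative is zero.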
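A small sketch (the matrix and right-hand side are my own, not the book's) of the distinction behind the p. 433 erratum: the minimum-norm problem is min ||x||_2 subject to Ax = b, not min ||Ax - b||_2. For full-row-rank A its solution is x* = A^T (A A^T)^{-1} b; with a single constraint this reduces to scalar algebra, so pure Python suffices.

```python
# Minimum-norm solution of an underdetermined system Ax = b
# (p. 433 erratum: min ||x||_2 s.t. Ax = b, not min ||Ax - b||_2).

A = [3.0, 4.0]          # one-row A, so Ax = b reads 3*x1 + 4*x2 = 25
b = 25.0

AAt = sum(a * a for a in A)            # A A^T (a scalar here)
x_star = [a * b / AAt for a in A]      # x* = A^T (A A^T)^{-1} b

norm = sum(x * x for x in x_star) ** 0.5
print("minimum-norm solution:", x_star, "with norm", norm)  # [3.0, 4.0], norm 5.0

# Any other feasible point, e.g. x = (25/3, 0), has strictly larger norm.
other = [25.0 / 3.0, 0.0]
assert abs(3.0 * other[0] + 4.0 * other[1] - b) < 1e-9      # feasible
assert sum(x * x for x in other) ** 0.5 > norm
```

Note that a least-squares objective min ||Ax - b||_2 would be trivially zero at every feasible point here, which is exactly why the erratum insists on the constrained minimum-norm formulation.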
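An illustrative sketch (numbers are my own, not from the portfolio example) of the notational point in the pp. 442-443 erratum: in standard statistical notation the correlation matrix has entries rho_ij with unit diagonal, and the covariance matrix is G with G_ij = sigma_i * sigma_j * rho_ij.

```python
# Correlation matrix (entries rho_ij) vs. covariance matrix G
# (pp. 442-443 erratum): G_ij = sigma_i * sigma_j * rho_ij.

sigma = [0.2, 0.1]                 # per-asset return standard deviations
rho = [[1.0, 0.5],                 # correlation matrix (unit diagonal)
       [0.5, 1.0]]

n = len(sigma)
G = [[sigma[i] * sigma[j] * rho[i][j] for j in range(n)] for i in range(n)]

# The diagonal of G holds the variances sigma_i^2, and G inherits
# symmetry from the correlation matrix.
assert all(abs(G[i][i] - sigma[i] ** 2) < 1e-15 for i in range(n))
assert all(G[i][j] == G[j][i] for i in range(n) for j in range(n))
print("covariance matrix G:", G)
```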