Book Description
This book is a standard for a complete description of the methods for unconstrained optimization and the solution of nonlinear equations. ...this republication is most welcome and this volume should be in every library. Of course, there exist more recent books on the topics and somebody interested in the subject cannot be satisfied by looking only at this book. However, it contains much quite-well-presented material and I recommend reading it before going to other publications.
Reviews
★"...For anyone who needs to go beyond the basic treatment of numerical methods for nonlinear equations given in any number of standard numerical texts this book is ideal"
——Duncan Lawson, Mathematics Today, August 1998.
★"With 206 exercises aiming to illustrate and develop the ideas in the text and 134 bibliographical references, this very well written and organized monograph provides the basic inJbrmation needed to understand both the theory and the practice qf the methods'for solving problems related to unconstrained optimization and systems of nonlinear equations."
——Alfred Braier, Buletinul Institutului Politehnic din Iasi, Tomul XLII (XLVI), Fasc. 3-4, 1996.
★"This book is a standardJor a complete description of the methods for unconstrained optimization and the solution of nonlinear equations. ...this republication is most welcome and this volume should be in every library. Of course, there exist more recent books on the topic's and somebody interested in the subject cannot be satisfied by looking only at this book. However, it contains much quite-well-presented material and I recommend reading it before going to other publications."
——Claude Brezinski, Numerical Algorithms, 13, October 1996.
Table of Contents
PREFACE TO THE CLASSICS EDITION
PREFACE
1 INTRODUCTION
1.1 Problems to be considered
1.2 Characteristics of "real-world" problems
1.3 Finite-precision arithmetic and measurement of error
1.4 Exercises
2 NONLINEAR PROBLEMS IN ONE VARIABLE
2.1 What is not possible
2.2 Newton's method for solving one equation in one unknown
2.3 Convergence of sequences of real numbers
2.4 Convergence of Newton's method
2.5 Globally convergent methods for solving one equation in one unknown
2.6 Methods when derivatives are unavailable
2.7 Minimization of a function of one variable
2.8 Exercises
3 NUMERICAL LINEAR ALGEBRA BACKGROUND
3.1 Vector and matrix norms and orthogonality
3.2 Solving systems of linear equations - matrix factorizations
3.3 Errors in solving linear systems
3.4 Updating matrix factorizations
3.5 Eigenvalues and positive definiteness
3.6 Linear least squares
3.7 Exercises
4 MULTIVARIABLE CALCULUS BACKGROUND
4.1 Derivatives and multivariable models
4.2 Multivariable finite-difference derivatives
4.3 Necessary and sufficient conditions for unconstrained minimization
4.4 Exercises
5 NEWTON'S METHOD FOR NONLINEAR EQUATIONS AND UNCONSTRAINED MINIMIZATION
5.1 Newton's method for systems of nonlinear equations
5.2 Local convergence of Newton's method
5.3 The Kantorovich and contractive mapping theorems
5.4 Finite-difference derivative methods for systems of nonlinear equations
5.5 Newton's method for unconstrained minimization
5.6 Finite-difference derivative methods for unconstrained minimization
5.7 Exercises
6 GLOBALLY CONVERGENT MODIFICATIONS OF NEWTON'S METHOD
6.1 The quasi-Newton framework
6.2 Descent directions
6.3 Line searches
6.3.1 Convergence results for properly chosen steps
6.3.2 Step selection by backtracking
6.4 The model-trust region approach
6.4.1 The locally constrained optimal ("hook") step
6.4.2 The double dogleg step
6.4.3 Updating the trust region
6.5 Global methods for systems of nonlinear equations
6.6 Exercises
7 STOPPING, SCALING, AND TESTING
7.1 Scaling
7.2 Stopping criteria
7.3 Testing
7.4 Exercises
8 SECANT METHODS FOR SYSTEMS OF NONLINEAR EQUATIONS
8.1 Broyden's method
8.2 Local convergence analysis of Broyden's method
8.3 Implementation of quasi-Newton algorithms using Broyden's update
8.4 Other secant updates for nonlinear equations
8.5 Exercises
9 SECANT METHODS FOR UNCONSTRAINED MINIMIZATION
9.1 The symmetric secant update of Powell
9.2 Symmetric positive definite secant updates
9.3 Local convergence of positive definite secant methods
9.4 Implementation of quasi-Newton algorithms using the positive definite secant update
9.5 Another convergence result for the positive definite secant method
9.6 Other secant updates for unconstrained minimization
9.7 Exercises
10 NONLINEAR LEAST SQUARES
10.1 The nonlinear least-squares problem
10.2 Gauss-Newton-type methods
10.3 Full Newton-type methods
10.4 Other considerations in solving nonlinear least-squares problems
10.5 Exercises
11 METHODS FOR PROBLEMS WITH SPECIAL STRUCTURE
11.1 The sparse finite-difference Newton method
11.2 Sparse secant methods
11.3 Deriving least-change secant updates
11.4 Analyzing least-change secant methods
11.5 Exercises
A APPENDIX: A MODULAR SYSTEM OF ALGORITHMS FOR UNCONSTRAINED MINIMIZATION AND NONLINEAR EQUATIONS (by Robert Schnabel)
B APPENDIX: TEST PROBLEMS (by Robert Schnabel)
REFERENCES
AUTHOR INDEX
SUBJECT INDEX
Preface / Foreword
Foreign Mathematical Classics Series (Continuation I, Photoreprint Edition), Vol. 42: Numerical Methods for Unconstrained Optimization and Nonlinear Equations

Book Introduction

This series collects landmark classics in mathematical analysis, computational mathematics, and applied mathematics. The photoreprint presented here (Volume 42) focuses on a core area of modern scientific and engineering computation: the theory of unconstrained optimization and the numerical solution of nonlinear equations.

This is not a general-purpose mathematics textbook but a monograph on a specific computational methodology. It systematically presents the mathematical frameworks, algorithmic constructions, convergence analyses, and practical computational techniques used to minimize functions without constraints and to solve systems of nonlinear algebraic equations.

Core Topics and Structure

The material falls into two closely related branches: methods for unconstrained optimization and numerical methods for systems of nonlinear equations.

Part One: Unconstrained Optimization Methods

The unconstrained optimization problem, finding a minimizer $\mathbf{x}^* \in \mathbb{R}^n$ of a function $f(\mathbf{x})$, is a mathematical model that pervades machine learning, economics, control theory, and engineering design. The book presents the classical iterative algorithms for this problem and traces their evolution.

1. Basic concepts and prerequisites: This part reviews function spaces, gradients, and Hessian matrices, discusses the well-posedness of optimization problems, and gives rigorous derivations of the first- and second-order necessary and sufficient conditions for a local minimizer.

2. Line search methods: These underlie all the iterative algorithms. The book analyzes in detail how the step length $\alpha_k$ is chosen at each iteration, covering exact line searches (for example, golden section and Fibonacci search under suitable conditions) and the mathematical criteria for inexact line searches (such as the Wolfe and Armijo conditions). These criteria guarantee descent at every step while preventing steps that make progress too slowly.

3. Gradient descent and its variants: Starting from the basic steepest descent method, the book analyzes its limitation to linear convergence, then turns to the faster quasi-Newton methods. Construction of quasi-Newton methods: rank-one and rank-two updates (such as the DFP and BFGS formulas) approximate the inverse Hessian (or the Hessian itself), avoiding an expensive matrix inversion or factorization at every step. The BFGS formula, with its stability and fast convergence, receives particular attention.

4. Newton's method and its modifications: With its quadratic convergence, Newton's method is the theoretical gold standard. The book examines its two main practical difficulties: the Hessian may be rank-deficient or indefinite, and it is expensive to compute. It therefore introduces modified Newton methods, showing how diagonal perturbations of the Hessian or trust-region methods keep the search direction useful.

5. Conjugate gradient methods: For large-scale problems, storing and updating an $n \times n$ matrix (as quasi-Newton methods require) becomes impractical. Conjugate gradient methods offer an effective alternative that relies only on vector operations. The book explains the construction of the classical Fletcher-Reeves (FR), Polak-Ribière-Polyak (PRP), and Hestenes-Stiefel (HS) algorithms and the theory guaranteeing conjugacy of the search directions.

Part Two: Numerical Methods for Systems of Nonlinear Equations

Solving a system of nonlinear equations $\mathbf{F}(\mathbf{x}) = \mathbf{0}$ is another core task in engineering modeling. The book neatly recasts equation solving in optimization terms, in particular by minimizing $\|\mathbf{F}(\mathbf{x})\|^2$.

1. Iteration fundamentals: Starting from the limitations of coordinate-wise schemes (Jacobi and Gauss-Seidel iterations for nonlinear systems), the book moves to more efficient local linearization methods.

2. Newton's method for nonlinear equations: This is the central tool. At each iteration one solves the linear system $J(\mathbf{x}_k)\,\mathbf{s}_k = -\mathbf{F}(\mathbf{x}_k)$, where $J$ is the Jacobian matrix of $\mathbf{F}$. The analysis emphasizes how the Jacobian is computed, sparse-matrix techniques, and convergence behavior near singular points.

3. Quasi-Newton methods for equation solving: To avoid computing and factoring the Jacobian exactly at every step, the book presents quasi-Newton variants such as Broyden's method, which approximates the Jacobian (or its inverse) by low-rank updates and strikes a good balance between convergence speed and computational cost.

4. Damped Newton and trust-region methods: If the starting point is far from a solution, a pure Newton iteration may fail to converge. The book shows how a line search (damping) or a trust-region framework stabilizes each Newton step, ensuring a usable descent direction and hence global convergence.

Academic Value and Intended Readers

This photoreprint preserves the rigor and depth of the original and remains a classic reference in computational mathematics. The exposition is carefully organized, the derivations are thorough, and the emphasis throughout is on the theory behind the algorithms: analysis of convergence rates (linear, superlinear, quadratic) and of robustness.

The book is intended for graduate and doctoral students and for engineers and researchers in numerical analysis, optimization, operations research, control systems, and large-scale scientific computing. It provides a solid grounding in numerical methodology, from one-dimensional optimization to high-dimensional, large-scale problems. Readers need a firm background in real analysis and linear algebra and a genuine interest in numerical methods: this is not a quick-start tutorial but a handbook that rewards careful study.
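As an illustration of the quasi-Newton ideas sketched in Part One, the following Python snippet implements a basic BFGS iteration with a backtracking (Armijo) line search. It is a minimal sketch, not code from the book: the function name bfgs_minimize, the tolerances, and the backtracking parameters are arbitrary illustrative choices, and a careful implementation would add a stronger (Wolfe-type) line search and further safeguards.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimal BFGS sketch: keep an approximation H of the inverse Hessian
    and refresh it with the rank-two BFGS update after each accepted step."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)                        # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:      # stop when the gradient is small
            break
        p = -H @ g                       # quasi-Newton search direction
        # Backtracking line search enforcing the Armijo sufficient-decrease test.
        alpha, sigma = 1.0, 1e-4
        fx, slope = f(x), g @ p
        while f(x + alpha * p) > fx + sigma * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ y > 1e-12:                # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function, whose minimizer is (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs_minimize(f, grad, [-1.2, 1.0]))   # approaches [1., 1.]
```

Skipping the update whenever $s^T y$ is not safely positive is a simple way to preserve positive definiteness of $H$ when only a backtracking line search is used.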
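Likewise, the damped Newton iteration described in Part Two can be sketched in a few lines: solve the Newton system $J(\mathbf{x}_k)\,\mathbf{s}_k = -\mathbf{F}(\mathbf{x}_k)$, then backtrack on the merit function $\tfrac{1}{2}\|\mathbf{F}(\mathbf{x})\|^2$ until a sufficient-decrease test holds. This assumes a dense, nonsingular Jacobian supplied analytically; the names and parameters are hypothetical, and it is not the modular algorithm of the book's Appendix A.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Damped (line-search) Newton sketch for solving F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(J(x), -Fx)   # Newton step from the local linear model
        merit = 0.5 * (Fx @ Fx)          # merit function 0.5 * ||F(x)||^2
        alpha, beta, sigma = 1.0, 0.5, 1e-4
        while True:
            Ft = F(x + alpha * s)
            # With the exact Newton step, the Armijo test on the merit function
            # reduces to 0.5*||F(x + alpha*s)||^2 <= (1 - 2*sigma*alpha) * merit.
            if 0.5 * (Ft @ Ft) <= (1.0 - 2.0 * sigma * alpha) * merit or alpha < 1e-12:
                break
            alpha *= beta                # damping: shrink the step and retry
        x = x + alpha * s
    return x

# Example: x1^2 + x2^2 = 2 and exp(x1 - 1) + x2^3 = 2, with a root at (1, 1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0,
                        np.exp(x[0] - 1.0) + x[1]**3 - 2.0])
J = lambda x: np.array([[2.0 * x[0],          2.0 * x[1]],
                        [np.exp(x[0] - 1.0),  3.0 * x[1]**2]])
print(damped_newton(F, J, [1.5, 1.5]))   # converges to approximately [1., 1.]
```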