Extreme Value Theorem: A function continuous on a closed, bounded region attains its minimum and maximum values on that region.
For a multivariable function \(f\colon \mathbf{R}^2 \to \mathbf{R}\) restricted to some domain (region) \(R\) in \(\mathbf{R}^2,\) the zeros of the gradient \(\nabla f\) and the sign of the discriminant \(\operatorname{H}_f\) indicate the extrema of \(f\) only on the interior of \(R\); a different technique is needed to detect extrema on the boundary of \(R.\) This optimization problem of finding the extreme values of a function \(f\) on a boundary defined by a function \(g\) is canonically phrased as: \[ \text{``Minimize/Maximize } f(x,y) \text{ subject to the constraint } g(x,y)=k \text{.''} \] The key idea is that the extreme values must occur where the gradients of \(f\) and \(g\) are parallel. Explicitly, assuming such extreme values exist and \(\nabla g \neq \bm{0}\) on the curve \(g(x,y) = k,\) the critical points on the boundary are the points \((a,b)\) such that for some real number \(\lambda,\) \[\begin{aligned} \nabla f(a,b) &= \lambda \nabla g(a,b) \\ g(a,b) &= k\,. \end{aligned}\] The locations of the extreme values must be among these critical points. The number \(\lambda\) is called the Lagrange multiplier for that point.
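A small worked example, chosen here only for concreteness: to find the extreme values of \(f(x,y) = x^2 + 2y^2\) on the unit circle \(g(x,y) = x^2 + y^2 = 1,\) the Lagrange conditions read \[\begin{aligned} 2x &= \lambda \cdot 2x, \\ 4y &= \lambda \cdot 2y, \\ x^2 + y^2 &= 1\,. \end{aligned}\] The first equation forces \(x = 0\) or \(\lambda = 1.\) If \(\lambda = 1,\) the second equation forces \(y = 0,\) so \((\pm 1, 0)\) are critical points with \(f = 1\); if \(x = 0,\) the constraint gives \(y = \pm 1,\) so \((0, \pm 1)\) are critical points with \(f = 2\) (and \(\lambda = 2\)). The circle is closed and bounded, so by the Extreme Value Theorem the extreme values exist: the minimum of \(f\) on the circle is \(1\) and the maximum is \(2.\)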
This technique generalizes readily. For example, to find the extreme values of a multivariable function \(f\colon \mathbf {R}^3 \to \mathbf{R}\) subject to the two constraints \({g(x,y,z) = k}\) and \({h(x,y,z) = \ell,}\) assuming these extreme values exist and \(\nabla g\) and \(\nabla h\) are nonzero and not parallel on the intersection of the constraint surfaces, the critical points are the points \((a,b,c)\) such that for some real numbers (Lagrange multipliers) \(\lambda\) and \(\mu,\) \[ \nabla f(a,b,c) = \lambda \nabla g(a,b,c) + \mu \nabla h(a,b,c) \quad\text{and}\quad g(a,b,c) = k \quad\text{and}\quad h(a,b,c) = \ell\,. \]
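A sketch of the two-constraint case, again with an example chosen only to illustrate the mechanics: to find the extreme values of \(f(x,y,z) = x + 2y + 3z\) subject to \(g(x,y,z) = x - y + z = 1\) and \(h(x,y,z) = x^2 + y^2 = 1,\) the condition \(\nabla f = \lambda \nabla g + \mu \nabla h\) reads \[\begin{aligned} 1 &= \lambda + 2\mu x, \\ 2 &= -\lambda + 2\mu y, \\ 3 &= \lambda\,. \end{aligned}\] Thus \(\lambda = 3,\) so \(x = -1/\mu\) and \(y = 5/(2\mu).\) Substituting into \(x^2 + y^2 = 1\) gives \(\frac{1}{\mu^2} + \frac{25}{4\mu^2} = 1,\) so \(\mu = \pm\frac{\sqrt{29}}{2},\) which yields the critical points \(\left(\mp\tfrac{2}{\sqrt{29}},\, \pm\tfrac{5}{\sqrt{29}},\, 1 \pm \tfrac{7}{\sqrt{29}}\right)\) with \(f = 3 \pm \sqrt{29}.\) The intersection of the plane and the cylinder is an ellipse, which is closed and bounded, so by the Extreme Value Theorem the maximum \(3 + \sqrt{29}\) and the minimum \(3 - \sqrt{29}\) are attained.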