Engineering Calc II Summary

Sometimes the purpose of a course becomes obfuscated over its duration; it can become hard to appreciate the overarching plot of a story when one spends months buried in fine exposition and occasional side-plots. This narrative overview, in its brevity, serves to summarize the plot of this course and to clarify our purpose for the sake of keeping both student and instructor focused on what’s important.

Transcendental Functions

In this second semester of studying calculus, we start by introducing the familiar transcendental functions \(\ln(x)\) and \(\mathrm{e}^x\) from a new perspective.

We introduce these functions by defining the logarithm as a definite integral \[ \ln(x) = \int\limits_1^x \frac{1}{t}\,\mathrm{d}t\,, \] and defining \(\mathrm{e}^x\) as its inverse. Starting from this perspective though, it becomes pertinent to figure out how the derivative of the inverse of a function relates to the function itself. Thankfully the inverse function theorem gives us a succinct formula for the relationship: \[ \left(f^{-1}\right)' = \frac{1}{f' \circ f^{-1}}\,. \]
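For example, since the fundamental theorem of calculus tells us \(\ln'(x) = \frac{1}{x}\), the inverse function theorem immediately hands us the derivative of \(\mathrm{e}^x\): \[ \frac{\mathrm{d}}{\mathrm{d}x}\mathrm{e}^x = \frac{1}{\ln'\left(\mathrm{e}^x\right)} = \frac{1}{1/\mathrm{e}^x} = \mathrm{e}^x\,. \]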

We can get a lot of information from the inverse function theorem, not just about the derivative of \(\mathrm{e}^x\) and of exponential/logarithmic functions with non-natural bases, but also about the derivatives of the inverse trigonometric functions.

\[ \frac{\mathrm{d}}{\mathrm{d}x} \arcsin(x) = \frac{1}{\sqrt{1-x^2}} \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \arccos(x) = -\frac{1}{\sqrt{1-x^2}} \] \[ \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{arcsec}(x) = \frac{1}{x\sqrt{x^2-1}} \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{arccsc}(x) = -\frac{1}{x\sqrt{x^2-1}} \] \[ \frac{\mathrm{d}}{\mathrm{d}x} \arctan(x) = \frac{1}{1+x^2} \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{arccot}(x) = -\frac{1}{1+x^2} \]
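For instance, the formula for \(\arcsin\) falls out of the inverse function theorem applied to \(f(x) = \sin(x)\), with a Pythagorean identity cleaning up the result: \[ \frac{\mathrm{d}}{\mathrm{d}x} \arcsin(x) = \frac{1}{\cos\big(\arcsin(x)\big)} = \frac{1}{\sqrt{1-\sin^2\big(\arcsin(x)\big)}} = \frac{1}{\sqrt{1-x^2}}\,. \]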

Now before moving on to a couple of calculus tricks that logarithms help us see, there are a few new transcendental functions we should add to our dictionary: the hyperbolic functions.

\[ \newcommand{\ex}{\mathrm{e}} \sinh(x) = \frac{\ex^x-\ex^{-x}}{2} \qquad \qquad \cosh(x) = \frac{\ex^x+\ex^{-x}}{2} \] \[ \mathrm{sech}(x) = \frac{1}{\cosh(x)} \qquad \qquad \mathrm{csch}(x) = \frac{1}{\sinh(x)} \] \[ \tanh(x) = \frac{\sinh(x)}{\cosh(x)} \qquad \qquad \coth(x) = \frac{1}{\tanh(x)} \]

We can use the representations for \(\sinh(x)\) and \(\cosh(x)\) in terms of \(\mathrm{e}^x\) to calculate the derivatives of all the hyperbolic functions, and begin exploring their antiderivatives. Furthermore we can use the inverse function theorem to calculate the derivatives of the inverses of the hyperbolic functions.

\[ \frac{\mathrm{d}}{\mathrm{d}x} \sinh(x) = \cosh(x) \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \cosh(x) = \sinh(x) \] \[ \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{sech}(x) = -\mathrm{sech}(x)\tanh(x) \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{csch}(x) = -\mathrm{csch}(x)\coth(x) \] \[ \frac{\mathrm{d}}{\mathrm{d}x} \tanh(x) = \mathrm{sech}^2(x) \qquad \qquad \frac{\mathrm{d}}{\mathrm{d}x} \coth(x) = -\mathrm{csch}^2(x) \]
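For example, the first of these formulas is a quick calculation with the exponential representation, \[ \frac{\mathrm{d}}{\mathrm{d}x} \sinh(x) = \frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\mathrm{e}^x-\mathrm{e}^{-x}}{2}\right) = \frac{\mathrm{e}^x+\mathrm{e}^{-x}}{2} = \cosh(x)\,, \] and pairing it with the inverse function theorem and the identity \(\cosh^2(x) - \sinh^2(x) = 1\) gives \[ \frac{\mathrm{d}}{\mathrm{d}x} \mathrm{arcsinh}(x) = \frac{1}{\cosh\big(\mathrm{arcsinh}(x)\big)} = \frac{1}{\sqrt{1+x^2}}\,. \]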

With the addition of these logarithmic, exponential, trigonometric, and hyperbolic functions, the dictionary of differentiable functions we’ve been building in this class is complete. Beyond this core dictionary-building narrative there are two new tools/techniques we will manufacture with the help of logarithms: logarithmic differentiation and L’Hospital’s Rule.
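As a taste of the first technique, to differentiate \(y = x^x\) we can take the logarithm of both sides and differentiate implicitly: \[ \ln(y) = x\ln(x) \quad\implies\quad \frac{y'}{y} = \ln(x) + 1 \quad\implies\quad \frac{\mathrm{d}}{\mathrm{d}x} x^x = x^x\big(\ln(x) + 1\big)\,. \] And as a taste of the second, L’Hospital’s Rule resolves the indeterminate limit \[ \lim_{x \to 0} \frac{\mathrm{e}^x - 1}{x} = \lim_{x \to 0} \frac{\mathrm{e}^x}{1} = 1\,. \]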

Now, with our dictionary complete and our toolbox buffed, we are well-prepared to dive into the art and practice of evaluating integrals.

Techniques & Applications of Integration

We’ve seen already that it’s more complicated to calculate formulas for antiderivatives than for derivatives. Here we dip our toes into this complexity. Now begins the gauntlet.

First, remember integrals are linear: they break up across sums/differences, and constants may be factored out. We also know to recognize a few integrals on sight simply as being the derivatives of familiar functions. And we have a single tool for computing integrals, substitution, which “undoes” the chain rule of differentiation. The first new tool we’ll develop is called integration by parts, a way of “undoing” the product rule of differentiation, the utility of which is captured by the formula \[ \int u \,\mathrm{d}v = uv - \int v \,\mathrm{d}u\,. \]
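For example, taking \(u = x\) and \(\mathrm{d}v = \mathrm{e}^x\,\mathrm{d}x\), so that \(\mathrm{d}u = \mathrm{d}x\) and \(v = \mathrm{e}^x\), the formula yields \[ \int x\mathrm{e}^x \,\mathrm{d}x = x\mathrm{e}^x - \int \mathrm{e}^x \,\mathrm{d}x = x\mathrm{e}^x - \mathrm{e}^x + C\,. \]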

With this tool we’ll finally be able to calculate formulas for the integrals of the functions \(\ln\) and \(\sec\), \[ \int \ln(x) \,\mathrm{d}x = x\ln(x) - x +C \qquad \int \sec(x) \,\mathrm{d}x = \ln|\sec(x) + \tan(x)| +C\,, \] and move on to learning the tricks for dealing with more complicated trigonometric integrands. The “trick” is usually to cleverly wield some trigonometric identities to rewrite the integrands. We get extra mileage out of the Pythagorean identities in particular due to their ability to turn a sum/difference of squares into a single square term, a technique known as trigonometric substitution.
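For example, the substitution \(x = \sin(\theta)\) with \(\mathrm{d}x = \cos(\theta)\,\mathrm{d}\theta\) collapses \(\sqrt{1-x^2}\) into \(\cos(\theta)\), so that \[ \int \sqrt{1-x^2} \,\mathrm{d}x = \int \cos^2(\theta) \,\mathrm{d}\theta = \frac{\theta + \sin(\theta)\cos(\theta)}{2} + C = \frac{\arcsin(x) + x\sqrt{1-x^2}}{2} + C\,. \]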

The last technique of integration we’ll cover isn’t so much about integration as it is about writing a rational expression as a sum of a polynomial and rational expressions with degree-1 or degree-2 denominators. This sum is called its partial fraction decomposition, and is straightforward to integrate using previously learned techniques.
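For example, the decomposition \(\frac{1}{x^2-1} = \frac{1/2}{x-1} - \frac{1/2}{x+1}\) reduces an otherwise opaque integral to a pair of logarithms: \[ \int \frac{1}{x^2-1} \,\mathrm{d}x = \frac{1}{2}\ln|x-1| - \frac{1}{2}\ln|x+1| + C = \frac{1}{2}\ln\left|\frac{x-1}{x+1}\right| + C\,. \]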

Because calculating integrals can be such an arduous, creative task, it’s commonly held to be a good idea to pre-compute many common antiderivative “templates” and record them in a table for reference.

This concludes the gauntlet of learning to manually compute certain integrals, so we should step back and discuss the broader context. Not every function’s antiderivative is computable from its formula. That is, not every indefinite integral can be expressed as a formula in terms of elementary functions. In practice, when we need to compute the value of a definite integral, we can use a computer to approximate it to arbitrary precision. You already know about Riemann sums, but there are other methods.
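As a minimal sketch of the idea in Python (the choice of integrand and precision here are just for illustration), we can approximate \(\int_0^1 \mathrm{e}^{-x^2}\,\mathrm{d}x\), a definite integral whose integrand famously has no elementary antiderivative, with a midpoint Riemann sum:

```python
import math

def midpoint_riemann(f, a, b, n):
    """Approximate the integral of f over [a, b] with a midpoint Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# e^(-x^2) has no elementary antiderivative, but its definite
# integral is still approximable to whatever precision we like.
print(midpoint_riemann(lambda x: math.exp(-x * x), 0, 1, 1000))  # ≈ 0.7468241
```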

In recent history there have been advances in symbolically computing the formulas for indefinite integrals using sufficiently sophisticated computer algebra systems (CAS). You can get a small taste of this sophistication by asking WolframAlpha to calculate a formula for an integral for you.
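If you’d rather stay in Python, the free SymPy library is one such CAS; a minimal sketch, noting that the exact form of the output may vary by version:

```python
import sympy as sp

x = sp.symbols('x')

# Symbolically compute antiderivatives (constants of integration omitted).
print(sp.integrate(sp.log(x), x))              # x*log(x) - x, as we found by parts
print(sp.integrate(sp.exp(x) * sp.sin(x), x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
print(sp.integrate(sp.exp(-x**2), x))          # sqrt(pi)*erf(x)/2 -- not elementary!
```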

Before moving on to applications of integration we should address a geometric quirk that we’ve ignored so far, but will become crucial to think about later when discussing series: unbounded regions may contain a finite amount of area. Considering unbounded regions “under” curves and the integrals that compute their area, this means our bounds of integration may be \(\pm\infty\), or that our curve may have an asymptote (pole) between its bounds of integration. Such integrals are called improper, and, computationally, we handle improper integrals with limits. If our bounds of integration are infinite, \[ \int\limits_{-\infty}^{\infty} f(x) \,\mathrm{d}x = \lim_{L\to-\infty}\int\limits_{L}^{0} f(x) \,\mathrm{d}x + \lim_{R\to\infty}\int\limits_{0}^{R} f(x) \,\mathrm{d}x \,, \] or if \(f\) has a pole at \(p\) between \(a\) and \(b\), \[ \int\limits_{a}^{b} f(x) \,\mathrm{d}x = \lim_{\ell\to p^{-}}\int\limits_{a}^{\ell} f(x) \,\mathrm{d}x + \lim_{r\to p^{+}}\int\limits_{r}^{b} f(x) \,\mathrm{d}x \,. \]
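For example, the unbounded region under \(y = \frac{1}{x^2}\) to the right of \(x=1\) contains just one square unit of area, \[ \int\limits_{1}^{\infty} \frac{1}{x^2} \,\mathrm{d}x = \lim_{R\to\infty} \int\limits_{1}^{R} \frac{1}{x^2} \,\mathrm{d}x = \lim_{R\to\infty} \left(1 - \frac{1}{R}\right) = 1\,, \] whereas the analogous region under \(y = \frac{1}{x}\) contains infinite area, since \(\lim_{R\to\infty} \ln(R)\) diverges.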

After going over these techniques and becoming comfortable with improper integrals, we’ll talk about a few “applications” of integration. The first are applications within mathematics, specifically within geometry, whereas the others are sincere applications outside of it.

Next we’re going to dive into a topic that appears to be unrelated to calculus, before discovering why it is that mathematicians appreciate polynomials so much.

Taylor Series

Every function we’ve dealt with so far has been a real-valued function with real inputs. This segment of the course pivots towards real-valued functions with positive integer inputs, sequences, with the goal of defining and exploring Taylor series. One of the important questions we’ll want to answer about Taylor series is when the sum of their terms, the series they comprise, converges and sincerely defines a function, so first we’ll spend some time studying the convergence of series.
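The prototypical example here is the geometric series: for a ratio \(r\), the series \(\sum_{n=0}^{\infty} r^n\) converges exactly when \(|r| < 1\), in which case \[ \sum_{n=0}^{\infty} r^n = \frac{1}{1-r}\,; \] for instance \(1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = 2\).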

Once we’re trained to tell when a series converges, we can apply this skill to decide where one defines a function. The series \(\sum_{n=0}^{\infty} c_n(x-a)^n\) is called a power series centered at \(x=a\), and can be thought of as a polynomial with infinitely many terms. A power series defines a function on some interval of convergence centered at \(x=a\) consisting of all \(x\) for which the power series \(\sum c_n(x-a)^n\) converges. This interval then serves as the domain of the function \(f(x) = \sum c_n(x-a)^n\).
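For example, applying the ratio test to \(\sum_{n=1}^{\infty} \frac{x^n}{n}\), \[ \lim_{n\to\infty} \left|\frac{x^{n+1}}{n+1} \cdot \frac{n}{x^n}\right| = |x| \lim_{n\to\infty} \frac{n}{n+1} = |x|\,, \] so the series converges for \(|x|<1\). Checking the endpoints, it diverges at \(x=1\) (the harmonic series) but converges at \(x=-1\) (the alternating harmonic series), giving the interval of convergence \([-1,1)\).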

We must also consider the opposite question: not when a power series describes a function, but how to write a given function as a power series. The answer to this question: any smooth function \(f\) we’ll encounter in this class is equal to its Taylor series centered at \(x=a\) \[f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n\] on some interval of convergence centered at \(x=a\). This is to say any such function, on some interval, can be expressed as the limit of a convergent sequence of polynomial functions.
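The quintessential example: every derivative of \(\mathrm{e}^x\) is \(\mathrm{e}^x\), so \(f^{(n)}(0) = 1\) for every \(n\), and the Taylor series centered at \(x=0\) reads \[ \mathrm{e}^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots\,, \] valid for all real \(x\).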

This ability to express any smooth function as a series gives us a means of calculating approximations to the outputs of functions, like trigonometric or exponential/logarithmic functions, that are otherwise difficult to calculate. By truncating a Taylor series after its degree-\(N\) term, we define the \(N\mathrm{th}\) Taylor polynomial (centered at \(x=a\)) of a function \(f\) as \[T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}(x-a)^n\,,\] and note that this polynomial approximates \(f\) around \(x=a\). Being a polynomial, it is much easier to calculate, and the approximation improves as \(N \to \infty\).
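For example, truncating the series for \(\mathrm{e}^x\) above at \(N=4\) and evaluating at \(x=1\) gives \[ T_4(1) = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} = \frac{65}{24} \approx 2.7083\,, \] already within half a percent of \(\mathrm{e} \approx 2.71828\).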

This revelation that every smooth function, nearly every function we’ve studied in a math class before, is secretly the limit of polynomial functions (on some domain) morally serves as the climax and conclusion of this course. However, the curriculum demands we learn more! We’ll wrap up this course by previewing two topics that each serve as the focus of their own college courses. First we’ll introduce the main antagonist from the field of applied mathematics, the differential equation. Then we’ll get some practice thinking about calculus beyond the scope of functions and their graphs in rectangular space.

Differential Equations & Coordinate Geometry

To wrap up the course we should take a quick foray into these two topics which, though they feel rather tangential to the narrative of this course, are still important to cover since each serves as the focus of math classes you’ll likely take after this one.

First, having developed some fluency with integration, we can now talk about the topic of solving differential equations, a practice so ubiquitous in research and industry that there are many college courses dedicated to it exclusively.
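For a first taste, the separable differential equation \(\frac{\mathrm{d}y}{\mathrm{d}x} = ky\) surrenders directly to integration: \[ \int \frac{1}{y} \,\mathrm{d}y = \int k \,\mathrm{d}x \quad\implies\quad \ln|y| = kx + C \quad\implies\quad y = A\mathrm{e}^{kx}\,, \] the familiar model of exponential growth and decay.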

Second, we’ve developed the theory of calculus quite thoroughly up to this point, but only from a limited perspective: the perspective of single-variable real-valued functions and their graphs plotted in rectangular (Cartesian) coordinates. The topic of multi-variable functions is a broad one reserved for another class, but we do have time in this class to explore beyond the scope of functions and their graphs in rectangular coordinates. For example, what about curves in rectangular coordinates that aren’t the graph of a function at all? What about curves that are the graph of a function plotted in polar coordinates instead of rectangular? The calculus knowledge we’ve developed transfers over to these settings.

The graph \(y = f(x)\) of a function \(f\) is the set of all points \(\big(x,f(x)\big)\) in the \((x,y)\)-plane; the \(y\)-coordinate is a function of the \(x\)-coordinate. But we can certainly define curves as the set of all points \(\big(x(t), y(t)\big)\), where \(x\) and \(y\) are functions of some new parameter \(t\). Such a curve is said to be parametrically defined, and if the functions \(x\) and \(y\) are differentiable/integrable, we can calculate geometric measures of this curve using the tools we’ve developed with calculus.
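For example, the unit circle, which is not the graph of any function \(y = f(x)\), is traced by the parametric equations \(\big(x(t), y(t)\big) = \big(\cos(t), \sin(t)\big)\) for \(0 \le t \le 2\pi\), and the arclength formula recovers its circumference: \[ \int\limits_0^{2\pi} \sqrt{x'(t)^2 + y'(t)^2} \,\mathrm{d}t = \int\limits_0^{2\pi} \sqrt{\sin^2(t) + \cos^2(t)} \,\mathrm{d}t = 2\pi\,. \]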

Note that these parametric curves are still defined in rectangular space, the \((x,y)\)-plane. Instead of generalizing how we describe the coordinates of points, we could also plot points in a different space entirely: polar space, the \(\big(r,\theta\big)\)-plane, where every point is described by its distance \(r\) from the origin and the angle \(\theta\) by which it’s inclined from the positive \(x\)-axis. Explicitly, the coordinate transformation is given by these equations: \[ \begin{cases} x=r\cos\theta \\ y=r\sin\theta \end{cases} \quad\Longleftrightarrow\quad \begin{cases} r = \sqrt{x^2+y^2} \\ \theta = \operatorname{atan2}(y,x) \end{cases} \]

Typically you see the \(r\) coordinate written as a function of \(\theta\), plotting the graph \(r = f(\theta)\) in polar space. It’s easy, though, to plot a curve in rectangular space that appears as if the graph of \(f\) were plotted in polar space. For a parameter \(\theta\), the curve defined as all points \(\big(f(\theta)\cos\theta, f(\theta)\sin\theta\big)\) in rectangular space will be the same curve as if \(r = f(\theta)\) were plotted in polar coordinates. Using this, we can just refer to the calculus skills we developed in terms of parametric equations to build new tools for the graphs of functions in polar coordinates.
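For example, the polar graph \(r = 2\cos(\theta)\) hides a familiar curve: multiplying both sides by \(r\) gives \(r^2 = 2r\cos(\theta)\), which the coordinate transformation rewrites as \(x^2 + y^2 = 2x\), i.e. \((x-1)^2 + y^2 = 1\), a circle of radius \(1\) centered at \((1,0)\).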

We could describe curves in polar space parametrically too, as the set of all points \(\big(r(t), \theta(t)\big)\) for some parameter \(t\), … but I’ve never heard anyone talk about this.