Numerical Differentiation with Finite Differences in R

Part 1 of 9 in the series Numerical Analysis

Numerical differentiation is a method of approximating the derivative of a function f at a particular value x. Often, particularly in physics and engineering, a function may be too complicated to merit the work necessary to find its exact derivative, or the function itself may be unknown and all that is available are some points x and the function's values at those points. Numerical differentiation, of which finite differences is just one approach, avoids these complications by approximating the derivative.

The simplest approximation of the first derivative is defined as:

f^\prime (x) \approx \frac{f(x + h) - f(x)}{h} \qquad h > 0
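As a quick sketch of the idea (using sin as an illustrative function rather than anything from the example later in this post), the approximation can be compared against a known derivative in R:

g <- function(x) sin(x)   # illustrative function; the exact derivative is cos(x)
h <- 0.01
(g(1 + h) - g(1)) / h     # roughly 0.5361, close to cos(1) = 0.5403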

The approximation error of this equation can be found by performing a Taylor expansion of f(x + h) about x, which gives:

f(x + h) = f(x) + hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(\epsilon) \qquad \epsilon \in (x, x + h)

Rearranging the expansion and solving for f^\prime (x) yields an exact form of the approximation, with the error term made explicit:

f^\prime (x) = \frac{f(x + h) - f(x)}{h} - \frac{h}{2} f^{\prime\prime}(\epsilon)
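To see this exact form in action, here is a minimal sketch (again using sin(x), an illustrative choice not from the post's example) confirming that the actual error of the forward approximation lands between the values the \frac{h}{2} f^{\prime\prime}(\epsilon) term predicts at the two endpoints of (x, x + h):

g <- function(x) sin(x)                    # illustrative; g''(x) = -sin(x)
x0 <- 1
h <- 0.1
err <- (g(x0 + h) - g(x0)) / h - cos(x0)   # actual error of the approximation
# The error term predicts err = -(h / 2) * sin(epsilon) for some epsilon in (1, 1.1)
c(actual = err, at.x = -(h / 2) * sin(x0), at.xh = -(h / 2) * sin(x0 + h))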
Finite Difference Methods for Numerical Differentiation

There are three primary differencing techniques for approximating the derivative of a function at a particular point: forward, backward, and central differences. Note there are more accurate methods and algorithms for approximating derivatives; however, due to their ease of computation and reasonable accuracy, differencing methods remain a viable tool for performing numerical differentiation.

The forward difference approximation is the same as the earlier approximation:

f^\prime (x) \approx \frac{f(x + h) - f(x)}{h}

The backward difference approximation is based on the values of the function evaluated at x - h and x, defined as:

f^\prime (x) \approx \frac{f(x) - f(x - h)}{h}

The central (or centered) difference approximation is essentially an average of the forward and backward differencing methods:

f^\prime (x) \approx \frac{f(x + h) - f(x - h)}{2h}
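All three methods translate directly into one-line R functions. The following sketch (testing on sin(x) at x = 1, an illustrative case rather than this post's later example) compares them against the exact derivative cos(1):

forward  <- function(f, x, h) (f(x + h) - f(x)) / h
backward <- function(f, x, h) (f(x) - f(x - h)) / h
central  <- function(f, x, h) (f(x + h) - f(x - h)) / (2 * h)

h <- 0.01
c(forward = forward(sin, 1, h), backward = backward(sin, 1, h),
  central = central(sin, 1, h), exact = cos(1))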
Error of Approximations

It was shown earlier that the forward difference approximation carries an error term of \frac{h}{2} f^{\prime\prime}(\epsilon), where \epsilon \in (x, x + h); the backward difference has an error term of the same order with \epsilon \in (x - h, x). The error term of the central difference method can be found by performing Taylor expansions of f(x + h) and f(x - h) about x.

f(x + h) = f(x) + hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(x) + \frac{h^3}{6} f^{\prime\prime\prime}(\epsilon_1) \qquad \epsilon_1 \in (x, x + h)
f(x - h) = f(x) - hf^\prime (x) + \frac{h^2}{2} f^{\prime\prime}(x) - \frac{h^3}{6} f^{\prime\prime\prime}(\epsilon_2) \qquad \epsilon_2 \in (x - h, x)

These two equations are then subtracted, which cancels the f(x) and f^{\prime\prime}(x) terms:

f(x + h) - f(x - h) = 2hf^\prime (x) + \frac{h^3}{6} \bigg( f^{\prime\prime\prime}(\epsilon_1) + f^{\prime\prime\prime}(\epsilon_2) \bigg)

Solving for f^\prime (x) then leads to the central difference method with its error term made explicit:

f^\prime (x) = \frac{f(x + h) - f(x - h)}{2h} - \frac{h^2}{12} \bigg( f^{\prime\prime\prime}(\epsilon_1) + f^{\prime\prime\prime}(\epsilon_2) \bigg)

The central difference approximation is more accurate than the forward and backward differences, with an error of order h^2 rather than h, and should be used whenever possible. The forward and backward methods still produce reasonable approximations in the following example, as the function and its derivative are rather simple; however, it is best to apply the central difference approximation in actual practice.
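The difference in accuracy is easy to demonstrate numerically. In the sketch below (again using sin(x) at x = 1, an illustrative choice), each halving of h roughly halves the forward difference error, consistent with its order-h error term, while the central difference error shrinks by about a factor of four, consistent with its order-h^2 error term:

hs <- 0.1 / 2^(0:4)                                              # successively halved step sizes
fwd.err <- abs((sin(1 + hs) - sin(1)) / hs - cos(1))             # error shrinks like h
ctr.err <- abs((sin(1 + hs) - sin(1 - hs)) / (2 * hs) - cos(1))  # error shrinks like h^2
cbind(h = hs, forward = fwd.err, central = ctr.err)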

Differencing Approximations Example in R

Consider the following set of data points:

x      f(x)
0.0    0.00000
0.2    0.74140
0.4    1.37180

x <- c(0.0, 0.2, 0.4)
fx <- c(0.00000, 0.74140, 1.3718)

The function is unknown; however, we would still like to approximate its derivative at the values of x. The following function combines the forward and backward differencing methods to approximate the derivative of the function at each value of x.
Update: Iñaki in the comments gives a vectorized implementation of the finite differences and central differences methods.

finite.differences <- function(x, y) {
  if (length(x) != length(y)) {
    stop('x and y vectors must have equal length')
  }
  n <- length(x)
  # Initialize a vector of length n to hold the derivative approximations
  fdx <- vector(length = n)
  # Compute the forward difference approximation at the first n - 1 points
  for (i in 2:n) {
    fdx[i - 1] <- (y[i] - y[i - 1]) / (x[i] - x[i - 1])
  }
  # The forward difference cannot be applied at the last point, as no value
  # beyond x[n] is known, so the backward difference is used instead. Note this
  # gives the same value as the final forward difference, but it is the best
  # approximation available without more information.
  fdx[n] <- (y[n] - y[n - 1]) / (x[n] - x[n - 1])
  return(fdx)
}

Using the function, apply the differencing methods to the x values and the evaluated points f(x).

finite <- finite.differences(x, fx)
finite
## [1] 3.707 3.152 3.152

Let’s assume the data from earlier was taken from the function e^x - 2x^2 + 3x - 1.

f <- function(x) {
  return(exp(x) - 2 * x^2 + 3 * x - 1)
}
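As a quick sanity check, evaluating f at the tabulated x values reproduces the data points (to the rounding shown earlier):

f(x)
## [1] 0.0000000 0.7414028 1.3718247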

Using the central differencing method and the given function, we can approximate the derivative at each point. As h approaches 0, the approximation becomes more accurate. The following function approximates the values of the derivative at the given points at several different values of h to demonstrate how the approximations converge.

central.difference <- function(f, x) {
  steps <- c(0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001)
  n <- length(steps)
  fdx <- vector(length = n)
  # Apply the central difference approximation at each step size
  for (i in 1:n) {
    fdx[i] <- (f(x + 0.5 * steps[i]) - f(x - 0.5 * steps[i])) / steps[i]
  }
  return(fdx)
}

Print the approximations of the derivative.

for (i in x) {
  print(central.difference(f, i))
}
## [1] 4.000417 4.000004 4.000000 4.000000 4.000000 4.000000 4.000000
## [1] 3.421912 3.421408 3.421403 3.421403 3.421403 3.421403 3.421403
## [1] 2.892446 2.891831 2.891825 2.891825 2.891825 2.891825 2.891825

We see the approximations converge rather quickly as h approaches 0, giving the following approximations:

f^\prime (0.0) \approx 4.00000 \qquad f^\prime (0.2) \approx 3.421403 \qquad f^\prime (0.4) \approx 2.891825

The derivative of the function is e^x - 4x + 3. We shall compare our approximated values with the actual values of the derivative at each x to see how the central differencing method fared.

fdx <- function(x) {
  return(exp(x) - 4 * x + 3)
}

For fun, plot both the function and its derivative to get a visualization of the function's derivative at the values of x. The plot uses the ggplot2 package.

library(ggplot2)

ggplot(data = data.frame(x = 0), mapping = aes(x = x)) +
  stat_function(fun = f, size = 1.25, alpha = 0.75) +
  stat_function(fun = fdx, size = 1.25, color = 'blue', alpha = 0.75) +
  xlim(-3, 3)

Function and its derivative to approximate with finite differences

Print the calculated derivative’s values for each x and compare them with our previously approximated values.

actual <- vector(length = length(x))
central.approx <- c(4.00000, 3.421403, 2.891825)
for (i in 1:length(x)) {
  actual[i] <- fdx(x[i])
}
approx.df <- data.frame(cbind(actual, central.approx, actual - central.approx, finite, actual - finite))
colnames(approx.df) <- c('Actual Values', 'Central Difference Approx', 'Central Differences Error', 'Finite Differences', 'Finite Differences Error')
approx.df
##   Actual Values Central Difference Approx Central Differences Error
## 1      4.000000                  4.000000              0.000000e+00
## 2      3.421403                  3.421403             -2.418398e-07
## 3      2.891825                  2.891825             -3.023587e-07
##   Finite Differences Finite Differences Error
## 1              3.707                0.2930000
## 2              3.152                0.2694028
## 3              3.152               -0.2601753

Comments

  • Iñaki Úcar

August 4, 2017 at 5:40 am

    The first function, finite.differences, can be implemented in a vectorised way as follows:

    (diff(fx) / diff(x))[c(1:(length(x)-1), length(x)-1)]

  • Iñaki Úcar

August 4, 2017 at 5:45 am

    Oh, and in the second one, central.difference, you don’t need the loop at all either:

central.difference <- function(f, x) {
  steps <- c(0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001, 0.0000001)
  (f(x + 0.5 * steps) - f(x - 0.5 * steps)) / steps
}

    • Aaron Schlegel

August 5, 2017 at 9:31 am

      Hi Iñaki,

      Thank you so much for your comments! That’s awesome, I updated the post to call out your vectorized implementations. I’ve been trying to move away from loops in favor of vectorized implementations whenever possible but as you can see I often default to them regardless. I have another post coming soon discussing Simpson’s rule for approximating definite integrals in which the functions are vectorized, so I’m getting there! Thanks again.

      By the way, your simmer package is amazing ^_^

      • Iñaki Úcar

August 5, 2017 at 4:41 pm

        Thanks, much appreciated! 🙂 Working with vectors in R is hard at the beginning, but at some point there is a switch in your head and, from that moment on, you rarely even consider using a loop.

        • Aaron Schlegel

August 11, 2017 at 6:58 am

          They definitely can be tricky! I think I am getting more confident with vectors as I went straight to using those rather than loops in my last post and an upcoming one. Just like most other things, the more you work with vectors the easier they get ^_^.
