Up until recently, I hadn’t really thought about how numerical integration is actually performed. Sure, I knew about techniques like the trapezoid rule, and without thinking about it too much, I had just assumed that integration routines like R’s integrate function used this technique too. But I was wrong: most libraries that implement numerical integration use adaptive quadrature.

Adaptive quadrature is actually a rather interesting technique. I won’t go into too much detail here, but the idea is that the function being integrated (the integrand) is evaluated at a number of points within the integration range, and the function values are multiplied by a set of weights and summed. In mathematical terms:

$$ \int_a^b f\left(x\right) dx \approx \sum_{i=1}^n w_i f\left(x_i\right) $$

where the weights $w_i$ and the evaluation points $x_i$ are tabulated values. These values can be taken from references such as Abramowitz and Stegun (1972).
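As a concrete illustration of that weighted sum, here is a minimal C++ sketch of a single application of a fixed rule. It uses the 3-point Gauss-Legendre rule, whose nodes ($0$ and $\pm\sqrt{3/5}$) and weights ($8/9$ and $5/9$) are short enough to write inline; this is not the rule the libraries discussed below use, just the simplest example of the idea.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// One application of a fixed quadrature rule: a weighted sum of integrand
// values at tabulated nodes. Here, the 3-point Gauss-Legendre rule on [-1, 1],
// rescaled to the interval [a, b].
template <typename F>
double gauss3(F f, double a, double b) {
    static const std::array<double, 3> nodes = {-std::sqrt(3.0 / 5.0), 0.0,
                                                std::sqrt(3.0 / 5.0)};
    static const std::array<double, 3> weights = {5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0};

    const double mid = 0.5 * (a + b);   // centre of [a, b]
    const double half = 0.5 * (b - a);  // half-width; also the Jacobian of the rescaling
    double sum = 0.0;
    for (std::size_t i = 0; i < nodes.size(); ++i) {
        sum += weights[i] * f(mid + half * nodes[i]);
    }
    return half * sum;
}

int main() {
    // Exact value is 1 - cos(1) ≈ 0.4597
    std::printf("%.6f\n", gauss3([](double x) { return std::sin(x); }, 0.0, 1.0));
}
```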

The GNU Scientific Library uses two different sets of $w_i$ and $x_i$: the first comes from a 15-point Kronrod rule and the second from a 7-point Gaussian rule (the 7 Gauss points are a subset of the 15 Kronrod points, so no extra function evaluations are needed). The integral is estimated with both sets of weights, and the absolute value of the difference between the two results is used as an estimate of the error.

If the error estimate is too large, the range is sub-divided, and the integrals of the sub-divided ranges are summed to produce the complete integral, as are their error estimates. This sub-division procedure is the “adaptive” part of adaptive quadrature.
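A minimal sketch of that adaptive loop, built on the gauss3 helper above, might look like this. For simplicity it estimates the error by comparing one application of the rule against the sum over two halves, rather than using an embedded Kronrod/Gauss pair the way GSL does.

```cpp
// Adaptive subdivision (sketch): if the discrepancy between one application of
// the rule and the sum over two halves is too large, split the interval and
// recurse, summing the results from the sub-ranges.
template <typename F>
double adaptive(F f, double a, double b, double tol) {
    const double whole = gauss3(f, a, b);
    const double m = 0.5 * (a + b);
    const double halves = gauss3(f, a, m) + gauss3(f, m, b);
    if (std::fabs(halves - whole) < tol) {
        return halves;
    }
    return adaptive(f, a, m, 0.5 * tol) + adaptive(f, m, b, 0.5 * tol);
}
```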

I’ve been working on a computational problem that involves evaluating an expression of the following form:

$$ \frac{ \int_{-\infty}^\lambda g(t)A(t)dt + \int_{\lambda}^\infty h(t)A(t)dt }{ \int_{-\infty}^{\infty}A(t)dt } $$

In my particular problem, $A(t)$ is expensive to compute, while $g(t)$ and $h(t)$ are relatively computationally cheap.

In my use case, I need to compute this integral many times with slightly different $g(t)$ and $h(t)$ functions, but with the $A(t)$ function identical each time.

For now, let’s ignore the integration bounds for these integrals. We’ll revisit the bounds shortly. The quadrature estimate of the first integral (containing $g(t)$) will be:

$$ \int g(t) A(t) dt \approx \sum_{i=1}^n w_i f(x_i) = \sum_{i=1}^n w_i g(x_i) A(x_i) $$

Thus, we can pre-compute the values of $A(x_i)$ once and avoid computing them again. A similar procedure can be used for the other integrals in the original expression.
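In code, the idea looks roughly like the following. This is a simplified sketch rather than my actual implementation: it assumes the nodes and weights of a converged subdivision have already been collected into flat arrays, and the PrecomputedQuadrature name (and its members) is purely illustrative.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Cache the expensive A(x_i) values once, then reuse them for every cheap
// integrand g that is paired with the same A and the same nodes/weights.
struct PrecomputedQuadrature {
    std::vector<double> nodes;    // x_i from the (fixed) subdivision
    std::vector<double> weights;  // w_i, including any interval scaling
    std::vector<double> a_vals;   // cached A(x_i)

    PrecomputedQuadrature(std::vector<double> x, std::vector<double> w,
                          const std::function<double(double)>& A)
        : nodes(std::move(x)), weights(std::move(w)) {
        a_vals.reserve(nodes.size());
        for (double xi : nodes) a_vals.push_back(A(xi));  // expensive part, done once
    }

    // ∫ g(t) A(t) dt ≈ Σ w_i g(x_i) A(x_i), with A(x_i) taken from the cache.
    template <typename G>
    double integrate(G g) const {
        double sum = 0.0;
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            sum += weights[i] * g(nodes[i]) * a_vals[i];
        }
        return sum;
    }
};
```

Each call to integrate is then just a cheap weighted sum over values already held in memory, which is why the pre-computed timings below barely change with the number of repetitions.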

I’ve implemented this approach of pre-computing the values of $A(x_i)$ in C++. I’ve run it with different numbers of repetitions and compared the speed to a “naive” approach where the complete integration is performed each time. The results are as follows:

| Repetitions | Naive Approach | Pre-Computing $A(x_i)$ |
|------------:|---------------:|-----------------------:|
| 1           | 0.485 ms       | 1.625 ms               |
| 10          | 3.65 ms        | 1.42 ms                |
| 100         | 57.4 ms        | 1.34 ms                |
| 1000        | 317 ms         | 1.24 ms                |
| 10000       | 2645 ms        | 1.11 ms                |

Using the naive approach, the time scales roughly linearly with the number of repetitions, while the time for the approach where we pre-compute the value of $A(x_i)$ stays roughly constant regardless of the number of repetitions. The specific values shown here are based on a single run of the code, so the results will be affected by whatever else my PC was doing at the time, but we can still see the general trends.

Returning to the discussion of the integration bounds: first, the bounds of the two integrals in the numerator and the bounds of the integral in the denominator are all different. To account for this, we compute the integral using the widest bounds, sub-dividing the range as required to achieve a suitable error estimate. Then, for the narrower ranges, we reuse the sub-divisions that fall within the new bounds, adding a smaller sub-division at one end if needed.
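A sketch of that interval bookkeeping, assuming the subdivision is stored as a simple list of sub-intervals (the Interval and clip_subdivision names are just for illustration):

```cpp
#include <algorithm>
#include <vector>

struct Interval {
    double a, b;  // one sub-range produced by the adaptive subdivision
};

// Keep the sub-intervals of an existing subdivision that overlap [lo, hi],
// clipping the ones at the ends so that the narrower bounds are respected.
std::vector<Interval> clip_subdivision(const std::vector<Interval>& subs,
                                       double lo, double hi) {
    std::vector<Interval> out;
    for (const Interval& s : subs) {
        const double a = std::max(s.a, lo);
        const double b = std::min(s.b, hi);
        if (a < b) out.push_back({a, b});  // fully inside, or a smaller clipped piece
    }
    return out;
}
```

Sub-intervals that survive unchanged can reuse their existing quadrature points, while a clipped piece at an end needs its own.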

Second, you’ll notice that some of the integration bounds are infinite. This is handled by a clever trick that I would not have thought of myself: a change of variables. In my code, I’ve used a $\tan$ transformation; the GNU Scientific Library uses a different transform that contains a singularity. That singularity is fine if you’re not altering the integration bounds after starting the computation (which GSL does not do), but it can lead to trouble otherwise. After this change of variables, the integration becomes:

$$ \int_{-\infty}^\infty f(x) dx = \int_{-\pi/2}^{\pi/2} \frac{f(\tan(t))}{\cos^2(t)} dt $$

With this transformation, the integration bounds become finite.
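For completeness, here is a sketch of what the wrapper for this change of variables could look like (the transform_infinite name is mine). Note the $1/\cos^2(t)$ factor, which is the Jacobian $dx/dt$ of the substitution $x = \tan(t)$.

```cpp
#include <cmath>

// Wrap an integrand defined on (-inf, inf) so that it can be integrated over
// the finite interval (-pi/2, pi/2) instead: with x = tan(t), dx = dt / cos^2(t).
template <typename F>
auto transform_infinite(F f) {
    return [f](double t) {
        const double c = std::cos(t);
        return f(std::tan(t)) / (c * c);
    };
}
```

The wrapped integrand can then be handed to any finite-interval routine, such as the adaptive sketch above; Gauss-type rules never evaluate at the interval endpoints, so the zeros of $\cos(t)$ at $\pm\pi/2$ are never hit.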

Using these few tricks, the quadrature for this particular problem can be sped up significantly. These tricks won’t work for all problems, though.


Citations

  1. Milton Abramowitz and Irene A. Stegun, editors. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards, December 1972.