By dkl9, written 2023-363, revised 2023-363 (0 revisions)

Consider a definite integral, such as ∫_{1}^{2}`dx` sin(`x`).
It's approximately equal to the closely related integral ∫_{1}^{2}`dx` sin(`a`), for 1 < `a` < 2.
Let's be lazy and pick `a` = 1; then we approximate ∫_{1}^{2}`dx` sin(`x`) ≈ ∫_{1}^{2}`dx` sin(1).
The approximation simplifies very nicely as (2 - 1) sin(1) = sin(1) ≈ 0.841.

Given that the original integral is closer to 0.956, you may be dissatisfied with this approximation.
It turns out that ∫_{1}^{2}`dx` sin(`x`) = ∫_{1}^{3/2}`dx` sin(`x`) + ∫_{3/2}^{2}`dx` sin(`x`).
We can do the same kind of approximation on each term, which leads to the even nicer result of (3/2 - 1) sin(1) + (2 - 3/2) sin(3/2) ≈ 0.919.

Still dissatisfied? We can go further and split the integral into four terms, and the approximate result will get even closer. Likewise with any power of two — or any natural number of terms, but I don't care about that here. This is the left-endpoint sum.
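The left-endpoint sum can be sketched in a few lines of Python (the function and variable names here are mine, not anything standard):

```python
import math

def left_endpoint_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n equal
    subintervals, sampling f at each subinterval's left endpoint."""
    w = (b - a) / n
    return w * sum(f(a + i * w) for i in range(n))

# The one-, two-, and four-term cases from the text.
for n in (1, 2, 4):
    print(n, left_endpoint_sum(math.sin, 1, 2, n))
```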

This method annoyingly requires that you know at the start how finely you're subdividing the integral. But if you don't know how precise you want it, not all hope is lost. Start with the trivial case, then calculate the two-term case, then with four terms, and so on, getting closer approximations, until you realise you should stop for your very practical purposes.
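That stop-when-satisfied loop might look like this (a sketch; the tolerance parameter and the "agree to within tol" stopping rule are my choices, not the text's):

```python
import math

def left_sum(f, a, b, n):
    """Left-endpoint sum of f over [a, b] with n equal subintervals."""
    w = (b - a) / n
    return w * sum(f(a + i * w) for i in range(n))

def integrate_doubling(f, a, b, tol=1e-3):
    """Double the number of terms until two successive left-endpoint
    sums agree to within tol. Every pass redoes all the earlier work."""
    n, prev = 1, left_sum(f, a, b, 1)
    while True:
        n *= 2
        cur = left_sum(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur
        prev = cur

print(integrate_doubling(math.sin, 1, 2))  # approaches cos(1) - cos(2) ≈ 0.956
```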

There are two problems with this approach:

- It repeats a lot of its work. In the example here, the one-term case calculates sin(1), but so does the two-term case, and indeed every later case. Likewise, the two-term case uses sin(3/2), but so does every finer such subdivision.
- If you stop partway thru one of the later sums, all the progress you made on it is wasted, and you have to use the previous, less precise, completed sum instead. Adding just the first 13 terms of the 16-term case approximates ∫_{1}^{29/16}`dx` sin(`x`), not the desired ∫_{1}^{2}`dx` sin(`x`).

(There are actually more problems with this approach, if you think about it, which I suggest you just don't.)

We can kill both birds with the one stone of expressing these sums in terms of modifications to earlier sums.

The one-term sum is `S`_{0} = 1 sin(1).

The two-term sum is `S`_{1} = 1/2 sin(1) + 1/2 sin(3/2) = (1 - 1/2) sin(1) + 1/2 sin(3/2) = 1 sin(1) - 1/2 sin(1) + 1/2 sin(3/2) = `S`_{0} - 1/2 sin(1) + 1/2 sin(3/2).

The four-term sum is `S`_{2} = 1/4 sin(1) + 1/4 sin(5/4) + 1/4 sin(3/2) + 1/4 sin(7/4) = 1 sin(1) - 1/2 sin(1) - 1/4 sin(1) + 1/4 sin(5/4) + 1/2 sin(3/2) - 1/4 sin(3/2) + 1/4 sin(7/4) = `S`_{1} - 1/4 sin(1) + 1/4 sin(5/4) - 1/4 sin(3/2) + 1/4 sin(7/4).
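A quick numerical check of these identities, with each finer sum built from the immediately coarser one:

```python
import math

sin = math.sin

S0 = 1 * sin(1)
S1 = S0 - 1/2 * sin(1) + 1/2 * sin(3/2)
S2 = S1 - 1/4 * sin(1) + 1/4 * sin(5/4) - 1/4 * sin(3/2) + 1/4 * sin(7/4)

# Each corrected sum matches the direct left-endpoint sum of that size.
assert abs(S1 - (sin(1) + sin(3/2)) / 2) < 1e-12
assert abs(S2 - (sin(1) + sin(5/4) + sin(3/2) + sin(7/4)) / 4) < 1e-12
print(S0, S1, S2)  # roughly 0.841, 0.919, 0.943
```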

Those cases should suffice to show the pattern: each left-endpoint sum with a power-of-two number of equal intervals equals the next-coarser sum, plus alternating negative and positive left-endpoint samples, scaled by the new, halved interval width. Now this method of approximating definite integrals is more readily described as one long sum, stopped when you like, rather than a sequence of increasingly long sums. The real advantages are that

- each subtract-add pair improves upon a small section of the integral approximated to that point
- as you go into finer sums, the coefficients get small, so the error introduced by subtracting but not yet adding back gets smaller

This means you really can cut off the sum whenever you like, and get closer to the true integral the further out you stop.
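One way to sketch this as code is a generator that yields the running total after every subtract-add correction (the names and the generator structure are my own framing):

```python
import math
from itertools import islice

def refining_left_sums(f, a, b):
    """Yield running approximations of the integral of f over [a, b],
    one subtract-add correction at a time. Each level halves every
    interval of the previous level: a correction removes half of an
    old interval's contribution and adds a sample at its midpoint."""
    total = (b - a) * f(a)  # the one-term sum
    yield total
    lefts, w = [a], b - a
    while True:
        w /= 2  # width of the new, finer intervals
        new_lefts = []
        for x in lefts:
            total += w * (f(x + w) - f(x))  # subtract old, add midpoint
            new_lefts += [x, x + w]
            yield total
        lefts = new_lefts

# Cut off whenever you like; every yielded value is a usable estimate.
for s in islice(refining_left_sums(math.sin, 1, 2), 8):
    print(s)
```

Because each correction fully refines one subinterval before the next yield, stopping between yields never leaves a half-finished interval the way stopping mid-way thru a fixed 16-term sum does.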

Want this kind of sum as a single expression? That's doable. You should have enough of the insight now. The final expression is left as an exercise to the reader.