By dkl9, written 2024-254, revised 2024-254 (0 revisions)

Usually, people are happy (`H` > 0) when current reality (`f`(`t`)) exceeds their expectations, calibrated by the recent past (`f`(`t` - `x`)), and sad (`H` < 0) when reality subceeds expectation.
That is, to a first approximation, `H` ∝ `f`(`t`) - `f`(`t` - `x`), for all `t` and some `x`.
It would be most fair (symmetric) to consider all `x` > 0, but weight them differently.
More recent times calibrate one's expectations more, according to, say, `e`^{-x}.
Thus `H`(`t`) = ∫_{-∞}^{t} `ds` `e`^{s - t} (`f`(`t`) - `f`(`s`)).
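As a sanity check on that formula, here is a small numerical sketch (the `happiness` helper, truncation point, and step count are my own, not from the text). For steadily improving reality, `f`(`s`) = `s`, the formula predicts constant happiness: `H`(`t`) = `t` - (`t` - 1) = 1.

```python
import math

def happiness(f, t, lower=-40.0, n=100000):
    # Trapezoid-rule approximation of H(t) = ∫_{-∞}^{t} e^{s-t} (f(t) - f(s)) ds,
    # truncating the lower limit: the weight e^{s-t} decays fast enough that
    # everything below `lower` is negligible.
    h = (t - lower) / n
    total = 0.0
    for i in range(n + 1):
        s = lower + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.exp(s - t) * (f(t) - f(s))
    return total * h

# Steady improvement f(s) = s: expectations always lag reality by the same
# margin, so happiness is constant.
print(happiness(lambda s: s, 5.0))  # ≈ 1.0
```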

That might be a good formula for happiness. My study in that direction ends here. It gets more interesting when seen as a purely mathematical object.

The conclusions are largely the same, just simpler, if we instead work with the related `H`(`t`) = ∫_{-∞}^{t} `ds` `e`^{s - t} `f`(`s`).

Some values of that integral for various `f`:

f(s) | ∫_{-∞}^{t} ds e^{s - t} f(s) |
---|---|
1 | 1 |
s | t - 1 |
s² | t² - 2t + 2 |
e^{s} | 1/2 e^{t} |
cos(s) | 1/2 (cos(t) + sin(t)) |
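The table rows can be checked numerically. This sketch (the `transform` helper and its discretization constants are assumptions of mine) truncates the lower limit, which the fast-decaying weight e^{s - t} permits:

```python
import math

def transform(f, t, lower=-40.0, n=100000):
    # Trapezoid-rule approximation of ∫_{-∞}^{t} e^{s-t} f(s) ds,
    # with the lower limit truncated (the weight e^{s-t} decays fast).
    h = (t - lower) / n
    total = 0.0
    for i in range(n + 1):
        s = lower + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * math.exp(s - t) * f(s)
    return total * h

t = 2.0
print(transform(lambda s: 1.0, t))          # ≈ 1
print(transform(lambda s: s, t))            # ≈ t - 1 = 1
print(transform(lambda s: s * s, t))        # ≈ t² - 2t + 2 = 2
print(transform(lambda s: math.exp(s), t))  # ≈ e^t / 2
print(transform(lambda s: math.cos(s), t))  # ≈ (cos t + sin t) / 2
```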

Notice:

- all derivatives of 1 are 0
- the derivative of `t` is 1, then all further derivatives are 0
- the derivative of `t`² is 2`t`, next 2, and then infinite 0s
- all derivatives of `e`^{t} are `e`^{t}
- derivatives of cos(`t`) alternate sine and cosine, positive and negative

In the first three cases, where the derivatives eventually vanish, the integral transform gives a finite alternating sum, `f`(`t`) - `f`'(`t`) + `f`''(`t`).
An infinite-series extension — `f`(`t`) - `f`'(`t`) + `f`''(`t`) - `f`'''(`t`) + ... — explains the last two, for if 1 - 1 + 1 - 1 + ... has any limit, it would be 1/2.

That is, this infinite integral is equivalent to an infinite series of derivatives.
To prove it more rigorously, derive it by induction from repeated integration by parts.
Pick `dv` = `ds` `e`^{s - t} and `u` = `f`(`s`).
Then each step preserves `e`^{s - t} as-is and differentiates `f`(`s`), reversing the sign.
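To see the integral and the derivative series agree beyond the table, here is a sketch comparing both for `f`(`s`) = `s`³, whose series terminates at `t`³ - 3`t`² + 6`t` - 6 (the helper names and discretization are mine):

```python
import math

def transform(f, t, lower=-40.0, n=100000):
    # Trapezoid-rule approximation of ∫_{-∞}^{t} e^{s-t} f(s) ds,
    # with the lower limit truncated.
    h = (t - lower) / n
    total = 0.0
    for i in range(n + 1):
        s = lower + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(s - t) * f(s)
    return total * h

def series(derivs, t):
    # Alternating derivative series f(t) - f'(t) + f''(t) - ...
    # `derivs` lists f and its successive derivatives as callables.
    return sum((-1) ** k * d(t) for k, d in enumerate(derivs))

# f(s) = s³ and its derivatives 3s², 6s, 6; everything after is 0.
derivs = [lambda s: s**3, lambda s: 3 * s**2, lambda s: 6 * s, lambda s: 6.0]
t = 2.0
print(transform(lambda s: s**3, t), series(derivs, t))  # both ≈ 2
```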

But wait!
There's more.
That integration-by-parts shows that the indefinite integral ∫`ds` `e`^{s - t} `f`(`s`) = `e`^{s - t} (`f`(`s`) - `f`'(`s`) + `f`''(`s`) - `f`'''(`s`) + ...) + `C`.
So if we can force `C` = 0, the infinite definite integral equals the infinite alternating series, which in turn equals an indefinite integral followed by a simple operation, (∫`ds` `e`^{s - t} `f`(`s`)) / `e`^{s - t}, evaluated at `s` = `t`.

You know where else we divide indefinite integrals by exponentials?
First-order linear ODEs.
In particular, `y`' + `P`(`t`) `y` = `Q`(`t`) is solved with `y` = (∫`dt` `u` `Q`(`t`)) / `u`, where `u` = `e`^{∫`dt` `P`(`t`)}.

Plugging in expressions above to "unsolve" the ODE, we get `y`' + `y` = `f`(`t`).
Among the three objects from earlier, we can most readily show that the derivative series satisfies that equation.
Differentiating the series reverses the sign on each term, such that `y`' + `y` would cancel to 0, but for the initial `f`(`t`).
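The ODE can also be checked numerically: approximate `y` by the truncated integral and `y`' by a central difference, and `y`' + `y` should reproduce `f`(`t`). (The helper and its constants are my own sketch, not from the text.)

```python
import math

def transform(f, t, lower=-40.0, n=100000):
    # Trapezoid-rule approximation of y(t) = ∫_{-∞}^{t} e^{s-t} f(s) ds.
    h = (t - lower) / n
    total = 0.0
    for i in range(n + 1):
        s = lower + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(s - t) * f(s)
    return total * h

f = math.cos
t = 1.5
eps = 1e-4
y = transform(f, t)
yprime = (transform(f, t + eps) - transform(f, t - eps)) / (2 * eps)
print(yprime + y, f(t))  # both ≈ cos(1.5)
```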

The following are equivalent:

- infinite definite integral, ∫_{-∞}^{t} `ds` `e`^{s - t} `f`(`s`)
- alternating derivative series, `f`(`t`) - `f`'(`t`) + `f`''(`t`) - `f`'''(`t`) + ...
- transformed indefinite integral, (∫`ds` `e`^{s - t} `f`(`s`)) / `e`^{s - t}
- solution `y` of the ODE `y`' + `y` = `f`(`t`)