Talk:LogSumExp

From Wikipedia, the free encyclopedia

"trick"?

What is the "trick" in the section "log-sum-exp trick for log-domain calculations"? I had to read the sentence "Like multiplication operation in linear-scale becoming simple addition in log-scale; an addition operation in linear-scale becomes the LSE in the log-domain." three times for it to sort of make sense, I'll try and fix it, assuming log-scale and log-domain are the same thing. --WiseWoman (talk) 20:56, 14 March 2020 (UTC)[reply]

The trick is to replace $\log\sum_i e^{x_i}$ by $x^* + \log\sum_i e^{x_i - x^*}$, where $x^* = \max_i x_i$, which is numerically more stable (e.g. when used in a computer program). I think the text is clear (perhaps it has changed since you commented). --80.129.163.20 (talk) 14:39, 20 January 2022 (UTC)[reply]
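A minimal Python sketch of that replacement (the function name is mine):

```python
import math

def logsumexp(xs):
    """Stable log(sum_i exp(x_i)): subtract the max before exponentiating."""
    x_star = max(xs)
    return x_star + math.log(sum(math.exp(x - x_star) for x in xs))

# The naive form math.log(sum(math.exp(x) for x in xs)) overflows
# for inputs like [1000.0, 1000.0]; the shifted form does not:
print(logsumexp([1000.0, 1000.0]))  # 1000 + log(2) ≈ 1000.6931
```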
I stumbled on that sentence and the math too. Apparently, the point is that applying LogSumExp to a vector of variables transformed to, or taken to be in, log space (which, I agree, is not obviously defined, since log can in principle output any number) is equivalent to taking the log of the sum of the untransformed variables. Equivalence is symmetric, so LSE can also be thought of as a way to notate/represent/compute the logarithm of a sum. Whether and how it (the trick and the whole function) is useful is another question, perhaps not sufficiently answered by this article.
NB: The other reply explains another section of the article. Elias (talk) 09:59, 10 March 2023 (UTC)[reply]

LSE?

I think the LSE acronym is misleading, as it can be read as Least Square Error. I'd be consistent across the text and use LogSumExp. User:misssperovaz — Preceding unsigned comment added by Missperovaz (talkcontribs) 04:57, 14 January 2021 (UTC)[reply]

Some approximation to the 2 variable case

In the case of two real-valued variables, it is possible to approximate the function as:

showing how heavily nonlinear the function really is.

I hope someone could fact-check this and later add it to the main text. 45.181.122.234 (talk) 15:29, 13 September 2024 (UTC)[reply]
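Any candidate approximation for the two-variable case can be fact-checked numerically against the exact value, which is computable stably via `log1p` (a sketch; the function name is mine):

```python
import math

def lse2_exact(x, y):
    """Exact LSE(x, y) = log(exp(x) + exp(y)), computed stably:
    max(x, y) + log1p(exp(-|x - y|))."""
    hi, lo = (x, y) if x >= y else (y, x)
    return hi + math.log1p(math.exp(lo - hi))

# Agrees with the direct definition at moderate magnitudes:
assert math.isclose(lse2_exact(1.0, 2.0),
                    math.log(math.exp(1.0) + math.exp(2.0)))
```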

Later I found a less accurate but more intuitive approximation:
It has the property that both functions solve 45.181.122.234 (talk) 17:21, 12 July 2025 (UTC)[reply]
Here is another one, less accurate but useful, since it works better than a second-order Taylor expansion while keeping the same components.
The second-order Taylor expansion is given by:
Then the expected value will be limited to find the terms:
A way to improve considerably: keeping the same terms, an improvement can be obtained by using the classic *small-angle approximation* for the cosine function (going the other way around instead of simplifying) and then applying Isserlis's theorem:
and since, when the variables are too different, I just have:
I really care about computing it accurately when ; here, applying the expected value jointly with Isserlis's theorem leads to:
Note that the bound is quite "tight", since it comes from the fact that and that:
By matching both and making , I get:
Summarizing, the approximation is given by:
45.181.122.234 (talk) 18:43, 14 July 2025 (UTC)[reply]
I realized that the expected value formula is wrong if the variables don't have zero mean, but it could be fixed as:
45.181.122.234 (talk) 23:05, 22 July 2025 (UTC)[reply]

t>0

In the properties section, you need to specify that t is positive; otherwise it is misleading. 94.180.181.63 (talk) 10:13, 8 December 2024 (UTC)[reply]
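The sign of t does matter: with the tempered form $\frac{1}{t}\log\sum_i e^{t x_i}$, large positive t approximates the maximum, while negative t approximates the minimum instead. A sketch (the function name is mine):

```python
import math

def lse_t(xs, t):
    """(1/t) * log(sum_i exp(t * x_i)), with a max shift for stability."""
    s = max(t * x for x in xs)
    return (s + math.log(sum(math.exp(t * x - s) for x in xs))) / t

xs = [1.0, 2.0, 3.0]
# For large positive t the expression approaches max(xs)...
assert abs(lse_t(xs, 100.0) - max(xs)) < 1e-6
# ...but for negative t it approaches min(xs), hence the t > 0 condition.
assert abs(lse_t(xs, -100.0) - min(xs)) < 1e-6
```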