
Talk:Binary logarithm

From Wikipedia, the free encyclopedia

[Untitled]


Where did the algorithm given here come from? I would love to find an original reference for this. Kleg 22:45, 19 July 2006 (UTC)[reply]

Same here. I can sort of guess why it works (squaring the scaled input value corresponds to doubling the result), but I would love to see the actual maths behind it.

Math for the result is located at this url: http://en.literateprograms.org/Logarithm_Function_%28Python%29
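For anyone else puzzling over this: the squaring identity log2(x²) = 2·log2(x) is the whole trick. Here is a minimal Python sketch of the digit-by-digit idea (illustrative only, not the article's exact code; the function name and bit count are my own):

```python
import math

def log2_frac(x, bits=40):
    # x must be scaled into [1, 2), so log2(x) lies in [0, 1).
    # Squaring x doubles its logarithm; if the square lands in [2, 4) the
    # current fractional bit is 1, and halving renormalizes back into [1, 2).
    frac, bit = 0.0, 0.5
    for _ in range(bits):
        x = x * x
        if x >= 2.0:
            frac += bit
            x /= 2.0
        bit /= 2.0
    return frac

print(log2_frac(1.5), math.log2(1.5))
```

Each pass extracts one more binary digit of the fractional part, which is why the loop in the article keeps squaring the scaled value.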

not a function! A function has a domain, a range, and a graph!

lg?


Where does the name lg come from? --Abdull (talk) 20:15, 24 July 2008 (UTC)[reply]

I also wonder. In all my books lb x is used.--MathFacts (talk) 20:26, 16 August 2009 (UTC)[reply]
lg = log10; the correct symbol for the binary logarithm is lb = log2 — Preceding unsigned comment added by 140.180.255.232 (talk) 19:47, 24 August 2016 (UTC)[reply]
For values of correct meaning "recommended by a standards organization" rather than what people actually use, maybe. —David Eppstein (talk) 21:06, 24 August 2016 (UTC)[reply]

Error in identity?


Isn't there an error in the identity given for integers?

It says:

But surely it should be:

? —Preceding unsigned comment added by 195.27.20.35 (talk) 12:05, 26 February 2010 (UTC)[reply]

python example


Python example is clearly too complex and too long. 1exec1 (talk) 17:53, 24 April 2010 (UTC)[reply]

Then refer to the OLD Python code; it is much simpler:

#!/usr/bin/python

from __future__ import division

def log2(X):
  epsilon = 1.0/(10**12)
  # integer part: scale X into [1, 2), counting the doublings/halvings
  integer_value = 0
  while X < 1:
    integer_value = integer_value - 1
    X = X * 2
  while X >= 2:
    integer_value = integer_value + 1
    X = X / 2
  # fractional part: squaring X doubles its logarithm, so each squaring
  # reveals the next binary digit of the fraction
  decfrac = 0.0
  partial = 0.5
  X = X*X
  while partial > epsilon:
    if X >= 2:
      decfrac = decfrac + partial
      X = X / 2
    partial = partial / 2
    X = X*X
  return (integer_value + decfrac)

if __name__ == '__main__':
  value = 4.5
  print "     X  =",value
  print "LOG2(X) =",log2(value)

# Sample output
#
#    $ python log2.py 
#         X  = 4.5
#    LOG2(X) = 2.16992500144
#

C example


wouldn't it be nicer code to use

while ((n >>= 1) != 0)
  ++pos;

instead of

if (n >= 1<<16) { n >>= 16; pos += 16; }
if (n >= 1<< 8) { n >>=  8; pos +=  8; }
if (n >= 1<< 4) { n >>=  4; pos +=  4; }
if (n >= 1<< 2) { n >>=  2; pos +=  2; }
if (n >= 1<< 1) {           pos +=  1; }

? -- 129.247.247.239 (talk) 11:53, 16 July 2010 (UTC)[reply]
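For what it's worth, both versions compute the same floor of the binary logarithm; the unrolled form just tests power-of-two-sized chunks, so any 32-bit input finishes in exactly five comparisons instead of up to 31 loop iterations. A Python sketch of the two approaches (illustrative, not the article's C code; function names are my own):

```python
def ilog2_loop(n):
    # simple version: shift right one bit at a time, counting until n reaches 1
    pos = 0
    while n > 1:
        n >>= 1
        pos += 1
    return pos

def ilog2_unrolled(n):
    # the article's approach: test 16-, 8-, 4-, 2-, 1-bit chunks in turn,
    # halving the remaining search range at each step
    pos = 0
    for shift in (16, 8, 4, 2, 1):
        if n >= 1 << shift:
            n >>= shift
            pos += shift
    return pos

for n in (1, 2, 3, 5, 255, 256, 65536, 2**31 - 1):
    assert ilog2_loop(n) == ilog2_unrolled(n)
```

So the loop is clearer, while the unrolled chain is a constant-time binary search over the bit positions.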

Yes, I agree. The point of an article like this is to explain how a binary logarithm works, not to show some super-optimized and confusing C version. On the other hand, no one really writes anything in C anymore, unless it needs to run really fast... Moxfyre (ǝɹʎℲxoɯ | contrib) 15:28, 16 July 2010 (UTC)[reply]
@129.247.247.239 Thank you so much! This is very useful for my research! Dominic3203 (talk) 10:28, 4 April 2025 (UTC)[reply]

the notation lb is used without an introduction


Under "Information Theory" the notation lb rather than ld is suddenly used without explanation. Is this a typo? If not, perhaps it should say something like: lg, ld, and lb are sometimes used for base-2 logs.

GA Review

This review is transcluded from Talk:Binary logarithm/GA1. The edit link for this section can be used to add comments to the review.

Reviewer: Jfhutson (talk · contribs) 21:31, 28 December 2015 (UTC)[reply]

This looks like it's probably already at GA standards. Here are my comments:

  • lead formula: specify that x is the binary log of n.
  • Starting with five examples seems excessive.
  • italicize the Elements
  • "On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms, in 1544." Last comma not needed.
  • wikilink Jain
  • rather than listing some logarithmic identities, why not say it obeys all logarithmic identities unless some are particularly relevant here.
  • you really started to lose me with big O notation. Is there a way to make this more accessible?
  • likewise with bioinformatics

That'll do for now. I don't know if any of that should hold up the GA. I'll take another look today or tomorrow. My main issue is where the article drifts into specialized subjects without explaining enough for a non-specialist.--JFH (talk) 21:31, 28 December 2015 (UTC)[reply]


A special thanks


I don't know enough about Wikipedia to find out who wrote the "Iterative approximation" section, but to whoever did, thank you. Algorithms for calculating a logarithm are surprisingly hard to find, and that section was far and away the clearest and most helpful description I've found. I'm sure that I'm using the talk page wrong, so feel free to delete this section, but I just had to express my gratitude. Cormac596 (talk) 14:47, 1 June 2022 (UTC)[reply]

It appears to have been added as Python code by a not-logged-in editor in March 2006, and converted to roughly the current form by User:Moxfyre in September 2008. —David Eppstein (talk) 16:25, 1 June 2022 (UTC)[reply]
@Cormac596 Yep, that's right. I thought it was pretty cool, and I was learning/polishing my Python skills so I did a bit of cleanup 🤓. —Moxfyre (ǝɹʎℲxoɯ | contrib) 16:34, 1 June 2022 (UTC)[reply]
In that case, thank you Moxfyre. :) Cormac596 (talk) 20:49, 1 June 2022 (UTC)[reply]

A Commons file used on this page or its Wikidata item has been nominated for deletion


The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 11:52, 23 August 2022 (UTC)[reply]

Calculator


My addition of a calculator in Special:Diff/1269904851 and subsequent adjustments by User:Eyesnore in Special:Diff/1270558458 were reverted as "clutter" by an IP editor in Special:Diff/1270570319. So now we're at the "D" stage of WP:BRD. Could we maybe have a broader discussion over whether this calculator is a helpful addition? —David Eppstein (talk) 06:53, 20 January 2025 (UTC)[reply]

I see no harm in including a calculator. If suitably placed, it's not clutter. Dondervogel 2 (talk) 09:15, 20 January 2025 (UTC)[reply]
I think the calculator is helpful and should be retained. I am not too thrilled by the output of "-infinity" for both 0.0 and -0.0, though, as this is only a one-sided limit. Better (and more consistent with the article) to be "(invalid input)" at 0 just like at -1. —Kusma (talk) 10:01, 20 January 2025 (UTC)[reply]
No I don't see that a calculator helps. To me it is clutter. —Quantling (talk | contribs) 16:50, 20 January 2025 (UTC)[reply]
It doesn't seem much more "clutter" than the graph is. I'm not sure I'd put it where it was (it kind of floats oddly across my screen from the table of contents), but I have no objection to including it somewhere. XOR'easter (talk) 00:20, 21 January 2025 (UTC)[reply]

Below is a Wikipedia-style section focusing solely on how the logarithm is used to approximate the inverse square root:

---

1. Logarithmic Approximation

In the fast inverse square root algorithm, the floating-point number is first interpreted according to the IEEE 754 format, where any positive number can be expressed as   x = M · 2^E with M representing the mantissa and E the exponent. This means that the binary logarithm of x can be expressed as   log₂(x) = log₂(M) + E.

The algorithm exploits this property by reinterpreting the bitwise representation of x as an integer. Through a clever manipulation—specifically a right bit-shift which effectively divides the binary exponent by two—the method approximates:

  1/√x ≈ 2^(–0.5 · log₂(x)).

A magic constant is then subtracted from the shifted value to further calibrate the result, yielding a quick and efficient initial estimate. This bit-level trick is essentially a rapid computation of the logarithmic relationship that underpins the approximation.

---

This concise explanation isolates the role of the logarithm in generating the initial approximation in the fast inverse square root algorithm. Dominic3203 (talk) 10:43, 4 April 2025 (UTC)[reply]

There is no calculation of the logarithm within this algorithm; it is a trick involving the exponent of a floating point number, which is very much not the same thing as the binary logarithm of the number (because it is an integer and the logarithm generally is not). Additionally, any additions along these lines could only be made from published sources making the same arguments. Where are your sources? —David Eppstein (talk) 17:14, 4 April 2025 (UTC)[reply]
It's a little closer to the binary logarithm than that, but still not the binary logarithm. If you interpret the decimal point as being in the right place between the exponent and the mantissa, and adjust for the bias in the exponent, floating point representation is a piecewise-linear approximation to the binary logarithm. It is exactly the binary logarithm when applied to an integer power of two (modulo that decimal point and biased exponent), but is linearly interpolated when it falls between two adjacent integer powers of two. —Quantling (talk | contribs) 18:45, 4 April 2025 (UTC)[reply]
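To illustrate the piecewise-linear point concretely, here is a Python sketch (names and tolerance are my own, not from any source): reading a float's IEEE 754 single-precision bit pattern as an integer, scaling by 2^23, and subtracting the exponent bias gives E + (M − 1), which approximates log₂(x) = E + log₂(M) and is exact at powers of two.

```python
import math
import struct

def float_bits(x):
    # reinterpret the IEEE 754 single-precision bits of x as an unsigned integer
    return struct.unpack('<I', struct.pack('<f', x))[0]

def approx_log2(x):
    # For x = M * 2**E with 1 <= M < 2, the bits encode (E + 127) and M - 1,
    # so bits / 2**23 - 127 = E + (M - 1): a piecewise-linear approximation
    # of log2(x). The error log2(M) - (M - 1) peaks at roughly 0.086.
    return float_bits(x) / 2.0**23 - 127.0

for x in (1.0, 2.0, 4.5, 1000.0):
    print(x, approx_log2(x), math.log2(x))
```

Between adjacent powers of two the approximation interpolates linearly, which is the "coarse, biased, and scaled approximation of the binary logarithm" the FISR literature quoted above refers to.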
"We denote by an integer that has the same binary representation as an FP number and by an FP number that has the same binary representation as an integer . The main idea behind FISR is as follows. If the FP number , given in the IEEE 754 standard, is represented as an integer , then it can be considered a coarse, biased, and scaled approximation of the binary logarithm ...." doi:10.3390/computation9020021. –jacobolus (t) 01:33, 5 April 2025 (UTC)[reply]