List of logarithmic identities


In mathematics, many logarithmic identities exist. The following is a compilation of the more notable of these, many of which are used for computational purposes.

Trivial identities


Trivial mathematical identities are relatively simple (for an experienced mathematician), though not necessarily unimportant. Trivial logarithmic identities are:

log_b(1) = 0 because b^0 = 1
log_b(b) = 1 because b^1 = b

Explanations


By definition, we know that:

log_b(y) = x exactly when b^x = y,

where b > 0 and b ≠ 1.

Setting x = 0, we can see that: b^0 = 1. So, substituting these values into the formula, we see that: log_b(1) = 0, which gets us the first property.

Setting x = 1, we can see that: b^1 = b. So, substituting these values into the formula, we see that: log_b(b) = 1, which gets us the second property.

Cancelling exponentials


Logarithms and exponentials with the same base cancel each other. This is true because logarithms and exponentials are inverse operations, just as multiplication and division are inverse operations, and addition and subtraction are inverse operations.

b^(log_b(x)) = x
log_b(b^x) = x [1]

Both of the above are derived from the following two equations that define a logarithm (note that in this explanation, the variables x and y may not refer to the same number in both equations):

b^y = x exactly when y = log_b(x)

Looking at the equation b^y = x, and substituting the value for y of log_b(x), we get the following equation: b^(log_b(x)) = x, which gets us the first equation. Another, rougher way to think about it is that b^(something) = x, and that that "something" is log_b(x).

Looking at the equation y = log_b(x), and substituting the value for x of b^y, we get the following equation: y = log_b(b^y), which gets us the second equation. Another, rougher way to think about it is that log_b(b^y) asks "b raised to what power gives b^y?", and that that something is y.
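As a quick numerical illustration of the two cancellation identities, here is a short Python sketch; the base 2 and the sample values are arbitrary choices, and math.isclose is used because floating-point evaluation is only approximate:

    import math

    # Check b^(log_b(x)) = x and log_b(b^x) = x for a few sample values.
    b = 2.0
    for x in [0.5, 1.0, 3.0, 10.0]:
        assert math.isclose(b ** math.log(x, b), x)   # b^(log_b(x)) = x, for x > 0
        assert math.isclose(math.log(b ** x, b), x)   # log_b(b^x) = x (holds for any real x)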

Using simpler operations


Logarithms can be used to make calculations easier. For example, two numbers can be multiplied just by using a logarithm table and adding. These are often known as logarithmic properties, which are documented in the table below.[2] The first three operations below assume that x = b^c and/or y = b^d, so that log_b(x) = c and log_b(y) = d. Derivations also use the log definitions x = b^(log_b(x)) and x = log_b(b^x).

log_b(xy) = log_b(x) + log_b(y) because b^c · b^d = b^(c + d)
log_b(x/y) = log_b(x) − log_b(y) because b^c / b^d = b^(c − d)
log_b(x^d) = d · log_b(x) because (b^c)^d = b^(cd)
log_b(x^(1/y)) = log_b(x) / y because the y-th root of x is x^(1/y)
x^(log_b(y)) = y^(log_b(x)) because x^(log_b(y)) = b^(log_b(x) · log_b(y)) = y^(log_b(x))
c · log_b(x) + d · log_b(y) = log_b(x^c · y^d) because log_b(x^c · y^d) = log_b(x^c) + log_b(y^d)

Where b, x, and y are positive real numbers with b ≠ 1, and c and d are real numbers.
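These laws can be spot-checked numerically. The following Python sketch is only an illustration; the base b and the values x, y, and d are arbitrary choices:

    import math

    # Spot-check of the product, quotient, power and root laws for one set of values.
    b, x, y, d = 3.0, 7.0, 2.5, 4.0

    assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))   # product
    assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))   # quotient
    assert math.isclose(math.log(x ** d, b), d * math.log(x, b))               # power
    assert math.isclose(math.log(x ** (1 / y), b), math.log(x, b) / y)         # root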

The laws result from canceling exponentials and the appropriate law of indices. Starting with the first law:

xy = b^(log_b(x)) · b^(log_b(y)) = b^(log_b(x) + log_b(y)), so that log_b(xy) = log_b(b^(log_b(x) + log_b(y))) = log_b(x) + log_b(y)

The law for powers exploits another of the laws of indices:

x^y = (b^(log_b(x)))^y = b^(y · log_b(x)), so that log_b(x^y) = y · log_b(x)

The law relating to quotients then follows:

log_b(x/y) = log_b(x · y^(−1)) = log_b(x) + log_b(y^(−1)) = log_b(x) − log_b(y)
log_b(1/y) = log_b(y^(−1)) = −log_b(y)

Similarly, the root law is derived by rewriting the root as a reciprocal power:

log_b(x^(1/y)) = (1/y) · log_b(x)

Derivations of product, quotient, and power rules


These are the three main logarithm laws/rules/principles,[3] from which the other properties listed above can be proven. Each of these logarithm properties corresponds to its respective exponent law, and their derivations/proofs will hinge on those facts. There are multiple ways to derive/prove each logarithm law – this is just one possible method.

Logarithm of a product


To state the logarithm of a product law formally:

log_b(xy) = log_b(x) + log_b(y)

Derivation:

Let b be a positive real number, where b ≠ 1, and let x and y be positive real numbers. We want to relate the expressions log_b(x) and log_b(y). This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log_b(x) and log_b(y) quite often, we will give them some variable names to make working with them easier: Let m = log_b(x), and let n = log_b(y).

Rewriting these as exponentials, we see that

b^m = x and b^n = y.

From here, we can relate b^m (i.e. x) and b^n (i.e. y) using exponent laws as

xy = (b^m)(b^n) = b^(m + n)

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(xy) = log_b(b^(m + n))

The right side may be simplified using one of the logarithm properties from before: we know that log_b(b^(m + n)) = m + n, giving

log_b(xy) = m + n

We now resubstitute the values for m and n into our equation, so our final expression is only in terms of x, y, and b.

log_b(xy) = log_b(x) + log_b(y)

This completes the derivation.

Logarithm of a quotient


To state the logarithm of a quotient law formally:

log_b(x/y) = log_b(x) − log_b(y)

Derivation:

Let b be a positive real number, where b ≠ 1, and let x and y be positive real numbers.

We want to relate the expressions log_b(x) and log_b(y). This can be done more easily by rewriting in terms of exponentials, whose properties we already know. Additionally, since we are going to refer to log_b(x) and log_b(y) quite often, we will give them some variable names to make working with them easier: Let m = log_b(x), and let n = log_b(y).

Rewriting these as exponentials, we see that:

b^m = x and b^n = y.

From here, we can relate b^m (i.e. x) and b^n (i.e. y) using exponent laws as

x/y = (b^m)/(b^n) = b^(m − n)

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(x/y) = log_b(b^(m − n))

The right side may be simplified using one of the logarithm properties from before: we know that log_b(b^(m − n)) = m − n, giving

log_b(x/y) = m − n

We now resubstitute the values for m and n into our equation, so our final expression is only in terms of x, y, and b.

log_b(x/y) = log_b(x) − log_b(y)

This completes the derivation.

Logarithm of a power


To state the logarithm of a power law formally:

log_b(x^r) = r · log_b(x)

Derivation:

Let b be a positive real number, where b ≠ 1, let x be a positive real number, and let r be a real number. For this derivation, we want to simplify the expression log_b(x^r). To do this, we begin with the simpler expression log_b(x). Since we will be using log_b(x) often, we will define it as a new variable: Let m = log_b(x).

To more easily manipulate the expression, we rewrite it as an exponential. By definition, m = log_b(x), so we have

b^m = x

Similar to the derivations above, we take advantage of another exponent law. In order to have x^r in our final expression, we raise both sides of the equality to the power of r:

(b^m)^r = x^r, that is, b^(mr) = x^r,

where we used the exponent law (b^m)^r = b^(mr).

To recover the logarithms, we apply log_b to both sides of the equality.

log_b(b^(mr)) = log_b(x^r)

The left side of the equality can be simplified using a logarithm law, which states that log_b(b^(mr)) = mr.

mr = log_b(x^r)

Substituting in the original value for m, rearranging, and simplifying gives

r · log_b(x) = log_b(x^r), that is, log_b(x^r) = r · log_b(x).

This completes the derivation.

Changing the base


To state the change of base logarithm formula formally: log_b(a) = log_c(a) / log_c(b)

This identity is useful to evaluate logarithms on calculators. For instance, most calculators have buttons for ln and for log10, but not all calculators have buttons for the logarithm of an arbitrary base.
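For instance, a system that only exposes the natural logarithm can still evaluate a logarithm to any base through this identity. A minimal Python sketch (the values x = 100 and b = 7 are arbitrary; math.log(x, b) is used only as a cross-check):

    import math

    x, b = 100.0, 7.0
    via_ln = math.log(x) / math.log(b)            # log_7(100) computed as ln(100) / ln(7)
    print(via_ln)                                 # ≈ 2.3666
    print(math.isclose(via_ln, math.log(x, b)))   # True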

Proof/derivation


Let b and c be positive real numbers, where b ≠ 1 and c ≠ 1. Let a be a positive real number. Here, b and c are the two bases we will be using for the logarithms. They cannot be 1, because the logarithm function is not well defined for the base of 1.[citation needed] The number a will be what the logarithm is evaluating, so it must be a positive number. Since we will be dealing with the term log_b(a) quite frequently, we define it as a new variable: Let m = log_b(a).

To more easily manipulate the expression, it can be rewritten as an exponential: b^m = a.

Applying log_c to both sides of the equality, log_c(b^m) = log_c(a).

Now, using the logarithm of a power property, which states that log_c(b^m) = m · log_c(b), we get m · log_c(b) = log_c(a).

Isolating m, we get the following: m = log_c(a) / log_c(b)

Resubstituting m = log_b(a) back into the equation, log_b(a) = log_c(a) / log_c(b)

This completes the proof that log_b(a) = log_c(a) / log_c(b).

This formula has several consequences:

log_b(a) = 1 / log_a(b)

log_(b^n)(a) = log_b(a) / n

b^(log_a(d)) = d^(log_a(b))

−log_b(a) = log_b(1/a) = log_(1/b)(a)

log_(b_1)(a_1) · log_(b_2)(a_2) · ⋯ · log_(b_n)(a_n) = log_(b_π(1))(a_1) · log_(b_π(2))(a_2) · ⋯ · log_(b_π(n))(a_n)

where π is any permutation of the subscripts 1, ..., n. For example, log_2(8) · log_3(9) = log_3(8) · log_2(9); a numerical check of this instance is given below.
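Both the reciprocal identity and the permutation invariance follow directly from the change of base formula, since each factor log_(b_i)(a_i) equals ln(a_i) / ln(b_i) and the product of the denominators does not depend on how the bases are paired with the arguments. A small Python check of the example above (the particular numbers are illustrative only):

    import math

    # Product of logarithms is unchanged when the bases are permuted among the arguments.
    lhs = math.log(8, 2) * math.log(9, 3)   # 3 * 2 = 6
    rhs = math.log(8, 3) * math.log(9, 2)   # ≈ 1.8928 * 3.1699 ≈ 6
    print(math.isclose(lhs, rhs))           # True

    # Reciprocal consequence: log_b(a) = 1 / log_a(b)
    print(math.isclose(math.log(8, 2), 1 / math.log(2, 8)))   # True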

Summation/subtraction


The following summation/subtraction rule is especially useful in probability theory when one is dealing with a sum of log-probabilities:

log_b(a + c) = log_b(a) + log_b(1 + c/a) because a + c = a · (1 + c/a)
log_b(a − c) = log_b(a) + log_b(1 − c/a) because a − c = a · (1 − c/a)

Note that the subtraction identity is not defined if a = c, since the logarithm of zero is not defined. Also note that, when programming, a and c may have to be switched on the right hand side of the equations if c is much larger than a, to avoid losing the "1 +" due to rounding errors. Many programming languages have a specific log1p(x) function that calculates log_e(1 + x) accurately when x is small, without the loss of precision incurred by forming 1 + x explicitly.
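In practice this is usually done in base e together with log1p. The following Python sketch of a small log-add helper for log-probabilities is illustrative only; the function name log_add and the sample values are not part of any standard API:

    import math

    def log_add(log_a: float, log_c: float) -> float:
        """Return log(a + c) given log(a) and log(c), staying in log space."""
        if log_c > log_a:                  # factor out the larger term first
            log_a, log_c = log_c, log_a
        return log_a + math.log1p(math.exp(log_c - log_a))

    log_p, log_q = math.log(1e-300), math.log(3e-301)
    print(log_add(log_p, log_q))           # ≈ log(1.3e-300), computed without leaving log space
    print(math.log(1e-300 + 3e-301))       # direct computation still works here and agrees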

More generally: log_b(a_0 + a_1 + ⋯ + a_N) = log_b(a_0) + log_b(1 + (a_1 + ⋯ + a_N)/a_0) = log_b(a_0) + log_b(1 + Σ_(i=1)^(N) b^(log_b(a_i) − log_b(a_0)))

Exponents


A useful identity involving exponents: x^(log(log(x)) / log(x)) = log(x), or more universally: x^(log(a) / log(x)) = a
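A quick Python check of both forms, using natural logarithms (any fixed base works); the values x = 5 and a = 42 are arbitrary, with x > 1 required for the first form:

    import math

    x, a = 5.0, 42.0
    print(x ** (math.log(a) / math.log(x)))             # ≈ 42.0, the general identity
    print(x ** (math.log(math.log(x)) / math.log(x)))   # the special case a = log(x)
    print(math.log(x))                                  # ≈ 1.609, for comparison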

Other/resulting identities

1 / (1/log_x(a) + 1/log_y(a)) = log_(xy)(a)
1 / (1/log_x(a) − 1/log_y(a)) = log_(x/y)(a)

Inequalities


Based on [4], [5], and [6]:

 
 

All are accurate around  , but not for large numbers.

Calculus identities

lim_(x → 0+) log_a(x) = −∞ if a > 1
lim_(x → 0+) log_a(x) = ∞ if 0 < a < 1
lim_(x → ∞) log_a(x) = ∞ if a > 1
lim_(x → ∞) log_a(x) = −∞ if 0 < a < 1
lim_(x → 0+) x^b · log_a(x) = 0 if b > 0
lim_(x → ∞) log_a(x) / x^b = 0 if b > 0

The last limit is often summarized as "logarithms grow more slowly than any power or root of x".

Derivatives of logarithmic functions

d/dx ln(x) = 1/x, for x > 0
d/dx ln|x| = 1/x, for x ≠ 0
d/dx log_a(x) = 1 / (x ln(a)), for x > 0, a > 0, a ≠ 1

Integral definition

ln(x) = ∫_1^x (1/t) dt

To modify the limits of integration to run from   to  , we change the order of integration, which changes the sign of the integral:

 

Therefore:

 
 
 
 
 
 

for   and   is a sample point in each interval.
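The integral definition also lends itself to direct numerical approximation. The Python sketch below uses a simple midpoint Riemann sum (the number of subintervals is an arbitrary choice) purely as an illustration of the definition, not as an efficient way to compute logarithms:

    import math

    def ln_by_integral(x: float, n: int = 100_000) -> float:
        # Midpoint Riemann sum for the integral of 1/t from 1 to x.
        dt = (x - 1) / n
        return sum(1.0 / (1 + (i + 0.5) * dt) for i in range(n)) * dt

    print(ln_by_integral(2.0), math.log(2.0))   # both ≈ 0.693147
    print(ln_by_integral(0.5), math.log(0.5))   # 0 < x < 1 also works: dt is then negative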

Series representation

The natural logarithm ln(1 + x) has a well-known Taylor series[7] expansion that converges for x in the open-closed interval (−1, 1]:

ln(1 + x) = Σ_(k=1)^(∞) (−1)^(k+1) x^k / k = x − x^2/2 + x^3/3 − x^4/4 + ⋯

Within this interval, for x = 1, the series is conditionally convergent, and for all other values, it is absolutely convergent. For x ≤ −1 or x > 1, the series does not converge to ln(1 + x). In these cases, different representations[8] or methods must be used to evaluate the logarithm.
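A short Python sketch of the partial sums (the number of terms is an arbitrary choice) shows the fast convergence inside the interval and the much slower, conditional convergence at x = 1:

    import math

    def ln1p_series(x: float, terms: int = 10_000) -> float:
        # Partial sum of x - x^2/2 + x^3/3 - ... for ln(1 + x).
        return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

    print(ln1p_series(0.5), math.log1p(0.5))   # rapid convergence for |x| < 1
    print(ln1p_series(1.0), math.log(2.0))     # x = 1: the alternating harmonic series, converging slowly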

Harmonic number difference


It is not uncommon in advanced mathematics, particularly in analytic number theory and asymptotic analysis, to encounter expressions involving differences or ratios of harmonic numbers at scaled indices.[9] The identity involving the limiting difference between harmonic numbers at scaled indices and its relationship to the logarithmic function provides an intriguing example of how discrete sequences can asymptotically relate to continuous functions. This identity is expressed as[10]

lim_(k → ∞) (H_(nk) − H_k) = ln(n)

which characterizes the behavior of harmonic numbers as they grow large. This approximation (which precisely equals ln(n) in the limit) reflects how summation over increasing segments of the harmonic series exhibits integral properties, giving insight into the interplay between discrete and continuous analysis. It also illustrates how understanding the behavior of sums and series at large scales can lead to insightful conclusions about their properties. Here H_k denotes the k-th harmonic number, defined as

H_k = Σ_(j=1)^(k) 1/j

The harmonic numbers are a fundamental sequence in number theory and analysis, known for their logarithmic growth. This result leverages the fact that the sum of the inverses of integers (i.e., harmonic numbers) can be closely approximated by the natural logarithm function, plus a constant, especially when extended over large intervals.[11][9][12] As k tends towards infinity, the difference between the harmonic numbers H_(nk) and H_k converges to a non-zero value. This persistent non-zero difference, ln(n), precludes the possibility of the harmonic series approaching a finite limit, thus providing a clear mathematical articulation of its divergence.[13][14] The technique of approximating sums by integrals (specifically using the integral test or by direct integral approximation) is fundamental in deriving such results. This specific identity can be a consequence of these approximations, considering:

H_k ≈ ln(k) + γ for large k, so that H_(nk) − H_k ≈ (ln(nk) + γ) − (ln(k) + γ) = ln(n)

Harmonic limit derivation


The limit explores the growth of the harmonic numbers when indices are multiplied by a scaling factor and then differenced. It specifically captures the sum from k + 1 to nk:

H_(nk) − H_k = Σ_(j=k+1)^(nk) 1/j

This can be estimated using the integral test for convergence, or more directly by comparing it to the integral of 1/x from k to nk:

∫_k^(nk) (1/x) dx = ln(nk) − ln(k) = ln(n)

As the window's lower bound begins at k + 1 and the upper bound extends to nk, both of which tend toward infinity as k → ∞, the summation window encompasses an increasingly vast portion of the smallest possible terms of the harmonic series (those with astronomically large denominators), creating a discrete sum that stretches towards infinity, which mirrors how continuous integrals accumulate value across an infinitesimally fine partitioning of the domain. In the limit, the interval is effectively from k to nk, where the onset k + 1 implies this minimally discrete region.
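This convergence can be observed numerically. In the Python sketch below, n = 3 and the values of k are arbitrary choices; the difference H_(nk) − H_k visibly approaches ln(3) ≈ 1.0986:

    import math

    def harmonic(n: int) -> float:
        # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n
        return sum(1.0 / j for j in range(1, n + 1))

    n = 3
    for k in (10, 100, 10_000):
        print(k, harmonic(n * k) - harmonic(k), math.log(n))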

Double series formula


The harmonic number difference formula for ln(m) is an extension[10] of the classic, alternating identity of ln(2):

ln(2) = Σ_(k=1)^(∞) (1/(2k − 1) − 1/(2k))

which can be generalized as the double series over the residues of m:

ln(m) = Σ_(k=0)^(∞) Σ_(r=1)^(m−1) (1/(mk + r) − 1/(mk + m))

where ⟨m⟩ is the principal ideal generated by m. Subtracting 1/(mk + m) from each term 1/(mk + r) (i.e., balancing each term with the modulus) reduces the magnitude of each term's contribution, ensuring convergence by controlling the series' tendency toward divergence as m increases. For example:

 

This method leverages the fine differences between closely related terms to stabilize the series. The sum over all residues r ensures that adjustments are uniformly applied across all possible offsets within each block of m terms. This uniform distribution of the "correction" across different intervals defined by m functions similarly to telescoping over a very large sequence. It helps to flatten out the discrepancies that might otherwise lead to divergent behavior in a straightforward harmonic series.

Deveci's Proof


A fundamental feature of the proof is the accumulation of the subtrahends   into a unit fraction, that is,   for  , thus   rather than  , where the extrema of   are   if   and   otherwise, with the minimum of   being implicit in the latter case due to the structural requirements of the proof. Since the cardinality of   depends on the selection of one of two possible minima, the integral  , as a set-theoretic procedure, is a function of the maximum   (which remains consistent across both interpretations) plus  , not the cardinality (which is ambiguous[15][16] due to varying definitions of the minimum). Whereas the harmonic number difference computes the integral in a global sliding window, the double series, in parallel, computes the sum in a local sliding window—a shifting  -tuple—over the harmonic series, advancing the window by   positions to select the next  -tuple, and offsetting each element of each tuple by   relative to the window's absolute position. The sum   corresponds to   which scales   without bound. The sum   corresponds to the prefix   trimmed from the series to establish the window's moving lower bound  , and   is the limit of the sliding window (the scaled, truncated[17] series):

 
 
 
 
 
 
 
 

Integrals of logarithmic functions

∫ ln(x) dx = x ln(x) − x + C
∫ log_a(x) dx = x log_a(x) − x / ln(a) + C = (x / ln(a)) (ln(x) − 1) + C

To remember higher integrals, it is convenient to define

x^[n] = x^n (ln(x) − H_n)

where H_n is the nth harmonic number:

x^[0] = ln(x)
x^[1] = x ln(x) − x
x^[2] = x^2 ln(x) − (3/2) x^2
x^[3] = x^3 ln(x) − (11/6) x^3

Then

d/dx x^[n] = n x^[n−1]
∫ x^[n] dx = x^[n+1] / (n + 1) + C

Approximating large numbers


The identities of logarithms can be used to approximate large numbers. Note that log_b(a) + log_b(c) = log_b(ac), where a, b, and c are positive constants with b ≠ 1. Suppose that one wants to approximate the 44th Mersenne prime, 2^32,582,657 − 1. To get the base-10 logarithm, we would multiply 32,582,657 by log_10(2), getting 9,808,357.09543 = 9,808,357 + 0.09543. We can then get 2^32,582,657 ≈ 10^9,808,357 × 10^0.09543 ≈ 1.25 × 10^9,808,357.
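The same estimate can be reproduced in Python; the exponent 32,582,657 is taken from the example above, and everything else follows from base-10 logarithms:

    import math

    exponent = 32_582_657
    log10_value = exponent * math.log10(2)        # ≈ 9,808,357.09543 (the "- 1" is negligible here)
    digits = math.floor(log10_value) + 1          # number of decimal digits: 9,808,358
    leading = 10 ** (log10_value - math.floor(log10_value))
    print(digits, leading)                        # leading factor ≈ 1.25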

Similarly, factorials can be approximated by summing the logarithms of the terms.
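For instance, log_10(1000!) can be obtained by summing log_10(k) for k from 1 to 1000. In the Python sketch below, the cross-check against math.lgamma (which returns ln(n!) as ln(Γ(n + 1))) is an extra verification, not part of the method described:

    import math

    log10_fact = sum(math.log10(k) for k in range(1, 1001))
    exp10 = math.floor(log10_fact)
    print(f"1000! ≈ {10 ** (log10_fact - exp10):.4f}e{exp10}")   # ≈ 4.0239e2567
    print(math.lgamma(1001) / math.log(10))                      # ≈ 2567.6046, agrees with log10_fact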

Complex logarithm identities


The complex logarithm is the complex number analogue of the logarithm function. No single valued function on the complex plane can satisfy the normal rules for logarithms. However, a multivalued function can be defined which satisfies most of the identities. It is usual to consider this as a function defined on a Riemann surface. A single valued version, called the principal value of the logarithm, can be defined which is discontinuous on the negative x axis, and is equal to the multivalued version on a single branch cut.

Definitions


In what follows, a capital first letter is used for the principal value of functions, and the lower case version is used for the multivalued function. The single valued version of definitions and identities is always given first, followed by a separate section for the multiple valued versions.

  • ln(r) is the standard natural logarithm of the real number r.
  • Arg(z) is the principal value of the arg function; its value is restricted to (−π, π]. It can be computed using Arg(x + iy) = atan2(y, x).
  • Log(z) is the principal value of the complex logarithm function and has imaginary part in the range (−π, π].
  • Log(z) = ln(|z|) + i Arg(z)
  • e^(Log(z)) = z
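These definitions can be exercised directly in Python with the cmath module; the sample point z = −1 + i is an arbitrary choice, and the helper Log below simply restates the definition rather than being a library function:

    import cmath, math

    def Log(z: complex) -> complex:
        # Principal value from the definition Log(z) = ln(|z|) + i*Arg(z), with Arg via atan2.
        return complex(math.log(abs(z)), math.atan2(z.imag, z.real))

    z = -1 + 1j
    print(Log(z))                            # ≈ 0.3466 + 2.3562j
    print(cmath.log(z))                      # cmath.log uses the same principal branch
    print(cmath.exp(Log(z) + 2j * math.pi))  # adding 2*pi*i gives another logarithm of z: exp still recovers z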

The multiple valued version of log(z) is a set, but it is easier to write it without braces and using it in formulas follows obvious rules.

  • log(z) is the set of complex numbers v which satisfy ev = z
  • arg(z) is the set of possible values of the arg function applied to z.

When k is any integer:

log(z) = ln(|z|) + i arg(z)
log(z) = Log(z) + 2πik
arg(z) = Arg(z) + 2πk

Constants


Principal value forms:

Log(1) = 0
Log(−1) = iπ

Multiple value forms, for any k an integer:

log(1) = 0 + 2πik
log(−1) = iπ + 2πik

Summation


Principal value forms:

Log(z1) + Log(z2) = Log(z1 z2) (mod 2πi)
Log(z1) + Log(z2) = Log(z1 z2) if −π < Arg(z1) + Arg(z2) ≤ π [18]
Log(z1) − Log(z2) = Log(z1 / z2) (mod 2πi)
Log(z1) − Log(z2) = Log(z1 / z2) if −π < Arg(z1) − Arg(z2) ≤ π [18]

Multiple value forms:

log(z1) + log(z2) = log(z1 z2)
log(z1) − log(z2) = log(z1 / z2)

Powers


A complex power of a complex number can have many possible values.
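For instance, the principal value of i^i is real, equal to e^(−π/2) ≈ 0.2079, while other branches of the multivalued power give different values, as the forms below make precise. A Python sketch using cmath (the sample power i^i and the branch offset k = 1 are arbitrary illustrative choices):

    import cmath

    z, w = 1j, 1j
    principal = cmath.exp(w * cmath.log(z))   # z^w via the principal logarithm
    print(principal)                          # ≈ (0.2079+0j)
    print(z ** w)                             # Python's ** uses the same principal branch
    k = 1
    print(cmath.exp(w * (cmath.log(z) + 2j * cmath.pi * k)))   # a different branch: ≈ 0.000388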

Principal value form:

z1^z2 = e^(z2 Log(z1))
 

Multiple value forms:

z1^z2 = e^(z2 log(z1))

Where k1, k2 are any integers:

 
 

See also


References

  1. ^ Weisstein, Eric W. "Logarithm". mathworld.wolfram.com. Retrieved 2020-08-29.
  2. ^ "4.3 - Properties of Logarithms". people.richland.edu. Retrieved 2020-08-29.
  3. ^ "Properties and Laws of Logarithms". courseware.cemc.uwaterloo.ca/8. Retrieved 2022-04-23.
  4. ^ "Archived copy" (PDF). Archived from the original (PDF) on 2016-10-20. Retrieved 2016-12-20.{{cite web}}: CS1 maint: archived copy as title (link)
  5. ^ http://www.lkozma.net/inequalities_cheat_sheet/ineq.pdf [bare URL PDF]
  6. ^ http://downloads.hindawi.com/archive/2013/412958.pdf [bare URL PDF]
  7. ^ Weisstein, Eric W. "Mercator Series". MathWorld--A Wolfram Web Resource. Retrieved 2024-04-24.
  8. ^ To extend the utility of the Mercator series beyond its conventional bounds one can calculate   for   and   and then negate the result,  , to derive  . For example, setting   yields  .
  9. ^ a b Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics. Cambridge University Press. p. 389. ISBN 978-0521898065. See page 117, and VI.8 definition of shifted harmonic numbers on page 389
  10. ^ a b Deveci, Sinan (2022). "On a Double Series Representation of the Natural Logarithm, the Asymptotic Behavior of Hölder Means, and an Elementary Estimate for the Prime Counting Function". arXiv:2211.10751 [math.NT]. See Theorem 5.2. on pages 22 - 23
  11. ^ Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley. p. 429. ISBN 0-201-55802-5.
  12. ^ "Harmonic Number". Wolfram MathWorld. Retrieved 2024-04-24. See formula 13.
  13. ^ Kifowit, Steven J. (2019). More Proofs of Divergence of the Harmonic Series (PDF) (Report). Prairie State College. Retrieved 2024-04-24. See Proofs 23 and 24 for details on the relationship between harmonic numbers and logarithmic functions.
  14. ^ Bell, Jordan; Blåsjö, Viktor (2018). "Pietro Mengoli's 1650 Proof That the Harmonic Series Diverges". Mathematics Magazine. 91 (5): 341–347. doi:10.1080/0025570X.2018.1506656. hdl:1874/407528. JSTOR 48665556. Retrieved 2024-04-24.
  15. ^ Harremoës, Peter (2011). "Is Zero a Natural Number?". arXiv:1102.0418 [math.HO]. A synopsis on the nature of 0 which frames the choice of minimum as the dichotomy between ordinals and cardinals.
  16. ^ Barton, N. (2020). "Absence perception and the philosophy of zero". Synthese. 197 (9): 3823–3850. doi:10.1007/s11229-019-02220-x. PMC 7437648. PMID 32848285. See section 3.1
  17. ^ The   shift is characteristic of the right Riemann sum employed to prevent the integral from degenerating into the harmonic series, thereby averting divergence. Here,   functions analogously, serving to regulate the series. The successor operation   signals the implicit inclusion of the modulus   (the region omitted from  ). The importance of this, from an axiomatic perspective, becomes evident when the residues of   are formulated as  , where   is bootstrapped by   to produce the residues of modulus  . Consequently,   represents a limiting value in this context.
  18. ^ a b Abramowitz, Milton (1965). Handbook of mathematical functions, with formulas, graphs, and mathematical tables. Irene A. Stegun. New York: Dover Publications. ISBN 0-486-61272-4. OCLC 429082.