In discrete calculus the indefinite sum operator (also known as the antidifference operator), denoted by $\sum_x$ or $\Delta^{-1}$,[1][2] is the linear operator inverse to the forward difference operator $\Delta$. It relates to the forward difference operator as the indefinite integral relates to the derivative. Thus

$$\Delta \sum_x f(x) = f(x).$$

More explicitly, if $\sum_x f(x) = F(x)$, then

$$F(x+1) - F(x) = f(x).$$

If $F(x)$ is a solution of this functional equation for a given $f(x)$, then so is $F(x) + C(x)$ for any periodic function $C(x)$ with period 1. Therefore, each indefinite sum actually represents a family of functions. However, by Carlson's theorem, the solution equal to its Newton series expansion is unique up to an additive constant $C$. This unique solution can be represented by the formal power series form of the antidifference operator: $\Delta^{-1} = \frac{1}{e^D - 1}$, where $D$ is the differentiation operator.
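
As an illustration, the defining relation can be checked numerically. The following minimal Python sketch (the example functions are chosen here purely for illustration) applies the forward difference operator to $F(x) = \tfrac{x(x-1)}{2}$, an indefinite sum of $f(x) = x$, and to $F$ plus a period-1 function:

```python
import math

def forward_difference(F):
    """Return the function x -> F(x+1) - F(x)."""
    return lambda x: F(x + 1) - F(x)

f = lambda x: x
F = lambda x: x * (x - 1) / 2                    # a candidate indefinite sum of f
G = lambda x: F(x) + math.sin(2 * math.pi * x)   # F plus a function of period 1

for x in [0.0, 1.5, 3.0, 7.25]:
    assert abs(forward_difference(F)(x) - f(x)) < 1e-9
    assert abs(forward_difference(G)(x) - f(x)) < 1e-9
print("Both F and F + sin(2*pi*x) are indefinite sums of f(x) = x")
```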

Fundamental theorem of discrete calculus


Indefinite sums can be used to calculate definite sums with the formula:[3]

$$\sum_{k=a}^{b} f(k) = \Delta^{-1} f(b+1) - \Delta^{-1} f(a)$$
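
For example, with $f(k) = k$ and the antidifference $F(k) = \tfrac{k(k-1)}{2}$, the formula gives $\sum_{k=a}^{b} k = F(b+1) - F(a)$. A short Python check of this instance (illustrative only):

```python
def F(k):                      # an antidifference of f(k) = k
    return k * (k - 1) // 2

a, b = 3, 10
direct = sum(range(a, b + 1))          # 3 + 4 + ... + 10
via_antidifference = F(b + 1) - F(a)   # fundamental theorem of discrete calculus
assert direct == via_antidifference == 52
print(direct, via_antidifference)
```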

Definitions


Laplace summation formula


The Laplace summation formula allows the indefinite sum to be written as the indefinite integral plus correction terms obtained from iterating the difference operator, although it was originally developed for the reverse process of writing an integral as an indefinite sum plus correction terms. As usual with indefinite sums and indefinite integrals, it is valid up to an arbitrary choice of the constant of integration. Using operator algebra avoids cluttering the formula with repeated copies of the function to be operated on:[4]

$$\sum_x = \frac{1}{\Delta} = \int dx - \frac{1}{2} + \frac{\Delta}{12} - \frac{\Delta^2}{24} + \frac{19\,\Delta^3}{720} - \frac{3\,\Delta^4}{160} + \cdots$$

In this formula, for instance, the term $-\tfrac{1}{2}$ represents an operator that divides the given function by two (and changes its sign). The coefficients $\tfrac{1}{2}$, $\tfrac{1}{12}$, etc., appearing in this formula are, up to sign, the Gregory coefficients, also called Laplace numbers. Up to sign, the coefficient of the term $\Delta^{n-1}$ is[4]

$$\frac{c_n}{n!} = \int_0^1 \binom{x}{n}\,dx,$$

where the numerator $c_n$ of the left-hand side is called a Cauchy number of the first kind, although this name sometimes applies to the Gregory coefficients themselves.[4]
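
The Gregory coefficients can be generated directly from this integral. The following Python sketch (an illustrative computation using exact rational arithmetic) expands $\binom{x}{n}$ as a polynomial and integrates it over $[0,1]$, reproducing $\tfrac12$, $-\tfrac1{12}$, $\tfrac1{24}$, $-\tfrac{19}{720}$, $\tfrac3{160}$:

```python
from fractions import Fraction
from math import factorial

def gregory_coefficient(n):
    """c_n / n! = integral_0^1 C(x, n) dx, computed with exact rationals."""
    # coefficients of x(x-1)...(x-(n-1)), lowest power first
    poly = [Fraction(1)]
    for k in range(n):
        shifted = [Fraction(0)] + poly                    # poly * x
        scaled = [-k * c for c in poly] + [Fraction(0)]   # poly * (-k)
        poly = [a + b for a, b in zip(shifted, scaled)]
    # integrate over [0, 1]: sum coeff_i / (i + 1), then divide by n!
    integral = sum(c / (i + 1) for i, c in enumerate(poly))
    return integral / factorial(n)

print([gregory_coefficient(n) for n in range(1, 6)])
# expected: [1/2, -1/12, 1/24, -19/720, 3/160]
```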

Newton's formula

$$\sum_x f(x) = \sum_{k=1}^{\infty} \binom{x}{k}\,\Delta^{k-1} f(0) + C = \sum_{k=1}^{\infty} \frac{(x)_k}{k!}\,\Delta^{k-1} f(0) + C,$$

where $(x)_k = x(x-1)\cdots(x-k+1)$ is the falling factorial.
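
As a concrete illustration, applying Newton's formula to $f(x) = x^2$ uses the forward differences $f(0) = 0$, $\Delta f(0) = 1$, $\Delta^2 f(0) = 2$, giving $\sum_x x^2 = \binom{x}{2} + 2\binom{x}{3} + C = \tfrac{x(x-1)(2x-1)}{6} + C$. A short Python sketch of this computation (illustrative only):

```python
from math import comb

def forward_differences_at_zero(f, order):
    """Return [f(0), (Δf)(0), (Δ^2 f)(0), ...] up to the given order."""
    values = [f(k) for k in range(order + 1)]
    diffs = []
    while values:
        diffs.append(values[0])
        values = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    return diffs

f = lambda x: x ** 2
d = forward_differences_at_zero(f, 4)          # [0, 1, 2, 0, 0]

def indefinite_sum(x):
    # Newton's formula: sum_{k>=1} C(x, k) * Δ^{k-1} f(0)
    return sum(comb(x, k) * d[k - 1] for k in range(1, len(d) + 1))

for x in range(8):
    assert indefinite_sum(x + 1) - indefinite_sum(x) == f(x)
    assert indefinite_sum(x) == x * (x - 1) * (2 * x - 1) // 6
print("Newton's formula reproduces sum_x x^2 = x(x-1)(2x-1)/6")
```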

Faulhaber's formula

$$\sum_x f(x) = \sum_{n=1}^{\infty} \frac{f^{(n-1)}(0)}{n!}\, B_n(x) + C,$$

where $B_n(x)$ is the Bernoulli polynomial, provided that the right-hand side of the equation converges.
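
For instance, taking $f(x) = x^3$, only the $n = 4$ term is non-zero ($f^{(3)}(0) = 6$), so $\sum_x x^3 = \tfrac{B_4(x)}{4} + C$ with $B_4(x) = x^4 - 2x^3 + x^2 - \tfrac{1}{30}$. A brief exact-arithmetic check in Python (illustrative only):

```python
from fractions import Fraction

def B4(x):
    """Bernoulli polynomial B_4(x) = x^4 - 2x^3 + x^2 - 1/30."""
    x = Fraction(x)
    return x**4 - 2 * x**3 + x**2 - Fraction(1, 30)

antidifference = lambda x: B4(x) / 4           # candidate for sum_x x^3

for x in [Fraction(0), Fraction(1), Fraction(5, 2), Fraction(7)]:
    assert antidifference(x + 1) - antidifference(x) == x**3
print("B_4(x)/4 is an indefinite sum of x^3")
```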

Mueller's formula


If $\lim_{x \to +\infty} f(x) = 0$, then[5]

$$\sum_x f(x) = \sum_{n=0}^{\infty} \bigl(f(n) - f(n+x)\bigr) + C.$$
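
As an illustrative numerical check, take $f(x) = 2^{-x}$, which tends to zero; a truncation of the series recovers an antidifference of $f$ that agrees with the closed form $-2^{1-x} + C$:

```python
def f(x):
    return 2.0 ** (-x)

def mueller_sum(x, terms=200):
    """Truncated version of sum_{n=0}^inf (f(n) - f(n + x))."""
    return sum(f(n) - f(n + x) for n in range(terms))

for x in [0.5, 1.0, 2.25, 4.0]:
    lhs = mueller_sum(x + 1) - mueller_sum(x)      # forward difference
    assert abs(lhs - f(x)) < 1e-12
    # agrees with the closed form -2**(1 - x) up to an additive constant
    assert abs((mueller_sum(x) + 2.0 ** (1 - x)) - 2.0) < 1e-12
print("Mueller's formula reproduces an indefinite sum of 2**(-x)")
```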

Euler–Maclaurin formula

$$\sum_x f(x) = \int f(x)\,dx - \frac{f(x)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!}\, f^{(2k-1)}(x) + C$$
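
For a polynomial the series terminates. With $f(x) = x^2$ only the $k = 1$ correction survives ($B_2 = \tfrac16$), giving $\tfrac{x^3}{3} - \tfrac{x^2}{2} + \tfrac{x}{6}$, which equals $\tfrac{x(x-1)(2x-1)}{6}$ up to a constant. A brief exact-arithmetic check in Python (illustrative only):

```python
from fractions import Fraction

def G(x):
    """First three Euler-Maclaurin terms for f(x) = x**2: x^3/3 - x^2/2 + x/6."""
    x = Fraction(x)
    return x**3 / 3 - x**2 / 2 + x / 6

for x in [Fraction(0), Fraction(3, 2), Fraction(4)]:
    assert G(x + 1) - G(x) == x**2
print("x^3/3 - x^2/2 + x/6 is an indefinite sum of x^2")
```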

Choice of the constant term


Often the constant $C$ in the indefinite sum is fixed from the following condition.

Let

$$F(x) = \sum_x f(x) + C$$

Then the constant $C$ is fixed from the condition

$$\int_0^1 F(x)\,dx = 0$$

or

$$\int_1^2 F(x)\,dx = 0$$

Alternatively, Ramanujan's sum can be used:

$$\sum_{x \ge 1}^{\mathfrak{R}} f(x) = -f(0) - F(0)$$

or at 1

$$\sum_{x \ge 1}^{\mathfrak{R}} f(x) = -F(1)$$

respectively.[6][7]

Summation by parts


Indefinite summation by parts:

$$\sum_x f(x)\,\Delta g(x) = f(x)\,g(x) - \sum_x \bigl(g(x) + \Delta g(x)\bigr)\,\Delta f(x)$$

$$\sum_x f(x)\,\Delta g(x) + \sum_x g(x)\,\Delta f(x) = f(x)\,g(x) - \sum_x \Delta f(x)\,\Delta g(x)$$

Definite summation by parts:

$$\sum_{i=a}^{b} f(i)\,\Delta g(i) = f(b+1)\,g(b+1) - f(a)\,g(a) - \sum_{i=a}^{b} g(i+1)\,\Delta f(i)$$
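
The definite rule can be verified numerically on arbitrary integer sequences, as in the following illustrative Python sketch:

```python
import random

random.seed(0)
a, b = 2, 9
f = {i: random.randint(-5, 5) for i in range(a, b + 2)}
g = {i: random.randint(-5, 5) for i in range(a, b + 2)}

def delta(h, i):
    """Forward difference of a sequence stored in a dict."""
    return h[i + 1] - h[i]

lhs = sum(f[i] * delta(g, i) for i in range(a, b + 1))
rhs = f[b + 1] * g[b + 1] - f[a] * g[a] - sum(g[i + 1] * delta(f, i) for i in range(a, b + 1))
assert lhs == rhs
print("definite summation by parts holds:", lhs, "==", rhs)
```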

Period rules


If $T$ is a period of the function $f(x)$, then

$$\sum_x f(Tx) = x\, f(Tx) + C$$

If $T$ is an antiperiod of the function $f(x)$, that is $f(x+T) = -f(x)$, then

$$\sum_x f(Tx) = -\frac{1}{2} f(Tx) + C$$
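
Both rules can be checked numerically, for instance with $f(x) = \cos(2\pi x)$ (period $T = 1$) and $g(x) = \cos(\pi x)$ (antiperiod $T = 1$), as in this illustrative Python sketch:

```python
import math

f = lambda x: math.cos(2 * math.pi * x)    # period 1
g = lambda x: math.cos(math.pi * x)        # antiperiod 1: g(x + 1) = -g(x)

F = lambda x: x * f(x)                     # claimed indefinite sum of f(Tx), T = 1
G = lambda x: -0.5 * g(x)                  # claimed indefinite sum of g(Tx), T = 1

for x in [0.0, 0.3, 1.7, 4.25]:
    assert abs((F(x + 1) - F(x)) - f(x)) < 1e-9
    assert abs((G(x + 1) - G(x)) - g(x)) < 1e-9
print("period and antiperiod rules verified at sample points")
```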

Alternative usage


Some authors use the phrase "indefinite sum" to describe a sum in which the numerical value of the upper limit is not given:

$$\sum_{k=0}^{n} f(k).$$

In this case a closed form expression $F(k)$ for the sum is a solution of

$$F(k) - F(k-1) = f(k),$$

which is called the telescoping equation.[8] It is the inverse of the backward difference operator $\nabla$. It is related to the forward antidifference operator through the fundamental theorem of discrete calculus described earlier.
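
For example, $F(n) = \tfrac{n(n+1)}{2}$ solves the telescoping equation for $f(k) = k$ and therefore gives a closed form for the sum with a symbolic upper limit. A short illustrative Python check:

```python
def F(n):                      # closed form candidate for sum_{k=0}^{n} k
    return n * (n + 1) // 2

# telescoping equation: F(k) - F(k-1) = f(k) with f(k) = k
assert all(F(k) - F(k - 1) == k for k in range(0, 20))

# hence F(n) reproduces the definite sum with symbolic upper limit n
assert all(F(n) == sum(range(0, n + 1)) for n in range(0, 20))
print("F(n) = n(n+1)/2 solves the telescoping equation for f(k) = k")
```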

List of indefinite sums


This is a list of indefinite sums of various functions. Not every function has an indefinite sum that can be expressed in terms of elementary functions.

Antidifferences of rational functions

$$\sum_x a = ax + C$$

From which $a$ can be factored out, leaving 1, with the alternative form $\sum_x 1 = x + C$. From that, we have:

$$\sum_x x = \frac{x(x-1)}{2} + C = \binom{x}{2} + C$$

For the sum below, remember $x^2 = x(x-1) + x$:

$$\sum_x x^2 = \frac{x(x-1)(2x-1)}{6} + C$$
For positive integer exponents, Faulhaber's formula can be used. For negative integer exponents,

$$\sum_x \frac{1}{x^n} = \frac{(-1)^{n-1}\,\psi^{(n-1)}(x)}{(n-1)!} + C, \qquad n \in \mathbb{Z}^{+},$$

where $\psi^{(n)}(x)$ is the polygamma function, can be used.
More generally,

$$\sum_x \frac{1}{x^a} = \begin{cases} \psi(x) + C_1, & a = 1 \\[4pt] -\zeta(a, x) + C_a, & a \neq 1, \end{cases}$$

where $\zeta(s,x)$ is the Hurwitz zeta function and $\psi(x)$ is the digamma function. $C_a$ and $C_1$ are constants which would normally be set to $\zeta(a)$ (where $\zeta(s)$ is the Riemann zeta function) and the Euler–Mascheroni constant $\gamma$, respectively. By replacing the variable $x$ with $x+1$, this becomes the generalized harmonic number $H^{(a)}_x$. For the relation between the Hurwitz zeta and polygamma functions, refer to the balanced polygamma function and Hurwitz zeta function § Special cases and generalizations.
From this, using the identity $\psi^{(n)}(x) = (-1)^{n+1}\, n!\, \zeta(n+1, x)$ for positive integer $n$, another form can be obtained:

$$\sum_x \frac{1}{x^a} = \frac{(-1)^{a-1}\,\psi^{(a-1)}(x)}{\Gamma(a)} + C_a, \qquad a \in \mathbb{Z}^{+}.$$
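
These closed forms can be spot-checked numerically. The following illustrative Python sketch assumes the third-party mpmath library, whose `zeta(s, a)` implements the Hurwitz zeta function and `psi(m, x)` the polygamma function; it verifies that the forward differences of the stated antidifferences return $x^2$, $1/x^3$, and $1/x^{2.5}$:

```python
from mpmath import mp, zeta, psi, mpf

mp.dps = 30
x = mpf("3.7")

# polynomial case: sum_x x^2 = x(x-1)(2x-1)/6 + C
P = lambda t: t * (t - 1) * (2 * t - 1) / 6
assert abs((P(x + 1) - P(x)) - x**2) < mpf("1e-25")

# negative integer exponent, n = 3: sum_x 1/x^3 = psi''(x)/2 + C
F3 = lambda t: psi(2, t) / 2
assert abs((F3(x + 1) - F3(x)) - 1 / x**3) < mpf("1e-25")

# general exponent, a = 2.5: sum_x 1/x^a = -zeta(a, x) + C
a = mpf("2.5")
Fa = lambda t: -zeta(a, t)
assert abs((Fa(x + 1) - Fa(x)) - 1 / x**a) < mpf("1e-25")
print("antidifference formulas for 1/x^n and 1/x^a verified at x = 3.7")
```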

Antidifferences of exponential functions

$$\sum_x a^x = \frac{a^x}{a-1} + C, \qquad a \neq 1$$

Particularly,

$$\sum_x 2^x = 2^x + C$$
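
A quick numerical check of both identities (illustrative only):

```python
a = 3.0
F = lambda x: a ** x / (a - 1)      # claimed indefinite sum of a**x

for x in [0.0, 1.5, 2.0, 5.25]:
    assert abs((F(x + 1) - F(x)) - a ** x) < 1e-9
# the particular case a = 2 gives F(x) = 2**x, its own indefinite sum
assert all(2 ** (x + 1) - 2 ** x == 2 ** x for x in range(10))
print("sum_x a**x = a**x/(a-1) + C verified for a = 3")
```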

Antidifferences of logarithmic functions

$$\sum_x \log_b x = \log_b \Gamma(x) + C$$
$$\sum_x \log_b (ax) = \log_b \bigl(a^{x}\,\Gamma(x)\bigr) + C$$
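
The first identity follows from $\Gamma(x+1) = x\,\Gamma(x)$, so that $\log_b \Gamma(x+1) - \log_b \Gamma(x) = \log_b x$. A numerical check using the standard library's log-gamma function (illustrative only):

```python
import math

def F(x, b=10.0):
    return math.lgamma(x) / math.log(b)     # log_b Gamma(x)

for x in [0.5, 1.0, 2.75, 9.0]:
    lhs = F(x + 1) - F(x)
    rhs = math.log10(x)
    assert abs(lhs - rhs) < 1e-12
print("sum_x log10(x) = log10(Gamma(x)) + C verified at sample points")
```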

Antidifferences of hyperbolic functions

$$\sum_x \sinh(ax) = \frac{1}{2 \sinh\frac{a}{2}} \cosh\!\left(ax - \frac{a}{2}\right) + C, \qquad a \neq 0$$

$$\sum_x \cosh(ax) = \frac{1}{2 \sinh\frac{a}{2}} \sinh\!\left(ax - \frac{a}{2}\right) + C, \qquad a \neq 0$$
$$\sum_x \coth(ax) = x + \frac{\psi_{e^{-2a}}(x)}{a} + C, \qquad a > 0,$$

where $\psi_q(x)$ is the q-digamma function.

Antidifferences of trigonometric functions

$$\sum_x \sin(ax) = -\frac{1}{2 \sin\frac{a}{2}} \cos\!\left(ax - \frac{a}{2}\right) + C, \qquad a \neq 2\pi k$$

$$\sum_x \cos(ax) = \frac{1}{2 \sin\frac{a}{2}} \sin\!\left(ax - \frac{a}{2}\right) + C, \qquad a \neq 2\pi k$$

$$\sum_x \sin^2(ax) = \frac{x}{2} - \frac{\sin(2ax - a)}{4 \sin a} + C, \qquad a \neq \pi k$$

$$\sum_x \cos^2(ax) = \frac{x}{2} + \frac{\sin(2ax - a)}{4 \sin a} + C, \qquad a \neq \pi k$$
$$\sum_x \cot(ax) = ix + \frac{\psi_{e^{-2ia}}(x)}{a} + C,$$

where $\psi_q(x)$ is the q-digamma function.
 
 
 
where $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$ is the normalized sinc function.
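
The sine and cosine rules follow from the product-to-sum identities and can be checked numerically, as in this illustrative Python sketch covering the first two entries above:

```python
import math

a = 0.7                                        # any a that is not a multiple of 2*pi
S = lambda x: -math.cos(a * x - a / 2) / (2 * math.sin(a / 2))   # sum_x sin(ax)
C = lambda x: math.sin(a * x - a / 2) / (2 * math.sin(a / 2))    # sum_x cos(ax)

for x in [0.0, 0.4, 2.0, 6.3]:
    assert abs((S(x + 1) - S(x)) - math.sin(a * x)) < 1e-12
    assert abs((C(x + 1) - C(x)) - math.cos(a * x)) < 1e-12
print("antidifferences of sin(ax) and cos(ax) verified for a = 0.7")
```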

Antidifferences of inverse hyperbolic functions

 

Antidifferences of inverse trigonometric functions

 

Antidifferences of special functions

 
 
where $\Gamma(s,x)$ is the incomplete gamma function.
$$\sum_x (x)_n = \frac{(x)_{n+1}}{n+1} + C$$

where $(x)_n$ is the falling factorial.
 
(see super-exponential function)


References

  1. Man, Yiu-Kwong (1993), "On computing closed forms for indefinite summations", Journal of Symbolic Computation, 16 (4): 355–376, doi:10.1006/jsco.1993.1053, MR 1263873
  2. Goldberg, Samuel (1958), Introduction to Difference Equations, with Illustrative Examples from Economics, Psychology, and Sociology, Wiley, New York, and Chapman & Hall, London, p. 41, ISBN 978-0-486-65084-5, MR 0094249: "If $F(x)$ is a function whose first difference is the function $f(x)$, then $F(x)$ is called an indefinite sum of $f(x)$ and denoted by $\Delta^{-1} f(x)$"; reprinted by Dover Books, 1986
  3. Rosen, Kenneth H.; Michaels, John G. (1999), Handbook of Discrete and Combinatorial Mathematics, CRC Press, ISBN 0-8493-0149-1
  4. Merlini, Donatella; Sprugnoli, Renzo; Verri, M. Cecilia (2006), "The Cauchy numbers", Discrete Mathematics, 306 (16): 1906–1920, doi:10.1016/j.disc.2006.03.065, MR 2251571
  5. Müller, Markus. How to Add a Non-Integer Number of Terms, and How to Produce Unusual Infinite Summations. Archived 2011-06-17 at the Wayback Machine. (Note that he uses a slightly different definition of fractional sum in his work, i.e. inverse to the backward difference, hence 1 as the lower limit in his formula.)
  6. Berndt, Bruce C., Ramanujan's Notebooks, "Ramanujan's Theory of Divergent Series", Chapter 6, Springer-Verlag, (1939), pp. 133–149. Archived 2006-10-12 at the Wayback Machine.
  7. Delabaere, Éric, "Ramanujan's Summation", Algorithms Seminar 2001–2002, F. Chyzak (ed.), INRIA, (2003), pp. 83–88.
  8. Kauers, Manuel, Algorithms for Nonlinear Higher Order Difference Equations.
