Differentiation under the integral sign
Suppose $f:\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R}$ satisfies the following conditions:
(1) $f(t,x)$ is a Lebesgue-integrable function of $x$ for each $t\in\mathbb{R}$.
(2) For almost all $x\in\mathbb{R}^{d}$, the derivative $f_{t}$ exists for all $t\in\mathbb{R}$.
(3) There is an integrable function $\theta:\mathbb{R}^{d}\to\mathbb{R}$ such that $|f_{t}(t,x)|\leq\theta(x)$ for all $t\in\mathbb{R}$.
Then for all $t\in\mathbb{R}$,

$$\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^{d}}f(t,x)\,\mathrm{d}x=\int_{\mathbb{R}^{d}}f_{t}(t,x)\,\mathrm{d}x.$$
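As a quick numerical sanity check (a sketch assuming NumPy is available; the test function is my own choice), take $f(t,x)=e^{-x^{2}}\cos(tx)$. Then $|f_{t}(t,x)|\leq|x|e^{-x^{2}}=\theta(x)$, which is integrable, so the theorem applies; moreover $\int f(t,x)\,dx=\sqrt{\pi}\,e^{-t^{2}/4}$ in closed form, which gives an independent check:

```python
import numpy as np

# f(t, x) = exp(-x^2) * cos(t*x); |f_t(t, x)| <= |x| * exp(-x^2) = theta(x),
# which is integrable, so the hypotheses of the theorem hold.
x = np.linspace(-10.0, 10.0, 200_001)   # truncate R; the Gaussian tails are negligible
dx = x[1] - x[0]

def trap(y):
    """Composite trapezoidal rule on the fixed grid x."""
    return (y[1:] + y[:-1]).sum() * dx / 2

def F(t):
    """F(t) = integral over R of f(t, x) dx."""
    return trap(np.exp(-x**2) * np.cos(t * x))

t0, eps = 0.7, 1e-5
lhs = (F(t0 + eps) - F(t0 - eps)) / (2 * eps)            # d/dt of the integral
rhs = trap(-x * np.exp(-x**2) * np.sin(t0 * x))          # integral of f_t(t0, x)
exact = -np.sqrt(np.pi) * (t0 / 2) * np.exp(-t0**2 / 4)  # from F(t) = sqrt(pi)*exp(-t^2/4)
```

Both the finite-difference derivative of the integral and the integral of the derivative agree with the closed form to many digits.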
Integration by parts
In calculus, and more generally in mathematical analysis, integration by parts is a theorem that relates the integral of a product of functions to the integral of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an ideally simpler antiderivative. The rule can be derived in one line by simply integrating the product rule of differentiation.
The theorem states that if $u$ and $v$ are continuously differentiable functions, then

$$\int u(x)v'(x)\,dx=u(x)v(x)-\int u'(x)v(x)\,dx.$$
It can be stated more compactly using the differentials $du=u'(x)\,dx$ and $dv=v'(x)\,dx$ as

$$\int u\,dv=uv-\int v\,du.$$
More general formulations of integration by parts exist for the Riemann–Stieltjes integral and the Lebesgue–Stieltjes integral. A discrete analogue holds for sequences, called summation by parts.
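A minimal numeric illustration of the formula (assuming NumPy; the test functions $u=x^{2}$, $v=\sin x$ on $[0,1]$ are my own choice), with both sides approximated by the trapezoidal rule:

```python
import numpy as np

# u(x) = x^2, v(x) = sin(x) on [0, 1]; check int u v' = [u v] - int u' v.
x = np.linspace(0.0, 1.0, 100_001)
dx = x[1] - x[0]

def trap(y):
    """Composite trapezoidal rule on the fixed grid x."""
    return (y[1:] + y[:-1]).sum() * dx / 2

u,  v  = x**2,  np.sin(x)
du, dv = 2 * x, np.cos(x)   # u'(x) and v'(x)

lhs = trap(u * dv)                                  # int u v' dx
rhs = u[-1] * v[-1] - u[0] * v[0] - trap(du * v)    # [u v] from 0 to 1, minus int u' v dx
```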
Integration by parts in higher dimensions
The formula for integration by parts can be extended to functions of several variables. Instead of an interval, one integrates over an $n$-dimensional set, and the derivative is replaced by a partial derivative.
More specifically, suppose $\Omega$ is an open bounded subset of $\mathbb{R}^{n}$ with a piecewise smooth boundary $\Gamma$. If $u$ and $v$ are two continuously differentiable functions on the closure of $\Omega$, then the formula for integration by parts is
$$\int_{\Omega}\frac{\partial u}{\partial x_{i}}\,v\,d\Omega=\int_{\Gamma}u\,v\,\nu_{i}\,d\Gamma-\int_{\Omega}u\,\frac{\partial v}{\partial x_{i}}\,d\Omega$$
where $\hat{\nu}$ is the outward unit surface normal to $\Gamma$, $\nu_{i}$ is its $i$-th component, and $i$ ranges from $1$ to $n$.
A more general form is obtained by replacing $v$ in the above formula with $v_{i}$ and summing over $i$, which gives the vector formula
$$\int_{\Omega}\nabla u\cdot\mathbf{v}\,d\Omega=\int_{\Gamma}(u\,\mathbf{v})\cdot\hat{\nu}\,d\Gamma-\int_{\Omega}u\,\nabla\cdot\mathbf{v}\,d\Omega$$
where $\mathbf{v}$ is a vector-valued function with components $v_{1},\dots,v_{n}$. Setting $u$ equal to the constant function $1$ in the above formula gives the divergence theorem
$$\int_{\Gamma}\mathbf{v}\cdot\hat{\nu}\,d\Gamma=\int_{\Omega}\nabla\cdot\mathbf{v}\,d\Omega.$$
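The divergence theorem is easy to check numerically on a simple domain (a sketch assuming NumPy; the field $\mathbf{v}=(x^{2},y^{2})$ on the unit square is my own choice — both sides should equal $2$):

```python
import numpy as np

# Omega = unit square, v = (x^2, y^2), so div v = 2x + 2y.
s = np.linspace(0.0, 1.0, 2001)
ds = s[1] - s[0]

def trap(y):
    """Composite trapezoidal rule on the fixed grid s."""
    return (y[1:] + y[:-1]).sum() * ds / 2

# Volume integral of div v over the square (iterated trapezoidal rule).
X, Y = np.meshgrid(s, s, indexing="ij")
volume = trap(np.array([trap(row) for row in 2 * X + 2 * Y]))

# Boundary integral of v . nu over the four edges (outward normals).
right  = trap(np.ones_like(s))    # x = 1, nu = (+1, 0): v.nu = x^2 = 1
left   = trap(np.zeros_like(s))   # x = 0, nu = (-1, 0): v.nu = -x^2 = 0
top    = trap(np.ones_like(s))    # y = 1, nu = (0, +1): v.nu = y^2 = 1
bottom = trap(np.zeros_like(s))   # y = 0, nu = (0, -1): v.nu = -y^2 = 0
boundary = right + left + top + bottom
```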
For $\mathbf{v}=\nabla v$ with $v\in C^{2}({\bar{\Omega}})$, one gets
$$\int_{\Omega}\nabla u\cdot\nabla v\,d\Omega=\int_{\Gamma}u\,\nabla v\cdot\hat{\nu}\,d\Gamma-\int_{\Omega}u\,\Delta v\,d\Omega,$$

which is the first Green's identity.
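Green's first identity can be verified the same way (a sketch assuming NumPy; the choice $u=x$, $v=x^{2}+y^{2}$ on the unit square is my own, so that $\nabla u=(1,0)$, $\nabla v=(2x,2y)$, and $\Delta v=4$):

```python
import numpy as np

# Omega = unit square, u = x, v = x^2 + y^2 (so v is C^2).
s = np.linspace(0.0, 1.0, 2001)
ds = s[1] - s[0]

def trap(y):
    """Composite trapezoidal rule on the fixed grid s."""
    return (y[1:] + y[:-1]).sum() * ds / 2

def trap2(Z):
    """Iterated trapezoidal rule over the unit square."""
    return trap(np.array([trap(row) for row in Z]))

X, Y = np.meshgrid(s, s, indexing="ij")

lhs = trap2(2 * X)                          # int grad u . grad v = int 2x dOmega

# Boundary term: int_Gamma u * (grad v . nu) over the four edges.
b_right  = trap(2 * np.ones_like(s))        # x = 1, nu = (1, 0): u * 2x = 2
b_top    = trap(2 * s)                      # y = 1, nu = (0, 1): u * 2y = 2x
b_left   = 0.0                              # x = 0: u = 0 there
b_bottom = 0.0                              # y = 0: grad v . nu = -2y = 0 there
boundary = b_right + b_top + b_left + b_bottom

bulk = trap2(4 * X)                         # int u * (laplacian v) dOmega
rhs = boundary - bulk
```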
The regularity requirements of the theorem can be relaxed. For instance, the boundary $\Gamma$ need only be Lipschitz continuous. In the first formula above, only $u,v\in H^{1}(\Omega)$ is necessary (where $H^{1}$ is a Sobolev space); the other formulas have similarly relaxed requirements.
The multi-index binomial theorem:

$$\begin{aligned}(x+y)^{\alpha}&=(x_{1}+y_{1})^{\alpha_{1}}\dotsm(x_{d}+y_{d})^{\alpha_{d}}\\&={\bigg(}\sum_{k_{1}=0}^{\alpha_{1}}{\binom{\alpha_{1}}{k_{1}}}\,x_{1}^{k_{1}}y_{1}^{\alpha_{1}-k_{1}}{\bigg)}\dotsm{\bigg(}\sum_{k_{d}=0}^{\alpha_{d}}{\binom{\alpha_{d}}{k_{d}}}\,x_{d}^{k_{d}}y_{d}^{\alpha_{d}-k_{d}}{\bigg)}\\&=\sum_{k_{1}=0}^{\alpha_{1}}\dotsm\sum_{k_{d}=0}^{\alpha_{d}}{\binom{\alpha_{1}}{k_{1}}}\,x_{1}^{k_{1}}y_{1}^{\alpha_{1}-k_{1}}\dotsm{\binom{\alpha_{d}}{k_{d}}}\,x_{d}^{k_{d}}y_{d}^{\alpha_{d}-k_{d}}\\&=\sum_{\nu\leq\alpha}{\binom{\alpha}{\nu}}\,x^{\nu}y^{\alpha-\nu}\end{aligned}$$
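The last line of the identity can be checked exactly in integer arithmetic (a sketch using only the standard library; the multi-index $\alpha=(2,3,1)$ and points $x$, $y$ are my own test data):

```python
import math
from itertools import product

def mpow(z, a):
    """Multi-index power z^a = z_1^{a_1} * ... * z_d^{a_d}."""
    return math.prod(zi**ai for zi, ai in zip(z, a))

def mbinom(a, k):
    """Multi-index binomial coefficient: product of componentwise binomials."""
    return math.prod(math.comb(ai, ki) for ai, ki in zip(a, k))

alpha = (2, 3, 1)                 # arbitrary multi-index, d = 3
x, y = (2, -1, 3), (1, 4, -2)

lhs = mpow(tuple(xi + yi for xi, yi in zip(x, y)), alpha)

# Sum over all nu <= alpha (componentwise), as in the last line of the identity.
rhs = sum(
    mbinom(alpha, nu) * mpow(x, nu) * mpow(y, tuple(a - k for a, k in zip(alpha, nu)))
    for nu in product(*(range(a + 1) for a in alpha))
)
```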
$$\frac{d^{m}}{dt^{m}}f(x+ty)=\sum_{|\alpha|=m}\frac{|\alpha|!}{\alpha!}\left(\partial^{\alpha}f(x+ty)\right)y^{\alpha}$$
i.e.
$$\frac{d^{m}}{dt^{m}}f(x+ty)=\sum_{|\alpha|=m}{\binom{m}{\alpha}}\left(\partial^{\alpha}f(x+ty)\right)y^{\alpha}$$
from *Partial Differential Equations* by Fritz John.
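The formula can be verified exactly for a concrete monomial (a sketch using only the standard library; the choice $f(z_{1},z_{2})=z_{1}^{2}z_{2}^{3}$, the points, and $m=2$ are my own test data — the derivative is evaluated at $t=0$, where $\partial^{\alpha}f(x+ty)=\partial^{\alpha}f(x)$):

```python
import math

# Test case: f(z1, z2) = z1^2 * z2^3, m = 2, evaluated at t = 0.
def d_f(a, b, z):
    """partial^(a,b) f at z for f = z1^2 * z2^3 (zero when a > 2 or b > 3)."""
    if a > 2 or b > 3:
        return 0
    return math.perm(2, a) * z[0]**(2 - a) * math.perm(3, b) * z[1]**(3 - b)

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

x, y, m = (2, -1), (3, 5), 2

# Left side: g(t) = f(x + t*y) expanded as a polynomial in t; then
# g^(m)(0) = m! * (coefficient of t^m).
lin1, lin2 = [x[0], y[0]], [x[1], y[1]]
g = poly_mul(poly_mul(lin1, lin1), poly_mul(poly_mul(lin2, lin2), lin2))
lhs = math.factorial(m) * g[m]

# Right side: sum over |alpha| = m of (m!/alpha!) * d^alpha f(x) * y^alpha.
rhs = 0
for a in range(m + 1):
    b = m - a
    rhs += (math.factorial(m) // (math.factorial(a) * math.factorial(b))
            * d_f(a, b, x) * y[0]**a * y[1]**b)
```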
Taylor expansion in terms of directional derivatives:
$$\begin{aligned}f(\mathbf{x}+\mathbf{h})&=\sum_{\left\vert\alpha\right\vert\leq n}\frac{1}{\alpha!}\mathbf{h}^{\alpha}\partial^{\alpha}f(\mathbf{x})+\sum_{\left\vert\alpha\right\vert=n+1}\frac{n+1}{\alpha!}\mathbf{h}^{\alpha}\int_{0}^{1}(1-t)^{n}\partial^{\alpha}f(\mathbf{x}+t\mathbf{h})\,dt\\&=\sum_{r=0}^{n}\frac{1}{r!}{\bigl[}(\mathbf{h}\cdot\nabla)^{r}f{\bigr]}(\mathbf{x})+\frac{1}{n!}\int_{0}^{1}(1-t)^{n}{\bigl[}(\mathbf{h}\cdot\nabla)^{n+1}f{\bigr]}(\mathbf{x}+t\mathbf{h})\,dt.\end{aligned}$$
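The second form is convenient to check numerically (a sketch assuming NumPy; my own test case is $f(x_{1},x_{2})=e^{x_{1}+2x_{2}}$, for which $(\mathbf{h}\cdot\nabla)^{r}f=c^{r}f$ with $c=h_{1}+2h_{2}$, so every term is explicit):

```python
import math
import numpy as np

# f(x1, x2) = exp(x1 + 2*x2), so (h . grad)^r f = c^r * f with c = h1 + 2*h2.
x = np.array([0.1, 0.2])
h = np.array([0.3, -0.1])
n = 3
c = h[0] + 2 * h[1]

def f(p):
    return math.exp(p[0] + 2 * p[1])

# Taylor polynomial: sum_{r=0}^{n} (1/r!) [(h.grad)^r f](x)
taylor = sum(c**r / math.factorial(r) * f(x) for r in range(n + 1))

# Integral remainder: (1/n!) * int_0^1 (1-t)^n [(h.grad)^{n+1} f](x + t*h) dt
t = np.linspace(0.0, 1.0, 100_001)
dt = t[1] - t[0]
integrand = (1 - t)**n * c**(n + 1) * np.exp((x[0] + 2 * x[1]) + t * c)
remainder = ((integrand[1:] + integrand[:-1]).sum() * dt / 2) / math.factorial(n)

lhs = f(x + h)
rhs = taylor + remainder
```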
Now let $\mathbf{h}\colon\mathbb{C}\to\mathbb{C}^{n}$ (or $\mathbb{R}\to\mathbb{R}^{n}$). One might hope that
$$\frac{\mathrm{d}^{n}}{\mathrm{d}t^{n}}f(\mathbf{h}(t))=\sum_{|\alpha|=n}{\binom{n}{\alpha}}\partial^{\alpha}f(\mathbf{h}(t))\,\bigl(\mathbf{h}'(t)\bigr)^{\alpha}.$$
But this is only true if $\mathbf{h}$ is linear! Otherwise we would need terms with higher derivatives of $\mathbf{h}$ (and lower derivatives of $f$).
From http://www.math.niu.edu/~rusin/known-math/99/prod_hermite :
$$H_{m}(x)H_{n}(x)=\sum_{i=0}^{\min(m,n)}\frac{2^{i}}{i!}\,\frac{m!}{(m-i)!}\,\frac{n!}{(n-i)!}\,H_{m+n-2i}(x)$$
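This linearization formula can be checked mechanically with NumPy's Hermite-series helpers, which use the same physicists' convention for $H_{k}$ (a sketch; the degrees $m=4$, $n=3$ are my own test case):

```python
import math
import numpy as np
from numpy.polynomial import hermite as H  # physicists' Hermite polynomials H_k

m, n = 4, 3

def basis(k):
    """Hermite-series coefficient vector representing H_k."""
    return [0.0] * k + [1.0]

# Left side: Hermite-series coefficients of the product H_m(x) * H_n(x).
lhs = H.hermmul(basis(m), basis(n))

# Right side: coefficients predicted by the linearization formula,
# coef(i) = (2^i / i!) * (m! / (m-i)!) * (n! / (n-i)!), attached to H_{m+n-2i}.
rhs = np.zeros(m + n + 1)
for i in range(min(m, n) + 1):
    coef = (2**i * math.factorial(m) * math.factorial(n)) // (
        math.factorial(i) * math.factorial(m - i) * math.factorial(n - i))
    rhs[m + n - 2 * i] = coef
```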