I started writing the following when I thought that there was no Wikipedia article on the subject. I later discovered that there already was, but I hadn't found it earlier because it was under the incorrect title of Sokhatsky–Weierstrass theorem. It is now correctly titled Sokhotski–Plemelj theorem. Perhaps one day I'll move some of the material there.
In mathematics, Sokhotsky's formula (also known as the Plemelj formula, or the Plemelj–Sokhotsky formula) relates two ways of integrating over the singularity of the reciprocal function $1/x$ on the real numbers. The formula says, in terms of distributions, that
$$\operatorname{v.\!p.} \frac{1}{x} \mp i\pi\,\delta(x) = \frac{1}{x \pm i0}.$$
Up to a multiplicative constant (depending on the Fourier convention used), the distribution on each side of this equality is the Fourier transform of the Heaviside step function.
Throughout this section φ is an arbitrary function in the Schwartz space .
The formula involves three separate distributions:
$$\langle \delta, \varphi \rangle := \varphi(0).$$
$$\left\langle \operatorname{v.\!p.} \frac{1}{x}, \varphi \right\rangle := \lim_{\varepsilon \to 0^+}\left[\int_{-\infty}^{-\varepsilon} \frac{\varphi(x)}{x}\,dx + \int_{\varepsilon}^{\infty} \frac{\varphi(x)}{x}\,dx\right],$$
which is often written more compactly as
$$\lim_{\varepsilon \to 0^+}\left[\left(\int_{-\infty}^{-\varepsilon} + \int_{\varepsilon}^{\infty}\right) \frac{\varphi(x)}{x}\,dx\right].$$
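As a numerical sanity check (a sketch, not part of the original argument), the symmetric-limit definition can be evaluated for a sample test function; the choice $\varphi(x) = e^{-(x-1)^2}$ and the cutoff $R = 30$ are assumptions for illustration, and SciPy's `weight='cauchy'` option serves as an independent reference for the principal value:

```python
import math
from scipy.integrate import quad

# An assumed sample test function (rapidly decaying, so the tails beyond R are negligible).
def phi(x):
    return math.exp(-(x - 1.0) ** 2)

# Symmetric-limit approximation of v.p. of the integral of phi(x)/x:
# cut out (-eps, eps) and integrate the two remaining pieces.
def pv_symmetric(eps, R=30.0):
    left, _ = quad(lambda x: phi(x) / x, -R, -eps, limit=200)
    right, _ = quad(lambda x: phi(x) / x, eps, R, limit=200)
    return left + right

# SciPy computes the same principal value directly: weight='cauchy'
# integrates f(x)/(x - wvar) in the principal-value sense.
reference, _ = quad(phi, -30.0, 30.0, weight='cauchy', wvar=0.0)

for eps in (1e-1, 1e-3, 1e-5):
    print(eps, pv_symmetric(eps), reference)
```

The symmetric cut converges to the Cauchy-weighted reference as $\varepsilon \to 0^+$; the leading error is roughly $2\varepsilon\,\varphi'(0)$.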
And the distribution $\frac{1}{x+i0}$, defined as
$$\left\langle \frac{1}{x+i0}, \varphi \right\rangle := \lim_{\varepsilon \to 0^+} \int_{-\infty}^{\infty} \frac{\varphi(x)}{x+i\varepsilon}\,dx.$$
Substituting in these definitions we obtain, explicitly in terms of integrals and limits, that
$$\lim_{\varepsilon \to 0^+}\left[\left(\int_{-\infty}^{-\varepsilon} + \int_{\varepsilon}^{\infty}\right)\frac{\varphi(x)}{x}\,dx\right] \mp i\pi\,\varphi(0) = \lim_{\varepsilon \to 0^+}\int_{-\infty}^{\infty}\frac{\varphi(x)}{x \pm i\varepsilon}\,dx.$$
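This explicit form (top choice of signs) can be checked numerically. The test function $\varphi(x) = e^{-(x-1)^2}$, the cutoff $\pm 30$, and $\varepsilon = 10^{-3}$ are assumed choices; the left side uses SciPy's principal-value integrator, and the right side is split into real and imaginary parts since `quad` works on real integrands:

```python
import numpy as np
from scipy.integrate import quad

# An assumed sample Schwartz-class test function.
phi = lambda x: np.exp(-(x - 1.0) ** 2)

# Left-hand side (top sign): v.p. of the integral of phi(x)/x, minus i*pi*phi(0).
# weight='cauchy' makes quad compute the principal value of phi(x)/(x - wvar).
pv, _ = quad(phi, -30.0, 30.0, weight='cauchy', wvar=0.0)
lhs = pv - 1j * np.pi * phi(0.0)

# Right-hand side: integral of phi(x)/(x + i*eps) for small eps, using
# 1/(x + i*eps) = (x - i*eps)/(x**2 + eps**2).
eps = 1e-3
re, _ = quad(lambda x: phi(x) * x / (x**2 + eps**2), -30.0, 30.0,
             points=[0.0], limit=300)
im, _ = quad(lambda x: -phi(x) * eps / (x**2 + eps**2), -30.0, 30.0,
             points=[0.0], limit=300)
rhs = re + 1j * im

print(lhs, rhs)
```

The two sides agree to a few decimal places; shrinking `eps` tightens the match, as the formula's limit predicts.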
This proof is from Choquet-Bruhat, DeWitt-Morette & Dillard-Bleick (1982). We show the top choice of signs; the other choice is similar. Throughout the proof, limits and derivatives are taken in the distributional sense, and H is the Heaviside step function.
$$\begin{aligned}
\frac{1}{x+i0} &= \lim_{\varepsilon \to 0^+} \frac{1}{x+i\varepsilon} && \text{(1)}\\
&= \lim_{\varepsilon \to 0^+} \frac{d}{dx}\log(x+i\varepsilon) && \text{(2)}\\
&= \frac{d}{dx}\lim_{\varepsilon \to 0^+} \log(x+i\varepsilon) && \text{(3)}\\
&= \frac{d}{dx}\bigl(\log|x| + i\pi(1-H(x))\bigr) && \text{(4)}\\
&= \operatorname{v.\!p.} \frac{1}{x} - i\pi\,\delta(x) && \text{(5)}
\end{aligned}$$
The explanation of each equality, and the choice of logarithm branch, is given in the following subsection.
Step 1 By definition of the limit of a distribution,
$$\left\langle \lim_{\varepsilon \to 0^+} \frac{1}{x+i\varepsilon}, \varphi \right\rangle = \lim_{\varepsilon \to 0^+} \left\langle \frac{1}{x+i\varepsilon}, \varphi \right\rangle.$$
The right hand side of this is the definition of the distribution $1/(x+i0)$.
Step 2 We need to show that, in the distributional sense,
$$\frac{d}{dx}\log(x+i\varepsilon) = \frac{1}{x+i\varepsilon}.$$
We take $\log$ to be the branch of the natural logarithm such that, for $x \in \mathbb{R}$ and $\varepsilon > 0$,
$$\log(x+i\varepsilon) = \log|x+i\varepsilon| + i\arg(x+i\varepsilon), \quad \text{where } 0 \leq \arg(x+i\varepsilon) \leq \pi.$$
This is a continuous, and so smooth, choice of logarithm for these values of x and ε . The above derivative therefore holds pointwise, and so also holds in the distributional sense.
Step 3 Swapping the distributional derivative and limit is permitted because the distributional derivative is continuous.
Step 4 We need to show that, in the distributional sense,
$$\lim_{\varepsilon \to 0^+}\log(x+i\varepsilon) = \log|x| + i\pi(1-H(x)).$$
Pointwise we have
$$\lim_{\varepsilon \to 0^+}\log(x+i\varepsilon) = \begin{cases}\log|x| & \text{if } x > 0,\\ \log|x| + i\pi & \text{if } x < 0,\end{cases}$$
so the desired equation holds pointwise for $x \neq 0$. Since $\log|x|$ is locally integrable, it also holds in the distributional sense.
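The pointwise limit can be spot-checked with Python's `cmath`, whose `log` uses a compatible branch (argument in $(-\pi, \pi]$, so approaching the real axis from the upper half-plane gives an argument near $0$ for positive $x$ and near $\pi$ for negative $x$); the sample points here are arbitrary:

```python
import cmath
import math

eps = 1e-9  # approach the real axis from the upper half-plane
for x in (-2.0, -0.5, 0.5, 3.0):
    approached = cmath.log(complex(x, eps))
    heaviside = 1.0 if x > 0 else 0.0
    expected = complex(math.log(abs(x)), math.pi * (1.0 - heaviside))
    # log(x + i*eps) is within O(eps) of log|x| + i*pi*(1 - H(x))
    assert abs(approached - expected) < 1e-6
print("pointwise limit confirmed at sample points")
```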
Step 5 Note that the derivative of a constant is zero, and the derivative of the Heaviside step function is the Dirac delta function. It remains only to show that for any test function
$\varphi \in \mathcal{S}(\mathbb{R})$,
$$\left\langle \frac{d}{dx}\log|x|, \varphi \right\rangle = \left\langle \operatorname{v.\!p.} \frac{1}{x}, \varphi \right\rangle.$$
The left hand side of this is
$$-\int_{-\infty}^{\infty} \log|x|\,\varphi'(x)\,dx.$$
But
$$\left|\lim_{\delta \to 0^+}\int_{-\delta}^{\delta} \log|x|\,\varphi'(x)\,dx\right| \leq \lim_{\delta \to 0^+} \Vert\varphi'\Vert_{\infty} \int_{-\delta}^{\delta} \bigl|\log|x|\bigr|\,dx = 0$$
since φ is a Schwartz function and $\log|x|$ is locally integrable. Thus the expression equals
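That the logarithmic integral vanishes can be confirmed numerically; for $0 < \delta < 1$ the closed form is $\int_{-\delta}^{\delta}\bigl|\log|x|\bigr|\,dx = 2\delta(1-\log\delta)$, and SciPy's `quad` handles the integrable endpoint singularity:

```python
import math
from scipy.integrate import quad

for delta in (1e-1, 1e-2, 1e-3):
    # By symmetry, integrate -log(x) on (0, delta) and double;
    # the singularity at 0 is integrable.
    val, _ = quad(lambda x: -math.log(x), 0.0, delta)
    closed_form = 2.0 * delta * (1.0 - math.log(delta))
    print(delta, 2.0 * val, closed_form)
```

Both the quadrature and the closed form shrink toward zero as δ decreases.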
$$-\lim_{\delta \to 0^+}\left[\left(\int_{-\infty}^{-\delta} + \int_{\delta}^{\infty}\right) \log|x|\,\varphi'(x)\,dx\right].$$
Integrating by parts (the boundary terms at $\pm\infty$ vanish because φ is a Schwartz function), this equals
$$\lim_{\delta \to 0^+}\left[-\bigl(\varphi(-\delta)\log\delta - \varphi(\delta)\log\delta\bigr) + \left(\int_{-\infty}^{-\delta} + \int_{\delta}^{\infty}\right)\frac{\varphi(x)}{x}\,dx\right].$$
But
$$\lim_{\delta \to 0^+}\bigl(\varphi(-\delta)\log\delta - \varphi(\delta)\log\delta\bigr) = \lim_{\delta \to 0^+}\bigl(-2\delta\,\varphi'(\eta_{\delta})\log\delta\bigr) = 0$$
by the mean value theorem (where $\eta_{\delta} \in [-\delta, \delta]$ for each δ). Thus the expression equals
$$\left\langle \operatorname{v.\!p.} \frac{1}{x}, \varphi \right\rangle,$$
as required.
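The vanishing of the boundary term $(\varphi(-\delta)-\varphi(\delta))\log\delta$ in the last step can be illustrated numerically; $\varphi(x) = e^{-(x-1)^2}$ is an assumed test function, chosen non-even so the difference is nonzero:

```python
import math

# An assumed sample test function (not even, so phi(-d) - phi(d) != 0).
phi = lambda x: math.exp(-(x - 1.0) ** 2)

terms = []
for d in (1e-2, 1e-4, 1e-6, 1e-8):
    # Boundary term from the integration by parts; by the mean value
    # theorem it behaves like -2*d*phi'(eta)*log(d), which tends to 0.
    term = (phi(-d) - phi(d)) * math.log(d)
    terms.append(abs(term))
    print(d, term)
```

The magnitudes decrease steadily toward zero, matching the mean value theorem estimate.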