ex-6: review


where:
- $E$ is the electric field amplitude, default $E = \SI{1e4}{V/m}$;
- $a$ is the radius of the slit aperture, default $a = \SI{0.01}{m}$;
- $\theta$ is the diffraction angle shown in @fig:slit;
- $J_1$ is the Bessel function of first kind;
- $k$ is the wavenumber, default $k = \SI{1e-4}{m^{-1}}$;
- $L$ is the distance from the screen, default $L = \SI{1}{m}$.
\begin{figure}
\hypertarget{fig:slit}{%
though, $\theta$ must be uniformly distributed on the half sphere, hence:
&\thus \frac{dP}{d\theta} = \int_0^{2 \pi} \!\!\! d\phi \frac{1}{2 \pi} \sin{\theta}
= \frac{1}{2 \pi} \sin{\theta} \, 2 \pi = \sin{\theta}
\end{align*}
\begin{align*}
\theta = \theta (x) &\thus
\frac{dP}{d\theta} = \frac{dP}{dx} \cdot \left| \frac{dx}{d\theta} \right|
&\thus \sin{\theta} = \left. 1 \middle/ \, \left|
\frac{d\theta}{dx} \right| \right.
\end{align*}
If $\theta$ is chosen to increase with $x$, then the absolute value can be
omitted:
\begin{align*}
\frac{d\theta}{dx} = \frac{1}{\sin{\theta}}
&\thus x (\theta) = \int_0^\theta d\theta' \, \sin{\theta'} = 1 - \cos{\theta}
\\
&\thus \theta = \text{acos} (1 - x)
\end{align*}
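As a side note, the sampling step above condenses into a couple of lines of
code. The following is a minimal sketch assuming the GSL random number
generator API (the function name `sample_theta` is illustrative):

```c
#include <math.h>
#include <gsl/gsl_rng.h>

/* Draw x uniformly in [0, 1) and map it through the inverse CDF:
 * theta = acos(1 - x) is then distributed as sin(theta) on [0, pi/2]. */
double sample_theta(const gsl_rng *r)
{
    double x = gsl_rng_uniform(r);
    return acos(1 - x);
}
```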
The sample thus obtained was binned and stored in a histogram with a
customizable number $n$ of bins (default $n = 150$) ranging from $\theta = 0$
to $\theta = \pi/2$, because of the symmetry of the system. An example is
shown in @fig:original.
![Example of intensity histogram.](images/6-original.pdf){#fig:original}
## Convolution {#sec:convolution}
In order to simulate the instrumentation response, the sample was then
convolved with a Gaussian kernel, with the aim of recovering the original
sample afterwards by implementing a deconvolution routine.
For this purpose, a 'kernel' histogram with an even number $m$ of bins and the
same bin width as the previous one, but a smaller number of them ($m \sim 6\%
\, n$), was generated according to a Gaussian distribution with mean $\mu = 0$
and standard deviation $\sigma$. The reason why the kernel was set this way
will be discussed shortly.
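A possible way to generate such a kernel with the GSL histogram API is
sketched below; the midpoint approximation of the bin contents and the name
`make_kernel` are illustrative assumptions, not necessarily what the actual
code does:

```c
#include <gsl/gsl_histogram.h>
#include <gsl/gsl_randist.h>

/* Build a histogram of m bins of width dx, centered on zero, filled
 * with the Gaussian density of standard deviation sigma. */
gsl_histogram *make_kernel(size_t m, double dx, double sigma)
{
    gsl_histogram *k = gsl_histogram_alloc(m);
    gsl_histogram_set_ranges_uniform(k, -0.5 * m * dx, 0.5 * m * dx);
    for (size_t i = 0; i < m; i++) {
        double lo, hi;
        gsl_histogram_get_range(k, i, &lo, &hi);
        /* approximate the bin content by the density at the midpoint */
        k->bin[i] = gsl_ran_gaussian_pdf((lo + hi) / 2, sigma);
    }
    return k;
}
```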
The original histogram was then convolved with the kernel in order to obtain
the smeared signal. As an example, the result obtained for $\sigma = \Delta
\theta$, where $\Delta \theta$ is the bin width, is shown in @fig:convolved.
The smeared signal looks smoother than the original one: the higher
implemented for discrete arrays of numbers, such as histograms or vectors:
where:
- $R$ and $T_x$ are the reflection and translation by $x$ operators,
- $(\cdot, \cdot)$ is an inner product.
Given a signal $s$ of $n$ elements and a kernel $k$ of $m$ elements,
their convolution is a vector of $n + m + 1$ elements computed
by flipping $s$ ($R$ operator) and shifting its indices ($T_i$ operator):
$$
c_i = (s, T_i \, R \, k)
$$
The shift is defined such that when an index overflows ($\ge m$ or $\le 0$) the
element is zero. This convention specifies the behavior at the edges
and results in the $m + 1$ increase in size.
For a better understanding, see @fig:dot_conv.
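The convention above translates into code rather directly. The sketch below
is one possible implementation, under the assumption that the index offsets
are chosen so that the two extremal elements of the result are the vanishing
ones:

```c
#include <stddef.h>

/* Convolve a signal s of n elements with a kernel k of m elements;
 * c must hold n + m + 1 elements. Out-of-range signal indices
 * contribute zero, as per the convention in the text. */
void convolve(const double *s, size_t n,
              const double *k, size_t m,
              double *c)
{
    for (size_t i = 0; i <= n + m; i++) {
        c[i] = 0;
        for (size_t j = 0; j < m; j++) {
            /* (s, T_i R k): the kernel is flipped and shifted by i */
            long u = (long)i + (long)j - (long)m;   /* signal index */
            if (u >= 0 && u < (long)n)
                c[i] += s[u] * k[m - 1 - j];
        }
    }
}
```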
$L^1$ functions $f(x)$ and $g(x)$:
$$
\mathcal{F}[f * g] = \mathcal{F}[f] \cdot \mathcal{F}[g]
$$
where $\mathcal{F}[\cdot]$ stands for the Fourier transform.
Since the histogram is a discrete set of data, the Discrete Fourier Transform
(DFT) was applied. When dealing with arrays of discrete values, the theorem still
$$
\mathcal{F}[s * k] = \mathcal{F}[s] \cdot \mathcal{F}[k] \thus
\mathcal{F} [s] = \frac{\mathcal{F}[s * k]}{\mathcal{F}[k]}
$$
FFTs are efficient algorithms for calculating the DFT. Given a set of $n$
values {$z_j$}, each one is transformed into:
$$
x_j = \sum_{k=0}^{n-1} z_k \exp \left( - \frac{2 \pi i j k}{n} \right)
$$
where $i$ is the imaginary unit.
The evaluation of the DFT is a matrix-vector multiplication $W \vec{z}$. A
general matrix-vector multiplication takes $O(n^2)$ operations. FFT algorithms,
$$
z_j = \frac{1}{n}
\sum_{k=0}^{n-1} x_k \exp \left( \frac{2 \pi i j k}{n} \right)
$$
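For illustration, the direct $O(n^2)$ evaluation of the forward transform,
i.e. the matrix-vector product $W \vec{z}$ written out explicitly, might look
like the following sketch (C99 complex arithmetic; `M_PI` as provided by
`math.h` on POSIX systems):

```c
#include <complex.h>
#include <math.h>
#include <stddef.h>

/* Naive DFT: x_j = sum_k z_k exp(-2 pi i j k / n), n^2 operations. */
void dft(const double complex *z, double complex *x, size_t n)
{
    for (size_t j = 0; j < n; j++) {
        x[j] = 0;
        for (size_t k = 0; k < n; k++)
            x[j] += z[k] * cexp(-2 * M_PI * I
                                * (double)j * (double)k / (double)n);
    }
}
```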
In GSL, `gsl_fft_complex_forward()` and `gsl_fft_complex_inverse()` are the
functions which compute the forward and inverse transform, respectively.
The inputs and outputs for the complex FFT routines are packed arrays of
floating point numbers. In a packed array, the real and imaginary parts of each
complex number are placed in alternate neighboring elements. In this special
case, where the sequence of values to be transformed is made of real numbers,
the Fourier transform is a complex sequence which satisfies:
$$
z_k = z^*_{n-k}
$$
where $z^*$ is the conjugate of $z$. A sequence with this symmetry is called
'half-complex'. This structure requires particular storage layouts for the
forward transform (from real to half-complex) and inverse transform (from
computation. GSL provides the function `gsl_fft_halfcomplex_unpack()` which
converts vectors from half-complex format to standard complex format; the
inverse procedure, however, is not provided by GSL and had to be implemented.
At the end, the external bins which exceed the original signal size were cut
away in order to restore the original number of bins $n$. Results will be
discussed in @sec:conv-results.
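As a compact illustration of the whole pipeline, the sketch below performs
the deconvolution with the complex radix-2 GSL routines on packed arrays,
which sidesteps the half-complex bookkeeping at the price of redundant
storage. It is a simplification of the procedure described above, not the
actual implementation: it assumes $n$ is a power of two and the kernel is
zero-padded to the signal length.

```c
#include <gsl/gsl_fft_complex.h>

/* s and k are packed arrays of 2n doubles: re, im, re, im, ...
 * On return, s holds the deconvolved signal. */
void fft_deconvolve(double *s, double *k, size_t n)
{
    gsl_fft_complex_radix2_forward(s, 1, n);
    gsl_fft_complex_radix2_forward(k, 1, n);

    /* element-wise complex division F[s * k] / F[k] */
    for (size_t i = 0; i < n; i++) {
        double sr = s[2*i], si = s[2*i + 1];
        double kr = k[2*i], ki = k[2*i + 1];
        double den = kr * kr + ki * ki;
        s[2*i]     = (sr * kr + si * ki) / den;
        s[2*i + 1] = (si * kr - sr * ki) / den;
    }

    gsl_fft_complex_radix2_inverse(s, 1, n);
}
```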
drawn not from $f(x)$ but from another function $\phi(x)$ such that:
$$
\phi(x) = \int d\xi \, f(\xi) P(x | \xi)
$$ {#eq:conv}
where $P(x | \xi) \, dx$ is the probability (presumed known) that $x$ falls in
the interval $(x, x + dx)$ for a given true value $\xi$. If the so-called
point spread function $P(x | \xi)$ is a function of $x - \xi$ only, for
example a normal distribution with standard deviation $\sigma$:
$$
P(x | \xi) = \frac{1}{\sqrt{2 \pi} \sigma}
\exp \left( - \frac{(x - \xi)^2}{2 \sigma^2} \right)
$$
then, @eq:conv becomes a convolution and finding $f(\xi)$ amounts
to a deconvolution.
An example of this problem is precisely that of correcting an observed
$\xi \in (\xi, \xi + d\xi)$ when the measured quantity is $x$. The probability that
both $x \in (x, x + dx)$ and $\xi \in (\xi, \xi + d\xi)$ is therefore given by
$\phi(x) dx \cdot Q(\xi | x) d\xi$, which is identical to $f(\xi) d\xi \cdot
P(x | \xi) dx$, hence:
$$
\phi(x) dx \cdot Q(\xi | x) d\xi = f(\xi) d\xi \cdot P(x | \xi) dx
\thus Q(\xi | x) = \frac{f(\xi) \cdot P(x | \xi)}{\phi(x)}
$$
$$
\thus Q(\xi | x) = \frac{f(\xi) \cdot P(x | \xi)}
{\int d\xi \, f(\xi) P(x | \xi)}
$$ {#eq:first}
which is the Bayes theorem for conditional probability. From the normalization
of $P(x | \xi)$, it also follows that:
$$
f(\xi) = \int dx \, \phi(x) Q(\xi | x)
$$ {#eq:second}
Since $Q (\xi | x)$ depends on $f(\xi)$, @eq:second suggests an iterative
procedure for generating estimates of $f(\xi)$. With a guess for $f(\xi)$ and
a known $P(x | \xi)$, @eq:first can be used to calculate an estimate for
$Q (\xi | x)$. Then, taking the hint provided by @eq:second, an improved
estimate for $f(\xi)$ can be generated, using the observed sample {$x_i$} to
give an approximation for $\phi$.
$$
f^{t+1}(\xi) = f^t(\xi)
\int dx \, \frac{\phi(x)}{\int d\xi \, f^t(\xi) P(x | \xi)}
P(x | \xi)
$$ {#eq:solution}
When the spread function $P(x | \xi) = P(x-\xi)$, @eq:solution can be
rewritten in terms of convolutions:
$$
f^{t+1} = f^t \cdot \left( \frac{\phi}{f^t * P} * P^{\star} \right)
$$
where $P^{\star}$ is the flipped point spread function [@lucy74].
In this particular instance, a Gaussian kernel was convolved with the original
histogram. Again, dealing with discrete arrays of numbers, the division and
multiplication are element-wise and the convolution is to be carried out as
described in @sec:convolution.
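A single RL round can then be sketched as follows. Here `convolve_same`
stands for an assumed helper which convolves two arrays and trims the result
back to the input length (cutting the extra bins as described in
@sec:convolution); all names are illustrative:

```c
#include <stddef.h>
#include <stdlib.h>

/* assumed helper: convolution trimmed to the first argument's length */
void convolve_same(const double *s, size_t n,
                   const double *k, size_t m, double *out);

/* One iteration: f <- f * ((phi / (f * P)) * P_star), with element-wise
 * division and multiplication and discrete convolutions. */
void rl_round(double *f, const double *phi, size_t n,
              const double *P, const double *P_star, size_t m)
{
    double *tmp  = malloc(n * sizeof(double));
    double *corr = malloc(n * sizeof(double));
    convolve_same(f, n, P, m, tmp);          /* f * P               */
    for (size_t i = 0; i < n; i++)
        tmp[i] = phi[i] / tmp[i];            /* phi / (f * P)       */
    convolve_same(tmp, n, P_star, m, corr);  /* ... * P_star        */
    for (size_t i = 0; i < n; i++)
        f[i] *= corr[i];                     /* element-wise product */
    free(tmp);
    free(corr);
}
```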
regions, the EMD is the minimum cost of turning one pile into the other, making
the first as similar as possible to the second, where the cost is the amount
of dirt moved times the distance by which it is moved.
Computing the EMD is based on the solution to a transportation problem which
can be formalized as follows. Consider two vectors $P$ and $Q$ which represent
the two distributions whose EMD has to be measured:
$$
P = \{ (p_1, w_{p1}) \dots (p_m, w_{pm}) \} \et
Q = \{ (q_1, w_{q1}) \dots (q_n, w_{qn}) \}
$$
where $p_i$ and $q_i$ are the 'values' (that is, the locations of the dirt) and
$w_{pi}$ and $w_{qi}$ are the 'weights' (that is, the quantities of dirt). A
ground distance matrix $D$ is defined such that its entries $d_{ij}$ are the
distances between $p_i$ and $q_j$. The aim is to find the flow matrix $F$,
where each entry $f_{ij}$ is the flow from $p_i$ to $q_j$ (namely the
quantity of moved dirt), which minimizes the cost $W$:
$$
W (P, Q, F) = \sum_{i = 1}^m \sum_{j = 1}^n f_{ij} d_{ij}
$$
The $Q$ region is to be considered empty at the beginning: the 'dirt' present
in $P$ must be moved to $Q$ in order to reproduce the second distribution as
closely as possible. Formally, the following constraints must be satisfied:
\begin{align*}
&\text{1.} \hspace{20pt} f_{ij} \ge 0 \hspace{15pt}
&1 \le i \le m \wedge 1 \le j \le n
&\text{4.} \hspace{20pt} \sum_{i = 1}^m \sum_{j = 1}^n f_{ij}
= \text{min} \left( \sum_{i = 1}^m w_{pi}, \sum_{j = 1}^n w_{qj} \right)
\end{align*}
The first constraint allows moving dirt from $P$ to $Q$ and not vice versa; the
second limits the amount of dirt moved by each position in $P$ in order to not
exceed the available quantity; the third sets a limit to the dirt moved to each
position in $Q$ in order to not exceed the required quantity and the last one
forces the maximum possible amount of dirt to be moved: either all the dirt
present in $P$ has been moved or the $Q$ distribution has been obtained.
The total moved amount is the total flow. If the two distributions have the
same amount of dirt, all the dirt present in $P$ is necessarily moved to $Q$ and
the total flow equals the amount of available dirt.
Once the transportation problem is solved and the optimal flow is found, the
EMD is defined as the work normalized by the total flow:
$$
\text{EMD} (P, Q) = \frac{\sum_{i = 1}^m \sum_{j = 1}^n f_{ij} d_{ij}}
{\sum_{i = 1}^m \sum_{j=1}^n f_{ij}}
$$
In this case, where the EMD is to be measured between two same-length
histograms, the procedure simplifies considerably. By representing both
histograms with two vectors $u$ and $v$, the equation above boils down to
[@ramdas17]:
$$
\text{EMD} (u, v) = \sum_i |U_i - V_i|
$$
where the sum runs over the entries of the vectors $U$ and $V$, which are the
cumulative sums of the histograms. In the code, the following equivalent
iterative routine was implemented.
$$
\text{EMD} (u, v) = \sum_i |\text{d}_i| \with
\begin{cases}
\text{d}_i = v_i - u_i + \text{d}_{i-1} \\
\text{d}_0 = 0
\end{cases}
$$
The equivalence is apparent once the definition is expanded:
\begin{align*}
\text{EMD} (u, v)
&= |V_1 - U_1| + |V_2 - U_2| + |V_3 - U_3| + \dots \\
&= \sum_i |U_i - V_i|
\end{align*}
This simple algorithm enabled the comparisons between a great number of
histograms to be computed efficiently.
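The routine translates almost verbatim into code; a minimal sketch for two
equal-length histograms stored as plain arrays:

```c
#include <math.h>
#include <stddef.h>

/* EMD(u, v) = sum_i |d_i| with d_i = v_i - u_i + d_{i-1}, d_0 = 0. */
double emd(const double *u, const double *v, size_t n)
{
    double d = 0, sum = 0;
    for (size_t i = 0; i < n; i++) {
        d += v[i] - u[i];  /* running difference d_i */
        sum += fabs(d);
    }
    return sum;
}
```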
In order to make the code more flexible, the data were normalized before
computing the EMD: in doing so, it is possible to compare even samples with a
different number of points.
### Noiseless results {#sec:noiseless}
In addition to the convolution with a Gaussian kernel of width $\sigma$, the
possibility to add Gaussian noise to the convolved histogram counts was also
implemented, in order to check whether the deconvolution is affected by this
kind of interference. This approach is described in the next subsection, while
the noiseless results are given in this one.
$$
\sigma = 0.1 \, \Delta \theta \et
\sigma = 0.5 \, \Delta \theta \et
\sigma = \Delta \theta
$$
Since the RL method depends on the number $r$ of performed rounds, in order to
find out how many rounds are sufficient or necessary, the earth mover's
distance between the deconvolved signal and the original one was measured for
different values of $r$ for each of the three tested values of the kernel
$\sigma$.
To achieve this goal, a number of 1000 experiments was simulated. Each one
consists in generating the diffraction signal, convolving it with a kernel
of width $\sigma$, deconvolving with the RL algorithm with a given number of
rounds $r$ and measuring the EMD.
The distances are used to build a histogram of the EMD distribution, from which
The plots in @fig:rless-0.1 show the average (red) and standard deviation
(grey) of the EMD as a function of $r$: increasing the number of
iterations does not affect the quality of the outcome (those fluctuations are
merely an effect of floating-point precision) and the best result is obtained
for $r = 2$, meaning that the convergence of the RL algorithm is really fast;
this is due to the fact that the histogram was only slightly modified.
In @fig:rless-0.5, the curve starts to flatten at about 10 rounds, whereas in
@fig:rless-1 a minimum occurs around \num{5e3} rounds, meaning that, with such
a large kernel, the convergence is very slow, even if the best results are
The following $r$s were chosen as the most fitting:
Note the difference between @fig:rless-0.1 and the plots resulting from
$\sigma = 0.5 \, \Delta \theta$ and $\sigma = \Delta \theta$ as regards the
order of magnitude: the RL deconvolution is heavily influenced by the
magnitude of $\sigma$; the greater $\sigma$, the worse the deconvolved result.
On the other hand, the FFT deconvolution procedure is not affected by changes
in the $\sigma$ amplitude: it always gives the same outcome, which would be
exactly the original signal if floating point precision did not affect the
result, since the FFT is the analytical result of the deconvolution.
For this reason, the EMD obtained with the FFT can be used as a reference point
against which to compare the EMDs measured with RL.
As described above, for a given $r$, a thousand experiments were simulated:
for each of these simulations, an EMD was computed. Besides computing their
average and standard deviation, these values were used to build histograms
showing the EMD distribution.
Once the best numbers of rounds $r^{\text{best}}$ were found, their histograms
were compared to the histograms of the FFT results, starting from the same
convolved signals, and to the EMDs of the convolved signals themselves, in
order to check whether an improvement was truly achieved. Results are shown in
@fig:emd-noiseless.
::: {id=fig:rounds-noiseless}
![](images/6-nonoise-rounds-0.1.pdf){#fig:rless-0.1}
![](images/6-nonoise-rounds-1.pdf){#fig:rless-1}
EMD as a function of RL rounds for different kernel $\sigma$ values. The
average is shown in red and the standard deviation in grey. Noiseless results.
:::
::: {id="fig:emd-noiseless"}
EMD distributions for different kernel $\sigma$ values. The plots on the left
show the results for the FFT deconvolution, the central column the results for
the RL deconvolution and the third one shows the EMD for the convolved signal.
Noiseless results.
:::
As expected, the FFT results are always of the same order of magnitude,
\num{1e-15}, independently of the kernel width, whereas the RL deconvolution
is greatly affected by it, ranging from \num{1e-16} for $\sigma = 0.1 \, \Delta
original signal, meaning that the deconvolution is indeed working.
### Noisy results
In order to observe the effect of the Gaussian noise on the two deconvolution
methods, a value of $\sigma = 0.8 \, \Delta \theta$ for the kernel width was
arbitrarily chosen. The noise was then applied to the convolved histogram as
follows.
![Example of noisy histogram,
$\sigma_N = 0.05$.](images/6-noisy.pdf){#fig:noisy}
For each bin, once the convolved histogram was computed, a value $v_N$ was
randomly sampled from a Gaussian distribution with standard deviation
$\sigma_N$, and the value $v_N \cdot b$ was added to the bin itself, where $b$
is the count of the bin. An example of the new histogram with $\sigma_N =
0.05$ is shown in @fig:noisy.
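A sketch of this noise step, assuming the `gsl_histogram` structure and
`gsl_ran_gaussian()` for the sampling (the function name `add_noise` is
illustrative):

```c
#include <gsl/gsl_histogram.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_rng.h>

/* Perturb each bin count b by v_N * b, with v_N drawn from a centered
 * Gaussian of standard deviation sigma_n. */
void add_noise(gsl_histogram *h, double sigma_n, gsl_rng *r)
{
    for (size_t i = 0; i < h->n; i++) {
        double v = gsl_ran_gaussian(r, sigma_n);
        h->bin[i] += v * h->bin[i];
    }
}
```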
The following three values of $\sigma_N$ were tested:
$$
\sigma_N = 0.005 \et
\sigma_N = 0.01 \et
\sigma_N = 0.05
$$
The same procedure followed in @sec:noiseless was then repeated for noisy
signals. Hence, in @fig:rounds-noise the EMD as a function of the RL rounds is
shown, this time varying $\sigma_N$ and keeping $\sigma = 0.8 \, \Delta \theta$
constant.
::: {id=fig:rounds-noise}
![](images/6-noise-rounds-0.005.pdf){#fig:rnoise-0.005}
![](images/6-noise-rounds-0.01.pdf){#fig:rnoise-0.01}
![](images/6-noise-rounds-0.05.pdf){#fig:rnoise-0.05}
EMD as a function of RL rounds for different noise $\sigma_N$ values with the
kernel $\sigma = 0.8 \Delta \theta$. The average is shown in red and the
standard deviation in grey. Noisy results.
:::
::: {id=fig:emd-noisy}
![$\sigma_N = 0.005$](images/6-noise-emd-0.005.pdf){#fig:enoise-0.005}
![$\sigma_N = 0.01$](images/6-noise-emd-0.01.pdf){#fig:enoise-0.01}
![$\sigma_N = 0.05$](images/6-noise-emd-0.05.pdf){#fig:enoise-0.05}
EMD distributions for different noise $\sigma_N$ values. The plots on the left
show the results for the FFT deconvolution, the central column the results for
the RL deconvolution and the third one shows the EMD for the convolved signal.
Noisy results.
:::
In @fig:rnoise-0.005, the flattening is achieved around $r = 20$, and in
@fig:rnoise-0.01 about $r = 15$ is sufficient. When the noise becomes too
high, on the other hand, as $r$ grows, the algorithm becomes
The most fitting values were chosen as:
\begin{align*}
\sigma_N = 0.005 &\thus r^{\text{best}} = 20 \\
\sigma_N = 0.01 &\thus r^{\text{best}} = 15 \\
\sigma_N = 0.05 &\thus r^{\text{best}} = 1
\end{align*}
As for the distance, as $\sigma_N$ increases, unsurprisingly the EMD grows
larger, ranging from $\sim$ \num{2e-4} in @fig:rnoise-0.005 to $\sim$
\num{1.5e-3} in @fig:rnoise-0.05.
more stable computations.
However, in real world applications the measurements are affected by (possibly
unknown) noise and the signal can only be partially reconstructed by either
method.