# Exercise 6
## Generating points according to Fraunhofer diffraction
The diffraction of a plane wave through a round slit must be simulated by
generating $N =$ 50'000 points according to the intensity distribution
$I(\theta)$ on a screen at a great distance $L$ from the slit itself:
$$
I(\theta) = \frac{E^2}{2} \left( \frac{2 \pi a^2 \cos{\theta}}{L}
\frac{J_1(x)}{x} \right)^2 \with x = k a \sin{\theta}
$$
where:
- $E$ is the electric field amplitude, set by default to $E = \SI{1e4}{V/m}$;
- $a$ is the radius of the slit aperture, set by default to $a = \SI{0.01}{m}$;
- $\theta$ is the angle specified in @fig:slit;
- $J_1$ is the Bessel function of the first kind of order one;
- $k$ is the wavenumber, set by default to $k = \SI{1e-4}{m^{-1}}$;
- $L$ is the distance between the slit and the screen, set by default to
  $L = \SI{1}{m}$.
\begin{figure}
\hypertarget{fig:slit}{%
\centering
\begin{tikzpicture}
\definecolor{cyclamen}{RGB}{146, 24, 43}
% Walls
\draw [thick] (-1,3) -- (1,3) -- (1,0.3) -- (1.2,0.3) -- (1.2,3)
-- (9,3);
\draw [thick] (-1,-3) -- (1,-3) -- (1,-0.3) -- (1.2,-0.3) -- (1.2,-3)
-- (9,-3);
\draw [thick] (10,3) -- (9.8,3) -- (9.8,-3) -- (10,-3);
% Lines
\draw [thick, gray] (0.7,0.3) -- (0.5,0.3);
\draw [thick, gray] (0.7,-0.3) -- (0.5,-0.3);
\draw [thick, gray] (0.6,0.3) -- (0.6,-0.3);
\draw [thick, gray] (1.2,0) -- (9.8,0);
\draw [thick, gray] (1.2,-0.1) -- (1.2,0.1);
\draw [thick, gray] (9.8,-0.1) -- (9.8,0.1);
\draw [thick, cyclamen] (1.2,0) -- (9.8,-2);
\draw [thick, cyclamen] (7,0) to [out=-90, in=50] (6.6,-1.23);
% Nodes
\node at (0,0) {$2a$};
\node at (5.5,0.4) {$L$};
\node [cyclamen] at (5.5,-0.4) {$\theta$};
\node [rotate=-90] at (10.2,0) {screen};
\end{tikzpicture}
\caption{Fraunhofer diffraction.}\label{fig:slit}
}
\end{figure}
Once again, the *try and catch* method described in @sec:3 was implemented,
with the same procedure for generating $\theta$. This time, though, $\theta$
must be uniformly distributed over the half sphere:
\begin{align*}
\frac{dP}{d\omega} = \text{const} = \frac{1}{2 \pi}
&\thus dP = \frac{1}{2 \pi} \, d\omega =
\frac{1}{2 \pi} \, \sin{\theta} \, d\theta \, d\phi \\
&\thus \frac{dP}{d\theta} = \int_0^{2 \pi} d\phi \, \frac{1}{2 \pi} \sin{\theta}
= \frac{1}{2 \pi} \sin{\theta} \, 2 \pi = \sin{\theta}
\end{align*}
where $\omega$ is the solid angle.
If $\theta$ is obtained from a variable $x$ uniformly distributed in $[0, 1)$,
so that $dP/dx = 1$, then:
\begin{align*}
\theta = \theta (x) &\thus
\frac{dP}{d\theta} = \frac{dP}{dx} \cdot \left| \frac{dx}{d\theta} \right|
= \left. \frac{dP}{dx} \middle/ \, \left| \frac{d\theta}{dx} \right| \right.
\\
&\thus \sin{\theta} = \left. 1 \middle/ \, \left|
\frac{d\theta}{dx} \right| \right.
\end{align*}
If $\theta$ is chosen to grow together with $x$, the absolute value can be
omitted:
\begin{align*}
\frac{d\theta}{dx} = \frac{1}{\sin{\theta}}
&\thus \sin(\theta) \, d\theta = dx
\\
&\thus \left. - \cos (\theta') \right|_{0}^{\theta} = x(\theta) - x(0) = x - 0 = x
\\
&\thus 1 - \cos(\theta) = x
\\
&\thus \theta = \arccos (1 - x)
\end{align*}
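As an illustration, a minimal hit-or-miss sketch in C, assuming GSL for the
random number generator and for the Bessel function $J_1$; the names
(`intensity`, `sample_theta`) and the overall structure are illustrative, not
the routines actually used in the code:

```c
#include <math.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_sf_bessel.h>

/* I(theta) as defined above; J1(x)/x -> 1/2 for x -> 0 */
double intensity(double theta, double a, double k, double L, double E)
{
    double x = k * a * sin(theta);
    double f = 2 * M_PI * a * a * cos(theta) / L
             * (x != 0 ? gsl_sf_bessel_J1(x) / x : 0.5);
    return E * E / 2 * f * f;
}

double sample_theta(gsl_rng *r, double a, double k, double L, double E)
{
    /* I(theta) is bounded from above by its value at theta = 0 */
    double I_max = intensity(0, a, k, L, E);
    for (;;) {
        /* theta uniform on the half sphere: theta = arccos(1 - x), x in [0, 1) */
        double theta = acos(1 - gsl_rng_uniform(r));
        /* accept theta with probability I(theta)/I_max */
        if (gsl_rng_uniform(r) * I_max < intensity(theta, a, k, L, E))
            return theta;
    }
}
```

Calling `sample_theta()` $N$ times with the default parameters listed above
produces the sample to be binned.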
The sample was binned and stored in a histogram with a customizable number $n$
of bins, set by default to $n = 150$. An example is shown in @fig:original.
![Example of points sampled according to
$I(\theta)$.](images/6_original.pdf){#fig:original}
## Gaussian noise convolution {#sec:convolution}
The sample must then be smeared with Gaussian noise, with the aim of recovering
the original sample afterwards by implementing a deconvolution routine.
For this purpose, a 'kernel' histogram was built with an odd number $m$ of bins
and the same bin width as the previous one, but fewer bins ($m < n$), and was
filled according to a Gaussian distribution with mean $\mu$, corresponding to
the central bin, and standard deviation $\sigma$.
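One possible construction of such a kernel is sketched below, under the
assumption that the bins hold the values of the Gaussian evaluated at the bin
centres; the name `fill_kernel` and the normalisation choice are illustrative:

```c
#include <stddef.h>
#include <math.h>

/* m is odd and dtheta is the common bin width */
void fill_kernel(double *kern, size_t m, double dtheta, double sigma)
{
    double sum = 0;
    for (size_t j = 0; j < m; j++) {
        /* distance of the j-th bin centre from the central bin */
        double x = ((double)j - (double)(m / 2)) * dtheta;
        kern[j] = exp(-x * x / (2 * sigma * sigma));
        sum += kern[j];
    }
    /* normalise so that the convolution preserves the total counts */
    for (size_t j = 0; j < m; j++)
        kern[j] /= sum;
}
```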
Then, the original histogram was convolved with the kernel in order to obtain
the smeared signal. Results for various values of $\sigma$ are shown in
[@fig:results1; @fig:results2; @fig:results3].
The convolution was implemented as follows. Consider the definition of the
convolution of two functions $f(x)$ and $g(x)$:
$$
(f \otimes g) (x) = \int \limits_{- \infty}^{+ \infty} dy \, f(y) \, g(x - y)
$$
Since a histogram is made of discrete values, a discrete convolution of the
signal $s$ and the kernel $k$ must be computed. The procedure boils down to an
element-wise product between $s$ and the reversed histogram of $k$ for each
relative position of the two histograms. Namely, if $c_i$ is the
$i^{\text{th}}$ bin of the convolved histogram:
$$
c_i = \sum_j k_j s_{i - j}
$$
where $j$ runs over the bins of the kernel.
For a better understanding, see @fig:dot_conv. As can be seen, the resulting
histogram has $n + m - 1$ bins, more than the original $n$.
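This sum translates directly into code; a minimal sketch with illustrative
names (not the repository routine) is:

```c
#include <stddef.h>

/* discrete convolution c_i = sum_j k_j s_{i-j}: the signal s has n bins,
 * the kernel k has m bins, the result c has n + m - 1 bins */
void convolve(const double *s, size_t n,
              const double *k, size_t m,
              double *c)
{
    for (size_t i = 0; i < n + m - 1; i++) {
        c[i] = 0;
        for (size_t j = 0; j < m; j++)
            /* s_{i-j} contributes only where the signal is defined */
            if (i >= j && i - j < n)
                c[i] += k[j] * s[i - j];
    }
}
```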
\begin{figure}
\hypertarget{fig:dot_conv}{%
\centering
\begin{tikzpicture}
\definecolor{cyclamen}{RGB}{146, 24, 43}
% original histogram
\draw [thick, cyclamen, fill=cyclamen!05!white] (0.0,0) rectangle (0.5,2.5);
\draw [thick, cyclamen, fill=cyclamen!05!white] (0.5,0) rectangle (1.0,2.8);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,2.3);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,1.8);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,0) rectangle (2.5,1.4);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,0) rectangle (3.0,1.0);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,1.0);
\draw [thick, cyclamen, fill=cyclamen!05!white] (3.5,0) rectangle (4.0,0.6);
\draw [thick, cyclamen, fill=cyclamen!05!white] (4.0,0) rectangle (4.5,0.4);
\draw [thick, cyclamen, fill=cyclamen!05!white] (4.5,0) rectangle (5.0,0.2);
\draw [thick, cyclamen, fill=cyclamen!05!white] (5.0,0) rectangle (5.5,0.2);
\draw [thick, cyclamen] (6.0,0) -- (6.0,0.2);
\draw [thick, cyclamen] (6.5,0) -- (6.5,0.2);
\draw [thick, <->] (0,3.3) -- (0,0) -- (7,0);
% kernel histogram
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,-1) rectangle (1.5,-1.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,-1) rectangle (2.0,-1.6);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,-1) rectangle (2.5,-1.8);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,-1) rectangle (3.0,-1.6);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,-1) rectangle (3.5,-1.2);
\draw [thick, <->] (1,-2) -- (1,-1) -- (4,-1);
% arrows
\draw [thick, cyclamen, <->] (1.25,-0.2) -- (1.25,-0.8);
\draw [thick, cyclamen, <->] (1.75,-0.2) -- (1.75,-0.8);
\draw [thick, cyclamen, <->] (2.25,-0.2) -- (2.25,-0.8);
\draw [thick, cyclamen, <->] (2.75,-0.2) -- (2.75,-0.8);
\draw [thick, cyclamen, <->] (3.25,-0.2) -- (3.25,-0.8);
\draw [thick, cyclamen, ->] (2.25,-2.0) -- (2.25,-4.2);
% smeared histogram
\begin{scope}[shift={(0,-1)}]
\draw [thick, cyclamen, fill=cyclamen!05!white] (-1.0,-4.5) rectangle (-0.5,-4.3);
\draw [thick, cyclamen, fill=cyclamen!05!white] (-0.5,-4.5) rectangle ( 0.0,-4.2);
\draw [thick, cyclamen, fill=cyclamen!05!white] ( 0.0,-4.5) rectangle ( 0.5,-2.0);
\draw [thick, cyclamen, fill=cyclamen!05!white] ( 0.5,-4.5) rectangle ( 1.0,-1.6);
\draw [thick, cyclamen, fill=cyclamen!05!white] ( 1.0,-4.5) rectangle ( 1.5,-2.3);
\draw [thick, cyclamen, fill=cyclamen!05!white] ( 1.5,-4.5) rectangle ( 2.0,-2.9);
\draw [thick, cyclamen, fill=cyclamen!25!white] ( 2.0,-4.5) rectangle ( 2.5,-3.4);
\draw [thick, cyclamen] (3.0,-4.5) -- (3.0,-4.3);
\draw [thick, cyclamen] (3.5,-4.5) -- (3.5,-4.3);
\draw [thick, cyclamen] (4.0,-4.5) -- (4.0,-4.3);
\draw [thick, cyclamen] (4.5,-4.5) -- (4.5,-4.3);
\draw [thick, cyclamen] (5.0,-4.5) -- (5.0,-4.3);
\draw [thick, cyclamen] (5.5,-4.5) -- (5.5,-4.3);
\draw [thick, cyclamen] (6.0,-4.5) -- (6.0,-4.3);
\draw [thick, cyclamen] (6.5,-4.5) -- (6.5,-4.3);
\draw [thick, cyclamen] (7.0,-4.5) -- (7.0,-4.3);
\draw [thick, cyclamen] (7.5,-4.5) -- (7.5,-4.3);
\draw [thick, <->] (-1,-2.5) -- (-1,-4.5) -- (8,-4.5);
\end{scope}
% nodes
\node [above] at (2.25,-5.5) {$c_i$};
\node [above] at (3.25,0) {$s_i$};
\node [above] at (1.95,0) {$s_{i-3}$};
\node [below] at (1.75,-1) {$k_3$};
\end{tikzpicture}
\caption{Element-wise product as a step of the convolution between the original
signal (top) and the kernel (center). The final result is the partially
built histogram at the bottom.}\label{fig:dot_conv}
}
\end{figure}
## Unfolding with FFT
Two different unfolding routines were implemented, one of which exploits the
Fast Fourier Transform. This method is based on the property of the Fourier
transform according to which, given two functions $f(x)$ and $g(x)$:
$$
\hat{F}[f \otimes g] = \hat{F}[f] \cdot \hat{F}[g]
$$
where $\hat{F}[\quad]$ stands for the Fourier transform of its argument.
Thus, this technique amounts to computing the Fourier transforms of the smeared
signal and of the kernel, taking the ratio of the two and transforming the
result back:
$$
\hat{F}[s \otimes k] = \hat{F}[s] \cdot \hat{F}[k] \thus
\hat{F} [s] = \frac{\hat{F}[s \otimes k]}{\hat{F}[k]}
$$
Since the histogram is a discrete set of data, the Discrete Fourier Transform
(DFT) was employed. In particular, FFTs are efficient algorithms for computing
the DFT. Given a set of $n$ values {$z_i$}, each one is transformed into:
$$
x_j = \sum_{k=0}^{n-1} z_k \exp \left( - \frac{2 \pi i j k}{n} \right)
$$
The evaluation of the DFT is a matrix-vector multiplication $W \vec{z}$. A
general matrix-vector multiplication takes $O(n^2)$ operations. FFT algorithms,
instead, use a divide-and-conquer strategy to factorize the matrix into smaller
sub-matrices. If $n$ can be factorized into a product of integers $n_1, n_2,
\ldots, n_m$, then the DFT can be computed in $O(n \sum n_i) < O(n^2)$
operations, hence the name.
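For instance, a transform of length $150 = 2 \cdot 3 \cdot 5 \cdot 5$ takes of
the order of $150 \cdot (2 + 3 + 5 + 5) = 2250$ operations instead of
$150^2 = 22500$.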
The inverse Fourier transform is correspondingly defined as:
$$
z_j = \frac{1}{n}
\sum_{k=0}^{n-1} x_k \exp \left( \frac{2 \pi i j k}{n} \right)
$$
In GSL, the functions `gsl_fft_complex_forward()` and
`gsl_fft_complex_inverse()` compute the forward and inverse transforms,
respectively.
In order to carry out this procedure, every histogram was turned into a vector.
The kernel vector was zero-padded and centred to match the length of the signal
vector, making it possible to divide the entries of the two vectors one by one.
The inputs and outputs for the complex FFT routines are packed arrays of
floating point numbers. In a packed array the real and imaginary parts of
each complex number are placed in alternate neighboring elements.
In the present case, the sequence of values to be transformed is made of real
numbers, but its Fourier transform is not real: it is a complex sequence which
satisfies:
$$
z_k = z^*_{n-k}
$$
where $z^*$ is the conjugate of $z$. A sequence with this symmetry is called
'half-complex'. This structure requires particular storage layouts for the
forward transform (from real to half-complex) and inverse transform (from
half-complex to real). As a consequence, the routines are divided into two sets:
`gsl_fft_real` and `gsl_fft_halfcomplex`. The symmetry of the half-complex
sequence implies that only half of the complex numbers in the output need to be
stored. This works for all lengths: when the length is even, the middle value
is real. Thus, only $n$ real numbers are required to store the half-complex
sequence (half for the real part and half for the imaginary).
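For instance, for $n = 8$ the half-complex array produced by
`gsl_fft_real_transform()` is laid out (assuming the mixed-radix convention
documented in the GSL manual) as $\{\text{Re}\, z_0,\ \text{Re}\, z_1,\
\text{Im}\, z_1,\ \text{Re}\, z_2,\ \text{Im}\, z_2,\ \text{Re}\, z_3,\
\text{Im}\, z_3,\ \text{Re}\, z_4\}$: eight real numbers, the last one being
real because $n$ is even.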
If the bin width is $\Delta \theta$, then the DFT domain ranges from $-1 / (2
\Delta \theta)$ to $+1 / (2 \Delta \theta)$. The aforementioned GSL functions
store the positive frequencies from the beginning of the array up to the middle,
and the negative ones backwards from the end of the array (see @fig:reorder).
\begin{figure}
\hypertarget{fig:reorder}{%
\centering
\begin{tikzpicture}
\definecolor{cyclamen}{RGB}{146, 24, 43}
% standard histogram
\begin{scope}[shift={(7,0)}]
\draw [thick, cyclamen] (0.5,0) -- (0.5,0.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,0.6);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,1.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,0) rectangle (2.5,1.4);
\draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,0) rectangle (3.0,1.4);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,1.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.5,0) rectangle (4.0,0.6);
\draw [thick, cyclamen] (4.5,0) -- (4.5,0.2);
\draw [thick, ->] (0,0) -- (5,0);
\draw [thick, ->] (2.5,0) -- (2.5,2);
\end{scope}
% shifted histogram
\draw [thick, cyclamen, fill=cyclamen!25!white] (0.5,0) rectangle (1.0,1.4);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,1.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,0.6);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,0.6);
\draw [thick, cyclamen, fill=cyclamen!25!white] (3.5,0) rectangle (4.0,1.2);
\draw [thick, cyclamen, fill=cyclamen!25!white] (4.0,0) rectangle (4.5,1.4);
\draw [thick, ->] (0,0) -- (5,0);
\draw [thick, ->] (2.5,0) -- (2.5,2);
\end{tikzpicture}
\caption{On the left, an example of a DFT as returned by the GSL functions;
on the right, the same dataset rearranged in the ``intuitive''
order of the sequence.}\label{fig:reorder}
}
\end{figure}
Once $\hat{F}[s \otimes k]$ and $\hat{F}[k]$ are computed, they must be unpacked
into standard complex numbers in order to compute the ratio between them; the
result must then be packed back into the half-complex format before applying the
inverse DFT.
GSL provides the function `gsl_fft_halfcomplex_unpack()`, which converts vectors
from the half-complex format to the standard complex format. The inverse
procedure, needed before inverse-transforming $\hat{F}[s]$, is not provided by
GSL and was therefore implemented in the code.
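For illustration only, the division of the transforms can also be sketched with
the complex packed routines mentioned earlier, which sidesteps the half-complex
bookkeeping; this is a simplified sketch with illustrative names, not the
routine used in the code:

```c
#include <stddef.h>
#include <gsl/gsl_fft_complex.h>

#define RE(z, i) ((z)[2 * (i)])
#define IM(z, i) ((z)[2 * (i) + 1])

/* `smeared` and `kernel` are packed arrays of 2n doubles (real and
 * imaginary parts interleaved) holding the smeared signal and the
 * zero-padded, centred kernel */
void fft_deconvolve(double *smeared, double *kernel, size_t n)
{
    gsl_fft_complex_wavetable *wt = gsl_fft_complex_wavetable_alloc(n);
    gsl_fft_complex_workspace *ws = gsl_fft_complex_workspace_alloc(n);

    /* forward transforms of the smeared signal and of the kernel */
    gsl_fft_complex_forward(smeared, 1, n, wt, ws);
    gsl_fft_complex_forward(kernel, 1, n, wt, ws);

    /* element-wise complex division: F[s] = F[smeared] / F[kernel] */
    for (size_t i = 0; i < n; i++) {
        double den = RE(kernel, i) * RE(kernel, i)
                   + IM(kernel, i) * IM(kernel, i);
        if (den == 0) continue;   /* skip bins where F[kernel] vanishes */
        double re = (RE(smeared, i) * RE(kernel, i)
                   + IM(smeared, i) * IM(kernel, i)) / den;
        double im = (IM(smeared, i) * RE(kernel, i)
                   - RE(smeared, i) * IM(kernel, i)) / den;
        RE(smeared, i) = re;
        IM(smeared, i) = im;
    }

    /* inverse transform: the deconvolved signal, still to be rearranged
     * and trimmed as described below */
    gsl_fft_complex_inverse(smeared, 1, n, wt, ws);

    gsl_fft_complex_wavetable_free(wt);
    gsl_fft_complex_workspace_free(ws);
}
```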
The fact that the Gaussian kernel is centred in the middle of the vector, and
not in the $\text{zero}^{\text{th}}$ bin, causes the final result to be shifted
by half the length of the vector, just as if it had been produced by a DFT. This
makes it necessary to swap the two halves of the final result.
Finally, the outer bins exceeding the range of the original signal are trimmed
away in order to restore the original number of bins $n$. Results are shown in
[@fig:results1; @fig:results2; @fig:results3].
## Unfolding with Richardson-Lucy
The Richardson-Lucy (RL) deconvolution is an iterative procedure commonly used
for recovering an image that has been blurred by a known point spread function.
It is based on the fact that an ideal point source does not appear as a point
but is spread out into the so-called point spread function, thus the observed
image can be represented in terms of a transition matrix
$P$ operating on an underlying image:
$$
d_i = \sum_{j} u_j \, P_{i, j}
$$
where $u_j$ is the intensity of the underlying image at pixel $j$ and $d_i$ is
the detected intensity at pixel $i$. Hence, the matrix describes the portion of
signal from the source pixel $j$ that is detected in pixel $i$.
In one dimension, the transfer function can be expressed in terms of the
distance between the source pixel $j$ and the observed pixel $i$:
$$
P_{i, j} = \widetilde{P}(i-j) = P_{i - j}
$$
In order to estimate $u_j$ given {$d_i$} and $\widetilde{P}$, the following
iterative procedure can be applied, where $\hat{u}^t_j$ denotes the estimate of
$u_j$ at iteration $t$. The update reads:
$$
\hat{u}^{t+1}_j = \hat{u}^t_j \sum_i \frac{d_i}{c_i} \, P_{i - j}
\with c_i = \sum_j \hat{u}^t_j \, P_{i - j}
$$
where $c_i$ is an estimate of the blurred signal obtained from the previous
estimate of the clean signal.
It has been shown empirically that if this iteration converges, it converges to
the maximum likelihood solution for $u_j$. Written in terms of convolutions, it
becomes:
$$
\hat{u}^{t+1} = \hat {u}^{t} \cdot \left( \frac{d}{{\hat{u}^{t}} \otimes P}
\otimes P^{\star} \right)
$$
where the division and multiplication are element wise, and
$P^{\star}$ is the flipped point spread function.
When implemented, this method results in a simple step-wise routine (a sketch
is given after the list):
- create a flipped copy of the kernel;
- choose a zero-order estimate for {$c_i$};
- at each step, compute the convolutions with the method described in
  @sec:convolution, together with the element-wise product and division;
- proceed until the given number of iterations is reached.
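A single RL iteration, written directly from the update rule above, might look
as follows; this is an illustrative sketch, not the repository routine, and it
assumes a centred kernel `P` of odd length `m`, so that $P_{i-j}$ corresponds
to `P[i - j + m/2]`:

```c
#include <stdlib.h>

/* d: smeared signal (n bins); u: current estimate, updated in place */
void rl_step(const double *d, double *u, size_t n,
             const double *P, size_t m)
{
    long half = (long)(m / 2);
    double *c = calloc(n, sizeof(double));
    double *u_new = malloc(n * sizeof(double));

    /* c_i = sum_j u_j P_{i-j}: blurred version of the current estimate */
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++) {
            long p = (long)i - (long)j + half;
            if (p >= 0 && p < (long)m)
                c[i] += u[j] * P[p];
        }

    /* u_j <- u_j sum_i (d_i / c_i) P_{i-j}; this correlation is the
     * convolution of d/c with the flipped kernel */
    for (size_t j = 0; j < n; j++) {
        double sum = 0;
        for (size_t i = 0; i < n; i++) {
            long p = (long)i - (long)j + half;
            if (p >= 0 && p < (long)m && c[i] != 0)
                sum += d[i] / c[i] * P[p];
        }
        u_new[j] = u[j] * sum;
    }

    for (size_t j = 0; j < n; j++)
        u[j] = u_new[j];
    free(c);
    free(u_new);
}
```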
In this case, the zero-order estimate was set to $c_i = 0.5 \, \forall i$ and it
was found empirically that the best result is obtained with three iterations;
otherwise the routine starts returning fanciful histograms. Results are shown in
[@fig:results1; @fig:results2; @fig:results3].
## Results comparison
In [@fig:results1; @fig:results2; @fig:results3] the results obtained for three
different values of $\sigma$ are shown. The tested values are $\Delta \theta$,
$0.5 \, \Delta \theta$ and $0.05 \, \Delta \theta$, where $\Delta \theta$ is the
bin width of the original histogram, previously introduced in @fig:original. In
each figure, the convolved signal is shown at the top, the histogram deconvolved
with the FFT method is in the middle and the one deconvolved with RL is at the
bottom.
As can be seen, increasing the value of $\sigma$ implies a stronger smoothing of
the curve. The FFT deconvolution seems unaffected by changes in $\sigma$: it
always gives the same outcome, remarkably similar to the original signal. The
same cannot be said of the RL deconvolution, which, on the other hand, is
heavily influenced by the magnitude of $\sigma$: the greater $\sigma$, the worse
the deconvolved result. In fact, for a fixed number of steps, the deconvolved
signal always stays roughly the same 'distance' from the convolved one: if the
convolved signal is very smooth, the deconvolved one is very smooth too, and if
the convolved signal is less smooth, so is the result.
The possibility of adding Poisson noise to the convolved histogram was also
implemented, in order to check whether the deconvolution is affected by this
kind of noise. The case with $\sigma = \Delta \theta$ was taken as an example.
In @fig:poisson the results are shown for both methods when a Poisson noise with
mean $\mu = 50$ is applied.
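One plausible reading of this step (the exact recipe is not spelled out above)
is to shift each bin by a centred Poisson fluctuation of mean $\mu$; a sketch
assuming GSL's `gsl_ran_poisson()`:

```c
#include <stddef.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

void add_poisson_noise(gsl_rng *r, double *bins, size_t n, double mu)
{
    for (size_t i = 0; i < n; i++)
        bins[i] += (double)gsl_ran_poisson(r, mu) - mu;
}
```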
In both cases, the addition of the Poisson noise partially affects the
deconvolution. With the FFT method, small spikes appear nearly everywhere on the
curve, most visibly at its edges, where the expected counts are very small. This
is because the technique is very accurate and hence returns nearly exactly its
input, which in this case is the expected signal with the Poisson noise added.
On the other hand, the Richardson-Lucy routine is less affected by this further
complication, being already inaccurate in itself.
<div id="fig:results1">
![Convolved signal.](images/noise-0.05.pdf){width=12cm}
![Deconvolved signal with FFT.](images/deco-fft-0.05.pdf){width=12cm}
![Deconvolved signal with RL.](images/deco-rl-0.05.pdf){width=12cm}
Results for $\sigma = 0.05 \Delta \theta$, where $\Delta \theta$ is the bin
width.
</div>
<div id="fig:results2">
![Convolved signal.](images/noise-0.5.pdf){width=12cm}
![Deconvolved signal with FFT.](images/deco-fft-0.5.pdf){width=12cm}
![Deconvolved signal with RL.](images/deco-rl-0.5.pdf){width=12cm}
Results for $\sigma = 0.5 \Delta \theta$, where $\Delta \theta$ is the bin
width.
</div>
<div id="fig:results3">
![Convolved signal.](images/noise-1.pdf){width=12cm}
![Deconvolved signal with FFT.](images/deco-fft-1.pdf){width=12cm}
![Deconvolved signal with RL.](images/deco-rl-1.pdf){width=12cm}
Results for $\sigma = \Delta \theta$, where $\Delta \theta$ is the bin width.
</div>
<div id="fig:poisson">
![Deconvolved signal with FFT.](images/poisson-fft.pdf){width=12cm}
![Deconvolved signal with RL.](images/poisson-rl.pdf){width=12cm}
Results for $\sigma = \Delta \theta$, with Poisson noise added.
</div>