
# Exercise 6

## Generating points according to Fraunhofer diffraction

The diffraction of a plane wave through a round slit is to be simulated by generating $N = 50'000$ points according to the intensity distribution $I(\theta)$ on a screen at a great distance $L$ from the slit itself:


$$
  I(\theta) = \frac{E^2}{2} \left( \frac{2 \pi a^2 \cos{\theta}}{L}
  \frac{J_1(x)}{x} \right)^2 \with x = k a \sin{\theta}
$$

where:

- $E$ is the electric field amplitude, default set $E = \SI{1e4}{V/m}$;
- $a$ is the radius of the slit aperture, default set $a = \SI{0.01}{m}$;
- $\theta$ is the angle specified in @fig:slit;
- $J_1$ is the Bessel function of the first kind;
- $k$ is the wavenumber, default set $k = \SI{1e-4}{m^{-1}}$;
- $L$ is the distance to the screen, default set $L = \SI{1}{m}$.
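For reference, the intensity above can be evaluated with GSL's Bessel routine. The following is a minimal C sketch, not the actual code: the function name and argument list are illustrative.

```c
#include <math.h>
#include <gsl/gsl_sf_bessel.h>

/* Minimal sketch of I(theta) as defined above; names are illustrative. */
double intensity(double theta, double E, double a, double k, double L)
{
    double x = k * a * sin(theta);
    /* J_1(x)/x -> 1/2 for x -> 0 */
    double j1x = (x != 0) ? gsl_sf_bessel_J1(x) / x : 0.5;
    double f = (2 * M_PI * a * a * cos(theta) / L) * j1x;
    return (E * E / 2) * f * f;
}
```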

\begin{figure}
\hypertarget{fig:slit}{%
\centering
\begin{tikzpicture}
  \definecolor{cyclamen}{RGB}{146, 24, 43}
  % Walls
  \draw [thick] (-1,3) -- (1,3) -- (1,0.3) -- (1.2,0.3) -- (1.2,3) -- (9,3);
  \draw [thick] (-1,-3) -- (1,-3) -- (1,-0.3) -- (1.2,-0.3) -- (1.2,-3) -- (9,-3);
  \draw [thick] (10,3) -- (9.8,3) -- (9.8,-3) -- (10,-3);
  % Lines
  \draw [thick, gray] (0.7,0.3) -- (0.5,0.3);
  \draw [thick, gray] (0.7,-0.3) -- (0.5,-0.3);
  \draw [thick, gray] (0.6,0.3) -- (0.6,-0.3);
  \draw [thick, gray] (1.2,0) -- (9.8,0);
  \draw [thick, gray] (1.2,-0.1) -- (1.2,0.1);
  \draw [thick, gray] (9.8,-0.1) -- (9.8,0.1);
  \draw [thick, cyclamen] (1.2,0) -- (9.8,-2);
  \draw [thick, cyclamen] (7,0) to [out=-90, in=50] (6.6,-1.23);
  % Nodes
  \node at (0,0) {$2a$};
  \node at (5.5,0.4) {$L$};
  \node [cyclamen] at (5.5,-0.4) {$\theta$};
  \node [rotate=-90] at (10.2,0) {screen};
\end{tikzpicture}
\caption{Fraunhofer diffraction.}\label{fig:slit}
}
\end{figure}

Once again, the try and catch method described in @sec:3 was implemented, and the same procedure for generating $\theta$ was employed. This time, though, $\theta$ must be uniformly distributed over the half sphere:

\begin{align*}
  \frac{d^2 P}{d\omega^2} = \text{const} = \frac{1}{2 \pi}
  &\thus d^2 P = \frac{1}{2 \pi} \, d\omega^2
       = \frac{1}{2 \pi} \, d\phi \sin{\theta} \, d\theta \\
  &\thus \frac{dP}{d\theta} = \int_0^{2 \pi} d\phi \, \frac{1}{2 \pi} \sin{\theta}
       = \frac{1}{2 \pi} \sin{\theta} \, 2 \pi = \sin{\theta}
\end{align*}

\begin{align*}
  \theta = \theta (x)
  &\thus \frac{dP}{d\theta} = \frac{dP}{dx} \cdot \left| \frac{dx}{d\theta} \right|
       = \left. \frac{dP}{dx} \middle/ \, \left| \frac{d\theta}{dx} \right| \right. \\
  &\thus \sin{\theta} = \left. 1 \middle/ \, \left| \frac{d\theta}{dx} \right| \right.
\end{align*}

where $dP/dx = 1$, since $x$ is drawn uniformly in $[0, 1]$. If $\theta$ is chosen to grow together with $x$, the absolute value can be omitted:

\begin{align*}
  \frac{d\theta}{dx} = \frac{1}{\sin{\theta}}
  &\thus \sin(\theta) \, d\theta = dx \\
  &\thus \left. - \cos (\theta') \right|_{0}^{\theta} = x(\theta) - x(0) = x - 0 = x \\
  &\thus 1 - \cos(\theta) = x \\
  &\thus \theta = \text{acos} (1 - x)
\end{align*}
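Putting the two pieces together, the generation step can be sketched as follows, using GSL's random number generator, the `intensity()` sketch above, and an assumed upper bound `I_max` on the intensity; this is an illustration of the method, not the actual code.

```c
#include <math.h>
#include <gsl/gsl_rng.h>

/* Sketch of the generation step: theta is drawn uniformly over the half
 * sphere via the inverse CDF derived above, then kept or discarded by
 * comparison with I(theta) ("try and catch"). */
double sample_theta(const gsl_rng *r, double I_max)
{
    for (;;) {
        double theta = acos(1 - gsl_rng_uniform(r));   /* in [0, pi/2) */
        double y = I_max * gsl_rng_uniform(r);
        if (y < intensity(theta, 1e4, 0.01, 1e-4, 1))  /* default parameters */
            return theta;
    }
}
```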

The sample was binned and stored in a histogram with a customizable number $n$ of bins, default set $n = 150$. An example is shown in @fig:original.

Example of an intensity histogram.{#fig:original}

## Gaussian convolution {#sec:convolution}

The sample must then be smeared with a Gaussian function, with the aim of recovering the original sample afterwards by means of a deconvolution routine.
For this purpose, a 'kernel' histogram with an odd number $m$ of bins and the same bin width as the previous one, but a smaller number of them ($m < n$), was filled with $m$ points according to a Gaussian distribution with mean $\mu$, corresponding to the central bin, and standard deviation $\sigma$.
Then, the original histogram was convolved with the kernel in order to obtain the smeared signal. Results for various values of $\sigma$ are shown in [@fig:results1; @fig:results2; @fig:results3].
The convolution was implemented as follows. Consider the definition of the convolution of two functions $f(x)$ and $g(x)$:


$$
  (f \otimes g) (x) = \int \limits_{- \infty}^{+ \infty} dy \, f(y) g(x - y)
$$

Since a histogram is made of discrete values, a discrete convolution of the signal $s$ with the kernel $k$ must be computed. Hence, the procedure boils down to an element-wise product between $s$ and the reversed histogram of $k$ for each relative position of the two histograms. Namely, if $c_i$ is the $i^{\text{th}}$ bin of the convolved histogram:


$$
  c_i = \sum_j k_j s_{i - j}
$$

where $j$ runs over the bins of the kernel; a minimal implementation is sketched below.
For a better understanding, see @fig:dot_conv. As can be seen, the resulting histogram has $n + m - 1$ bins, a number greater than the initial one.
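As a concrete reference, here is a minimal C sketch of this discrete convolution, assuming the bin contents are stored in plain `double` arrays; the names are illustrative.

```c
#include <stddef.h>

/* Minimal sketch of the discrete convolution: s has n bins, k has m bins,
 * c must hold n + m - 1 bins. */
void convolve(const double *s, size_t n,
              const double *k, size_t m, double *c)
{
    for (size_t i = 0; i < n + m - 1; i++) {
        c[i] = 0;
        for (size_t j = 0; j < m; j++)
            /* the term k_j * s_{i-j} exists only if 0 <= i-j < n */
            if (i >= j && i - j < n)
                c[i] += k[j] * s[i - j];
    }
}
```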

\begin{figure}
\hypertarget{fig:dot_conv}{%
\centering
\begin{tikzpicture}
  \definecolor{cyclamen}{RGB}{146, 24, 43}
  % original histogram
  \draw [thick, cyclamen, fill=cyclamen!05!white] (0.0,0) rectangle (0.5,2.5);
  \draw [thick, cyclamen, fill=cyclamen!05!white] (0.5,0) rectangle (1.0,2.8);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,2.3);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,1.8);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,0) rectangle (2.5,1.4);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,0) rectangle (3.0,1.0);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,1.0);
  \draw [thick, cyclamen, fill=cyclamen!05!white] (3.5,0) rectangle (4.0,0.6);
  \draw [thick, cyclamen, fill=cyclamen!05!white] (4.0,0) rectangle (4.5,0.4);
  \draw [thick, cyclamen, fill=cyclamen!05!white] (4.5,0) rectangle (5.0,0.2);
  \draw [thick, cyclamen, fill=cyclamen!05!white] (5.0,0) rectangle (5.5,0.2);
  \draw [thick, cyclamen] (6.0,0) -- (6.0,0.2);
  \draw [thick, cyclamen] (6.5,0) -- (6.5,0.2);
  \draw [thick, <->] (0,3.3) -- (0,0) -- (7,0);
  % kernel histogram
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,-1) rectangle (1.5,-1.2);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,-1) rectangle (2.0,-1.6);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,-1) rectangle (2.5,-1.8);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,-1) rectangle (3.0,-1.6);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,-1) rectangle (3.5,-1.2);
  \draw [thick, <->] (1,-2) -- (1,-1) -- (4,-1);
  % arrows
  \draw [thick, cyclamen, <->] (1.25,-0.2) -- (1.25,-0.8);
  \draw [thick, cyclamen, <->] (1.75,-0.2) -- (1.75,-0.8);
  \draw [thick, cyclamen, <->] (2.25,-0.2) -- (2.25,-0.8);
  \draw [thick, cyclamen, <->] (2.75,-0.2) -- (2.75,-0.8);
  \draw [thick, cyclamen, <->] (3.25,-0.2) -- (3.25,-0.8);
  \draw [thick, cyclamen, ->] (2.25,-2.0) -- (2.25,-4.2);
  % smeared histogram
  \begin{scope}[shift={(0,-1)}]
    \draw [thick, cyclamen, fill=cyclamen!05!white] (-1.0,-4.5) rectangle (-0.5,-4.3);
    \draw [thick, cyclamen, fill=cyclamen!05!white] (-0.5,-4.5) rectangle ( 0.0,-4.2);
    \draw [thick, cyclamen, fill=cyclamen!05!white] ( 0.0,-4.5) rectangle ( 0.5,-2.0);
    \draw [thick, cyclamen, fill=cyclamen!05!white] ( 0.5,-4.5) rectangle ( 1.0,-1.6);
    \draw [thick, cyclamen, fill=cyclamen!05!white] ( 1.0,-4.5) rectangle ( 1.5,-2.3);
    \draw [thick, cyclamen, fill=cyclamen!05!white] ( 1.5,-4.5) rectangle ( 2.0,-2.9);
    \draw [thick, cyclamen, fill=cyclamen!25!white] ( 2.0,-4.5) rectangle ( 2.5,-3.4);
    \draw [thick, cyclamen] (3.0,-4.5) -- (3.0,-4.3);
    \draw [thick, cyclamen] (3.5,-4.5) -- (3.5,-4.3);
    \draw [thick, cyclamen] (4.0,-4.5) -- (4.0,-4.3);
    \draw [thick, cyclamen] (4.5,-4.5) -- (4.5,-4.3);
    \draw [thick, cyclamen] (5.0,-4.5) -- (5.0,-4.3);
    \draw [thick, cyclamen] (5.5,-4.5) -- (5.5,-4.3);
    \draw [thick, cyclamen] (6.0,-4.5) -- (6.0,-4.3);
    \draw [thick, cyclamen] (6.5,-4.5) -- (6.5,-4.3);
    \draw [thick, cyclamen] (7.0,-4.5) -- (7.0,-4.3);
    \draw [thick, cyclamen] (7.5,-4.5) -- (7.5,-4.3);
    \draw [thick, <->] (-1,-2.5) -- (-1,-4.5) -- (8,-4.5);
  \end{scope}
  % nodes
  \node [above] at (2.25,-5.5) {$c_i$};
  \node [above] at (3.25,0) {$s_i$};
  \node [above] at (1.95,0) {$s_{i-3}$};
  \node [below] at (1.75,-1) {$k_3$};
\end{tikzpicture}
\caption{Element-wise product as a step of the convolution between the
         original signal (top) and the kernel (middle). The final result
         is the bottom histogram.}\label{fig:dot_conv}
}
\end{figure}

## Unfolding with FFT

Two different unfolding routines were implemented, one of which exploits the Fast Fourier Transform. This method is based on the property of the Fourier transform according to which, given two functions $f(x)$ and $g(x)$:


$$
  \hat{F}[f \otimes g] = \hat{F}[f] \cdot \hat{F}[g]
$$

where $\hat{F}[\quad]$ stands for the Fourier transform of its argument.
Thus, this technique consists in computing the Fourier transforms of the smeared signal and of the kernel, taking the ratio between them, and anti-transforming the result:


$$
  \hat{F}[s \otimes k] = \hat{F}[s] \cdot \hat{F}[k] \thus
  \hat{F} [s] = \frac{\hat{F}[s \otimes k]}{\hat{F}[k]}
$$

Since the histogram is a discrete set of data, the Discrete Fourier Transform (DFT) was employed; FFTs are, in particular, efficient algorithms for calculating the DFT. Given a set of $n$ values $\{z_k\}$, the transform maps them into the values:


$$
  x_j = \sum_{k=0}^{n-1} z_k \exp \left( - \frac{2 \pi i j k}{n} \right)
$$

The evaluation of the DFT is a matrix-vector multiplication $W \vec{z}$. A general matrix-vector multiplication takes $O(n^2)$ operations. FFT algorithms, instead, use a divide-and-conquer strategy to factorize the matrix into smaller sub-matrices. If $n$ can be factorized into a product of integers $n_1, n_2 \ldots n_m$, then the DFT can be computed in $O(n \sum n_i) < O(n^2)$ operations, hence the name 'fast'.
The inverse Fourier transform is defined in turn as:


$$
  z_j = \frac{1}{n}
        \sum_{k=0}^{n-1} x_k \exp \left( \frac{2 \pi i j k}{n} \right)
$$

In GSL, gsl_fft_complex_forward() and gsl_fft_complex_inverse() are the functions which compute the forward and inverse transforms, respectively.

In order to accomplish this procedure, every histogram was transformed into a vector. The kernel vector was zero-padded and centred in the middle to make its length equal to that of the signal, making it feasible to divide the entries of the two vectors one by one, as sketched below.
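A possible sketch of the padding step, with illustrative names:

```c
#include <stdlib.h>

/* Sketch of the zero-padding: the kernel (m bins, m < n, peak in the
 * central bin) is copied into a zero-filled vector of the same length n
 * as the signal, keeping it centred. */
double *pad_kernel(const double *k, size_t m, size_t n)
{
    double *padded = calloc(n, sizeof(double));
    for (size_t j = 0; j < m; j++)
        padded[(n - m) / 2 + j] = k[j];
    return padded;
}
```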

The inputs and outputs of the complex FFT routines are packed arrays of floating point numbers, in which the real and imaginary parts of each complex number are placed in alternate neighboring elements.
In this special case, the sequence of values to be transformed is made of real numbers, but the Fourier transform is not real: it is a complex sequence which satisfies:


$$
  z_k = z^*_{n-k}
$$

where z^* is the conjugate of z. A sequence with this symmetry is called 'half-complex'. This structure requires particular storage layouts for the forward transform (from real to half-complex) and inverse transform (from half-complex to real). As a consequence, the routines are divided into two sets: gsl_fft_real and gsl_fft_halfcomplex. The symmetry of the half-complex sequence implies that only half of the complex numbers in the output need to be stored. This works for all lengths: when the length is even, the middle value is real. Thus, only n real numbers are required to store the half-complex sequence (half for the real part and half for the imaginary).
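To illustrate how these two routine sets fit together, the following is a minimal sketch of a forward real-to-half-complex transform followed by the inverse, using GSL's mixed-radix interface; the function name is illustrative.

```c
#include <gsl/gsl_fft_real.h>
#include <gsl/gsl_fft_halfcomplex.h>

/* Sketch: in-place forward transform (real -> half-complex) and inverse
 * (half-complex -> real) of an array of n doubles. */
void transform_roundtrip(double *data, size_t n)
{
    gsl_fft_real_workspace        *ws = gsl_fft_real_workspace_alloc(n);
    gsl_fft_real_wavetable        *rw = gsl_fft_real_wavetable_alloc(n);
    gsl_fft_halfcomplex_wavetable *hw = gsl_fft_halfcomplex_wavetable_alloc(n);

    gsl_fft_real_transform(data, 1, n, rw, ws);       /* to half-complex */
    gsl_fft_halfcomplex_inverse(data, 1, n, hw, ws);  /* back to real    */

    gsl_fft_real_wavetable_free(rw);
    gsl_fft_halfcomplex_wavetable_free(hw);
    gsl_fft_real_workspace_free(ws);
}
```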
If the bin width is $\Delta \theta$, then the DFT domain ranges from $-1 / (2 \Delta \theta)$ to $+1 / (2 \Delta \theta)$. The aforementioned GSL functions store the positive frequencies from the beginning of the array up to the middle, and the negative ones backwards from the end of the array (see @fig:reorder).

\begin{figure}
\hypertarget{fig:reorder}{%
\centering
\begin{tikzpicture}
  \definecolor{cyclamen}{RGB}{146, 24, 43}
  % standard histogram
  \begin{scope}[shift={(7,0)}]
    \draw [thick, cyclamen] (0.5,0) -- (0.5,0.2);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,0.6);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,1.2);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (2.0,0) rectangle (2.5,1.4);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (2.5,0) rectangle (3.0,1.4);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,1.2);
    \draw [thick, cyclamen, fill=cyclamen!25!white] (3.5,0) rectangle (4.0,0.6);
    \draw [thick, cyclamen] (4.5,0) -- (4.5,0.2);
    \draw [thick, ->] (0,0) -- (5,0);
    \draw [thick, ->] (2.5,0) -- (2.5,2);
  \end{scope}
  % shifted histogram
  \draw [thick, cyclamen, fill=cyclamen!25!white] (0.5,0) rectangle (1.0,1.4);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.0,0) rectangle (1.5,1.2);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (1.5,0) rectangle (2.0,0.6);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (3.0,0) rectangle (3.5,0.6);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (3.5,0) rectangle (4.0,1.2);
  \draw [thick, cyclamen, fill=cyclamen!25!white] (4.0,0) rectangle (4.5,1.4);
  \draw [thick, ->] (0,0) -- (5,0);
  \draw [thick, ->] (2.5,0) -- (2.5,2);
\end{tikzpicture}
\caption{On the left, an example of the DFT as it is returned by the GSL
         functions; on the right, the same dataset rearranged in the
         "intuitive" order.}\label{fig:reorder}
}
\end{figure}

Once $\hat{F}[s \otimes k]$ and $\hat{F}[k]$ are computed, their standard complex format must be restored in order to treat them as ordinary complex numbers and compute the ratio between them. The result must then return to the half-complex format before the inverse DFT is applied.
GSL provides the function gsl_fft_halfcomplex_unpack(), which converts vectors from the half-complex format to the standard complex format. The inverse procedure, required in order to anti-transform $\hat{F}[s]$, is not provided by GSL and was implemented in the code.
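The element-wise ratio in the unpacked format can be sketched as follows; the arrays are assumed to be in GSL's packed complex layout (alternating real and imaginary parts, as produced by gsl_fft_halfcomplex_unpack()), and the names are illustrative.

```c
#include <stddef.h>

/* Element-wise complex ratio s = sk / k for n packed complex numbers
 * (re, im, re, im, ...): sk holds F[s (x) k], k holds F[k]. */
void spectrum_ratio(const double *sk, const double *k, double *s, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        double ar = sk[2*i], ai = sk[2*i + 1];   /* F[s (x) k]_i */
        double br = k[2*i],  bi = k[2*i + 1];    /* F[k]_i       */
        double den = br*br + bi*bi;
        s[2*i]     = (ar*br + ai*bi) / den;      /* a / b = a b* / |b|^2 */
        s[2*i + 1] = (ai*br - ar*bi) / den;
    }
}
```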
Since the Gaussian kernel is centred in the middle of the vector, rather than in the $\text{zero}^{th}$ bin, the final result comes out shifted by half the length of the vector, just as if it had been produced by a DFT. This makes it necessary to swap the two halves of the final result.
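A sketch of this final rearrangement, assuming an even vector length:

```c
#include <stddef.h>

/* Swap the two halves of the deconvolved vector (even n assumed). */
void swap_halves(double *v, size_t n)
{
    for (size_t i = 0; i < n / 2; i++) {
        double tmp = v[i];
        v[i] = v[i + n / 2];
        v[i + n / 2] = tmp;
    }
}
```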

At the end, the outer bins which exceed the range of the original signal are cut away in order to restore the original number of bins $n$. Results are shown in [@fig:results1; @fig:results2; @fig:results3].

## Unfolding with Richardson-Lucy

The Richardson-Lucy (RL) deconvolution is an iterative procedure typically used for recovering an image that has been blurred by a known point spread function.

It is based on the fact that an ideal point source does not appear as a point but is spread out into the so-called point spread function; thus, the observed image can be represented in terms of a transition matrix $P$ operating on an underlying image:


$$
  d_i = \sum_{j} u_j \, P_{i, j}
$$

where $u_j$ is the intensity of the underlying image at pixel $j$ and $d_i$ is the detected intensity at pixel $i$. Hence, the matrix describes the portion of the signal from the source pixel $j$ that is detected in pixel $i$.
In one dimension, the transfer function can be expressed in terms of the distance between the source pixel $j$ and the observed pixel $i$:


$$
  P_{i, j} = \widetilde{P}(i-j) = P_{i - j}
$$

In order to estimate $u_j$ given the observed $\{d_i\}$ and the known $\widetilde{P}$, the following iterative procedure can be applied, where $\hat{u}^t_j$ is the estimate of $u_j$ at iteration $t$. The $t^{\text{th}}$ step is updated as follows:


$$
  \hat{u}^{t+1}_j = \hat{u}^t_j \sum_i \frac{d_i}{c_i} \, P_{i - j}
  \with c_i = \sum_j \hat{u}^t_j \, P_{i - j}
$$

where $c_i$ is thereby an estimate of the blurred signal obtained from the previous estimate of the clean signal.
It has been shown empirically that, if this iteration converges, it converges to the maximum likelihood solution for $u_j$. Written in terms of convolutions, it becomes:


$$
   \hat{u}^{t+1} = \hat{u}^{t} \cdot \left( \frac{d}{\hat{u}^{t} \otimes P}
   \otimes P^{\star} \right)
$$

where the division and the multiplication are element-wise, and $P^{\star}$ is the flipped point spread function.

When implemented, this method results in an easy step-wise routine, sketched in the code after the following list:

- create a flipped copy of the kernel;
- choose a zero-order estimate $\hat{u}^0$ for the clean signal;
- at each step, compute the convolutions with the method described in @sec:convolution, together with the element-wise product and division;
- proceed until a given number of iterations is reached.
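Under these assumptions, the whole routine can be sketched as follows. `convolve_same()` is an illustrative 'same size' variant of the convolution above, which keeps only the $n$ central bins; all names are illustrative, and only the uniform 0.5 starting point and the flipped kernel follow the text.

```c
#include <stdlib.h>

/* "Same size" convolution: as convolve() above, but only the n central
 * bins are kept (m odd, kernel centred). */
static void convolve_same(const double *s, size_t n,
                          const double *k, size_t m, double *out)
{
    for (size_t i = 0; i < n; i++) {
        out[i] = 0;
        for (size_t j = 0; j < m; j++) {
            long idx = (long)i + (long)(m / 2) - (long)j;
            if (idx >= 0 && idx < (long)n)
                out[i] += k[j] * s[idx];
        }
    }
}

/* Sketch of the RL routine: d is the smeared signal (n bins), P the
 * kernel (m bins); the estimate is returned in u. */
void rl_deconvolve(const double *d, size_t n,
                   const double *P, size_t m,
                   double *u, int steps)
{
    double *flip = malloc(m * sizeof(double));
    double *c    = malloc(n * sizeof(double));
    double *r    = malloc(n * sizeof(double));

    for (size_t j = 0; j < m; j++) flip[j] = P[m - 1 - j]; /* flipped kernel  */
    for (size_t i = 0; i < n; i++) u[i] = 0.5;             /* zero-order step */

    while (steps-- > 0) {
        convolve_same(u, n, P, m, c);                 /* c = u^t (x) P       */
        for (size_t i = 0; i < n; i++) r[i] = d[i] / c[i];
        convolve_same(r, n, flip, m, c);              /* (d/c) (x) P*        */
        for (size_t i = 0; i < n; i++) u[i] *= c[i];  /* element-wise update */
    }

    free(flip); free(c); free(r);
}
```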

In this case, the zero-order estimate was set $\hat{u}^0_i = 0.5 \, \forall i$, and it was empirically found that the best result is achieved with three steps; beyond that, the routine starts returning fanciful histograms. Results are shown in [@fig:results1; @fig:results2; @fig:results3].

## Results comparison

In [@fig:results1; @fig:results2; @fig:results3] the results obtained for three different values of $\sigma$ are shown. The tested values are $\Delta \theta$, $0.5 \, \Delta \theta$ and $0.05 \, \Delta \theta$, where $\Delta \theta$ is the bin width of the original histogram, previously introduced in @fig:original. In each figure, the convolved signal is shown at the top, the histogram deconvolved with the FFT method in the middle, and the one deconvolved with RL at the bottom.

As can be seen, increasing the value of $\sigma$ produces a stronger smoothing of the curve. The FFT deconvolution appears insensitive to the magnitude of $\sigma$: it always gives the same outcome, which is exactly the original signal. In fact, the FFT method computes the analytical result of the deconvolution. In the real world, however, it is impractical, since signals are inevitably blurred by noise.
The same cannot be said of the RL deconvolution, which, on the other hand, is heavily influenced by the magnitude of $\sigma$: the greater $\sigma$, the worse the deconvolved result. In fact, given the same number of steps, the deconvolved signal always keeps the same 'distance' from the convolved one: if the latter is very smooth, the deconvolved signal is very smooth too, and if the convolved one is less smooth, so is the result.

The original signal is shown below for convenience.

Example of an intensity histogram.

![Convolved signal.](images/fraun-conv-0.05.pdf){width=12cm}

Deconvolved signal with FFT.{width=12cm}

Deconvolved signal with RL.{width=12cm}

Results for \sigma = 0.05 \Delta \theta, where \Delta \theta is the bin width.

![Convolved signal.](images/fraun-conv-0.5.pdf){width=12cm}

Deconvolved signal with FFT.{width=12cm}

Deconvolved signal with RL.{width=12cm}

Results for \sigma = 0.5 \Delta \theta, where \Delta \theta is the bin width.

![Convolved signal.](images/fraun-conv-1.pdf){width=12cm}

Deconvolved signal with FFT.{width=12cm}

Deconvolved signal with RL.{width=12cm}

Results for \sigma = \Delta \theta, where \Delta \theta is the bin width.

The possibility to add Poisson noise to the convolved histogram was also implemented, to check whether the deconvolution is affected by this kind of interference. The case with $\sigma = \Delta \theta$ was taken as an example. In @fig:poisson the results are shown for both methods when Poisson noise with mean $\mu = 50$ is employed.
In both cases, the addition of the noise partially affects the deconvolution. When the FFT method is applied, little spikes appear nearly everywhere on the curve; they are particularly evident at the edges, where the expected counts are very small. On the other hand, the Richardson-Lucy routine is less affected by this further complication.

![Deconvolved signal with FFT.](images/fraun-noise-fft.pdf){width=12cm}

Deconvolved signal with RL.{width=12cm}

Results for \sigma = \Delta \theta, with Poisson noise.

In order to quantify the similarity between a deconvolution outcome and the original signal, a null hypothesis test was set up.
As in @sec:Landau, the original sample was treated as a population from which other samples of the same size were drawn with replacement. For each new sample, the earth mover's distance with respect to the original signal was computed.

In statistics, the earth mover's distance (EMD) is a measure of the distance between two probability distributions [@cock41]. Informally, the distributions are interpreted as two different ways of piling up a certain amount of dirt over a region, and the EMD is the minimum cost of turning one pile into the other, where the cost is the amount of dirt moved times the distance by which it is moved. It is valid only if the two distributions have the same integral, that is, if the two piles contain the same amount of dirt.
Computing the EMD is based on the solution of the well-known transportation problem, which can be formalized as follows.

Consider two vectors:


$$
  P = \{ (p_1, w_{p1}) \dots (p_m, w_{pm}) \} \et
  Q = \{ (q_1, w_{q1}) \dots (q_n, w_{qn}) \}
$$

where $p_i$ and $q_j$ are the 'values' and $w_{pi}$ and $w_{qj}$ their weights. The entries $d_{ij}$ of the ground distance matrix $D$ are defined as the distances between $p_i$ and $q_j$.
The aim is to find the flow $F = \{f_{ij}\}$, where $f_{ij}$ is the flow between $p_i$ and $q_j$ (namely, the quantity of dirt moved), which minimizes the cost $W$:


$$
  W (P, Q, F) = \sum_{i = 1}^m \sum_{j = 1}^n f_{ij} d_{ij}
$$

with the constraints:

\begin{align*}
  &f_{ij} \ge 0 \hspace{15pt} &1 \le i \le m \wedge 1 \le j \le n \\
  &\sum_{j = 1}^n f_{ij} \le w_{pi} &1 \le i \le m \\
  &\sum_{i = 1}^m f_{ij} \le w_{qj} &1 \le j \le n
\end{align*}


$$
  \sum_{i = 1}^m \sum_{j = 1}^n f_{ij}
  = \text{min} \left( \sum_{i = 1}^m w_{pi}, \sum_{j = 1}^n w_{qj} \right)
$$

The first constraint only allows moving 'dirt' from $P$ to $Q$, not vice versa. The next two constraints limit the amount of supplies that can be sent by the values in $P$ to their weights, and the amount that the values in $Q$ can receive to their weights; the last constraint forces the maximum possible amount of supplies to be moved. This total moved amount is the total flow. Once the transportation problem is solved and the optimal flow is found, the earth mover's distance $D$ is defined as the work normalized by the total flow:


$$
  D (P, Q) = \frac{\sum_{i = 1}^m \sum_{j = 1}^n f_{ij} d_{ij}}
                  {\sum_{i = 1}^m \sum_{j=1}^n f_{ij}}
$$

In this case, where the EMD must be computed between two same-length histograms, the procedure simplifies a lot. Representing both histograms with two vectors $u$ and $v$, the equation above boils down to [@ramdas17]:


$$
  D (u, v) = \sum_i |U_i - V_i|
$$

where the sum runs over the entries of the vectors $U$ and $V$, which are the cumulative sums of the histograms.
In the code, the following equivalent recursive routine was implemented:


$$
  D (u, v) = \sum_i |D_i| \with
  \begin{cases}
    D_i = v_i - u_i + D_{i-1} \\
    D_0 = 0
  \end{cases}
$$

In fact:

\begin{align*}
  D (u, v) &= \sum_i |D_i| = |D_0| + |D_1| + |D_2| + |D_3| + \dots \\
  &= 0 + |v_1 - u_1 + D_0| + |v_2 - u_2 + D_1| + |v_3 - u_3 + D_2| + \dots \\
  &= |v_1 - u_1| + |v_1 - u_1 + v_2 - u_2|
   + |v_1 - u_1 + v_2 - u_2 + v_3 - u_3| + \dots \\
  &= |v_1 - u_1| + |v_1 + v_2 - (u_1 + u_2)|
   + |v_1 + v_2 + v_3 - (u_1 + u_2 + u_3)| + \dots \\
  &= |V_1 - U_1| + |V_2 - U_2| + |V_3 - U_3| + \dots \\
  &= \sum_i |U_i - V_i|
\end{align*}
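This recursion translates into a few lines of C; the following is a minimal sketch with illustrative names, assuming two same-length histograms with equal total content.

```c
#include <math.h>
#include <stddef.h>

/* EMD between two same-length histograms u and v, following the
 * recursion above: acc accumulates D_i = v_i - u_i + D_{i-1}, D_0 = 0. */
double emd(const double *u, const double *v, size_t n)
{
    double D = 0, acc = 0;
    for (size_t i = 0; i < n; i++) {
        acc += v[i] - u[i];
        D += fabs(acc);
    }
    return D;
}
```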

\textcolor{red}{EMD}

These distances were used to build their empirical cumulative distribution.

\textcolor{red}{empirical distribution}

At 95% confidence level, the compatibility of the deconvolved signal with the original one cannot be disproved if its distance from the original signal is smaller than \textcolor{red}{value}.

\textcolor{red}{counts}