images: neatly renamed all the images

Giù Marcer 2020-05-20 18:45:27 +02:00 committed by rnhmjoj
parent 16c2511289
commit ee1d2242eb
54 changed files with 114 additions and 99 deletions

Binary image changed (11 KiB before and after).

Binary image changed (36 KiB before and after).

16 further binary files changed (not shown).

View File

@@ -9,7 +9,7 @@ $$
f(x) = \int \limits_{0}^{+ \infty} dt \, e^{-t \log(t) -xt} \sin (\pi t)
$$
-![Landau distribution.](images/landau-small.pdf){width=50%}
+![Landau distribution.](images/1-landau-small.pdf){width=50%}
The GNU Scientific Library (GSL) provides a number of functions for generating
random variates following tens of probability distributions. Thus, the function
@@ -20,7 +20,7 @@ a histogram and plotted with matplotlib. The result is shown in @fig:landau.
![Example of N = 10'000 points generated with the `gsl_ran_landau()`
function and plotted in a 100-bin histogram ranging from -20 to
-80.](images/landau-histo.pdf){#fig:landau}
+80.](images/1-landau-histo.pdf){#fig:landau}
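As an editorial aside, the sampling step described in the caption presumably reduces to a loop over `gsl_ran_landau()`; a minimal sketch, assuming nothing beyond that function (sample size as in the caption; the binning and matplotlib plotting are omitted):

```c
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int main(void)
{
    gsl_rng_env_setup();                          /* select the default generator */
    gsl_rng *r = gsl_rng_alloc(gsl_rng_default);  /* MT19937 unless overridden    */
    const size_t N = 10000;                       /* sample size as in the text   */
    for (size_t i = 0; i < N; i++)
        printf("%g\n", gsl_ran_landau(r));        /* one Landau variate per line  */
    gsl_rng_free(r);
    return 0;
}
```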
## Randomness testing of the generated sample
@@ -102,7 +102,7 @@ PDF, very few parameters can be easily checked: mode, median and full width at
half maximum (FWHM).
![Landau distribution with emphasized mode $m_e$ and
-FWHM = ($x_+ - x_-$).](images/landau.pdf)
+FWHM = ($x_+ - x_-$).](images/1-landau.pdf)
\begin{figure}
\hypertarget{fig:parameters}{%
@@ -304,7 +304,7 @@ $$
![Example of a Moyal distribution density obtained by the KDE method. The rug
plot shows the original sample used in the reconstruction. The 0.6 factor
-compensates for the peak height reduction that would otherwise occur.](images/landau-kde.pdf)
+compensates for the peak height reduction that would otherwise occur.](images/1-landau-kde.pdf)
On the other hand, obtaining a good estimate of the FWHM from a sample is much
more difficult. In principle, it could be measured by binning the data and

View File

@@ -26,7 +26,7 @@ numbers, packed in series, partial sums or infinite products. Thus, the
efficiency of the methods lies in how quickly they converge to their limit.
![The area of the blue region converges to the Euler-Mascheroni
-constant.](images/gamma-area.png){#fig:gamma width=7cm}
+constant.](images/2-gamma-area.png){#fig:gamma width=7cm}
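To illustrate why convergence speed matters here, a naive estimate (not the report's own method) uses the defining limit $\gamma = \lim_n (H_n - \log n)$, whose error shrinks only like $1/2n$; a sketch:

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double H = 0;                        /* harmonic partial sum H_n            */
    for (long n = 1; n <= 100000000L; n++) {
        H += 1.0 / n;
        if (n % 10000000L == 0)          /* report every 10^7 terms             */
            printf("n = %9ld   gamma ~ %.12f\n", n, H - log((double)n));
    }
    return 0;                            /* gamma = 0.577215664901532...        */
}
```

Even after $10^8$ terms only about eight digits are correct, which is the slowness the text alludes to.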
## Computing the constant

View File

@@ -45,14 +45,14 @@ easily generated by the GSL function `gsl_rng_uniform()`.
<div id="fig:compare">
![Uniformly distributed points with $\theta$ evenly distributed between
-0 and $\pi$.](images/histo-i-u.pdf){width=50%}
+0 and $\pi$.](images/3-histo-i-u.pdf){width=50%}
![Points uniformly distributed on a spherical
-surface.](images/histo-p-u.pdf){width=50%}
+surface.](images/3-histo-p-u.pdf){width=50%}
![Sample generated according to $F$ with $\theta$ evenly distributed between
-0 and $\pi$.](images/histo-i-F.pdf){width=50%}
+0 and $\pi$.](images/3-histo-i-F.pdf){width=50%}
![Sample generated according to $F$ with $\theta$ properly
-distributed.](images/histo-p-F.pdf){width=50%}
+distributed.](images/3-histo-p-F.pdf){width=50%}
Examples of samples. On the left, points with $\theta$ evenly distributed
between 0 and $\pi$; on the right, points with $\theta$ properly distributed.
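A sketch of how the two columns could be produced from `gsl_rng_uniform()`; the distribution $F$ of the bottom row is not reproduced in this hunk, so only the uniform-sphere case is shown, and the helper below is the editor's illustration rather than the repository's code:

```c
#include <math.h>
#include <stdio.h>
#include <gsl/gsl_rng.h>

/* Drawing theta uniformly in [0, pi] (left column) crowds the poles;
 * inverting the CDF of sin(theta)/2 gives points that are uniform on
 * the spherical surface (right column). phi is uniform either way. */
static void sphere_point(gsl_rng *r, double *theta, double *phi)
{
    *theta = acos(1.0 - 2.0 * gsl_rng_uniform(r)); /* p(theta) = sin(theta)/2 */
    *phi   = 2.0 * M_PI * gsl_rng_uniform(r);      /* azimuth, uniform        */
}

int main(void)
{
    gsl_rng_env_setup();
    gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
    for (int i = 0; i < 1000; i++) {
        double theta, phi;
        sphere_point(r, &theta, &phi);
        printf("%g %g\n", theta, phi);             /* one point per line */
    }
    gsl_rng_free(r);
    return 0;
}
```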

View File

@@ -150,7 +150,7 @@ $$ {#eq:dip}
Namely:
![Plot of the expected distribution with
-$P_{\text{max}} = 10$.](images/expected.pdf){#fig:plot}
+$P_{\text{max}} = 10$.](images/4-expected.pdf){#fig:plot}
## Monte Carlo simulation
@@ -194,7 +194,7 @@ following. At first $S_j = 0 \wedge \text{num}_j = 0 \, \forall \, j$, then:
For $P_{\text{max}} = 10$ and $n = 50$, the following result was obtained:
-![Sampled points histogram.](images/dip.pdf)
+![Sampled points histogram.](images/4-dip.pdf)
In order to check whether the expected distribution (@eq:dip) properly matches
the produced histogram, a chi-squared minimization was applied. Being a simple
@@ -232,4 +232,4 @@ distribution. In @fig:fit, the fit function superimposed on the histogram is
shown.
![Fitted sampled data. $P^{\text{oss}}_{\text{max}} = 10.005
-\pm 0.018$, $\chi_r^2 = 0.071$.](images/fit.pdf){#fig:fit}
+\pm 0.018$, $\chi_r^2 = 0.071$.](images/4-fit.pdf){#fig:fit}

View File

@@ -77,7 +77,7 @@ given.
![Estimated values of $I$ obtained by the plain MC technique with different
numbers of function calls; logarithmic scale; error bars showing their
estimated uncertainties. As can be seen, the process does a sort of seesaw
-around the correct value.](images/MC_MC.pdf){#fig:MC}
+around the correct value.](images/5-MC_MC.pdf){#fig:MC}
---------------------------------------------------------------------------
calls       $I^{\text{oss}}$       $\sigma$        diff
@@ -220,7 +220,7 @@ to give an overall result and an estimate of its error [@sayah19].
![Estimations $I^{\text{oss}}$ of the integral $I$ obtained with the three
implemented methods for different numbers of function calls. Error bars
-showing their estimated uncertainties.](images/MC_MC_MI.pdf){#fig:MI}
+showing their estimated uncertainties.](images/5-MC_MC_MI.pdf){#fig:MI}
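For reference, the three GSL routines share the calling shape sketched below (shown for VEGAS; `gsl_monte_plain_integrate()` and `gsl_monte_miser_integrate()` are analogous). The integrand is a placeholder, since the exercise's actual $I$ is not reproduced in this hunk:

```c
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_monte_vegas.h>

/* Stand-in integrand over [0,1]^2; exact value is 2/3. */
static double g(double *x, size_t dim, void *params)
{
    (void)dim; (void)params;
    return x[0] * x[0] + x[1] * x[1];
}

int main(void)
{
    double xl[2] = {0, 0}, xu[2] = {1, 1};     /* integration region [0,1]^2 */
    double res, err;
    gsl_monte_function G = {&g, 2, NULL};
    gsl_rng_env_setup();
    gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
    gsl_monte_vegas_state *s = gsl_monte_vegas_alloc(2);
    gsl_monte_vegas_integrate(&G, xl, xu, 2, 100000, r, s, &res, &err);
    printf("I = %.6f +- %.6f\n", res, err);
    gsl_monte_vegas_free(s);
    gsl_rng_free(r);
    return 0;
}
```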
Results for this particular sample are shown in black in @fig:MI and some of
them are listed in @tbl:MI. Except for the very smallest numbers of calls,
@@ -400,7 +400,7 @@ calls, the difference is smaller than \SI{1e-10}{}.
![Only the most accurate results are shown in order to stress the
differences between the VEGAS (in gray) and MISER (in black)
-results.](images/MC_MI_VE.pdf){#fig:MI_VE}
+results.](images/5-MC_MI_VE.pdf){#fig:MI_VE}
In conclusion, between a plain Monte Carlo technique, stratified sampling and
importance sampling, the last one turned out to be the most powerful means to

View File

@@ -87,13 +87,14 @@ The so obtained sample was binned and stored in a histogram with a customizable
number $n$ of bins (default set $n = 150$) ranging from $\theta = 0$ to $\theta
= \pi/2$ because of the system symmetry. In @fig:original an example is shown.
-![Example of intensity histogram.](images/original.pdf){#fig:original}
+![Example of intensity histogram.](images/6-original.pdf){#fig:original}
## Gaussian convolution {#sec:convolution}
-The sample has then to be smeared with a Gaussian function with the aim to
-recover the original sample afterwards, implementing a deconvolution routine.
+In order to simulate the instrumentation response, the sample was then
+smeared with a Gaussian function, with the aim of recovering the original
+sample afterwards by implementing a deconvolution routine.
For this purpose, a 'kernel' histogram with an even number $m$ of bins and the
same bin width as the previous one, but a smaller number of them ($m \sim 6\%
\, n$), was created according to a Gaussian distribution with mean $\mu$,
@@ -102,10 +103,10 @@ discussed later.
Then, the original histogram was convolved with the kernel in order to obtain
the smeared signal. As an example, the result obtained for $\sigma = \Delta
\theta$, where $\Delta \theta$ is the bin width, is shown in @fig:convolved.
-As expected, the smeared signal looks smoother with respect to the original
-one.
+The smeared signal looks smoother than the original one: the higher
+$\sigma$, the greater the smoothing.
-![Convolved signal.](images/smoothed.pdf){#fig:convolved}
+![Convolved signal.](images/6-smoothed.pdf){#fig:convolved}
The convolution was implemented as follows. Consider the definition of
convolution of two functions $f(x)$ and $g(x)$:
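The definition continues past this hunk; as a sketch of the discrete form needed for binned data (the editor's illustration, not the repository's routine; array sizes are the caller's responsibility):

```c
#include <stddef.h>

/* 'Full' discrete convolution: out[k] = sum_j f[j] g[k - j].
 * out must have n + m - 1 entries, initialised to zero. */
void convolve(const double *f, size_t n,
              const double *g, size_t m, double *out)
{
    for (size_t j = 0; j < n; j++)
        for (size_t i = 0; i < m; i++)
            out[j + i] += f[j] * g[i];
}
```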
@@ -331,13 +332,22 @@ variable $\xi$ when the available measure is a sample {$x_i$} of points
drawn not from $f(x)$ but from another function $\phi(x)$ such that:
$$
\phi(x) = \int d\xi \, f(\xi) P(x | \xi)
-$$
+$$ {#eq:conv}
where $P(x | \xi) \, dx$ is the probability (presumed known) that $x$ falls
-in the interval $(x, x + dx)$ when $\xi = \xi$. An example of this problem is
-precisely that of correcting an observed distribution $\phi(x)$ for the effect
-of observational errors, which are represented by the function $P (x | \xi)$,
-called point spread function.
+in the interval $(x, x + dx)$ when $\xi = \xi$. If the so-called point spread
+function $P(x | \xi)$ follows a normal distribution with standard deviation
+$\sigma$, namely:
+$$
+P(x | \xi) = \frac{1}{\sqrt{2 \pi} \sigma}
+\exp \left( - \frac{(x - \xi)^2}{2 \sigma^2} \right)
+$$
+then @eq:conv becomes a convolution, and finding $f(\xi)$ amounts to a
+deconvolution.
+An example of this problem is precisely that of correcting an observed
+distribution $\phi(x)$ for the effect of observational errors, which are
+represented by the function $P (x | \xi)$.
Let $Q(\xi | x) d\xi$ be the probability that $\xi$ comes from the interval
$(\xi, \xi + d\xi)$ when the measured quantity is $x = x$. The probability that
@@ -382,22 +392,18 @@ $$
P(x | \xi)
$$ {#eq:solution}
-If the spread function $P(x | \xi)$ follows a normal distribution with variance
-$\sigma$, namely:
-$$
-P(x | \xi) = \frac{1}{\sqrt{2 \pi} \sigma}
-\exp \left( - \frac{(x - \xi)^2}{2 \sigma^2} \right)
-$$
-then, @eq:solution can be rewritten in terms of convolutions:
+When the spread function $P(x | \xi)$ is Gaussian, @eq:solution can be
+rewritten in terms of convolutions:
$$
f^{t + 1} = f^{t}\left( \frac{\phi}{{f^{t}} * P} * P^{\star} \right)
$$
where $P^{\star}$ is the flipped point spread function [@lucy74].
-In this special case, the Gaussian kernel stands for the point spread function
-and, dealing with discrete values, the division and multiplication are element
-wise and the convolution is to be carried out as described in @sec:convolution.
+In this special case, the Gaussian kernel which was convolved with the original
+histogram stands for the point spread function and, dealing with discrete
+values, the division and multiplication are element-wise, and the convolution
+is to be carried out as described in @sec:convolution.
When implemented, this method results in an easy step-wise routine:
- create a flipped copy of the kernel;
@@ -405,53 +411,16 @@ When implemented, this method results in an easy step-wise routine:
- compute the convolutions, the product and the division at each step;
- proceed until a given number of iterations is reached.
-In this case, the zero-order was set $f(\xi) = 0.5 \, \forall \, \xi$.
+In this case, the zero-order was set $f(\xi) = 0.5 \, \forall \, \xi$. Different
+numbers of iterations were tested. Results are discussed in
+@sec:conv_results.
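A sketch of a single iteration of the routine listed above; `convolve_same()` is a hypothetical same-size convolution helper (e.g. built on the full convolution of @sec:convolution with suitable truncation), and the small constant guarding the division is the editor's addition, not necessarily the report's:

```c
#include <stddef.h>

/* Assumed helper: same-size discrete convolution of a length-n signal
 * with a length-m kernel. */
void convolve_same(const double *f, size_t n,
                   const double *g, size_t m, double *out);

/* One Richardson-Lucy step, f^{t+1} = f^t ((phi / (f^t * P)) * P*),
 * with element-wise division and multiplication; kflip is the flipped
 * kernel P*. f is updated in place. */
void rl_step(double *f, const double *phi, size_t n,
             const double *kernel, const double *kflip, size_t m)
{
    double blur[n], corr[n];
    convolve_same(f, n, kernel, m, blur);       /* f^t * P              */
    for (size_t i = 0; i < n; i++)
        blur[i] = phi[i] / (blur[i] + 1e-12);   /* phi / (f^t * P)      */
    convolve_same(blur, n, kflip, m, corr);     /* ( ... ) * P*         */
    for (size_t i = 0; i < n; i++)
        f[i] *= corr[i];                        /* element-wise product */
}
```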
-## Results comparison {#sec:conv_Results}
+## The earth mover's distance
-In [@fig:results1; @fig:results2; @fig:results3] the results obtained for three
-different $\sigma$s are shown. The tested values are $\Delta \theta$, $0.5 \,
-\Delta \theta$ and $0.05 \, \Delta \theta$, where $\Delta \theta$ is the bin
+With the aim of comparing the two deconvolution methods, the similarity of a
+deconvolved outcome with the original signal was quantified using the earth
+mover's distance.
-width of the original histogram, which is the one previously introduced in
-@fig:original. In each figure, the convolved signal is shown above, the
-histogram deconvolved with the FFT method is in the middle and the one
-deconvolved with RL is located below.
-As can be seen, increasing the value of $\sigma$ implies a stronger smoothing of
-the curve. The FFT deconvolution process seems not to be affected by $\sigma$
-amplitude changes: it always gives the same outcome, which is exactly the
-original signal. In fact, the FFT is the analitical result of the deconvolution.
-In the real world, it is unpratical, since signals are inevitably blurred by
-noise.
-The same can't be said about the RL deconvolution, which, on the other hand,
-looks heavily influenced by the variance magnitude: the greater $\sigma$, the
-worse the deconvolved result. In fact, given the same number of steps, the
-deconvolved signal is always the same 'distance' far form the convolved one:
-if it very smooth, the deconvolved signal is very smooth too and if the
-convolved is less smooth, it is less smooth too.
-The original signal is shown below for convenience.
-It was also implemented the possibility to add a Poisson noise to the
-convolved histogram to check weather the deconvolution is affected or not by
-this kind of interference. It was took as an example the case with $\sigma =
-\Delta \theta$. In @fig:poisson the results are shown for both methods when a
-Poisson noise with mean $\mu = 50$ is employed.
-In both cases, the addition of the noise seems to partially affect the
-deconvolution. When the FFT method is applied, it adds little spikes nearly
-everywhere on the curve and it is particularly evident on the edges, where the
-expected data are very small. On the other hand, the Richardson-Lucy routine is
-less affected by this further complication.
-In order to quantify the similarity of a deconvolution outcome with the original
-signal, a null hypotesis test was made up.
-Likewise in @sec:Landau, the original sample was treated as a population from
-which other samples of the same size were sampled with replacements. For each
-new sample, the earth mover's distance with respect to the original signal was
-computed.
In statistics, the earth mover's distance (EMD) is the measure of distance
between two probability distributions [@cock41]. Informally, the distributions
@@ -460,22 +429,36 @@ a region and the EMD is the minimum cost of turning one pile into the other,
where the cost is the amount of dirt moved times the distance by which it is
moved. It is valid only if the two distributions have the same integral, that
is, if the two piles have the same amount of dirt.
-Computing the EMD is based on a solution to the well-known transportation
-problem, which can be formalized as follows.
+Computing the EMD is based on a solution to the transportation problem, which
+can be formalized as follows.
-Consider two vectors:
+Consider two vectors $P$ and $Q$ which represent the two probability
+distributions whose EMD has to be measured:
$$
-P = \{ (p_1, w_{p1}) \dots (p_n, w_{pm}) \} \et
+P = \{ (p_1, w_{p1}) \dots (p_m, w_{pm}) \} \et
Q = \{ (q_1, w_{q1}) \dots (q_n, w_{qn}) \}
$$
-where $p_i$ and $q_i$ are the 'values' and $w_{pi}$ and $w_{qi}$ are their
-weights. The entries $d_{ij}$ of the ground distance matrix $D_{ij}$ are
-defined as the distances between $p_i$ and $q_j$.
-The aim is to find the flow $F =$ {$f_{ij}$}, where $f_{ij}$ is the flow
-between $p_i$ and $p_j$ (which would be the quantity of moved dirt), which
-minimizes the cost $W$:
+Histogram P has to be torn down so as to obtain histogram Q, which starts
+out empty but is meant to end up with weight w_qj in every bin sitting at
+position qj:
+- dirt is moved only from P to Q;
+- no more than each entry of P is moved;
+- no more than each entry of Q is received;
+- as much as possible is moved: either all of Q is obtained or P runs out.
+So the two need not come out equal!
+where $p_i$ and $q_i$ are the 'values' (that is, the location of the dirt) and
+$w_{pi}$ and $w_{qi}$ are the 'weights' (that is, the quantity of dirt). A
+ground distance matrix $D_{ij}$ is defined such that its entries $d_{ij}$ are
+the distances between $p_i$ and $q_j$. The aim is to find the flow matrix
+$F_{ij}$, where each entry $f_{ij}$ is the flow from $p_i$ to $q_j$ (which
+would be the quantity of moved dirt), which minimizes the cost $W$:
$$
W (P, Q, F) = \sum_{i = 1}^m \sum_{j = 1}^n f_{ij} d_{ij}
@@ -486,7 +469,7 @@ with the constraints:
\begin{align*}
&f_{ij} \ge 0 \hspace{15pt} &1 \le i \le m \wedge 1 \le j \le n \\
&\sum_{j = 1}^n f_{ij} \le w_{pi} &1 \le i \le m \\
-&\sum_{j = 1}^m f_{ij} \le w_{qj} &1 \le j \le n
+&\sum_{i = 1}^m f_{ij} \le w_{qj} &1 \le j \le n
\end{align*}
$$
-\sum_{j = 1}^n f_{ij}
+\sum_{j = 1}^m f_{ij} \le w_{qj}
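For the one-dimensional, equal-mass histograms used in this report, the optimum of the transportation problem above has a well-known closed form; a sketch, assuming unit bin spacing (an illustration, not the repository's implementation):

```c
#include <math.h>
#include <stddef.h>

/* For two 1-D histograms p and q with equal total weight and unit bin
 * spacing, the EMD is the sum of the absolute differences of the
 * running (cumulative) sums. */
double emd_1d(const double *p, const double *q, size_t n)
{
    double carry = 0, emd = 0;
    for (size_t i = 0; i < n; i++) {
        carry += p[i] - q[i];   /* dirt carried past bin i */
        emd += fabs(carry);
    }
    return emd;
}
```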
@@ -555,3 +538,35 @@ the original one cannot be disproved if its distance from the original signal
is greater than \textcolor{red}{value}.
\textcolor{red}{counts}
+## Results comparison {#sec:conv_results}
+As can be seen, increasing the value of $\sigma$ implies a stronger smoothing of
+the curve. The FFT deconvolution process seems not to be affected by changes in
+the $\sigma$ amplitude: it always gives the same outcome, which is exactly the
+original signal. In fact, the FFT is the analytical result of the deconvolution.
+In the real world it is impractical, since signals are inevitably blurred by
+noise.
+The same can't be said about the RL deconvolution, which, on the other hand,
+looks heavily influenced by the variance magnitude: the greater $\sigma$, the
+worse the deconvolved result. In fact, given the same number of steps, the
+deconvolved signal always stays the same 'distance' from the convolved one:
+if that is very smooth, the deconvolved signal is very smooth too, and if the
+convolved one is less smooth, so is the deconvolved one.
+The original signal is shown below for convenience.
+The possibility to add Poisson noise to the convolved histogram was also
+implemented, to check whether the deconvolution is affected by this kind of
+interference. The case with $\sigma = \Delta \theta$ was taken as an example.
+In @fig:poisson the results are shown for both methods when Poisson noise
+with mean $\mu = 50$ is employed.
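A sketch of one plausible way to inject such noise with `gsl_ran_poisson()`; whether the report adds the raw deviates or, as here, a zero-mean version is not visible in this diff:

```c
#include <stddef.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

/* Superimpose Poisson fluctuations with mean mu on each bin, centred
 * so the total signal is preserved on average. */
void add_poisson_noise(double *bins, size_t n, gsl_rng *r, double mu)
{
    for (size_t i = 0; i < n; i++)
        bins[i] += (double)gsl_ran_poisson(r, mu) - mu;
}
```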
+In both cases, the addition of the noise seems to partially affect the
+deconvolution. When the FFT method is applied, it adds little spikes nearly
+everywhere on the curve, particularly evident at the edges, where the
+expected data are very small. On the other hand, the Richardson-Lucy routine is
+less affected by this further complication.

View File

@@ -40,7 +40,7 @@ An example of the two samples is shown in @fig:points.
![Example of points sorted according to two Gaussians with
the given parameters. Noise points in pink and signal points
-in yellow.](images/points.pdf){#fig:points}
+in yellow.](images/7-points.pdf){#fig:points}
Assuming the way the points were generated is not known, a classification model
must then be implemented in order to assign each point to the right class
@@ -96,7 +96,7 @@ maximization, it can be found that $w \propto (m_2 - m_1)$.
resulting from projection onto the line joining the class means: note that
there is considerable overlap in the projected space. The right plot shows the
corresponding projection based on the Fisher linear discriminant, showing the
-greatly improved class separation.](images/fisher.png){#fig:overlap}
+greatly improved class separation.](images/7-fisher.png){#fig:overlap}
There is still a problem with this approach, however, as illustrated in
@fig:overlap: the two classes are well separated in the original 2D space but
@@ -195,9 +195,9 @@ The projection of the points was accomplished by the use of the function
this case were the weight vector and the position of the point to be projected.
<div id="fig:fisher_proj">
-![View from above of the samples.](images/fisher-plane.pdf){height=5.7cm}
+![View from above of the samples.](images/7-fisher-plane.pdf){height=5.7cm}
![Gaussian of the samples on the projection
-line.](images/fisher-proj.pdf){height=5.7cm}
+line.](images/7-fisher-proj.pdf){height=5.7cm}
Aerial and lateral views of the projection direction, in blue, and the cut, in
red.
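For the two-dimensional samples used here, the Fisher weight vector $w = S_w^{-1}(m_2 - m_1)$ reduces to a closed-form 2x2 inversion; a sketch, assuming the class means and the pooled within-class scatter matrix are already computed (an editor's illustration, not the repository's function):

```c
/* Fisher weight vector in two dimensions, w = Sw^{-1} (m2 - m1),
 * with the 2x2 within-class scatter matrix inverted directly. */
void fisher_weights(const double m1[2], const double m2[2],
                    const double sw[2][2], double w[2])
{
    double det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0];
    double inv[2][2] = {{  sw[1][1] / det, -sw[0][1] / det },
                        { -sw[1][0] / det,  sw[0][0] / det }};
    double d[2] = { m2[0] - m1[0], m2[1] - m1[1] };
    w[0] = inv[0][0] * d[0] + inv[0][1] * d[1];   /* matrix-vector product */
    w[1] = inv[1][0] * d[0] + inv[1][1] * d[1];
}
```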
@@ -265,9 +265,9 @@ $$
$$
<div id="fig:percep_proj">
-![View from above of the samples.](images/percep-plane.pdf){height=5.7cm}
+![View from above of the samples.](images/7-percep-plane.pdf){height=5.7cm}
![Gaussian of the samples on the projection
-line.](images/percep-proj.pdf){height=5.7cm}
+line.](images/7-percep-proj.pdf){height=5.7cm}
Aerial and lateral views of the projection direction, in blue, and the cut, in
red.