ex-4: revised and typo-fixed

Giù Marcer 2020-05-12 22:38:14 +02:00 committed by rnhmjoj
parent 4aa8eefaf9
commit 12f15e1998
5 changed files with 80 additions and 100 deletions


@@ -41,6 +41,9 @@ int main(int argc, char **argv)
    size_t n = 50;
    double p_max = 10;
    size_t go = 0;
    // Get optional CLI arguments, if any.
    //
    int res = parser(&N, &n, &p_max, argc, argv, &go);
    if (go == 0 && res == 1)
    {


@@ -15,7 +15,7 @@ edges = np.linspace(0, bins*step, bins+1)
plt.rcParams['font.size'] = 17
plt.hist(edges[:-1], edges, weights=counts,
         histtype='stepfilled', color='#e3c5ca', edgecolor='#92182b')
plt.title('Vertical component distribution', loc='right')
plt.xlabel(r'$p_h$')
plt.ylabel(r'$\left<|p_v|\right>$')

Binary file not shown.

Binary file not shown.


@@ -3,14 +3,14 @@
## Kinematic dip PDF derivation
Consider a great number of non-interacting particles, each with a random
momentum $\vec{P}$, with modulus between 0 and $P_{\text{max}}$, randomly
angled with respect to a coordinate system {$\hat{x}$, $\hat{y}$, $\hat{z}$}.
Once the polar angle $\theta$ is defined, the vertical and horizontal momentum
components of a particle, which will be referred to as $\vec{P_v}$ and
$\vec{P_h}$ respectively, are the ones shown in @fig:components.
If $\theta$ is evenly distributed on the sphere and the same holds for the
modulus $P$, which distribution will the average value of $|P_v|$ follow as a
function of $P_h$?
\begin{figure}
\hypertarget{fig:components}{%
@@ -30,9 +30,9 @@ function of $p_h$?
  \draw [thick, dashed, cyclamen] (5,7.2) -- (3.8,6);
  \draw [ultra thick, ->, pink] (5,2) -- (5,7.2);
  \draw [ultra thick, ->, pink] (5,2) -- (3.8,0.8);
  \node at (4.8,1.1) {$\vec{P_h}$};
  \node at (5.5,6.6) {$\vec{P_v}$};
  \node at (3.3,5.5) {$\vec{P}$};
  % Angle
  \draw [thick, cyclamen] (4.4,4) to [out=80,in=210] (5,5);
  \node at (4.7,4.2) {$\theta$};
@@ -41,11 +41,11 @@ function of $p_h$?
}
\end{figure}
Since the aim is to compute $\langle |P_v| \rangle (P_h)$, the conditional
probability distribution of $P_v$ given a fixed value of $P_h = x$ must first
be determined. It can be computed as the ratio between the joint probability
of getting both a given $P_v$ and that fixed $x$ and the total probability of
getting that $x$:
$$
  f (P_v | P_h = x) = \frac{f_{P_h , P_v} (x, P_v)}
                           {\int_{\{ P_v \}} d P_v f_{P_h , P_v} (x, P_v)}
@@ -53,35 +53,31 @@ $$
$$
where $f_{P_h , P_v}$ is the joint pdf of the two variables $P_v$ and $P_h$ and
the integral $I$ runs over all the possible values of $P_v$ given a certain
$P_h$.
$f_{P_h , P_v}$ can simply be computed from the joint pdf of $\theta$ and $P$
with a change of variables. For the pdf of $\theta$, $f_{\theta} (\theta)$, the
same considerations made in @sec:3 lead to:
$$
  f_{\theta} (\theta) = \frac{1}{2} \sin{\theta} \chi_{[0, \pi]} (\theta)
$$
whereas, being $P$ evenly distributed:
$$
  f_P (P) = \chi_{[0, P_{\text{max}}]} (P)
$$
where $\chi_{[a, b]} (y)$ is the normalized characteristic function, whose
value is $1/N$ between $a$ and $b$ (where $N$ is the normalization term) and 0
elsewhere. Since $\theta$ and $P$ are independent variables, their joint pdf
is simply given by the product of their pdfs:
$$
  f_{\theta , P} (\theta, P) = f_{\theta} (\theta) f_P (P)
  = \frac{1}{2} \sin{\theta} \chi_{[0, \pi]} (\theta)
  \chi_{[0, P_{\text{max}}]} (P)
$$
Given the new variables:
$$
\begin{cases}
  P_v = P \cos(\theta) \\
@@ -89,12 +85,11 @@ $$
\end{cases}
$$
with $\theta \in [0, \pi]$, the old variables can be written as:
$$
\begin{cases}
  P = \sqrt{P_v^2 + P_h^2} \\
  \theta = \text{atan}_2 ( P_h/P_v ) :=
  \begin{cases}
    \text{atan} ( P_h/P_v ) &\incase P_v > 0 \\
    \text{atan} ( P_h/P_v ) + \pi &\incase P_v < 0
@@ -103,24 +98,21 @@ $$
$$
which can be shown to have Jacobian:
$$
  J = \frac{1}{\sqrt{P_v^2 + P_h^2}}
$$
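For reference, $J$ follows at once from the determinant of the forward
transformation:
$$
  \frac{\partial (P_v, P_h)}{\partial (P, \theta)} =
  \begin{vmatrix}
    \cos{\theta} & - P \sin{\theta} \\
    \sin{\theta} & P \cos{\theta}
  \end{vmatrix} = P
  \quad \Longrightarrow \quad
  J = \frac{1}{P} = \frac{1}{\sqrt{P_v^2 + P_h^2}}
$$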
Hence:
$$
  f_{P_h , P_v} (P_h, P_v) =
  \frac{1}{2} \sin[ \text{atan}_2 ( P_h/P_v )]
  \chi_{[0, \pi]} (\text{atan}_2 ( P_h/P_v )) \cdot \\
  \frac{\chi_{[0, P_{\text{max}}]} \left( \sqrt{P_v^2 + P_h^2} \right)}
       {\sqrt{P_v^2 + P_h^2}}
$$
from which the integral $I$ can now be computed. The integration limits are
fixed by the fact that the total momentum cannot exceed $P_{\text{max}}$:
$$
  I = \int
  \limits_{- \sqrt{P_{\text{max}}^2 - P_h^2}}^{\sqrt{P_{\text{max}}^2 - P_h^2}}
@@ -128,39 +120,32 @@ $$
$$
After a bit of maths, using the identity:
$$
  \sin[ \text{atan}_2 ( P_h/P_v )] = \frac{P_h}{\sqrt{P_h^2 + P_v^2}}
$$
and the fact that both the characteristic functions play no role within the
integration limits, the following result arises:
$$
  I = 2 \, \text{atan} \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right)
$$
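Explicitly, up to the constant prefactors of the joint pdf (which cancel in
the conditional pdf anyway), the integral reduces to the elementary form:
$$
  \int\limits_{- \sqrt{P_{\text{max}}^2 - x^2}}^{\sqrt{P_{\text{max}}^2 - x^2}}
  \frac{x}{x^2 + P_v^2} \, d P_v
  = 2 \, \text{atan} \left( \frac{\sqrt{P_{\text{max}}^2 - x^2}}{x} \right)
  = 2 \, \text{atan} \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right)
$$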
from which:
$$
  f (P_v | P_h = x) = \frac{x}{P_v^2 + x^2} \cdot
  \frac{1}{2 \, \text{atan}
  \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right)}
$$
Finally, putting all the pieces together, the average value of $|P_v|$ can be
computed:
$$
  \langle |P_v| \rangle = \int
  \limits_{- \sqrt{P_{\text{max}}^2 - P_h^2}}^{\sqrt{P_{\text{max}}^2 - P_h^2}}
  |P_v| \, f (P_v | P_h = x) \, d P_v = [ \dots ]
  = x \, \frac{\ln \left( \frac{P_{\text{max}}}{x} \right)}
          {\text{atan} \left( \sqrt{ \frac{P^2_{\text{max}}}{x^2} - 1} \right)}
$$ {#eq:dip}
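The step hidden behind $[ \dots ]$ amounts to the elementary integral:
$$
  \int\limits_{-L}^{L} |P_v| \, \frac{x}{x^2 + P_v^2} \, d P_v
  = 2 \int\limits_{0}^{L} \frac{x \, P_v}{x^2 + P_v^2} \, d P_v
  = x \ln \left( \frac{x^2 + L^2}{x^2} \right)
  = 2 x \ln \left( \frac{P_{\text{max}}}{x} \right)
$$
with $L = \sqrt{P_{\text{max}}^2 - x^2}$; dividing by the normalization
$2 \, \text{atan} (L/x)$ of $f (P_v | P_h = x)$ gives @eq:dip.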
Namely:
@@ -171,80 +156,71 @@ Namely:
## Monte Carlo simulation
The same distribution should be found by generating and binning points in a
proper way. A number of $N = 50'000$ points were generated as couples of
values ($P$, $\theta$), with $P$ evenly distributed between 0 and
$P_{\text{max}}$ and $\theta$ given by the same procedure described in @sec:3,
namely:
$$
  \theta = \text{acos}(1 - 2x)
$$
with $x$ uniformly distributed between 0 and 1.
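In code, sampling a single couple boils down to something like the following
sketch (illustrative only: the exercise code presumably relies on a proper RNG
such as GSL's, and the helper names here are made up):

```c
#include <math.h>
#include <stdlib.h>

/* Illustrative helpers, not the actual exercise code. */
double uniform(double a, double b)
{
    return a + (b - a) * rand() / (double)RAND_MAX;
}

void sample_point(double p_max, double *p, double *theta)
{
    *p     = uniform(0, p_max);            /* modulus, uniform in [0, p_max] */
    *theta = acos(1 - 2 * uniform(0, 1));  /* polar angle: acos(1 - 2x)      */
}
```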
The data binning turned out to be a bit challenging. Once $P$ was sampled and
$P_h$ was computed, the bin containing the latter had to be found. If $n$ is
the number of bins the range $[0, P_{\text{max}}]$ is divided into, then the
width $w$ of each bin is given by:
$$
  w = \frac{P_{\text{max}}}{n}
$$
and the $j^{\text{th}}$ bin in which $P_h$ falls is:
$$
  j = \text{floor} \left( \frac{P_h}{w} \right)
$$
where 'floor' is the function giving the greatest integer less than or equal
to its argument, and the bins are counted starting from zero.
Then, a vector whose $j^{\text{th}}$ entry contains both the sum $S_j$ of all
the $|P_v|$ values whose $P_h$ fell into the $j^{\text{th}}$ bin and the
number num$_j$ of entries in that bin was iteratively updated. At the end, the
average value of $|P_v|$ in the $j^{\text{th}}$ bin was computed as
$S_j / \text{num}_j$.
For the sake of clarity, the procedure applied to each sampled couple is the
following (a short code sketch is given after the list). At first,
$S_j = 0$ and $\text{num}_j = 0 \, \forall \, j$; then:

- the couple $(P, \theta)$ is generated,
- $P_h$ and $P_v$ are computed,
- the $j^{\text{th}}$ bin containing $P_h$ is found,
- num$_j$ is increased by 1,
- $S_j$ is increased by $|P_v|$.
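A minimal sketch of this update loop, assuming the `sample_point` helper
sketched earlier (again illustrative, not the actual exercise code):

```c
#include <math.h>
#include <stddef.h>

#define N_POINTS 50000
#define N_BINS   50

void sample_point(double p_max, double *p, double *theta);

/* Fill the per-bin sums of |p_v| and the per-bin counts; S and num must be
 * zero-initialised by the caller. The average in bin j is then S[j]/num[j]. */
void fill_histogram(double p_max, double S[N_BINS], size_t num[N_BINS])
{
    double w = p_max / N_BINS;                /* bin width            */
    for (size_t i = 0; i < N_POINTS; i++) {
        double p, theta;
        sample_point(p_max, &p, &theta);      /* couple (P, theta)    */
        double p_h = p * sin(theta);          /* horizontal component */
        double p_v = p * cos(theta);          /* vertical component   */
        size_t j = (size_t)floor(p_h / w);    /* bin index            */
        if (j >= N_BINS) j = N_BINS - 1;      /* guard the p_h = p_max edge */
        num[j]++;
        S[j] += fabs(p_v);
    }
}
```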
For $P_{\text{max}} = 10$ and $n = 50$, the following result was obtained:

![Sampled points histogram.](images/dip.pdf)
In order to check whether the expected distribution (@eq:dip) properly matches
the produced histogram, a chi-squared minimization was applied. Being a simple
one-parameter fit, the $\chi^2$ was computed without resorting to a dedicated
GSL routine, and the uncertainty on the resulting estimate of
$P_{\text{max}}$ was given by the inverse of the second derivative of the
$\chi^2$ at its minimum, according to the Cramér-Rao bound.
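A possible shape of such a one-parameter scan, sketched under assumptions (the
bin centres `x`, the binned averages `y` and their errors `dy` are taken as
given, and the error uses the standard parabolic relation between the $\chi^2$
curvature and the variance); this is not the actual exercise code:

```c
#include <math.h>
#include <stddef.h>

/* Expected <|P_v|> at the bin centre x for a given p_max, as in @eq:dip. */
double model(double x, double p_max)
{
    return x * log(p_max / x) / atan(sqrt(p_max * p_max / (x * x) - 1));
}

/* chi^2 between binned averages y[i] +- dy[i] at centres x[i] and the model. */
double chi2(const double *x, const double *y, const double *dy,
            size_t n, double p_max)
{
    double sum = 0;
    for (size_t i = 0; i < n; i++) {
        double r = (y[i] - model(x[i], p_max)) / dy[i];
        sum += r * r;
    }
    return sum;
}

/* Scan p_max around a first guess, keep the minimum and estimate the
 * uncertainty from the chi^2 curvature: sigma^2 = 2 / (d^2 chi^2 / d p^2). */
void fit_pmax(const double *x, const double *y, const double *dy, size_t n,
              double guess, double *best, double *sigma)
{
    double step = 1e-4;
    double min = chi2(x, y, dy, n, guess);
    *best = guess;
    for (double p = guess - 1; p <= guess + 1; p += step) {
        double c = chi2(x, y, dy, n, p);
        if (c < min) { min = c; *best = p; }
    }
    double d2 = (chi2(x, y, dy, n, *best + step) - 2 * min
               + chi2(x, y, dy, n, *best - step)) / (step * step);
    *sigma = sqrt(2 / d2);
}
```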
The following results were obtained:
$$
  P^{\text{obs}}_{\text{max}} = 10.005 \pm 0.018 \with \chi_r^2 = 0.071
$$
where $\chi_r^2$ is the reduced $\chi^2$, indicating a good agreement.
In order to compare $P^{\text{obs}}_{\text{max}}$ with the expected value
$P_{\text{max}} = 10$, the following compatibility t-test was applied:
$$
  p = 1 - \text{erf}\left(\frac{t}{\sqrt{2}}\right)\ \with
  t = \frac{|P^{\text{obs}}_{\text{max}} - P_{\text{max}}|}
           {\Delta P^{\text{obs}}_{\text{max}}}
$$
where $\Delta P^{\text{obs}}_{\text{max}}$ is the uncertainty on
$P^{\text{obs}}_{\text{max}}$. At a 95% confidence level, the values are
compatible if $p > 0.05$.
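The test itself is a one-liner; a sketch with the rounded values quoted above
(the actual code presumably uses the unrounded fit output, hence the slightly
different $t$ reported below):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* rounded values quoted above; the real fit output is unrounded */
    double p_obs = 10.005, dp_obs = 0.018, p_exp = 10;
    double t = fabs(p_obs - p_exp) / dp_obs;
    double p = 1 - erf(t / sqrt(2));
    printf("t = %.3f\tp = %.3f\n", t, p);
    return 0;
}
```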
In this case:

- t = 0.295
@@ -254,4 +230,5 @@ which allows to assert that the sampled points actually follow the predicted
distribution. In @fig:fit, the fit function superimposed on the histogram is
shown.

![Fitted sampled data: $P^{\text{obs}}_{\text{max}} = 10.005 \pm 0.018$,
$\chi_r^2 = 0.071$.](images/fit.pdf){#fig:fit}