ex-4: review

Michele Guerini Rocco 2020-05-29 19:32:03 +02:00
parent 12c3cc6025
commit 180898a6b0


# Exercise 4
## Kinematic dip derivation
Consider a great number of non-interacting particles having random momenta
$\vec{P}$, with magnitude between 0 and $P_{\text{max}}$, each at a polar
angle $\theta$ with respect to a coordinate system ($\hat{x}$, $\hat{y}$,
$\hat{z}$).
The vertical and horizontal components of a particle's momentum, which will
be referred to as $\vec{P_v}$ and $\vec{P_h}$ respectively, are shown in
@fig:components.
If the direction of $\vec{P}$ is uniformly distributed on the unit sphere and
$P$ is uniformly distributed in $[0, P_{\text{max}}]$, what will be the
average value of $|P_v|$ for the particles with a given $P_h$?
\begin{figure}
\hypertarget{fig:components}{%
}
\end{figure}
Since the aim is to compute $\langle |P_v| \rangle (P_h)$, the conditional
probability distribution of $P_v$ given a fixed value of $P_h = x$ must first
be determined. It can be computed as the ratio of the probability of
getting a fixed value of $P_v$ given $x$ to the total probability of
getting that $x$:
$$
f (P_v | P_h = x) = \frac{f_{P_h , P_v} (x, P_v)}
                         {\int d P_v \, f_{P_h , P_v} (x, P_v)}
                  = \frac{f_{P_h , P_v} (x, P_v)}{I}
$$
where $f_{P_h , P_v}$ is the joint PDF of the two variables $P_v$ and $P_h$ and
the integral $I$ runs over all the possible values of $P_v$ given a certain
$P_h$.
$f_{P_h , P_v}$ can simply be computed from the joint PDF of $\theta$ and $P$
with a change of variables. For the PDF $f_{\theta} (\theta)$ of $\theta$, the
same considerations as in @sec:3 lead to:
$$
f_{\theta} (\theta) = \frac{1}{2} \sin{\theta} \chi_{[0, \pi]} (\theta)
$$
whereas, $P$ being uniform:
$$
f_P (P) = \chi_{[0, P_{\text{max}}]} (P)
$$
where $\chi_{[a, b]} (y)$ is the normalized characteristic function, whose
value is $1/N$ between $a$ and $b$ ($N$ being the normalization term) and 0
elsewhere. Since $P$ and $\theta$ are independent variables, their joint PDF
is simply given by the product:
$$
f_{\theta , P} (\theta, P) = f_{\theta} (\theta) f_P (P)
                           = \frac{1}{2} \sin{\theta} \chi_{[0, \pi]} (\theta)
                             \chi_{[0, P_{\text{max}}]} (P)
$$
and they are related to the vertical and horizontal components
by a standard polar coordinate transformation:
\begin{align*}
  \begin{cases}
    P_v = P \cos(\theta) \\
    P_h = P \sin(\theta)
  \end{cases}
  &&
  \begin{cases}
    P = \sqrt{P_v^2 + P_h^2} \\
    \theta = \text{atan2}(P_h, P_v)
  \end{cases}
\end{align*}
where:

- $\theta \in [0, \pi]$,
- and atan2 is defined by:

$$
\text{atan2}(P_h, P_v) =
\begin{cases}
  \arctan(P_h/P_v)       &\incase P_v > 0 \\
  \arctan(P_h/P_v) + \pi &\incase P_v < 0 \\
  \pi/2                  &\incase P_v = 0
\end{cases}
$$

The Jacobian of the inverse transformation is easily found to be:
$$
|J^{-1}| = \frac{1}{\sqrt{P_v^2 + P_h^2}}
$$
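For reference, this is the reciprocal of the Jacobian of the direct map
$(P, \theta) \mapsto (P_v, P_h)$, which is the usual polar one:
$$
|J| = \begin{vmatrix}
        \partial P_v / \partial P & \partial P_v / \partial \theta \\
        \partial P_h / \partial P & \partial P_h / \partial \theta
      \end{vmatrix}
    = \begin{vmatrix}
        \cos(\theta) & -P \sin(\theta) \\
        \sin(\theta) & \phantom{-} P \cos(\theta)
      \end{vmatrix}
    = P = \sqrt{P_v^2 + P_h^2}
$$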
Hence, the PDF written in the new coordinates is:
$$
f_{P_h , P_v} (P_h, P_v) =
  \frac{1}{2} \sin \left[ \text{atan2}(P_h, P_v) \right]
  \chi_{[0, \pi]} \left[ \text{atan2}(P_h, P_v) \right] \cdot
  \frac{\chi_{[0, P_{\text{max}}]} \left( \sqrt{P_v^2 + P_h^2} \right)}
       {\sqrt{P_v^2 + P_h^2}}
$$
The integral $I$ can now be computed. Note that the integration domain
is implicit in the characteristic function:
$$
I(x) = \int_{-\infty}^{+\infty} dP_v \, f_{P_h , P_v} (x, P_v)
     = \int \limits_{- \sqrt{P_{\text{max}}^2 - x^2}}
            ^{\sqrt{P_{\text{max}}^2 - x^2}}
       dP_v \, f_{P_h , P_v} (x, P_v)
$$
With some basic calculus and the identity
$$
\sin \left[ \text{atan2}(P_h, P_v) \right] = \frac{P_h}{\sqrt{P_h^2 + P_v^2}},
$$
the integral can be evaluated to give
$$
I = 2 \, \arctan \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right),
$$
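Explicitly, up to the constant prefactors (which cancel anyway in the
conditional PDF), the integrand is a Lorentzian in $P_v$, and, writing
$L = \sqrt{P_{\text{max}}^2 - x^2}$:
$$
\int \limits_{-L}^{L} \frac{x}{x^2 + P_v^2} \, dP_v
  = 2 \arctan \left( \frac{L}{x} \right)
  = 2 \arctan \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right)
$$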
from which:
$$
f (P_v | P_h = x) =
  \frac{x}{P_v^2 + x^2} \cdot
  \frac{\chi_{[0, \pi]} \left[ \text{atan2}(x, P_v) \right]
        \chi_{[0, P_{\text{max}}]} \left( \sqrt{P_v^2 + x^2} \right)}
       {2 \, \arctan \left( \sqrt{\frac{P_{\text{max}}^2}{x^2} - 1} \right)}
$$
Finally, putting all the pieces together, the average value of $|P_v|$ can be
computed:
$$
\langle |P_v| \rangle (x)
  = \int |P_v| \, f (P_v | P_h = x) \, dP_v
  = [ \dots ]
  = \frac{x \ln \left( \frac{P_{\text{max}}}{x} \right)}
         {\arctan \left( \sqrt{ \frac{P^2_{\text{max}}}{x^2} - 1} \right)}
$$ {#eq:dip}
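The omitted steps are an elementary integral: the integrand is even in $P_v$,
so, with $L = \sqrt{P_{\text{max}}^2 - x^2}$ (and thus
$x^2 + L^2 = P_{\text{max}}^2$),
$$
\int \limits_{-L}^{L} |P_v| \, \frac{x}{x^2 + P_v^2} \, dP_v
  = 2 \int \limits_0^{L} \frac{x P_v}{x^2 + P_v^2} \, dP_v
  = x \ln \left( \frac{x^2 + L^2}{x^2} \right)
  = 2x \ln \left( \frac{P_{\text{max}}}{x} \right)
$$
and the factor 2 cancels against the one in the denominator of
$f(P_v | P_h = x)$, giving @eq:dip.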
The result is plotted in the figure below:

![Plot of the expected dependence of $\langle |P_v| \rangle$ on $P_h$,
with $P_{\text{max}} = 10$.](images/4-expected.pdf){#fig:plot}
## Monte Carlo simulation
This dependence should be found by running a Monte Carlo simulation and
computing a binned average of the vertical momentum. A number of $N = 50000$
particles were generated as pairs of values ($P$, $\theta$), with $P$
uniformly distributed between 0 and $P_{\text{max}}$ and $\theta$ given by the
same procedure described in @sec:3, namely:
$$
\theta = \arccos(1 - 2x)
$$
where $x$ is uniformly distributed between 0 and 1.
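As an illustration, one pair could be drawn with GSL along these lines (a
minimal sketch with made-up names, not the actual implementation):

```c
#include <math.h>
#include <gsl/gsl_rng.h>

/* Draw one particle: P uniform in [0, p_max] and theta from the
 * inverse CDF theta = acos(1 - 2x), with x uniform in [0, 1). */
void sample_particle(const gsl_rng *r, double p_max,
                     double *p, double *theta) {
  *p     = p_max * gsl_rng_uniform(r);
  *theta = acos(1 - 2 * gsl_rng_uniform(r));
}
```

where `r` is any allocated generator, e.g. `gsl_rng_alloc(gsl_rng_taus2)`.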
The binning turned out to be quite a challenge: once a $P$ is sampled and
$P_h$ computed, the bin containing the latter has to be found. If
the range $[0, P_{\text{max}}]$ is divided into $n$ equal bins of width
$$
w = \frac{P_{\text{max}}}{n}
$$
then (counting from zero) $P_h$ goes into the $i$-th bin, where
$$
i = \left\lfloor \frac{P_h}{w} \right\rfloor
$$
Then, the sum $S_j$ of all the $|P_v|$ values whose $P_h$ falls into the
$j$-th bin, together with the number $\text{num}_j$ of entries in that bin,
is stored in an array and iteratively updated. Once all the particles have
been binned, the average value of $|P_v|_j$ is computed as
$S_j / \text{num}_j$.
For the sake of clarity, for each sampled couple the procedure is the
following. At first $S_j = 0 \wedge \text{num}_j = 0 \, \forall \, j$, then:

- compute $P_h = P \sin(\theta)$ and $P_v = P \cos(\theta)$;
- find the bin index $i = \lfloor P_h / w \rfloor$;
- update the bin: $S_i \to S_i + |P_v|$ and $\text{num}_i \to \text{num}_i + 1$.
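A minimal C sketch of this update loop could look as follows (array and
variable names are illustrative, not the ones used in the actual code):

```c
#include <math.h>

#define N_BINS 50    /* n: number of bins in [0, P_max] */

double S[N_BINS];    /* S_j:   running sums of |P_v|      */
long   num[N_BINS];  /* num_j: number of entries in bin j */

/* Add one sampled pair (p, theta) to the binned statistics. */
void update_bins(double p, double theta, double p_max) {
  double p_h = p * sin(theta);
  double p_v = p * cos(theta);
  double w   = p_max / N_BINS;       /* bin width         */
  int    i   = (int)floor(p_h / w);  /* bin index, from 0 */
  if (i == N_BINS) i--;              /* guard p_h = p_max */
  S[i]   += fabs(p_v);
  num[i] += 1;
}
```

Once all the pairs have been processed, the estimate in bin $j$ is simply
`S[j] / num[j]`.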
For $P_{\text{max}} = 10$ and $n = 50$, the following result was obtained:

![Sampled points histogram.](images/4-dip.pdf)
In order to assess the compatibility of the expected function (@eq:dip)
with the histogram, a least squares minimization was applied. Being a simple
one-parameter fit, the $\chi^2$ was implemented manually and minimized
without using a general LSQ routine. The error on the estimate of
$P_{\text{max}}$ was computed as the inverse of the $\chi^2$ second derivative
at the minimum, according to the Cramér-Rao bound.
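A minimal sketch of such a hand-rolled $\chi^2$ (function and array names are
illustrative, not the actual implementation):

```c
#include <math.h>

/* Expected <|P_v|> at bin centre x for a given p_max (@eq:dip). */
double dip(double x, double p_max) {
  return x * log(p_max / x) / atan(sqrt(p_max * p_max / (x * x) - 1));
}

/* chi^2 of the binned averages against the model; x[], mean[] and sigma[]
 * hold the bin centres, the <|P_v|> estimates and their errors. */
double chi2(double p_max, int n,
            const double *x, const double *mean, const double *sigma) {
  double sum = 0;
  for (int j = 0; j < n; j++) {
    double res = (mean[j] - dip(x[j], p_max)) / sigma[j];
    sum += res * res;
  }
  return sum;
}
```

The minimum can then be located by scanning `p_max` over a fine grid, and the
curvature at the minimum estimated by finite differences.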
The following results were obtained:
\begin{align*}
  P^{\text{oss}}_{\text{max}} = 10.005 \pm 0.018 && \chi^2 &= 0.071 \\
                                                 && \text{P}(x > \chi^2) &= 0.79
\end{align*}

The $\chi^2$ and $p$-value show very good agreement.
In order to compare $P^{\text{oss}}_{\text{max}}$ with the expected value
$P_{\text{max}} = 10$, the following compatibility $t$-test was applied:
$$
t = \frac{|P^{\text{oss}}_{\text{max}} - P_{\text{max}}|}
         {\Delta P^{\text{oss}}_{\text{max}}}
$$
where $\Delta P^{\text{oss}}_{\text{max}}$ is the error on the estimate.
In this case:
- p = 0.768
which allows one to assert that the sampled points indeed follow the predicted
function. In @fig:fit, the fit function superimposed on the histogram is
shown.
![Fitted sampled data. $P^{\text{oss}}_{\text{max}} = 10.005
\pm 0.018$, $\chi^2 = 0.071$.](images/4-fit.pdf){#fig:fit}