ex-5: complete Miser section
parent 66f98c788f
commit cc7b9ba5a4
@@ -147,47 +147,61 @@ population.
The MISER technique aims to reduce the integration error through the use of
recursive stratified sampling.

As stated before, according to the law of large numbers, for a large number of
sampled points the estimate of the integral $I$ can be computed as:

$$
I = V \cdot \langle f \rangle
$$

Since $V$ is known (in this case, $V = 1$), it is sufficient to estimate
$\langle f \rangle$.
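
For reference, a minimal sketch of this plain Monte Carlo estimate over the
unit hypercube; the integrand `f` and the dimension below are placeholders for
illustration, not the ones used for the results quoted in the tables:

```c
/* Plain Monte Carlo estimate of <f> over the unit hypercube (V = 1).
 * The integrand below is only a placeholder.                              */
#include <stdio.h>
#include <stdlib.h>

static double f(const double *x, size_t dim) {
    double s = 0;
    for (size_t d = 0; d < dim; d++) s += x[d] * x[d];   /* placeholder    */
    return s;
}

int main(void) {
    const size_t dim = 2, N = 1000000;
    double sum = 0, x[2];
    for (size_t i = 0; i < N; i++) {
        for (size_t d = 0; d < dim; d++)
            x[d] = rand() / (double)RAND_MAX;            /* uniform [0,1]  */
        sum += f(x, dim);
    }
    /* Since V = 1, the integral estimate is just the sample mean of f.    */
    printf("I ~ %g\n", sum / N);
    return 0;
}
```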

Consider two disjoint regions $a$ and $b$, such that $a \cup b = \Omega$, in
which $n_a$ and $n_b$ points are uniformly sampled. Given the Monte Carlo
estimates of the means $\langle f \rangle_a$ and $\langle f \rangle_b$ of those
points and their variances $\sigma_a^2$ and $\sigma_b^2$, if the weights $N_a$
and $N_b$ of $\langle f \rangle_a$ and $\langle f \rangle_b$ are chosen to be
unitary, then the variance $\sigma^2$ of the combined estimate $\langle f \rangle$:

$$
\langle f \rangle = \frac{1}{2} \left( \langle f \rangle_a
                  + \langle f \rangle_b \right)
$$

is given by (the two estimates are independent, and each mean has variance
$\sigma_{a,b}^2 / n_{a,b}$):

$$
\sigma^2 = \frac{\sigma_a^2}{4 n_a} + \frac{\sigma_b^2}{4 n_b}
$$

It can be shown that this variance is minimized by distributing the points such
that:

$$
\frac{n_a}{n_a + n_b} = \frac{\sigma_a}{\sigma_a + \sigma_b}
$$

Hence, the smallest error estimate is obtained by allocating sample points in
proportion to the standard deviation of the function in each sub-region.
The whole integral estimate and its variance are therefore given by:

$$
I = V \cdot \langle f \rangle \et \sigma_I^2 = V^2 \cdot \sigma^2
$$
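
As a toy numerical check of the allocation rule above (the standard deviations
and the point budget below are made up for illustration), the optimal split can
be compared with an even one:

```c
/* Sketch of the optimal point allocation between two sub-regions;
 * sigma_a, sigma_b and the total budget are made-up illustration values.  */
#include <stdio.h>

int main(void) {
    double sigma_a = 0.8, sigma_b = 0.2;   /* hypothetical std deviations  */
    int    n_tot   = 1000;                 /* total points to distribute   */

    /* n_a / (n_a + n_b) = sigma_a / (sigma_a + sigma_b)                   */
    int n_a = (int)(n_tot * sigma_a / (sigma_a + sigma_b));
    int n_b = n_tot - n_a;

    /* Combined variance of <f> = (<f>_a + <f>_b) / 2                      */
    double var = sigma_a * sigma_a / (4.0 * n_a)
               + sigma_b * sigma_b / (4.0 * n_b);

    /* For comparison: the even split n_a = n_b = n_tot / 2                */
    double var_even = (sigma_a * sigma_a + sigma_b * sigma_b)
                    / (4.0 * (n_tot / 2));

    printf("n_a = %d, n_b = %d\n", n_a, n_b);
    printf("optimal split variance: %g\n", var);
    printf("even split variance:    %g\n", var_even);
    return 0;
}
```

With these placeholder numbers the optimal split yields a variance of
$2.5 \cdot 10^{-4}$ against $3.4 \cdot 10^{-4}$ for the even split.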
When implemented, MISER is in fact a recursive method. At a given step, all
the possible bisections are tested and the one which minimizes the combined
variance of the two sub-regions is selected. The variance in the sub-regions is
estimated with a fraction of the total number of available points. The remaining
sample points are allocated to the sub-regions using the formula for $n_a$ and
$n_b$, once the variances are computed.
The same procedure is then repeated recursively for each of the two half-spaces
from the best bisection. At each recursion step, the integral and the error are
estimated using a plain Monte Carlo algorithm. After a given number of calls,
the final individual values and their error estimates are then combined upwards
to give an overall result and an estimate of its error.
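
The recursion just described is the scheme implemented by GSL's MISER routine.
This section does not state whether the results below come from GSL or from a
custom routine, so the following is only a generic usage sketch with a
placeholder integrand over the unit square:

```c
/* Sketch of a MISER integration with GSL (assuming GSL is the library in
 * use; the integrand and the number of calls below are placeholders).     */
#include <gsl/gsl_monte.h>
#include <gsl/gsl_monte_miser.h>
#include <gsl/gsl_rng.h>
#include <stdio.h>

static double f(double *x, size_t dim, void *params) {
    (void)dim; (void)params;
    return x[0] * x[0] + x[1] * x[1];      /* placeholder integrand        */
}

int main(void) {
    const size_t dim = 2, calls = 500000;
    double xl[2] = {0, 0}, xu[2] = {1, 1}; /* unit square, so V = 1        */
    double res, err;

    gsl_monte_function F = { &f, dim, NULL };

    gsl_rng_env_setup();
    gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
    gsl_monte_miser_state *s = gsl_monte_miser_alloc(dim);

    gsl_monte_miser_integrate(&F, xl, xu, dim, calls, r, s, &res, &err);
    printf("I = %.10f +- %.10f\n", res, err);

    gsl_monte_miser_free(s);
    gsl_rng_free(r);
    return 0;
}
```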

Results for this particular sample are shown in @tbl:MISER.

@@ -201,10 +215,13 @@ $\sigma$ 0.0000021829 0.0000001024 0.0000000049
diff 0.0000032453 0.0000000858 0.0000000064
-------------------------------------------------------------------------

Table: MISER results with different numbers of function calls. Be careful:
while in @tbl:MC the number of function calls stands for the number of
total sampled points, in this case it stands for the number of times each
section is divided into subsections. {#tbl:MISER}

This time the error, although it always lies in the same order of magnitude
as diff, seems to seesaw around the correct value.

## VEGAS \textcolor{red}{WIP}