diff --git a/notes/sections/3.md b/notes/sections/3.md
index f72de81..2aaadea 100644
--- a/notes/sections/3.md
+++ b/notes/sections/3.md
@@ -188,7 +188,7 @@ The overall minimisation ends when the gradient module is smaller than $10^{-4}$
 The program took 25 of the above iterations to reach the result shown in
 @eq:Like_res.
 The Cramér-Rao bound states that the covariance matrix of parameters estimated
-by MLM is greater than the of the inverse of the Hessian matrix of $-\log(L)$
+by MLM is greater than the inverse of the Hessian matrix of $-\log(L)$
 at the minimum. Thus, the Hessian matrix was computed analytically and
 inverted by a Cholesky decomposition, which states that every positive
 definite symmetric square matrix $H$ can be written as the
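The Cholesky-based Hessian inversion mentioned in the hunk above can be sketched as follows. This is a minimal illustration, not the actual fit: the matrix values are made up, and `scipy.linalg.solve_triangular` is used to exploit the triangular factor rather than whatever routine the original program calls.

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical 2x2 Hessian of -log(L) at the minimum (illustrative values,
# symmetric positive definite so the Cholesky factorisation exists).
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Cholesky decomposition: H = C @ C.T with C lower triangular.
C = np.linalg.cholesky(H)

# Invert H by solving C C^T X = I with two triangular solves,
# which is cheaper and more stable than a general-purpose inverse.
I = np.eye(H.shape[0])
Y = solve_triangular(C, I, lower=True)       # solve C Y = I
cov = solve_triangular(C.T, Y, lower=False)  # solve C^T X = Y

# cov is the inverse Hessian, i.e. the Cramér-Rao lower bound on the
# covariance matrix of the estimated parameters.
```

The two triangular solves replace an explicit `np.linalg.inv(H)`; for the small parameter-covariance matrices of a fit the difference is negligible, but it matches the decomposition described in the text.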