I often end up in situations where I need to check whether an obtained difference is above machine precision. R seems to provide a handy constant for this purpose: .Machine$double.eps. However, when I turn to the R source code for guidance on how this value is used, I see several different patterns.
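To illustrate the kind of check I mean, here is a minimal sketch; the tolerance choices are my own for demonstration, not taken from any particular package:

x <- 0.1 + 0.2
y <- 0.3
x == y                                    # FALSE, although mathematically equal
abs(x - y) < .Machine$double.eps          # TRUE: difference is below machine precision
abs(x - y) < 100 * .Machine$double.eps * max(abs(x), abs(y))  # a scaled (relative) variant
isTRUE(all.equal(x, y))                   # TRUE; all.equal() defaults to tolerance sqrt(.Machine$double.eps)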
Examples
Here are a few examples from the stats library:
t.test.R
if(stderr < 10 *.Machine$double.eps * abs(mx))
chisq.test.R
if(abs(sum(p)-1) > sqrt(.Machine$double.eps))
integrate.R
rel.tol < max(50*.Machine$double.eps, 0.5e-28)
lm.influence.R
e[abs(e) < 100 * .Machine$double.eps * median(abs(e))] <- 0
princomp.R
if (any(ev[neg] < - 9 * .Machine$double.eps * ev[1L]))
etc.
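For comparison, here is a small sketch that just prints the magnitude of each of these thresholds; the value of mx is hypothetical and only serves to show the scaling:

eps <- .Machine$double.eps    # ~2.22e-16 for IEEE 754 double precision
mx  <- 5                      # made-up magnitude of the quantity being compared

eps                           # bare machine epsilon
10  * eps * abs(mx)           # relative threshold, as in t.test.R
100 * eps                     # as in lm.influence.R (scaled there by median(abs(e)))
50  * eps                     # lower bound on rel.tol, as in integrate.R
sqrt(eps)                     # ~1.49e-08, as in chisq.test.R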
Questions
- How can one understand the reasoning behind all those different 10 *, 100 *, 50 * and sqrt() modifiers?
- Are there guidelines about using .Machine$double.eps for adjusting differences due to precision issues?