Complete loss of accuracy in modulus when calculating with very large numbers


I have the following problem:

> 1e20 %% 3
[1] 0
Warning message:
probable complete loss of accuracy in modulus 

The result can't be correct, and I'm sure that's because 1e20 is simply too large. But I want to do calculations like this in R. Is there a way to work around this?
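
The warning appears because R's numeric type is a double-precision float, which represents integers exactly only up to 2^53 (about 9.007e15); 1e20 is far beyond that, so the quotient-and-remainder arithmetic behind %% is no longer exact. A quick way to see the limit, plus one possible workaround using arbitrary-precision integers (assuming the gmp package is installed):

2^53 == 2^53 + 1                          # TRUE: above 2^53, consecutive integers collapse to the same double

library(gmp)
as.bigz("100000000000000000000") %% 3     # exact result: 1 (the string avoids going through a double)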

EDIT: I'm trying to solve the following challenge: https://www.codeabbey.com/index/task_view/modular-calculator

This is my code:

library(tidyverse)
library(magrittr)

get_result <- function(.string){

  # Split the input into one term per line, turn the single % into R's %%
  # operator, and trim stray whitespace.
  terms <- .string %>% 
    str_split("\n") %>%
    unlist %>% 
    str_replace("%", "%%") %>% 
    str_squish

  # The first term initialises x, e.g. "x <<- 6".
  terms[1] %<>% 
    str_c("x <<- ", .)

  # Every following term updates x, e.g. "x <<- x + 12".
  terms[2:length(terms)] %<>%
    str_c("x <<- x ", .)

  # Evaluate the assignments in order; <<- writes x outside the function
  # (in practice the global environment), and that value is returned.
  map(terms, ~ eval(parse(text = .x)))

  x

}

get_result("6
+ 12
           * 99
           + 5224
           * 53
           * 2608
           * 4920
           + 48
           + 7
           * 54
           * 4074
           + 76
           * 2
           * 97
           + 4440
           + 3
           * 130
           + 432
           * 50
           * 1
           + 933
           + 3888
           + 600
           + 9634
           * 10
           * 59
           + 62
           * 358
           + 82
           + 1685
           * 78
           + 8
           * 266
           * 256
           * 26
           * 793
           + 1248
           * 746
           * 135
           * 10
           * 184
           + 4
           * 502
           * 60
           + 9047
           * 5
           + 416
           * 7
           * 6287
           * 8
           % 4185")

With the real test data, x grows into a huge number before the modulus is applied at the very end, and that is where the accuracy is lost.
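
One standard approach for that task is to reduce modulo the divisor after every step, since (a + b) mod m = ((a mod m) + b) mod m and (a * b) mod m = ((a mod m) * b) mod m; the running value then never grows large enough to lose precision. A minimal sketch of the idea (apply_step is a made-up helper, not part of the code above):

# Hypothetical helper: apply one "+ value" or "* value" step, then reduce mod m.
apply_step <- function(x, op, value, m) {
  if (op == "+") (x + value) %% m else (x * value) %% m
}

x <- 6 %% 4185
x <- apply_step(x, "+", 12, 4185)   # same as (6 + 12) %% 4185
x <- apply_step(x, "*", 99, 4185)   # same as ((6 + 12) * 99) %% 4185
x
#> [1] 1782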

