How to optimize converting an unsigned arbitrary precision integer to a string for printing?

I am working on an unsigned arbitrary precision integer class in C++, just as an exercise for myself. While running some benchmarks (computing the Fibonacci number at some large index) I noticed that converting my class to a string for printing was taking about 40% of the total runtime. Currently I use the traditional grade-school approach, successively dividing the integer by 10 and collecting one decimal digit per division, and I was wondering if there is a faster approach.
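
For concreteness, here is a simplified sketch of what the conversion does (stand-in code, not my actual class; here the bits are stored most-significant first):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Grade-school conversion: each pass long-divides the bit vector by 10,
    // and the remainder of that pass is the next decimal digit. The cost is
    // one full pass over the bits per digit, i.e. quadratic overall.
    std::string to_decimal(std::vector<bool> bits) {
        std::string out;
        while (!bits.empty()) {
            std::vector<bool> quotient;
            unsigned rem = 0;
            for (bool bit : bits) {
                rem = rem * 2 + (bit ? 1u : 0u);
                bool qbit = rem >= 10;
                if (qbit || !quotient.empty())      // drop leading zero bits
                    quotient.push_back(qbit);
                if (qbit) rem -= 10;
            }
            out.push_back(static_cast<char>('0' + rem));
            bits = std::move(quotient);
        }
        if (out.empty()) out = "0";
        std::reverse(out.begin(), out.end());
        return out;
    }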

I have attempted to implement Newton-Raphson division, but even with my Karatsuba multiplication underneath it was still substantially slower than the grade-school approach.

My current idea is to precompute the number 0.1 with enough precision that I can replace each division with a multiplication instead. However, this method has produced rounding errors when printing large numbers. An example from one of my tests:

Print test:   11111111112345678906735299999996
My old print: 11111111112345678906735300000000
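
At machine-word scale the same idea does work exactly; this is the standard fixed-point-reciprocal trick (not my code): 0xCCCCCCCD is ceil(2^35 / 10), i.e. 0.1 with 35 fractional bits, three more bits than the 32-bit dividend, with the reciprocal rounded up rather than truncated:

    #include <cassert>
    #include <cstdint>

    // Word-sized analogue of "multiply by 0.1": for every 32-bit x,
    // (x * 0xCCCCCCCD) >> 35 == x / 10 exactly. The extra fractional
    // bits plus rounding the reciprocal up push the error below the
    // integer part, so the shift discards it.
    static uint32_t div10(uint32_t x) {
        return static_cast<uint32_t>(
            (static_cast<uint64_t>(x) * 0xCCCCCCCDull) >> 35);
    }

    int main() {
        for (uint64_t x = 0; x <= 0xFFFFFFFFull; ++x)   // exhaustive check
            assert(div10(static_cast<uint32_t>(x)) ==
                   static_cast<uint32_t>(x / 10));
        return 0;
    }

What I cannot pin down is the analogue of those extra bits and that rounding direction when the dividend has millions of bits.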

Here are my questions:

  1. Is there a way to know how much precision is required when calculating 0.1 so that, when I multiply it by the number I am trying to print, there is no visible rounding error? i.e. the rounding error gets entirely shifted away when converting back to an integer. (My current guess is sketched just after this list.)
  2. Is there some other way to optimize division by a constant when dealing with arbitrary precision? It seems to me that I could use the multiplication method for the most significant bits and the grade-school method for the least significant bits, but I doubt the runtime gain would be significant.
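
My working hypothesis for question 1, extrapolating from the word-sized case, is that R = ceil(2^(n+4) / 10) with a final right shift of n+4 is exact for every value of at most n bits (4 extra bits because 2^4 > 10), but I have not been able to prove it. In terms of my class it would look roughly like this (BigUint and its operators are hypothetical stand-ins for my API):

    #include <cstddef>

    // Hypothetical sketch; BigUint, its shifts, *, +, and small division
    // are stand-ins, and nothing here is verified.
    // R = ceil(2^(n+4) / 10) is "0.1" with n+4 fractional bits. It is
    // computed once with a single real division; afterwards every digit
    // costs one multiply and one shift.
    BigUint reciprocal_of_10(std::size_t n) {
        BigUint numerator = (BigUint(1) << (n + 4)) + BigUint(9);
        return numerator / BigUint(10);   // ceil(a/10) == (a + 9) / 10
    }

    BigUint div10(const BigUint& x, const BigUint& r, std::size_t n) {
        return (x * r) >> (n + 4);        // exact only if the hypothesis holds
    }

Since the quotient only shrinks from digit to digit, one R computed for the initial bit length would serve the whole conversion, if the precision bound is right.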

Note: I am storing my integer as a std::vector<bool> and am compiling with the -O3 optimization flag. Any information would be a big help! Thanks in advance.
