Abstract:
High-precision arithmetic is useful in many computational problems. The most common case is a numerically unstable algorithm for which, say, 53-bit (ANSI/IEEE 754-1985 Standard) double precision would not yield a sufficiently accurate result. For most current machines, 53-bit double precision is the highest provided in hardware, giving about 16 significant digits. (By "significant digits," I mean the number of equivalent decimal digits of precision, rather than the number of digits in a base other than 10.) Calculating with 30 or 40 significant digits can often overcome the algorithm's instability and provide adequate accuracy. In this article, I give some examples of calculations in which multiple precision can be useful.
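As a concrete illustration of the kind of failure the abstract describes (a hypothetical sketch, not an example from the article), consider evaluating (1 - cos x)/x^2 near x = 10^-8: 53-bit double precision loses every significant digit to cancellation, while working at 30 significant digits recovers the correct value of about 1/2. The third-party mpmath library is used here purely as one convenient multiple-precision package; that choice is an assumption of this sketch.

    # Sketch (not from the article): catastrophic cancellation in 53-bit
    # double precision, cured by computing with 30 significant digits.
    import math
    from mpmath import mp, mpf, cos   # mpmath: one multiple-precision option

    x = 1e-8
    # In double precision, cos(1e-8) rounds to exactly 1.0, so the numerator
    # cancels completely and the quotient is 0 instead of roughly 0.5.
    print((1 - math.cos(x)) / x**2)   # prints 0.0

    mp.dps = 30                       # 30 significant (decimal) digits
    xm = mpf('1e-8')
    print((1 - cos(xm)) / xm**2)      # prints ~0.499999999999999995833...

The true value is 1/2 - x^2/24 + ..., so at x = 10^-8 the 30-digit result agrees with the mathematically correct answer, while the double-precision result has no correct digits at all.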