In computing, especially computational geometry, a real RAM (random-access machine) is a mathematical model of a computer that can compute with exact real numbers instead of the binary fixed point or floating point numbers used by most actual computers. The real RAM was formulated by Michael Ian Shamos in his 1978 Ph.D. dissertation.[1]

Model

The "RAM" part of the real RAM model name stands for "random-access machine". This is a model of computing that resembles a simplified version of a standard computer architecture. It consists of a stored program, a computer memory unit consisting of an array of cells, and a central processing unit with a bounded number of registers. Each memory cell or register can store a real number. Under the control of the program, the real RAM can transfer real numbers between memory and registers, and perform arithmetic operations on the values stored in the registers.

The allowed operations typically include addition, subtraction, multiplication, and division, as well as comparisons, but not modulus or rounding to integers. The reason for avoiding integer rounding and modulus operations is that allowing these operations could give the real RAM unreasonable amounts of computational power, enabling it to solve PSPACE-complete problems in polynomial time.[2]

When analyzing algorithms for the real RAM, each allowed operation is typically assumed to take constant time.
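
As a minimal sketch of these conventions (an illustration, not part of the formal definition), the following Python program models a toy register machine in the spirit of the real RAM: registers hold exact values, only the allowed operations are implemented, and each operation is charged one unit of time. Python's Fraction type is used here as a stand-in for real numbers, so it captures only the rational values a real register could hold.

    from fractions import Fraction
    from operator import add, sub, mul, truediv

    # Allowed real RAM operations: arithmetic and comparison, but no
    # floor, modulus, or bit manipulation.
    OPS = {"add": add, "sub": sub, "mul": mul, "div": truediv}

    def run(program, regs):
        """Execute a list of (op, dst, src1, src2) instructions; return the time."""
        steps = 0
        for op, dst, src1, src2 in program:
            if op in OPS:
                regs[dst] = OPS[op](regs[src1], regs[src2])
            elif op == "less":
                regs[dst] = regs[src1] < regs[src2]
            else:
                raise ValueError(f"{op!r} is not a real RAM operation")
            steps += 1            # unit cost per operation
        return steps

    regs = {"r0": Fraction(1, 3), "r1": Fraction(2, 7), "r2": None, "r3": None}
    prog = [("add", "r2", "r0", "r1"),    # r2 = 1/3 + 2/7 = 13/21
            ("mul", "r3", "r2", "r2"),    # r3 = (13/21)^2
            ("less", "r3", "r3", "r0")]   # is r3 < 1/3 ?
    print(run(prog, regs), regs["r3"])    # prints: 3 False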

Implementation

Software libraries such as LEDA have been developed which allow programmers to write computer programs that work as if they were running on a real RAM. These libraries represent real values using data structures that allow them to perform arithmetic and comparisons with the same results as a real RAM would produce. For example, in LEDA, real numbers are represented using the leda_real datatype, which supports k-th roots for any natural number k, rational operators, and comparison operators.[3] When an algorithm is implemented with such real datatypes, its real RAM time analysis can be interpreted as counting the number of library calls the algorithm makes.[4]
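
As a rough illustration of the idea (using Python's standard fractions module rather than LEDA, and covering only rational operations, not the k-th roots that leda_real also supports), exact arithmetic yields the comparison results a real RAM would:

    from fractions import Fraction

    # Ordinary floating point: 0.1 + 0.2 is not exactly 0.3.
    print(0.1 + 0.2 == 0.3)                                       # False

    # Exact representation: the comparison agrees with the real RAM answer.
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True

Counting each exact-arithmetic operation as one step corresponds to the library-call counting described above.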

Comparison to other computational models

  • In the Turing machine model, the basic unit of computation is a single bit. Therefore, the time and space complexity of numeric algorithms depends on the number of bits needed to represent the numbers involved. In contrast, in the real RAM model, the basic unit of computation is a real number, regardless of how many bits are required to represent it. This difference is important when analyzing algorithms such as Gaussian elimination: this algorithm requires a polynomial number of arithmetic operations on real numbers, so it runs in polynomial time in the real RAM model; however, the numbers arising in the intermediate computations may (if implemented naively) grow exponentially large, so its running time in the Turing machine model is exponential (see the sketch after this list).[5]: Sec.1.4 
  • The real RAM closely resembles the later Blum–Shub–Smale machine.[6] However, the real RAM is typically used for the analysis of concrete algorithms in computational geometry, while the Blum–Shub–Smale machine instead forms the basis for extensions of the theory of NP-completeness to real-number computation.
  • An alternative to the real RAM is the word RAM, in which both the inputs to a problem and the values stored in memory and registers are assumed to be integers with a fixed number of bits. The word RAM model can perform some operations more quickly than the real RAM; for instance, it allows fast integer sorting algorithms, while sorting on the real RAM must be done with slower comparison sorting algorithms. However, some computational geometry problems have inputs or outputs that cannot be represented exactly using integer coordinates; see for instance the Perles configuration, an arrangement of points and line segments that has no integer-coordinate representation.
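
The Gaussian elimination point above can be made concrete with a small experiment. The following sketch (an illustration added here, not taken from the cited sources) runs forward elimination over Python's exact Fraction type: the real RAM cost is just the number of arithmetic operations, O(n^3), while the printed bit lengths show that the entries themselves grow with n. Python's Fraction keeps values in lowest terms, which moderates the growth; a naive implementation that never reduces can blow up exponentially, as noted above.

    from fractions import Fraction
    import random

    def eliminate(a):
        """Forward elimination over exact rationals; returns the operation count."""
        n = len(a)
        ops = 0
        for k in range(n):
            # Any nonzero pivot works, since the arithmetic is exact.
            pivot = next((i for i in range(k, n) if a[i][k] != 0), None)
            if pivot is None:       # column is already zero; nothing to do
                continue
            a[k], a[pivot] = a[pivot], a[k]
            for i in range(k + 1, n):
                factor = a[i][k] / a[k][k]
                ops += 1
                for j in range(k, n):
                    a[i][j] -= factor * a[k][j]
                    ops += 2
        return ops

    random.seed(0)
    n = 8
    a = [[Fraction(random.randint(-9, 9)) for _ in range(n)] for _ in range(n)]
    ops = eliminate(a)
    max_bits = max(x.numerator.bit_length() + x.denominator.bit_length()
                   for row in a for x in row)
    print("real RAM cost (arithmetic operations):", ops)
    print("largest entry after elimination:", max_bits, "bits")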

References

  1. ^ Shamos, Michael Ian (1978), Computational Geometry, Ph.D. dissertation, Yale University.
  2. ^ Schönhage, Arnold (1979), "On the power of random access machines", Proceedings of the Sixth International Colloquium on Automata, Languages and Programming (ICALP '79), Lecture Notes in Computer Science, vol. 71, Springer, pp. 520–529, doi:10.1007/3-540-09510-1_42, ISBN 978-3-540-09510-1, MR 0573259.
  3. ^ Mehlhorn, Kurt; Näher, Stefan (1999), The LEDA Platform of Combinatorial and Geometric Computing, Cambridge University Press.
  4. ^ Mehlhorn, Kurt; Schirra, Stefan (2001), "Exact computation with leda_real—theory and geometric applications" (PDF), Symbolic Algebraic Methods and Verification Methods (Dagstuhl, 1999), Springer, pp. 163–172, doi:10.1007/978-3-7091-6280-4_16, ISBN 978-3-211-83593-7, MR 1832422.
  5. ^ Grötschel, M.; Lovász, L.; Schrijver, A. (1981), "The ellipsoid method and its consequences in combinatorial optimization", Combinatorica, 1 (2): 169–197, doi:10.1007/BF02579273, ISSN 1439-6912, S2CID 43787103.
  6. ^ Blum, Lenore; Shub, Mike; Smale, Steve (1989), "On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines", Bulletin of the American Mathematical Society, 21 (1): 1–46, doi:10.1090/S0273-0979-1989-15750-9, Zbl 0681.03020.