
*To*: rrrs-authors@martigny.ai.mit.edu
*Subject*: Efficiency of unlimited precision integer arithmetic
*From*: William D Clinger <will@ccs.neu.edu>
*Date*: Tue, 14 May 1996 11:07:30 +0000
*Cc*: gjr@martigny.ai.mit.edu, blume@cs.Princeton.EDU, will@ccs.neu.edu
*Organization*: Northeastern University
*References*: <199605132126.RAA22919@severi.mit.edu> <199605132149.RAA23408@zayin.CS.Princeton.EDU>
*Reply-To*: will@ccs.neu.edu

Bill Rozas wrote of unlimited precision exact integer arithmetic:

> Why not make it the default? I can only imagine efficiency being the
> issue, and there are ways to overcome that. In addition, having to
> detect overflow causes the same level of inefficiency.

The last sentence quoted, which appears to have been just a throwaway remark, isn't true. Although an implementation of Standard ML does have to detect overflow, it doesn't have to detect non-fixnum operands. On most machines (the SPARC is an exception) this makes Scheme's unlimited precision exact integer arithmetic less efficient than Standard ML's exact integer arithmetic with overflow detection.

Like Bill, I can't imagine any argument against unlimited precision integer arithmetic other than efficiency. On the other hand, efficiency can be a convincing argument if you're one of those people for whom a programming language provides a way to run programs as well as a formal medium for expressing computational ideas.

Matthias Blume would probably want equal time:

> [....] The parameterized
> module approach seems like a real win to me here. There is only one
> thing you loose -- the illusion that all numbers are ideal complex
> numbers by default.

Parameterized modules also lose the efficiency of built-in support for unlimited precision integer arithmetic. You must represent integers that _might_ lie outside the interval [-2^29, 2^29-1] or [-2^31, 2^31-1] or whatever by a relatively complicated data structure, which will probably have to be allocated on the heap. Adding such integers is likely to require a full-blown non-tail-recursive call. In Scheme, these two overheads are incurred only when an operand or result actually does lie outside the fixnum range.

So Standard ML's limited precision arithmetic is more efficient if your programs never use integers outside the fixnum range, or if there are only a few infrequently executed places where an operand or result might be outside the fixnum range.
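To make the cost difference concrete, here is an illustrative C sketch (not from the message) of the two disciplines. It assumes a common tagging convention in which a fixnum n is stored as n << 2 with the low two bits 00; the names and the `slow_path` hook are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef intptr_t value;        /* one tagged machine word */
#define FIXNUM_TAG_MASK 3      /* assumed convention: low two bits 00 => fixnum */

static bool is_fixnum(value v) { return (v & FIXNUM_TAG_MASK) == 0; }

/* Scheme-style generic add: must check BOTH operand tags AND overflow
 * before taking the fast path.  Because the tag bits are 00, adding
 * two tagged fixnums directly yields the correctly tagged sum. */
static value generic_add(value a, value b,
                         value (*slow_path)(value, value)) {
    value sum;
    if (is_fixnum(a) && is_fixnum(b) &&
        !__builtin_add_overflow(a, b, &sum))
        return sum;                 /* fast path */
    return slow_path(a, b);         /* bignum case: heap allocation, a
                                       full non-tail call, etc. */
}

/* ML-style add: the compiler knows statically that both operands are
 * ints, so only the overflow check remains (overflow would raise an
 * exception rather than fall back to bignums). */
static value ml_add(value a, value b, bool *overflow) {
    value sum;
    *overflow = __builtin_add_overflow(a, b, &sum);
    return sum;
}
```

The point of the sketch is the extra `is_fixnum` tests on the generic path: on most machines those are instructions Standard ML never has to execute.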
Scheme's unlimited precision arithmetic is more efficient if your programs sometimes use integers outside the fixnum range, and some considerable fraction of the arithmetic operations in your programs may have to deal with an operand or result that is outside the fixnum range.

My guess is that Standard ML's arithmetic is more efficient for most programs, but that Scheme's arithmetic is more efficient for a significant minority of programs. This is highly machine-dependent, however. On a SPARC, for example, there should be absolutely no difference between the efficiency of arithmetic on small exact integers in Standard ML and in Scheme. The use of parameterized modules to implement unlimited precision arithmetic in Standard ML would be considerably less efficient than the built-in unlimited precision arithmetic of Scheme.

Conclusions:

1. When it comes to efficiency on the SPARC, Scheme's model of exact integer arithmetic dominates that of Standard ML.
2. There probably aren't any popular target architectures for which Standard ML's model of exact integer arithmetic dominates that of Scheme.
3. There are definitely many programs for which Standard ML's model is more efficient than Scheme's when executed on most popular target architectures.
4. There are definitely some programs for which Scheme's model is more efficient than Standard ML's when executed on most popular target architectures.

Will
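The SPARC exception rests on its tagged-arithmetic instructions: TADDcc adds two words in one instruction and sets the overflow condition code if either operand's low two bits are nonzero or the addition overflows, so the tag check and the overflow check together cost no more than a plain checked add. A rough C rendering of that semantics (a sketch for illustration, not the ISA definition; the function name is invented):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef int32_t word;

/* Sketch of SPARC TADDcc: one instruction computes the sum and raises
 * the overflow flag when "either operand is not a 00-tagged fixnum,
 * or the 32-bit addition overflowed". */
static word taddcc(word a, word b, bool *overflow_flag) {
    word sum;
    bool arith_overflow = __builtin_add_overflow(a, b, &sum);
    bool bad_tags = ((a | b) & 3) != 0;   /* low two bits must be 00 */
    *overflow_flag = arith_overflow || bad_tags;
    return sum;
}
```

Because one instruction answers both questions, a Scheme compiler targeting such hardware pays nothing extra for the tag checks that Standard ML avoids, which is why conclusion 1 singles out the SPARC.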

**Follow-Ups**:

- **Re: Efficiency of unlimited precision integer arithmetic**, *From:* Guillermo J. Rozas <gjr@martigny.ai.mit.edu>
- **Re: Efficiency of unlimited precision integer arithmetic**, *From:* Matthias Blume <blume@CS.Princeton.EDU>

**References**:

- **Re: Why would anyone want opacity?**, *From:* david carlton <carlton@math.mit.edu>
- **Re: Why would anyone want opacity?**, *From:* Matthias Blume <blume@CS.Princeton.EDU>
