
To: will@ccs.neu.edu
Subject: Re: Efficiency of unlimited precision integer arithmetic
From: Matthias Blume <blume@CS.Princeton.EDU>
Date: Tue, 14 May 1996 18:20:59 -0400
Cc: rrrs-authors@martigny.ai.mit.edu
In-Reply-To: Your message of "Tue, 14 May 1996 11:07:30 +0000"
References: <31986972.2CFD@ccs.neu.edu>

From: William D Clinger <will@ccs.neu.edu>
Subject: Efficiency of unlimited precision integer arithmetic
Date: Tue, 14 May 1996 11:07:30 +0000

> Matthias Blume would probably want equal time:

Thanks. :)

> > [....] The parameterized
> > module approach seems like a real win to me here.  There is only one
> > thing you lose -- the illusion that all numbers are ideal complex
> > numbers by default.
>
> Parameterized modules also lose the efficiency of built-in support
> for unlimited precision integer arithmetic.  You must represent
> integers that _might_ lie outside the interval [-2^29, 2^29-1] or
> [-2^31, 2^31-1] or whatever by a relatively complicated data
> structure, which will probably have to be allocated on the heap.
> Adding such integers is likely to require a full-blown
> non-tail-recursive call.  In Scheme, these two overheads are
> incurred only when an operand or result actually does lie outside
> the fixnum range.

This is certainly true in current implementations, but I can see ways
of improving the optimizer to take care of it.  If everything else
fails, then one can always make the compiler aware of the generic
number type and pull all the same tricks a Scheme compiler would pull.
ML compilers occasionally do such things in similar circumstances
already.

I am not arguing against such an approach.  Just leave the language
definition alone.  Compilers might very well be hacked to support
certain libraries hyper-efficiently -- as long as there is a way for a
dumb compiler to produce a working program as well.

> My guess is that Standard ML's arithmetic is more efficient for
> most programs, but that Scheme's arithmetic is more efficient
> for a significant minority of programs.  This is highly
> machine-dependent, however.

And implementation-dependent as well.

> On a SPARC, for example, there should be absolutely no difference
> between the efficiency of arithmetic on small exact integers in
> Standard ML and in Scheme.
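For concreteness, the parameterized-module approach under discussion might look roughly like the following sketch in SML.  The signature and all names in it (`NUM`, `fromInt`, `add`, and so on) are my own invention for illustration, not taken from any actual library:

```sml
(* A hypothetical signature for an exact-number implementation. *)
signature NUM =
sig
  type num
  val fromInt  : int -> num
  val add      : num * num -> num
  val toString : num -> string
end

(* Client code is written as a functor over NUM, so the same code can
   be instantiated with machine integers, bignums, rationals, etc. *)
functor Sum (N : NUM) =
struct
  fun sumUpTo n =
      let fun loop (i, acc) =
              if i > n then acc
              else loop (i + 1, N.add (acc, N.fromInt i))
      in loop (1, N.fromInt 0)
      end
end

(* One possible instantiation, using native machine integers: *)
structure IntNum : NUM =
struct
  type num = int
  fun fromInt i = i
  val add = op +
  val toString = Int.toString
end

structure S = Sum (IntNum)
(* S.sumUpTo 10 evaluates to 55 *)
```

The point at issue is that a smart compiler may specialize `Sum (IntNum)` so that `N.add` becomes a single machine addition, while a dumb compiler still produces a working (if slower) program.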
> The use of parameterized modules to
> implement unlimited precision arithmetic in Standard ML would be
> considerably less efficient than the built-in unlimited precision
> arithmetic of Scheme.

Depends on many things...  Parameterized modules can be specialized
and optimized according to the actual functor argument.  Function
calls can be inlined -- perhaps even across module and compilation
unit boundaries.  Datatypes can be represented very efficiently in
smart compilers.  Example:

    datatype bignum = SMALLNUM of int | LARGENUM of <something more complicated>

The compiler could realize that <something more complicated> is
represented as a pointer, and that it therefore suffices to represent
the SMALLNUM constructor by simply its value (a tagged integer, which
will always be distinguishable from a pointer).

> Conclusions:
>
> 1. When it comes to efficiency on the SPARC, Scheme's model of
>    exact integer arithmetic dominates that of Standard ML.

Not necessarily.  Just nobody has bothered to worry about it too much
in existing ML implementations.

> 4. There are definitely some programs for which Scheme's model
>    is more efficient than Standard ML's when executed on most
>    popular target architectures.

Again, not necessarily.

One last word: Even if I give the impression that my intent is to put
Scheme down and tell everybody how great ML is -- this impression is
slightly inaccurate.  I have expressed my opinions about the direction
in which I would like to see Scheme develop.  I am generally in favor
of more statically decidable information in the program, better means
for expressing abstractions, better support for separate compilation,
and simplicity.  My saying so involved mentioning the bad word
``type'', and some people are now jumping all over me.  In order to
defend my views I have to use examples from my own experience -- and
as it so happens, this experience largely involves SML.  So, yes, my
views are heavily influenced by the way things are done in SML.
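Returning to the `datatype bignum` example above, here is a sketch of how addition over such a type could keep the common case cheap.  Purely for illustration, `IntInf.int` (from the SML Basis Library) stands in for the elided "something more complicated" large-number representation; the helper `toInf` is my own:

```sml
(* Illustrative only: IntInf.int stands in for the elided
   large-number representation. *)
datatype bignum = SMALLNUM of int | LARGENUM of IntInf.int

fun toInf (SMALLNUM i) = IntInf.fromInt i
  | toInf (LARGENUM i) = i

(* The common case is a single machine addition; the slow, allocating
   path is taken only when an operand or the result leaves the small
   range (SML's native int addition raises Overflow in that case). *)
fun add (SMALLNUM a, SMALLNUM b) =
      (SMALLNUM (a + b)
       handle Overflow =>
         LARGENUM (IntInf.+ (IntInf.fromInt a, IntInf.fromInt b)))
  | add (x, y) = LARGENUM (IntInf.+ (toInf x, toInf y))
```

With the unboxed-constructor representation described above, the fast path is close to what a Scheme implementation does for fixnums: one tagged add, with fallback to the general routine on overflow.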
And no, I don't think SML is superior in *every* aspect.  There are
still a few strong points about Scheme; I just happen to believe that
the type system is not one of them.

-Matthias

References:
  Efficiency of unlimited precision integer arithmetic
    From: William D Clinger <will@ccs.neu.edu>
