Re: Efficiency of unlimited precision integer arithmetic
| Date: Tue, 14 May 1996 11:07:30 +0000
| From: William D Clinger <will@ccs.neu.edu>
|
| > Why not make it the default? I can only imagine efficiency being the
| > issue, and there are ways to overcome that. In addition, having to
| > detect overflow causes the same level of inefficiency.
|
| The last sentence quoted, which appears to have been just a
| throwaway remark, isn't true. Although an implementation of
| Standard ML does have to detect overflow, it doesn't have to
| detect non-fixnum operands. On most machines (the SPARC is an
| exception) this makes Scheme's unlimited precision exact integer
| arithmetic less efficient than Standard ML's exact integer
| arithmetic with overflow detection.
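
For concreteness, here is a rough C sketch (my own illustration, not part
of Will's message) of the overflow-only check an ML-style implementation
needs on each addition; it uses the GCC/Clang __builtin_add_overflow
intrinsic purely for brevity:

    #include <stdint.h>
    #include <stdio.h>

    /* Overflow-only check: the operands are statically known to be machine
     * integers, so the single run-time test is whether the sum overflowed.
     * (__builtin_add_overflow is a GCC/Clang intrinsic.) */
    static int64_t ml_style_add(int64_t a, int64_t b)
    {
        int64_t sum;
        if (__builtin_add_overflow(a, b, &sum)) {
            /* An ML implementation would raise Overflow here. */
            fprintf(stderr, "Overflow\n");
            return 0;
        }
        return sum;
    }

    int main(void)
    {
        printf("%lld\n", (long long)ml_style_add(2, 40));  /* prints 42 */
        return 0;
    }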
The point of my comment was not that there are no machine/implementation
combinations on which the check is expensive. It may even be more
expensive on most current machines. The point is that it is not
_fundamentally_ more expensive. The detection of non-fixnum operands
can be done in parallel with the fixnum operation; the hardware required
is incredibly cheap and affects neither cycle time nor scalability.
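Here is the same sketch extended to a Scheme-style tagged fixnum add
(again my own illustration; the 2-bit 00 tag for fixnums and the helper
names are assumptions, not any particular implementation). Note that the
tag test reads only the operands, never the sum, so it is independent of
the add itself:

    #include <stdint.h>
    #include <stdio.h>

    /* Assumed tagged representation (illustrative only): a fixnum keeps its
     * value in the upper bits and a 2-bit tag of 00 in the low bits; any
     * object with a nonzero low tag is something else (boxed). */
    typedef intptr_t scm_obj;
    #define TAG_MASK        ((intptr_t)3)
    #define MAKE_FIXNUM(n)  ((scm_obj)((intptr_t)(n) << 2))
    #define FIXNUM_VALUE(x) ((intptr_t)(x) >> 2)

    /* Stand-in for the out-of-line generic path (type dispatch, bignums). */
    static scm_obj generic_add(scm_obj a, scm_obj b)
    {
        fprintf(stderr, "slow path: non-fixnum operand or fixnum overflow\n");
        return MAKE_FIXNUM(0);
    }

    static scm_obj scheme_add(scm_obj a, scm_obj b)
    {
        /* The tag test depends only on the operands, not on the sum, so it
         * can execute in parallel with the add on a superscalar machine. */
        int non_fixnum = ((a | b) & TAG_MASK) != 0;

        /* Adding two 00-tagged fixnums directly yields the tagged sum;
         * overflow of this machine add is exactly fixnum overflow.
         * (__builtin_add_overflow is a GCC/Clang intrinsic.) */
        intptr_t sum;
        int overflow = __builtin_add_overflow((intptr_t)a, (intptr_t)b, &sum);

        if (non_fixnum || overflow)
            return generic_add(a, b);
        return (scm_obj)sum;
    }

    int main(void)
    {
        scm_obj r = scheme_add(MAKE_FIXNUM(2), MAKE_FIXNUM(40));
        printf("%ld\n", (long)FIXNUM_VALUE(r));  /* prints 42 */
        return 0;
    }

On a machine with a spare integer unit, the or/and/test sequence adds no
latency to the add itself; only the final conditional branch remains on
the critical path, just as the overflow branch does in the ML case.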
I'm not saying that efficiency is unimportant. But a minor efficiency
hurdle with a known, trivial solution (even if that solution is
infrequently available in today's hardware) should not be an argument
against a feature.
At any rate, even on machines that provide no special support, the
overhead shrinks as the degree of instruction-level parallelism (number
of pipe stages, parallel functional units, etc.) increases.
Commercial and research compilers are having a hard time exploiting the
parallelism available on current and future designs for all but a limited
class of programs. Thus, on many current and future generation machines,
barring a breakthrough in compilation technology, there will be spare
functional units and pipe stages available to carry out these checks in
parallel with the latency-critical part of the computation.