Re: S&I's idea of EQ?



   MIT Scheme uses a somewhat modified stop-and-copy garbage collector
   which originally increased performance significantly.  I have no
   current figures.  On average code (compiled or not) the overhead
   caused by the garbage collector is at worst 10% (% of time spent
   garbage collecting vs. total time).  I feel that this is completely
   acceptable.  Some of us (GJS in particular) don't believe very much
   in optimizing consing, but rather in (very) large memories and very
   fast garbage collectors.
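
For readers who haven't seen one, here is a toy sketch of the
stop-and-copy idea (Cheney's algorithm), with the heap modelled as a
pair of Scheme vectors and every object a two-cell "pair".  The names
and the representation are invented for illustration; this is not MIT
Scheme's actual collector:

    (define (ptr a) (cons 'ptr a))                ; tagged heap pointer
    (define (ptr? v) (and (pair? v) (eq? (car v) 'ptr)))
    (define (ptr-addr v) (cdr v))

    (define size 64)
    (define from-space (make-vector size 0))
    (define to-space   (make-vector size 0))
    (define free 0)                 ; allocation pointer into to-space

    ;; Copy one two-cell object, leaving a forwarding address behind
    ;; so shared structure stays shared.
    (define (copy p)
      (let ((a (ptr-addr p)))
        (if (eq? (vector-ref from-space a) 'forwarded)
            (ptr (vector-ref from-space (+ a 1)))
            (let ((new free))
              (vector-set! to-space new (vector-ref from-space a))
              (vector-set! to-space (+ new 1)
                           (vector-ref from-space (+ a 1)))
              (vector-set! from-space a 'forwarded)
              (vector-set! from-space (+ a 1) new)
              (set! free (+ free 2))
              (ptr new)))))

    ;; Cheney's trick: copy the roots, then scan to-space left to
    ;; right; the scan pointer chases the allocation pointer until
    ;; they meet, so no recursion or mark stack is needed.
    (define (gc-flip roots)
      (set! free 0)
      (let ((new-roots (map copy roots)))
        (do ((i 0 (+ i 1)))
            ((= i free))
          (let ((v (vector-ref to-space i)))
            (when (ptr? v)
              (vector-set! to-space i (copy v)))))
        (let ((tmp from-space))     ; the semispaces swap roles
          (set! from-space to-space)
          (set! to-space tmp))
        new-roots))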

In Chez Scheme, for a program using no assignments, creating no closures
with free variables, and using no call/cc, garbage collection overhead
from the system is 0%, since such a program forces no system allocation
at all.  This is a fairly common situation.
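
To see why, consider this hedged sketch (the function names are
invented, and whether a particular compiler allocates here is
implementation-dependent):

    ;; No assignments, no closures with free variables, no call/cc:
    ;; every value can live in a register or stack slot, so nothing
    ;; is heap-allocated (assuming fixnum-range results) and the
    ;; collector never has occasion to run.
    (define (sum-to n)
      (let loop ((i 0) (acc 0))
        (if (> i n)
            acc
            (loop (+ i 1) (+ acc i)))))

    ;; The same computation written with an intermediate list conses
    ;; n pairs, and that allocation is what eventually costs GC time.
    (define (sum-to/consing n)
      (let loop ((i 0) (lst '()))
        (if (> i n)
            (let sum ((l lst) (acc 0))
              (if (null? l) acc (sum (cdr l) (+ acc (car l)))))
            (loop (+ i 1) (cons i lst)))))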

In allocation-intensive code, forcing a collection every 256K bytes, the
percentage of time spent in the collector seems to be about 10-12%.
(And Chez Scheme doesn't have any slow variable references or other
comparable overhead to pump up the total time.)  When I am fortunate
enough to run on a machine with more than 2 MB of physical memory
available, collections can happen less frequently.  I have seen the
collection overhead for allocation-intensive code be as low as 3% on a
VAX 11/785 with 10 MB of physical memory.
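
As a back-of-the-envelope model of those figures (my arithmetic, not a
measurement):

    ;; Overhead model: collections per run = bytes-allocated / 256K;
    ;; overhead = (collections * ms-per-collection) / total-ms.
    (define (gc-overhead bytes-allocated ms-per-collection total-ms)
      (/ (* (/ bytes-allocated (* 256 1024.0)) ms-per-collection)
         total-ms))

    ;; E.g., a 10-second run allocating 10 MB forces 40 collections;
    ;; at 30 ms apiece that is 1200 ms collecting, i.e. 12% overhead:
    ;;   (gc-overhead (* 10 1024 1024) 30 10000)  => .12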

I would guess that for "average" code, the "worst-case" is around 1-2%
with a reasonably large physical memory.

The reason for the low overhead is that system allocation is held to a
minimum, not because I have a highly-tuned collector.  Less allocation
means fewer collections.
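
For contrast, here is an illustrative sketch of the source constructs
that do force the system to allocate; the exact strategy is the
compiler's business, so take the comments as a typical story rather
than a specification:

    (define (make-counter)
      (let ((n 0))
        (lambda ()          ; a closure over a free variable: the
          (set! n (+ n 1))  ; closure record is heap-allocated, and the
          n)))              ; assigned n typically moves to a heap box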