Re: exception systems

This is really way off topic now, so this will be the last time I Cc:
it to rrrs-authors...

From: Marc Feeley <feeley@IRO.UMontreal.CA>
Subject: Re: exception systems
Date: Fri, 19 Apr 1996 14:06:51 -0400

> > > 3) References to parameters when the reference follows a non-tail call
> > > (because frames can't be extended in place, you actually have to chain
> > > the various parts of a function's frame, each part corresponding
> > > to a non-tail call in the function's body).
> > 
> > I didn't quite follow.  Parameters are always passed in
> > registers... so why does parameter access slow down?
> It is when the number of parameters and/or the number of live local
> variables exceeds the number of registers that the overhead occurs.
> So for trivial benchmarks you will not see an overhead but for
> "realistic" code you will probably see it (note: I have no data to
> back this up).

I believe that for realistic code on realistic machines you will
hardly ever see this situation arise.

> Anyway, for simplicity, assume you have no registers,
> then in the following code:
> (lambda (x)
>   (let ((y (f x)))
>     (+ y x)))
> which is CPS-converted to:
> (lambda (x k)
>   (f x (lambda (y) (k (+ y x)))))
> the second reference to x is more expensive than the first because you
> have to follow one static link to get to it.
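To make that cost concrete, here is a sketch (with hypothetical helper
names; a real compiler would generate this directly) of the linked-environment
representation the quoted text describes.  Each frame carries a pointer to its
parent, so the second reference to x pays for one extra indirection:

```scheme
;; Linked environments: a frame is (parent . slot-vector).
(define (make-frame parent . slots) (cons parent (list->vector slots)))
(define (frame-parent f) (car f))
(define (frame-ref f i) (vector-ref (cdr f) i))

;; Closure-converted form of (lambda (x k) (f x (lambda (y) (k (+ y x)))))
(define (example f x k)
  (let ((env (make-frame #f x)))               ; outer frame holds x
    (f x (lambda (y)
           (let ((inner (make-frame env y)))   ; inner frame links to env
             (k (+ (frame-ref inner 0)                          ; y: local slot
                   (frame-ref (frame-parent inner) 0))))))))    ; x: one link away
```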

If you don't have real registers, then you can still use
pseudo-registers, which might be quite fast due to good cache
locality.  In any case, modern machines have a lot of registers.  I
don't think register spilling is all that common.  Have a look at the
Appel/George paper on iterated register coalescing (POPL'96).  It has
some statistics on that.

You seem to have one particular model of how to implement things in
mind.  Other models do better than that.  For example, I would expect
Shao's closure analysis to put x into a callee-save register (i.e.,
pass it to f as an extra parameter, which in turn passes it to the
continuation as an extra parameter).  Together with smart register
allocation, this results in zero additional cost for accessing x and
only minimal cost for creating the continuation (it boils down to
little more than a return address and an empty closure).
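The callee-save trick can be sketched at the source level like this (an
illustration, not Shao's actual transformation; it assumes f is compiled
to thread the extra argument through to its continuation):

```scheme
;; Original CPS:  (lambda (x k) (f x (lambda (y) (k (+ y x)))))
;; Transformed: x rides along in a "callee-save" slot instead of
;; being stored in the continuation's closure.
(lambda (x k)
  (f x
     x                    ; extra argument: x passed through f unchanged
     (lambda (y x)        ; f hands x back to the continuation
       (k (+ y x)))))     ; no closure slot, no memory access for x
```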

> The reference cost can be decreased by using a "flat-closures" representation
> but then it becomes more expensive to create each continuation frame
> because the live variables need to be copied to the new frame. In a stack
> implementation you can simply extend the current frame by adjusting
> the stack-pointer/frame-pointer (which can be done lazily, i.e. at
> most one pair of SP increment/decrement per function call).

Again, this is what Shao's closure analysis is all about.  The model
of flat closures is quite useful if you are interested in
implementations that are safe-for-space (SML/NJ is!), but that doesn't
mean one has to take it too literally.  I would recommend having a
look at Shao's thesis and/or the Appel/Shao paper I mentioned earlier.
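For comparison with the linked representation, here is the same example
with flat closures (again with hypothetical helper names): every free
variable is copied into the closure at creation time, so each later
access is a single direct slot reference with no link-chasing:

```scheme
;; Flat closures: a closure is (code . free-var-vector).
(define (make-closure code . free-vars) (cons code (list->vector free-vars)))
(define (closure-code c) (car c))             ; f invokes ((closure-code c) c y)
(define (closure-ref c i) (vector-ref (cdr c) i))

;; Closure-converted form of (lambda (x k) (f x (lambda (y) (k (+ y x)))))
(define (example f x k)
  (f x (make-closure
        (lambda (self y)
          ((closure-ref self 1)               ; k: direct slot access
           (+ y (closure-ref self 0))))       ; x: direct slot access
        x k)))                                ; both free variables copied here
```

Note that in a safe-for-space implementation k is copied into the
closure as well, which is exactly the per-creation cost the quoted
text is worried about.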