
S&I's idea of EQ? (long)

While I think that you are right in assuming that no code written
depends on coalescing not being used, and I understand the position
that being able to coalesce is a desirable goal, your example does not
strike me as particularly good.  You claim that an optimizing compiler
will obtain considerably more performance out of the coalesced case
than out of the "split" case, but I'm not certain about that:

Clearly if map is known (e.g. by the use of declarations, as in
MIT-Scheme), it can be determined that eq? (eqv?) is not used on the
procedure values, so coalescing is permitted; but in this case it is
irrelevant, since you would want to inline-code map into a loop, and
this would hopefully cause the argument to map (the lambda expression)
to be inline-coded as well, giving you considerably more (time)
efficiency.  I think the issue is moot in this case.

If map is not known, the time to look its value up at run time is
comparable to the time it takes to close the procedure.  In
particular, in MIT-Scheme (because of first class environments, etc),
looking up map can take considerably longer than closing the
lambda-expression, and the latter time is usually negligible.  I think
that the small time difference which can be gained in this case is not
very interesting.

It seems from your comment that you are assuming that closing is
inordinately expensive.  This is not necessarily so (except
potentially in space).  There are two implementations of closures I'm
familiar with, and in both cases the closing above can be made very
efficient (in time); although admittedly not as efficient as the
coalesced case, it is probably efficient enough:

The "linked environment frame" implementation: every time a procedure
is applied, a new frame containing the arguments is created, and the
environment in which the body is evaluated is just the "cons" of this
new frame and the environment where the procedure was closed.  To
close a lambda expression, a "cons" is made of its code and the
environment object ("list" of frames) in which the lambda expression
is evaluated.  Thus closing is just as expensive as a cons.
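
As a rough sketch of this implementation (the names close,
apply-closure, make-frame, and eval-body are mine, not any particular
implementation's):

;; A closure is just the pairing of the code for a lambda expression
;; with the environment in effect when it is evaluated.
(define (close code env)                ; closing = one cons
  (cons code env))

;; Applying it conses a new frame of the arguments onto the saved
;; environment and evaluates the body there.
(define (apply-closure closure args)
  (eval-body (car closure)
             (cons (make-frame args) (cdr closure))))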

The "grab what you need" implementation: every time a lambda
expression is "evaluated", the locations of its free variables (or the
values if it can be proven that no assignments occur) are collected
into a closure frame, and the closure is the "cons" of the code
corresponding to the lambda expression and this "closure frame".
Closing in this model is potentially more expensive, since variables
may have to be referenced at closing time, and thus there may be a
strong incentive to "close eagerly".  In your example, the lambda-expression has
no free variables (except +, but allow me to assume that this is a
known operator), thus closing in this case becomes no more expensive
than a cons.  Assume now that it does have free variables as in

(define (foo x) (map (lambda (n) (+ n y)) x))

where y is defined in some outer context.

Then the code can be transformed into

(define foo
  (let ((closure-frame (location-of y)))
    (lambda (x)
      (map (enclose (lambda (n) (+ n y)) closure-frame)
           x))))

Where enclose and location-of have the obvious meanings.  Again
closing is just as expensive as consing (which is hopefully cheap).
This sort of thing can obviously be extended to more complicated
cases, where the general case would be something like

(define foo
  (let ((constant-part (locations-of * + - y w)))
    (lambda (x z)
      (map (enclose (lambda (n) (* w (+ y (- n z))))
                    (frame-cons (location-of z) constant-part))
           x))))


which would be the expansion of

(define (foo x z)
  (map (lambda (n) (* w (+ y (- n z)))) x))

So the parts that can be done eagerly are done eagerly, and the others
(plus a cons) are done dynamically.  Note that this solution breaks
down if the environment of the lambda-expression is used as a first
class environment, but if this is the case, coalescing is obviously
disallowed also.  This intermediate implementation has wider
applicability than coalescing since in the last example the lambda
expression could not trivially be coalesced.  
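
For concreteness, here is one possible representation behind the
hypothetical helpers used above (my names and choices; note that
location-of itself cannot be written as an ordinary procedure, since
it must denote the cell holding a variable, and so needs compiler
cooperation):

;; A location is a one-slot cell; a closure frame is a list of them.
(define (make-location value) (vector value))
(define (location-ref loc)    (vector-ref loc 0))
(define (location-set! loc v) (vector-set! loc 0 v))

;; frame-cons adds a location to a closure frame.
(define frame-cons cons)

;; enclose pairs the code for a lambda expression with its closure
;; frame; note that this, too, is just a cons.
(define (enclose code frame) (cons code frame))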

Thus coalescing seems (to me) like a cute little "trick" which may
give you a few instructions (whatever a cons takes) of efficiency in
very limited situations, while a very simple optimization that has no
impact on the semantics gives you most of it in those cases and more
in others.

Again, I understand your position, but it seems that the arguments for
it are not very strong.  There is an argument against coalescing which
carries at least as much weight (with me), and which has not been
presented:

Coalescing breaks the environment model of evaluation explained in
S&ICP.  This model (which students are shown how to implement in
chapter 4) implies that coalescing is not possible, since a
"different" procedure object is created every time that a lambda
expression is "evaluated".  If adherence to this model is important
(which it is to a lot of people involved with teaching S&ICP around
here), coalescing becomes an optimization which can only be used when
the optimizer can prove that the procedures will not be passed to eq?
(eqv?), and not usable by default.
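
The difference the model mandates is directly observable (a sketch):

(define (make-id) (lambda (x) x))

(eq? (make-id) (make-id))

Under the environment model this must be false, since each call to
make-id evaluates the lambda expression anew and so creates a fresh
procedure object; a coalescing implementation could answer true.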

PS: While I don't feel very strongly about it, I think that the
arguments against coalescing should also be heard, especially since
some of the people who think it is a bad idea are not on the mailing
list, and the ones who are feel that it is unfriendly and refuse to
send messages to it.