S&I's idea of EQ?
I disagree that an "optimizing" garbage collector implies an
"optimizing" compiler. We have one, but not the other to the same
degree. The former is considerably easier to write since it is a much
smaller program.
By first class environments I do not mean that they can be passed
around, but that they can be manipulated. What's the use if they can
only be passed around? Clearly it is incremental definition that
causes the problem, but I consider incremental definitions an essential
feature of first class environments.
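To make the distinction concrete, here is a rough sketch (it assumes
MIT Scheme's the-environment special form and two-argument eval; the
particular names are not the point):

    ;; Capture the current frame as a first-class environment object.
    (define (make-env)
      (the-environment))

    (define e (make-env))

    ;; Incrementally define a new variable in the captured frame.
    (eval '(define x 42) e)
    (eval 'x e)                       ; => 42

    ;; Since a definition can appear in e at any time, the compiler
    ;; cannot know the complete set of variables visible through e,
    ;; and references in its scope cannot all be resolved statically.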
A cons in MIT-Scheme consists of 4 (move) instructions in compiled
code (not counting garbage collection). A variable reference takes 1
or 2 instructions in most cases (the largest exception is references
that must be left to the interpreter because of potential incremental
definitions, and these occur rarely in compiled code).
It seems to me that the extra performance is not worth the effort,
since it makes the language a lot harder to use because it puts the
user at the mercy of the compiler. I believe that a declaration
allowing the optimization is appropriate since then any confusion
would be caused by the user, not by the compiler doing something
unexpected.
Again, I believe that this optimization can be obtained by the user
almost all of the time. I don't accept the argument that users do not
have control over macro expansion, or over imbedded languages. They
certainly do not have control, but the macro writer or "imbedder" does
and should be careful about the code being generated in the same way
that a compiler writer must be careful about using registers. A
declaration would help here too since the macro could expand into code
that contained it by default. Presumably the user of the macro could
not depend on any "hidden" lambdas, since he would not know the
implementation of the macro.
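For instance, a sketch (the declaration name is made up, and I use
define-syntax only for concreteness; neither is an existing feature):

    ;; A macro whose expansion contains a "hidden" lambda: the named
    ;; let expands into one.  The macro writer, who knows the
    ;; expansion, includes the hypothetical declaration in it; the
    ;; macro's user never sees the lambda and so cannot depend on its
    ;; identity.
    (define-syntax repeat
      (syntax-rules ()
        ((repeat n body ...)
         (let loop ((i 0))
           (declare (coalesce-procedures))   ; hypothetical declaration
           (if (< i n)
               (begin body ... (loop (+ i 1))))))))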
I am not necessarily advocating for operational semantics, but once we
have accepted that there is no splitting, it seems that the only
consistent model left is the one that requires every "evaluation" of a
lambda expression to be consed. And this means no coalescing in the
absence of declarations or proof. Either procedures have associated
locations or they do not.
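Concretely, the observable difference is this (a sketch, not specific
to any implementation):

    (define (make-thunk)
      (lambda () 'tick))

    ;; If every evaluation of the lambda expression conses a new
    ;; procedure object, this is #f, always:
    (eq? (make-thunk) (make-thunk))

    ;; If the compiler may coalesce the two evaluations into a single
    ;; object, the same expression may yield #t.  Without a declaration
    ;; the user has no way to know which answer to expect.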
I think that this "solution" of allowing coalescing but not splitting
is the worst of both worlds since it breaks both models. It will
probably not confuse me (although it might), but I think it would
confuse a naive user.
PS: How come people object to the declaration allowing the
optimization? It seems innocuous to me and would give everybody what
they wanted. Given that the semantics are no longer "clean" since
splitting is not allowed, we may as well be consistent.
Note that some implementations could advertise that the
optimization was on by default, and the other behaviour could be
obtained by a "negative" declaration.
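For example, with a made-up declaration name, purely to show the
shape:

    ;; In an implementation that coalesces by default, the "negative"
    ;; declaration would restore one procedure object per evaluation:
    (define (make-tag)
      (declare (no-coalesce-procedures))   ; hypothetical declaration
      (lambda () 'tag))

    (eq? (make-tag) (make-tag))            ; => #f, guaranteed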