Re: A (Low-Level) Opaque Object Proposal for R5RS
I really hate to step into the middle of this particular
crossfire. Unlike Kent Dybvig, I prefer a procedural interface
for records to a syntactic interface.
I don't think a procedural interface for records is much harder
to optimize than a syntactic interface, so in that respect I
would like to be on Michael Blair's side. In fact, however,
I think Blair's proposed technique for implementing opaque
records would be more difficult to optimize than a record-only
interface would be. In particular I'm a little concerned that
Blair is overestimating the powers of certain general-purpose
optimizations and underestimating the degree to which his
proposed optimizations require a more specialized analysis.
On the other hand, it has been a confusing thread and I may well
misunderstand what is going on.
> using USE/DEF/ESCAPE analysis (or whatever you want to call it)
I would call it a specialized escape analysis. This begins to
matter when Blair says
> This is standard USE/DEF analysis that even C and Fortran compilers do.
> Any compiler that claims to be an `optimizing' compiler surely must be
> doing USE/DEF/ESCAPE analysis.
Blair's example shows that he is talking about an interprocedural
analysis for heap-allocated variables.
In the C and Fortran world, USE/DEF analysis usually refers to
the problem of finding all definitions (that is, assignments)
that reach a given use. I don't think interprocedural USE/DEF
analysis is terribly common yet in C or Fortran compilers.
Interprocedural USE/DEF analysis for heap-allocated "variables"
in C is certainly rare.
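For concreteness, here is a toy sketch (mine, not from the thread) of the intraprocedural reaching-definitions analysis that "USE/DEF" usually names in the C and Fortran world. The IR, block names, and definition labels are all invented for illustration; the point is that even this standard version is an iterative fixed-point computation, and the interprocedural, heap-aware version Blair needs is substantially harder.

```python
# Reaching definitions over a toy CFG.
# blocks: {name: [(var, def_id), ...]}  -- straight-line assignments
# edges:  {name: [successor, ...]}      -- control flow

def reaching_definitions(blocks, edges):
    # Collect every definition site of each variable.
    all_defs = {}
    for stmts in blocks.values():
        for var, d in stmts:
            all_defs.setdefault(var, set()).add(d)

    # GEN: the last definition of each var in a block survives it.
    # KILL: every other definition of a var the block defines.
    gen, kill = {}, {}
    for b, stmts in blocks.items():
        last = {}
        for var, d in stmts:
            last[var] = d                 # later defs overwrite earlier ones
        gen[b] = set(last.values())
        kill[b] = set()
        for var, d in last.items():
            kill[b] |= all_defs[var] - {d}

    preds = {b: [] for b in blocks}
    for b, succs in edges.items():
        for s in succs:
            preds[s].append(b)

    # Iterate to a fixed point:
    #   IN[b]  = union of OUT over predecessors
    #   OUT[b] = GEN[b] | (IN[b] - KILL[b])
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            new_in = set().union(*(OUT[p] for p in preds[b])) if preds[b] else set()
            new_out = gen[b] | (new_in - kill[b])
            if new_in != IN[b] or new_out != OUT[b]:
                IN[b], OUT[b] = new_in, new_out
                changed = True
    return IN, OUT
```

On a diamond-shaped CFG where the entry defines x (d1) and only the left branch redefines it (d2), both d1 and d2 reach the join, which is exactly the "which assignments reach this use" question.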
I claim to have written an optimizing Scheme compiler. It
performs almost no USE/DEF analysis, though it probably performs
more of this than many other Scheme compilers. Interprocedural
USE/DEF analysis for heap-allocated variables is not trivial,
and it is far from clear how much of it is worthwhile.
On the other hand, Blair's example doesn't really require USE/DEF
analysis because the <hidden-passkey> is never assigned or side-
effected. Although his example can be handled by escape analysis,
I would point out that no existing Scheme compiler performs the
specific escape analysis that would be required.
It probably wouldn't be difficult to add that specific escape
analysis, but I can't blame Dybvig or any other compiler writer
for thinking that their time might be better spent on different
optimizations. If there's an alternative way to do it that
doesn't require a special-purpose escape analysis, then we
shouldn't reject that alternative just because Blair's way
could be implemented efficiently.
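To suggest what that special-purpose analysis would look like, here is a toy, single-procedure escape check in Python. The IR and operation names are my own invention, not anything from an actual Scheme compiler: an allocation "escapes" if it is returned, stored into the heap, or passed to an unknown procedure, which are exactly the cases that would let a hidden passkey leak out of the compiler's view.

```python
# Toy escape check: does the value bound to alloc_var escape the procedure?
# ops is a list of tuples in a made-up IR:
#   ("copy", dst, src)         -- dst becomes an alias of src
#   ("store", obj, field, val) -- heap store
#   ("call", fn, args)         -- call to an unknown procedure
#   ("return", val)            -- return to caller

def escapes(ops, alloc_var):
    aliases = {alloc_var}                  # local names that may denote the value
    for op in ops:
        kind = op[0]
        if kind == "copy":
            _, dst, src = op
            if src in aliases:
                aliases.add(dst)
        elif kind == "store":
            _, obj, field, val = op
            if val in aliases:
                return True                # leaked into the heap
        elif kind == "call":
            _, fn, args = op
            if any(a in aliases for a in args):
                return True                # leaked to unknown code
        elif kind == "return":
            _, val = op
            if val in aliases:
                return True                # leaked to the caller
    return False
```

Even this crude version must be run interprocedurally, with a model of the heap, before it justifies removing passkey checks; that is the "specific escape analysis" no existing Scheme compiler performs.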
Blair also writes:
> Moreover, each textual occurrence of the <hidden-underlying-opaque-type> in
> the code can likewise be compile-time proven to be EQ?. Therefore, all
> VEIL/UNVEIL calls in the code above can be flattened out to not bother
> checking the passkey at run time, and the <record-id>? predicate can
> likewise be partial evaluated into a flattened test that also does not
> bother to check the passkey at run time (since it is compile-time provable
> that <hidden-passkey> _is_ the passkey for this opaque type).
> This requires only Orbit-style closure analysis. Such closure analysis has
> been a standard technique for at least a decade. It (among other things)
> is the bread and butter of LISP optimization. Any Scheme compiler that
> claims to be an `optimizing' compiler surely must be doing closure analysis.
Yes, closure analysis is a standard optimization, but calling
this a closure analysis seems bizarre to me.
Lots of compilers perform closure analysis. No existing
Scheme compiler performs the optimization Blair is talking
about. They can't, because the correctness of Blair's optimization
depends upon the inability to take apart and to forge objects
created by OPAQUE-VEIL, and this inability cannot be known by
closure analysis alone.
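To make the stakes concrete, here is a toy model of passkey-based veiling in Python. This is my own illustration, not Blair's actual proposal: VEIL pairs a value with a hidden key, and UNVEIL checks the key at run time. The check marked below is the one the optimization would elide, and eliding it is sound only if the compiler can prove the key never escapes and that veiled objects cannot be taken apart or forged, which is a property of the whole program, not of any one closure.

```python
# Toy passkey-based opacity (illustrative only).

class _Veiled:
    __slots__ = ("_key", "_value")
    def __init__(self, key, value):
        self._key, self._value = key, value

def make_opaque_type():
    key = object()   # the <hidden-passkey>: a fresh, unforgeable token
    def veil(value):
        return _Veiled(key, value)
    def unveil(obj):
        # The run-time check the optimization wants to flatten away:
        if not isinstance(obj, _Veiled) or obj._key is not key:
            raise TypeError("wrong passkey")
        return obj._value
    return veil, unveil
```

Each call to make_opaque_type yields a distinct key, so one type's unveil rejects another type's veiled objects; proving at compile time that only the matching veil/unveil pair ever touches the key is what would license removing the check.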
> Of course, I won't be contrite...
Hey, we could deal with a little contrition.