
tail call proposal [was: Re: tail recursion proposal]



   Date: Thu, 8 Jan 1998 22:06:30 -0500
   From: Stephen Adams <adams@martigny.ai.mit.edu>

   ... the MIT Scheme interpreter has a `history' mechanism which
   records tail calls.  If you use the MIT Scheme debugger, the tail
   calls show up as "reductions", and the non-tail calls as "subproblems".

This is the "heuristic" technology that "usually" works.  [I might also in
another forum debate its naturalness, but let's not do that here.
Hopefully, my concern over its naturalness is an existence proof that
controversy exists.]
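
To make the reduction/subproblem distinction concrete, here is a minimal
Scheme sketch (the function names are hypothetical, not from MIT Scheme):

```scheme
;; In f1, the recursive call is in tail position: its result IS the
;; result of f1.  The MIT Scheme debugger would record such a call as a
;; "reduction", replacing the current history entry rather than nesting.
(define (f1 n)
  (if (= n 0)
      'done
      (f1 (- n 1))))          ; tail call: a "reduction"

;; In f2, the recursive call is NOT in tail position: its result still
;; feeds the pending (+ 1 ...), so the caller's context must be kept.
;; The debugger would record it as a "subproblem".
(define (f2 n)
  (if (= n 0)
      0
      (+ 1 (f2 (- n 1)))))    ; non-tail call: a "subproblem"
```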

   The MIT Scheme implementation uses a constant (configurable)
   amount of storage to maintain the history.  I could imagine
   a more sophisticated implementation that keeps as much
   information as possible and only releases storage when it is
   required to make the illusion of running in finite space fit
   within the actual space available.

Heh.  I actually thought about this.  But you know, the tail elimination
requirement makes it not conforming if it uses a non-constant amount of
space.  In effect, this is expressly prohibited.  :-) [And anyway, adding
more storage does nothing to the "naturalness" problem.]

However, assuming that the user might be willing to waive conformance in
desperate situations, it means you are up against the original problem,
which is that you have encouraged programmers to program in a way that will
accumulate "debug info" rather than "stack" at a much higher rate than they
would if they had a "looping" primitive (which allowed a
programmer-controlled sense of when this is necessary).  Fundamentally, the
problem is "caused" by encouraging a certain style of programming and cannot
be eradicated without removing that cause.  I'm not saying it's a bad
trade-off, and I'm not arguing Scheme change.  I understand why Scheme makes
the choice, and I agree it's central to the community.  I'm just saying it
is a choice with definite consequences that cannot be hidden, and I'm asking
that those consequences not be treated as if they matter to no one.
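
For concreteness, here is a sketch of the two styles at issue, in
standard Scheme (both run in constant control space under proper tail
call handling; the names are hypothetical):

```scheme
;; Style 1: an explicit looping primitive (do).  The programmer has
;; said "this is a loop"; no per-iteration debug history need accrue.
(define (count-down-1 n)
  (do ((i n (- i 1)))
      ((= i 0) 'done)))

;; Style 2: iteration expressed as a tail call.  Space-safe by the
;; tail-call requirement, but each call is a candidate for history
;; recording -- the "debug info" accumulation discussed above.
(define (count-down-2 n)
  (if (= n 0)
      'done
      (count-down-2 (- n 1))))
```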

   I think it is poor form to use the term "tail recursion" as
   in general it is undecidable if a tail call is recursive.
   The fact that this term keeps cropping up makes me favour
   the "pessimization" explanation.

I understand what you're getting at here, and I agree with the sentiment if
not the motivation.  (The issue of undecidability seems irrelevant to me
since we're not asking the implementation to recognize something
undecidable.)  However, a clear distinction between "tail recursion" and
"tail call" does seem worth making.  I'm personally just used to blurring
them--sometimes saying "tail recursion" where I mean "tail calling"--because
I was trained long ago, before I understood the difference.  As long as one
understands the difference, little practical harm seems to come from abuse
of the terminology--but I agree that bad terminology is a barrier to entry
into the community, and I'm happy to see the terminology used consistently
and correctly in the literature.

Although I happen to like the "pessimization" explanation myself, I don't
advocate it just because it avoids this confusion; I think the other wording
could be repaired to avoid the confusion by careful replacement of some uses
of "tail recursion" with "tail call", etc. and would have no problem about
suggestions of that kind from those who prefer the other explanation.