
Re: proper tail recursion proposal, take 2

From: Richard Kelsey <kelsey@research.nj.nec.com>
Subject: Re: proper tail recursion proposal, take 2
Date: Wed, 14 Jan 1998 22:43:15 -0500

> Don't worry, you aren't repeating yourself.  In a message
> sent last Friday you wrote
>  The proposed text provides a lower bound on the set of
>  programs that run out of space.
> which is certainly not `completely undefined'.

Perhaps not "completely", but it certainly does not provide any new
insight.  Remember that we want to characterize the programs that do
NOT run out of space.  To say "Here are a few programs that are
permitted to run out of space." does not help much in that pursuit.

>    The problem is that not all programs that would lead to an unbounded
>    number of active tail calls can also be executed in finite space.
> You are correct.  There are lots of ways to run out of space.
> You can use too many cons cells.  Integers can get too large.
> You can write too many characters to a file.
> You are also correct that the proper tail recursion requirement
> addresses only one of these.  Your point of view seems to be that
> unless we address *all* of them there is no point in addressing any.

The point is that it does not even address the one that it tries to
address.  Precisely because the others go unaddressed, an
implementation can always shift the blame for the space it consumes,
so in effect not a single one of them is pinned down.

Once again, consider the following program:

(letrec ((grow (lambda (x) (grow (cons 1 x)))))
  (grow '()))

I claim that a conforming implementation is allowed to implement the
call to GROW as a stack-space-consuming, ordinary function call.  The
program will run out of space, but it would have run out of space even
if I had implemented the call to GROW as a "proper tail call".
If you challenge my implementation as "non-conforming", then I will
simply tell you that the additional cost in space is due to the CONS.

Now, let's look at another program:

(letrec ((blink (lambda (x) (blink (- 1 x)))))
  (blink 0))

An implementation that conforms to the word (but certainly not the
spirit) of the language definition may, again, run out of space here,
because nothing is said about the space behavior of arithmetic.  If I
implement the call to BLINK as a space-consuming call, I can defend
the decision as "conforming" because I can charge the expense in
space to "-".

I can furthermore "prove" this view by asking you to remove the calls
to "-", yielding

(letrec ((stay (lambda (x) (stay x))))
  (stay 0))

which my compiler will recognize and implement as a tight loop.

> Your earlier suggestion that we adopt some definition of
> safe-for-space doesn't help, by your measure, because they also
> leave gaps.  Do they say how big a file you can write?  How many
> non-tail-recursive active calls are allowed?

One does not have to be precise to the byte.  There is a neat thing
called "asymptotic complexity".  It is well within our capabilities to
put an upper bound on it for every Scheme program.  It has been done
for other languages, and according to Will Clinger it has been done
for Scheme.

But if you really resist going all the way -- here is how it can be
done.  It works by abstracting over our idea of space complexity.
What we want to say is the following:

"The asymptotic space complexity of any Scheme program must be no
greater than the asymptotic complexity of the same Scheme program
where expenses in space are not counted if they are due to function
calls in tail position."

This definition would satisfy me -- still, it does not say much, but
one can think of it as being parameterized by the choice of space