Re: exception systems
A few quick notes. Sorry I don't have more time and this will look a bit
hastily written and ragged, but I'm on a tight release schedule and this is
all the time I can spare today...
Date: Tue, 16 Apr 1996 14:58:52 +0000
From: William D Clinger <email@example.com>
To evaluate any proposed exception system, we need some idea
of what we want to do with it. Here are six distinct reasons
we might want an exception system.
1. So an application can use exceptions as a control structure
without having to load an SLIB module or use
I'm not sure what to make of this because it's presented in terms of
the operator being used and not the functionality being achieved.
Does this mean it's criterial that you can't do any non-local
transfer of control? Is it EXPLICIT use of continuations that's bad
but if they're created (and passed to you) implicitly that's ok?
If no uses are allowed, is it only "full catch" you're trying to avoid?
2. So an application can give up when it detects an untenable
I claim this is the definition of exception handling, not a consequence
or evaluation criterion.
Whenever a program reaches a state in which proceeding cannot occur
without intervention from an outside module, the program must stop. It
necessarily "goes meta" at that point, although implementations of that
action vary. Whether it exits to a Unix shell dumping core, pops
up a "You lose. [OK]" menu on the Mac, or does something more sophisticated
like transfer control to an automatic or interactive debugger, it is
"giving up" and selecting among possible options for how to proceed.
There is always at least one option, again by definition.
The real issues are qualitative issues of "to whom do I yield
control", "is my state re-entrant", "how might I specify ways to
proceed within my re-entrant state", "what am I permitted to ask for
by way of verification before proceeding", "am I allowed to ask
interactive questions" or "am I allowed to require data passed back", ...
3. So an application doesn't have to give up when a predefined
procedure detects and reports an error.
As mentioned under point 2, once you provide for the opportunity of
annotating your program with "restart points" and protocols for
accessing them, the notion of "giving up" is meaningless. Whether you
say you "gave up" but someone coaxes you back to life, or whether you
say that you "didn't give up because you still had the ability to
broadcast an S.O.S. which might successfully yield help in restarting"
is purely a subjective matter.
There is an issue of "containment", but the issue is more general than
this suggests. For example, there is the situation in which an
application only doesn't have to "give up" because an outer context
explains to the application which restart point within the application
to use. I might say better "So that an application can advise a program
it controls about how to proceed without having to appeal to its caller."
But the choice is not between "giving up" and "not giving up"; it is
between "asking for help from beyond" and "not asking for help from beyond".
Giving up is determined only by the process which discovers that there
is no outer help, and--structured correctly--a program can STILL
detect this situation and keep it from landing you in the debugger
unless you want it. (I call this "default handling"--providing advice
that doesn't override outer advice if some is available, but that
staves off interactive debugging if there is no outer advice.)
4. So an application can give meaning to some situation that
the language standards describe as an error. For example, an
application might want (CAR '()) to evaluate to #f, or (+ LOG EXP)
to be (LAMBDA (X) (+ (LOG X) (EXP X))).
Since there are only two errors that implementations are required
to detect, this usage would almost always be
Well, let's be careful with words. You can give meaning to a
situation that the language standard describes as a DETECTED error.
In CL terminology, you can handle situations in which the language
says "signals an error" but you cannot simply assume that the word
error, in its casual English usage, will be mirrored by detection code
in the running image without enormous cost. So your condition system
should help you in two ways:
(1) by providing you with terminology that distinguishes a promise to
detect something ("An error shall be signaled if the argument to FOO
is not an integer.") from language that makes no such promise ("It is
an error to destructively modify any object once given as a key to a
hash table for storage purposes."), and
(2) by providing you with mechanisms for modularly customizing the
behavior of the system in situations where an error is detected.
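To illustrate the distinction, here is a Python sketch (checked_car,
CarOfEmpty, and car_or_false are invented names): only the promised,
detected case gives a handler something to hook.

```python
# Sketch: the "shall be signaled" case is detected and can be given a
# meaning by a handler; the "is an error" case carries no detection
# promise, so there is nothing to handle. Names are invented.

class CarOfEmpty(Exception):
    """Detected case: 'an error shall be signaled' for (CAR '())."""

def checked_car(lst):
    if not lst:
        raise CarOfEmpty()   # the promised detection
    return lst[0]

def car_or_false(lst):
    """Gives (CAR '()) the meaning #f, as in purpose 4 above."""
    try:
        return checked_car(lst)
    except CarOfEmpty:
        return False
```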
By the way, in this regard, it's worth making the distinction that CL did
between "high safety" and "low safety" compilations, so that some errors can
be reliably detected only in contexts where it has been either
programmatically requested (as in a WITH-HIGH-SAFETY form, or a declaration
like CL's SAFETY declaration) or requested globally of a compiler (and not
overridden by a low-safety declaration from within code). In this way,
you can have functions like + reliably detect errors for debugging or
in certain safety-critical passages, without saying that the result of
"fast" compilation is simply to "throw away the semantics". Common Lisp
adopts the terminology "should signal" to mean "must signal if you're in
a high safety context, and might signal in a low safety context". I
can't stress enough how valuable this is to people who want to live in a
universe that offers all three of (a) speed, (b) debuggability, and
(c) formal semantics.
5. So an implementation can inline a common case, but take an
exception to handle less common cases. For example, (+ X 1)
might generate a MIXED-MODE-ARITHMETIC exception if the value
of X is not a small exact integer. This usage would always be
I think it should not be a goal of any exception system to communicate
between two modules that already know about each other. An exception
system is,
I think, an introduction service. It provides handshake protocols for
two parts of a system (a lost soul looking for advice and a wise sage
looking to give advice) to meet, agree on a plan of action, and continue.
That is not the case in what you're talking about, and while you could
use an exception system to implement it internally to the
implementation, I seriously doubt that it would be worthwhile for the
user to get involved except where the implementation was handed a data
type it never heard of (i.e., something + was not supposed to handle),
in which case the error should be a DOMAIN-ERROR with EXPECTED-TYPE
being some representation of the number type or a NUMBER? predicate.
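As a sketch of what such a DOMAIN-ERROR might carry, in Python with
invented names (DomainError, plus): a condition object bundling the
offending datum with an expected-type description.

```python
# Sketch: a domain-error condition carrying the offending datum and a
# description of the expected type, so a handler has enough detail to
# decide whether it knows what to do. Names are invented.

class DomainError(Exception):
    def __init__(self, datum, expected_type):
        super().__init__(f"{datum!r} is not of expected type {expected_type}")
        self.datum = datum
        self.expected_type = expected_type   # e.g. "number", or a predicate name

def plus(x, y):
    """A + that signals a domain error on data it was never meant to handle."""
    for arg in (x, y):
        if not isinstance(arg, (int, float, complex)):
            raise DomainError(arg, "number")
    return x + y
```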
6. So an implementation can implement asynchronous interrupts.
For example, an exception might occur when a key is pressed
or a timer reaches 0. This usage would always be
There are two parts to this:
(1) Any part of any program should expect condition signaling at any
time. As such, if an asynchronous interrupt occurs, it simply transfers
control to a special continuation that presumably gets the current
continuation as an argument and might or might not return to it. Having
said this, it's plain that once this transfer has occurred, the asynchronous
program has been synchronously injected into the other program and there is
nothing weird or magic going on, so there is no special way AT ALL that
the exception system should know about the interrupt system.
(2) There is a separate concept of an INTERRUPT system, which I think is
NOT about exceptions. You might or might not be able to do keyboard or
device interrupts, but once those interrupts run, they are just running
synchronously. I think it is a mistake to conflate the interrupt system
with the exception system. Pressing an ABORT key, for example, is properly
modeled as follows:
 - Process is interrupted. A primitive interrupt handler takes control.
 - The primitive interrupt handler SYNCHRONOUSLY signals a
   KEYBOARD-EXCEPTION with data of the key.
 - Some handler might handle the key by transferring control to an ABORT
   restart point within the program. If so, the interrupt is handled,
   and control never returns to the program that was running.
 - If no handler is found, the primitive interrupt handler takes some
   default action, like just returning to the program continuation or
   forcing entry to the debugger because of an unhandled keyboard
   exception.
It appears to me that the exception system that was proposed last
September by Friedman, Haynes, and Dybvig is barely adequate for
purposes 1 and 2. I say "barely" because each application would
still have to roll its own method for encoding exceptions (bad
for purpose 1), and there is no way to guard against an accidental
clash of encodings (bad for purpose 2).
Richard Kelsey's proposal is barely adequate for purposes 1, 2,
and 3. I say "barely" because, although it can recognize when
a predefined procedure signals an error, it has no way to know
which error is being signalled, let alone what might be done about it.
It seems to me that we're more likely to end up with a useful
exception system if we focus on purposes 4, 5, and 6. I think
purposes 1, 2, and 3 will be easy to add to any system that can
deal with purposes 4, 5, and 6.
Given my confusions about what you've written in the above descriptions,
I don't find this breakdown particularly helpful.
Moreover, I think there are numerous other important qualities of a
condition system that one must address which you didn't enumerate, which
makes this checklist seem incomplete as a scoring system. E.g., how well
does the system let you create and locate restarts; does the condition
system give handlers enough detail to decide whether they want to handle
something; can a handler decline to handle an error after inspecting it
and realizing it won't know what to do; is it possible to resume at the
point of call, or only to return to outer points; are facilities
provided for graceful interactive intervention; etc.
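For instance, declining might be sketched like this in Python (names
invented): the signaling loop keeps offering the condition outward until
some handler accepts it, and a handler that looks but doesn't understand
simply passes.

```python
# Sketch: a handler may inspect a condition and decline it, in which
# case the next handler out is consulted; if everyone declines, the
# condition propagates. All names are invented for illustration.

class FileMissing(Exception):
    def __init__(self, path):
        super().__init__(path)
        self.path = path

DECLINE = object()   # sentinel: "I looked, but I don't know what to do"

def offer_condition(condition, handlers):
    for handler in reversed(handlers):   # innermost handler first
        result = handler(condition)
        if result is not DECLINE:
            return result                # handled: proceed with its answer
    raise condition                      # nobody wanted it

def config_only(condition):
    """Handles only missing *.cfg files; declines everything else."""
    if isinstance(condition, FileMissing) and condition.path.endswith(".cfg"):
        return {}                        # proceed with an empty config
    return DECLINE

def last_resort(condition):
    return None                          # accepts anything
```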
Another issue you didn't mention but that necessarily becomes involved
in production code and is frequently cited as a weakness in CL's
condition system is floating point traps. I think this is really just
orthogonal, like interrupts, but it still always comes up and it's
worth thinking about. Some code wants to run with traps enabled, some
doesn't. The mode may affect whether you can guarantee detection of
an error (or guarantee non-detection). Dynamic establishment and
disestablishment of handlers around individual calls to * and + may be
too expensive in practice, but turning on and off trapping may be
I think this is usually similar to a domain error
Btw, I think this is maybe not similar but my r4rs is at home so I
can't check. Common Lisp makes the distinction between
SERIOUS-CONDITION and ERROR by saying that some things are serious
enough to stop program execution without being semantic errors. The
canonical example is stack overflow (or "storage exhausted" if you
prefer not to think in terms of stacks), which is plainly not a
semantic error but which can stop a program dead in the water just as
fast. Similarly, if there were a limit on the number of arguments a
function could take or a limit on the size of a float, that's not
something the language semantics itself addresses.
One place the difference shows up is in deciding whether things like
IGNORE-ERRORS should muffle implementation restrictions. There is a
school of thought that says programs wanting to handle those should go
to extra work to do so: perhaps IGNORE-ERRORS was written on the belief
that only errors of a certain type could occur, based on some proof
about the semantics. But since the semantics is not necessarily violated
by an implementation restriction, all bets might be off as to whether
the program will behave correctly by continuing under program control,
unless the program identifies itself as having considered the meta issue
of implementation restrictions. It's a messy issue, I admit.