Date: Mon, 24 Mar 86 10:13:39 est
From: Kent Dybvig <dyb%indiana.csnet at CSNET-RELAY.ARPA>
I would like to see the Common Lispy definition that anything with no
other possible syntax (especially wrt numbers) is an identifier. I don't
necessarily like it, but I see no reason to do otherwise, and I might
be interested sometime in implementing Common Lisp in Scheme.
There were some objections to this at the Brandeis meeting: it's nice to
be able to catch typos in numbers; knowing from the first character that
what's coming is a symbol makes for simpler readers; and the "everything
except" definition can't be captured by a context-free grammar. But
regardless of the reasons, there wasn't agreement on allowing anything
but what's there, so as usual this made the language smaller than some
people would have liked, but it includes the things that everyone could
live with.
If you want to do Common Lisp in Scheme, this will be the least of your
problems. You'll need to be able to deal with many other extensions to
the read syntax, including package prefixes, escape sequences, circular
structure, and a zillion other things.
- I don't know of anyone who is a NAMED-LAMBDA partisan, so I intend to
flush it. (It's not essential, anyhow.) However, I know there are some
people out there who are partial to REC, so I'll take the conservative
position again and leave it in (even though I and other MIT folks don't
use it).
Thank you for leaving in REC. I could not live without it.
I forgot that Jim Miller wanted NAMED-LAMBDA, and Henry Wu also spoke up
later in support of it, so it stays. I should have insisted on the
things that I can't live without at the Brandeis meeting instead of
being a nice guy. As it is I can't stand to program in RRRS Scheme.
But that's beside the point.
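For reference, a sketch of the two forms under discussion as they
appeared in RRRS-era Scheme (the exact surface syntax of NAMED-LAMBDA
varied by implementation):

```scheme
;; REC binds a name to the value of an expression within that very
;; expression, so an anonymous recursive procedure can name itself:
((rec fact
   (lambda (n)
     (if (zero? n) 1 (* n (fact (- n 1))))))
 5)                                    ; => 120

;; REC is expressible as sugar for LETREC:
;;   (rec NAME EXPR)  ==  (letrec ((NAME EXPR)) NAME)

;; NAMED-LAMBDA (syntax varies by implementation) folds the name into
;; the parameter list, mainly as a debugging aid:
;;   (named-lambda (fact n)
;;     (if (zero? n) 1 (* n (fact (- n 1)))))
```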
I've forgotten what the non-controversial changes to DO are. Perhaps
you could refresh my memory. I hope it was related to the implied use of
set! in the description used in the manual.
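To spell out the issue with describing DO in terms of SET! (a sketch,
not from the original message): whether each iteration rebinds fresh
variables or assigns to one shared binding is observable when the body
creates closures over the loop variable.

```scheme
;; A simple DO loop summing 0 through 4:
(do ((i 0 (+ i 1))
     (sum 0 (+ sum i)))
    ((= i 5) sum))                     ; => 10

;; With fresh bindings per iteration, this is equivalent to a named
;; LET that rebinds i and sum on each call:
(let loop ((i 0) (sum 0))
  (if (= i 5)
      sum
      (loop (+ i 1) (+ sum i))))       ; => 10

;; A description via SET! would instead reuse one binding:
;;   (let ((i 0) (sum 0)) ... (set! i (+ i 1)) ...)
;; so a closure made inside the body would see i change underneath it.
```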
I would also like to bring up the case insensitivity issue once again.
Yes, I do prefer that A-Symbol and a-symbol be different. I like to use
case to set off certain things, like X for a set and x for an element,
as in (member x X). I see no value in having case-insensitive symbols,
and I foresee a lot of conversion trouble. I think most of us now have
terminals with lower-case letters. I would like the special-form
keywords and function names to be in lower case.
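As a sketch of that convention (the predicate name member? below is my
own, not part of the report; it only works if X and x are distinct
identifiers, i.e. under case sensitivity):

```scheme
;; X names the whole set, x names a candidate element:
(define (member? x X)
  (cond ((null? X) #f)
        ((equal? x (car X)) #t)
        (else (member? x (cdr X)))))

(member? 2 '(1 2 3))                   ; => #t
```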
Sorry about my previous reply. I just meant to say that I have never
heard a case-sensitivity argument on either side of the issue which
wasn't basically religious. I don't think the two sides will ever be
able to speak to each other dispassionately. I have been on both sides
of the question at various times myself. (I think I started to change
my mind when I tried to explain to my father, a computer novice, what a
fantastically liberating thing it was that Foo and fOO were different,
and he thought I had taken leave of my senses.)
The purely political arguments for case-insensitivity are: conservatism
(don't change the report more than necessary; we had a chance a year
ago to talk about this, why bring it up now?); compatibility with T,
MIT Scheme, PC Scheme, MacScheme, and many others; and compatibility
with Common Lisp and most operating systems (other than Unix and
Multics) and languages (other than C).
If you insist on going counter to the Brandeis decision, making your
implementation and book case-sensitive, then there will be some painful
decisions to make about what to say in the report. The report will have
to say that some implementations of Scheme are case-sensitive; as for
what to do with upper case, I can think of two solutions:
1. It's OK to write upper case, but only programs (and data files)
which don't care one way or the other will be portable: those that use
lower case for the things in the manual and depend on neither
(eq? 'foo 'Foo) nor (not (eq? 'foo 'Foo)).
2. Say that only programs written entirely in lower case will be
portable.
I would even be willing to say that SYMBOL->STRING will return lower
case, and STRING->SYMBOL will only be guaranteed to accept lower-case
letters. Implementations could be permitted to do what they like, even
signal an error, if there are upper-case letters. (Case-insensitivity
advocates should note that this makes no statement about how print names
are to be stored internally, although these particular procedures would
presumably be a little more efficient if the internal case of portable
identifiers was lower.)
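To make concrete what the two regimes disagree on (a sketch; exact
behavior, including which case an insensitive reader folds to, varies
by implementation):

```scheme
;; In a case-sensitive implementation:
(eq? 'foo 'Foo)                        ; => #f
(symbol->string 'Foo)                  ; => "Foo"

;; In a case-insensitive, lower-case-folding implementation:
(eq? 'foo 'Foo)                        ; => #t
(symbol->string 'Foo)                  ; => "foo"

;; Portable code under the proposal above sticks to lower case and
;; relies on neither result:
(string->symbol "foo")                 ; guaranteed: lower case only
```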
I'm not sure how much it matters which case WRITE and DISPLAY
generate, but there will be no agreement on this, so we can just make a
note that what these things do is implementation-dependent, but
case-sensitive implementations will print lower case so that READ will
work. I think this can only cause problems if you're FTP'ing from a
case-insensitive implementation which prints upper case to a
case-sensitive implementation, but it seems to me that this problem is
politically unsolvable if you will not agree to be case-insensitive.
There will be problems if you actually exploit case sensitivity in your
book, that is, if any program depends on the non-eq-ness of two
identifiers with the same name in differing cases. Then your book will
conflict with most implementations. If you use varying case but never
depend on (not (eq? 'foo 'Foo)) then everything should be fine.
PLEASE - if people want to discuss this question - keep in mind the
political situation; remember that if you like case-sensitivity or
case-insensitivity, there's no chance you'll make a convert of someone
in the opposite camp, so don't inflict pain by arguing this question.
Be nice. The real question is what concessions are we willing to make
in order to come to a consensus. I have stated above how far I'll go.
On one other issue, there are some of us here at IU who have serious
difficulty with #!true and #!false, and we will be sending out a new
proposal under separate cover in a day or so.
I agree completely; I have always thought that #!true and #!false were
incredibly ugly, and I argued against them at Brandeis. And I have
heard at least four different people say the same thing to me in the
past couple of days alone. T uses #T and #F (or #t and #f), inspired by
3-Lisp's $T and $F. Who doesn't like #T and #F besides people at MIT
(with whom I can speak in person)? Why?
- #t and #f
- From: Robert Halstead <rhh@MIT-VAX.ARPA>