Minutes prepared by Jim Miller from information supplied by a number of attendees. I'd especially like to thank Gordon Irlam, Rohit Khare and Henryk Frystyk for their contributions.
OREXX (presented by Rick McGuire, IBM)
REXX has been in use since 1982 and IBM has lots of experience porting it to over 12 platforms. OREXX, the object-oriented version, is more recent. As Rick put it "portability is (a) hard and (b) over-rated." REXX has a solid standard, but almost no-one is content to restrict themselves to just that. Everyone wanted portability but at the same time insisted that they needed access to platform-specific features. In fact, they "picked features over portability every time."
Rick also made the point that it is important to organize so that you "get the language right and then let other people contribute extensions." This was echoed positively by a number of other participants. Rick pointed out that it is a delicate balancing act, since portability, efficiency, and cleanliness are not always weighted the same way by the core language developers and the users of the language.
Selected questions and answers:
Q: What's the most successful use of REXX?
A: Operating system shell scripts; but the language itself is designed for generic scripting (e.g. an editor scripting language). It makes a nice glue between applications, and was designed as an end-user (non-programmer) language.
Q: How do people get access to REXX?
A: It's bundled by IBM on most systems. It is also available (free) for the Commodore Amiga.
Q: What do you mean by platform-specific features?
A: The Workplace Shell is unique to OS/2 but important. File system access conventions are different across platforms and users need to see this. The REXX developers have been erring on the side of "make the O/S interface good" over "make it portable."
Q: What kind of modularity features are there in OREXX?
A: Sort of a funny structure which is hard to describe: higher-level abstractions around low-level functions. It uses include files but provides encapsulation.
Q: Is OREXX available?
A: Not yet, but a Linux version (with source) should be available over the summer. They plan to ship on OS/2, TSO, and VM and are hoping for AIX as well.
Java (presented by James Gosling, Sun)
The Java project started about five years ago, initially to build a "web-like system with secure distributed code in a heterogeneous network." C++ gave them too many problems, so they decided to roll their own. Portability was an important goal from the first; Java was intended as a self-contained environment and not a scripting language. They aimed for sensible APIs that can run everywhere without being too "beefy;" the API is a thin layer between applications and the system (e.g. Motif). AWT was described as "a sensible intersection of all existing window systems," but James also commented that the next version of AWT is better. They also put a lot of effort into the file system API, and got a leg up on that because URLs have standardized the syntax to a certain extent.
The Java security model, which was described as "maximally paranoid, avoiding the use of cryptography" seems to be working well. They ruled out RSA technology because of "hideous patent problems." In general, they are worried about export liability with cryptographic security systems. They also stated that PKP was not supporting new implementations of the RSA algorithms. In the Java system, all public functions must be safe to use; there is no notion of barring an untrusted client from running some piece of public code. There is no way of labelling code fragments with "levels of trust." Each API has its own security mechanism.
Q: Is there any notion of trust classes?
A: No. But they've added the ability for a routine to check [something] at create-time, allowing each API to track "trusted classes". They would like to have certified trust classes for applets ("this code will only use files, ...")
Safe-TCL (presented by John Ousterhout, Sun)
John presented from six slides, reproduced roughly below:
Tcl has been out for about 5 years and has about 100,000 users. "Tcl isn't the solution to every problem; but we will interoperate with solutions."
Obliq (presented by Luca Cardelli, Digital)
The basic idea is to copy immutable data and send network pointers to mutable data when an object is transmitted from one location to another. Procedures evaluate to closures, so there is full lexical scoping, and closures can be transmitted. This allows the creation of compute servers, remote execution, remote agents, etc.
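Obliq expresses this rule in its own Modula-3-based runtime; as a rough Python sketch of the idea only (the `NetworkRef` class and `transmit` function are illustrative names, not part of Obliq):

```python
import copy

class NetworkRef:
    """Stand-in for an Obliq network pointer: a handle that refers
    back to the original mutable object at the sending site."""
    def __init__(self, obj):
        self.obj = obj

def transmit(value):
    """Obliq-style transmission rule: immutable data is copied,
    mutable data travels as a network pointer to the original."""
    if isinstance(value, (int, float, str, bytes, tuple, frozenset)):
        return copy.deepcopy(value)   # immutable: ship a copy
    return NetworkRef(value)          # mutable: ship a pointer

# A mutable object stays shared across the "network"...
state = {"count": 0}
remote = transmit(state)
remote.obj["count"] += 1
assert state["count"] == 1   # mutation is visible at the origin

# ...while immutable data is simply copied.
assert transmit("hello") == "hello"
```

The point of the split is that mutation stays observable at the originating site, while pure values can travel freely.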
There are two major lessons from Obliq:
What does a "good Web library" (or network operating system) have to provide?
What are the hard problems?
Q: Obliq seems on-line while the Web is off-line.
A: You can PCL a closure, and you can use closures with no free variables. Closures with no free variables can run off-line, since they need no remote information.
Q: What applications are there in Obliq?
A: Some small exercises by summer students (distributed games, etc.). The main application is Visual Obliq.
Q: What program representation do you use to transmit over the net?
A: We use parse trees, not text. We transmit only the (values of the) free variables, not the entire stack.
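This answer can be made concrete with a Python analogue (Python's closure machinery stands in for Obliq's here; it is an illustration of the idea, not Obliq's actual representation): a transmitted closure pairs the code with just the values of its free variables, never the whole stack.

```python
def make_adder(n):
    def add(x):
        return x + n       # n is free in add
    return add

add5 = make_adder(5)

# The transmitted representation would pair the code (parse tree)
# with only the values of its free variables -- here, n = 5.
free_vars = dict(zip(add5.__code__.co_freevars,
                     (c.cell_contents for c in add5.__closure__)))
assert free_vars == {"n": 5}
assert add5(3) == 8
```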
Q: What is the availability?
A: It's part of the Modula-3 distribution, and it's fully free.
Q: Is this kind of like General Magic's system?
A: I'm not sure, but I think they transmit entire stacks.
Scheme 48 (presented by Olin Shivers, MIT)
Olin is finishing an implementation of a HTTP 1.0 Web Server and tools (CGI scripting, etc.) in Scheme. The server is highly extensible (e.g. you can hang procedures off arbitrary chunks of URL name space -- path handlers). It has safe code uploading. It is clearly written, for pedagogical purposes. You can post Scheme to the server to be executed (safely).
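The path-handler idea can be sketched in a few lines of Python (the `Server`, `install`, and `dispatch` names are illustrative, not the actual Scheme 48 server API): procedures are hung off chunks of URL name space, and the longest matching prefix wins.

```python
class Server:
    """Toy dispatcher in the spirit of path handlers: each handler
    owns a chunk of the URL name space."""
    def __init__(self):
        self.handlers = {}

    def install(self, prefix, handler):
        self.handlers[prefix] = handler

    def dispatch(self, path):
        # Longest matching prefix wins.
        best = max((p for p in self.handlers if path.startswith(p)),
                   key=len, default=None)
        if best is None:
            return "404 not found"
        return self.handlers[best](path[len(best):])

server = Server()
server.install("/docs/", lambda rest: "doc page: " + rest)
server.install("/cgi/", lambda rest: "ran script " + rest)

assert server.dispatch("/docs/intro.html") == "doc page: intro.html"
assert server.dispatch("/cgi/search") == "ran script search"
```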
Richard Kelsey (NEC labs) has an alternative system built around Scheme 48 that is more like Obliq: it translates into byte codes, PCLs closures, and transports them to other sites with network pointers to the original objects.
ScriptX (presented by Norman Gilmore, Kaleida Labs)
ScriptX is a dynamic language, modelled after Dylan and Common Lisp, intended for multimedia applications. The focus of Kaleida is on CD-ROM based delivery systems using a distributed object model. The issues that they have addressed, and would like the community to learn from, include:
Q: What are the applications?
A: Title development (multimedia products). ScriptX is particularly useful for porting multimedia applications between the Mac and Windows. They also build multimedia from small objects, downloading the ones you need over a net and composing them together locally.
Python (presented by Guido van Rossum, CNRI)
Python is halfway between scripting and object-oriented programming; it fits in John Ousterhout's spectrum near Tcl. They have not yet addressed security, but are thinking along the same lines as the Scheme-48 people, with control by adding naming domains. One problem they are working on now stems from Python's ability to "introspect" (examine its own runtime system). This is a powerful debugging and extension mechanism, but is clearly a security problem. CNRI is planning to use Python to build "knowbots" but that will require being able to PCL the intertwined Python and C stack. Python has a complete POSIX API and a large number of extensions, which makes language interoperability difficult; they do have a foreign function interface (perhaps similar to Scheme-48) which is a bit more complicated than Tcl's. Python has a home page.
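A small sketch of why introspection is a security problem (using Python's standard `inspect` module; the `secret_holder` and `snoop` functions are hypothetical names for illustration): code that can walk the runtime can read data its caller never handed it.

```python
import inspect

def secret_holder():
    password = "hunter2"   # a local the caller never passes out
    return snoop()

def snoop():
    # Walk up the call stack and read the caller's locals --
    # legitimate for a debugger, dangerous in untrusted code.
    caller = inspect.stack()[1]
    return caller.frame.f_locals.get("password")

assert secret_holder() == "hunter2"
```

The same facility that makes introspection a powerful debugging and extension mechanism is exactly what a security model would have to fence off.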
The remainder of the morning was spent brainstorming about important issues in the mobile code area. There was considerable discussion and a number of topics were suggested. From the following list, a smaller set was chosen for close discussion:
It is hard to summarize the ensuing hour of discussion, but the following key points of discussion and agreement emerged:
What interfaces to system services are needed? The REXX experience suggested that they be ranked (from most urgent attention required to least) as GUI, then network, then file system. There was general agreement on the list, although some discussion of priority. Everyone agreed that getting the GUI right is both urgent and still open. It was pointed out that the problem is even harder than most people think because existing popular systems don't support 3D graphics or real-time video and audio. Worse yet, the blind are being further and further disenfranchised. Also, for some important applications it is truly necessary to get all the way down to the lowest levels of implementation (high-performance video systems with animation in real-time, for example) and this interacts with goals for ease of use and high level abstraction.
Are the security models really different? What, exactly, are they? There appear to be three kinds of models being discussed, and the demarcation isn't always clear. In the "padded cell model" security is provided by controlling the name space to provide the same kind of distinction that an operating system provides between two (or a few) levels of privilege. In the "capabilities model" there are interfaces that provide access to privileged services and use of the interface is restricted to certain pieces of code; this is more flexible than the padded cell model (one piece of code can be privileged to use the file system but not the network, while another the reverse, and so on) but its properties are harder to understand, and it is therefore harder to control. In the "cryptographic model" security is provided by a combination of authentication and authorization on a very flexible basis (per-user, per-application, per-service are all possible and intermixable).
One language or many? To the surprise of no-one, there was universal agreement that this is not the time to standardize on a single programming language for the Web or for extensions to Web browsers. There was a murmur that if users of the Web were asked they might answer the question differently.
After lunch, the workshop split into four groups, with one person appointed to facilitate the discussion in each group. The groups had one hour for discussion and then reported back to the entire workshop. The following are notes from these reports back to the group as a whole. Details of the individual groups may be available by contacting the group facilitator and will be included in the forthcoming Technical Report from the workshop.
APIs and levels of abstraction (facilitated and reported by Paul Benati). There are no simple answers in this area. The tension between high-level APIs and low-level access still dominates the discussion. There is room for work however; a clear distinction needs to be made between facilities available from the infrastructure itself and that which becomes available from the use of downloaded mobile code. The sense of this subgroup was that the answers aren't clear yet, but will emerge over time. It is premature to decide on just one of Java, Tcl, etc. But there is a worry that unless the world converges on one solution, or a very small set of solutions, there will be tremendous amounts of wasted time, space, and bandwidth needed to deal with redundancies between the solutions.
API negotiation and discovery (facilitated and reported by Tim Berners-Lee). There seems to be a fundamental conflict between relying on the local operating system to provide services and running a virtual machine above the operating system to provide a more standard interface for downloaded code. Part of the question centers around the size of the object that must be downloaded, part around the effort needed to produce versions for specific platforms. The world may be reasonably divided into incremental add-ons to existing applications and complete application downloads. The group suggested that a UUID (as in OLE and DCE) may be sufficient to identify a particular object, but there still remain three important issues for which the group provided no firm answers: [editor's note: I'm not sure I captured this correctly, since all of the notes are incomplete. I've added a few notes that may or may not come from the meeting discussion in brackets.]
Security model (facilitated by Murray Mazer, reported by Phillip Hallam-Baker). "You've got to make the most security-paranoics happy." The model must be simple to understand. The subgroup's discussion centered around two models, the "padded cell" model (John Ousterhout was part of the group...) and the "capability" model. The argument was advanced that the padded cell model is easier to understand and will help avoid the problem of having non-computer folks build systems that don't work and then blame the mobile code community for the failure. The group also suggested that it be assumed that the security community (not the mobile code community) will supply an authentication mechanism that can be used for other models. The group acknowledged that there must be an extension mechanism that allows padded cells to communicate with one another through a carefully controlled channel ("padded cells, with little thin letterboxes between them").
Application example (facilitated and reported by Norman Gilmore). They chose to examine client-side forms validation as an example application. They see the solution as layered with HTML on top, supported by a browser, supported by security, supported by transport. They proposed some abstract interface classes that could be used to build the application, and an extension to HTML: <LINK=code-for-my-page (LANG=OREXX)> to specify code to be downloaded. They suggested that a standard set of events be defined for things like forms that would provide the hooks needed by the application code (on-entry and on-exit for forms and fields). They also suggested the need for standard interfaces to get information from the client platform, such as the user's name, phone number, company, etc. (the "business card API").
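The proposed on-entry/on-exit hooks can be sketched in Python (all names here, `Field`, `on_exit`, and the validators, are illustrative, not anything the group specified): the browser would fire these events as the user moves through a form, and downloaded application code would supply the validators.

```python
class Field:
    """Toy form field with the on-exit hook the group proposed."""
    def __init__(self, name, validator=None):
        self.name, self.validator = name, validator
        self.errors = []

    def on_exit(self, value):
        # Fired by the browser when the user leaves the field;
        # returns False if validation failed.
        if self.validator and not self.validator(value):
            self.errors.append(f"invalid value for {self.name}: {value!r}")
        return not self.errors

phone = Field("phone", validator=lambda v: v.replace("-", "").isdigit())
assert phone.on_exit("617-253-0000") is True
assert phone.on_exit("not a number") is False
```

The "business card API" would plug in at the same level, pre-filling such fields from standard client-side queries (user's name, phone number, company, etc.).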
The wrap up included a good deal of free-form discussion and some more directed components. The following attempts to summarize some of the major points that were discussed.
What can be standardized? There was a clear sense that it is too early to standardize on a single language for mobile code. There was some discussion of whether groups (i.e. standards bodies) or the market will do a better job at this anyway. There was a separate discussion about standardizing on a transportable representation (Java byte codes) and strong feedback from the REXX community (but backed by nods of assent and grunts of agreement from most others) that it was both too early and extremely unlikely to be workable. The REXX experience indicated that the largest part of the problem is not the language itself, but rather the design of the runtime library. While it is conceivable that a careful inspection of the runtime systems for OREXX, Java, Tcl, Obliq, Scheme 48, ... will show sufficient overlap that a single common runtime library might be possible, no one really thought this was likely to be true. Without such a shared library, which would presumably be efficiently implemented on each platform and made available as a standard part of the Web environment, applications would be simply too large to download individually, since downloading one would require loading the entire runtime as well -- you'd be better off building an interpreter and runtime system for a single language (or maybe a couple of languages) and sticking to that.
Jim Miller offered a strawman proposal that we attempt to standardize on a few things, and others added to the list. The final list, which was never fully discussed and didn't really receive wide support, was:
Are there any shared next steps? There was fairly strong sentiment in favor of defining a small set of sample applications that would allow the different language communities to share ideas and compare approaches. Three applications were suggested: client-side forms validation, client-side spreadsheets, and conditional HTML.
The 80% solution. John Ousterhout made a strong statement that the Web community (at least) should be striving for acceptable solutions that cover the most important 80% of the cases and not fall into the trap of trying to find perfect solutions for 100% of the problems. This was coupled, to an extent, to statements about how markets make choices and that the market is what drives evolution of the Web.
Technical Report. The workshop is on the hook to produce a technical report. After some discussion it was agreed that it will simply be an introduction (probably derived largely from these minutes) followed by reports of each of the four break-out groups. The reporters for those subgroups agreed to produce the contents for their sections. The report will be prepared by the workshop chair and sent to all participants for review and comments before being made available to the W3C membership and then (a month later) to the general public. [Editor's note: since I'm going to be on vacation until early August, the subgroup reporters should plan on submitting their drafts to me, Jmiller@w3.org, by the end of the first full week of August to avoid being harassed. If this occurs, a draft report should be available to the attendees by August 15.]