A lot of this argument depends on how the relationship between XEmacs
and emacs lisp is viewed. I haven't spent a lot of time looking at
XEmacs code, but when I looked at GNU Emacs code, I always got the
impression that lisp was the implementation language and C was a
portable "assembly language". It looked to me as if XEmacs had more low
level C stuff, but an attempt was made to do a bit of object based
programming in order to expose it to the lisp level. Now if emacs lisp
is seen as an extension language, then it doesn't much matter what the
new extension language is. But if the language is actually going to be
the implementation language and the boundary between the lower level and
the lisp level will be blurry, then I will argue for common lisp every
time for the same reasons that I stated before. I will now attempt to
make feints in the direction of answering some of Stephen's questions.
1. The standard better _not_ talk about text objects. This is a general
purpose programming language. What one would like is a rich enough
substrate, which is standard, upon which to build the abstractions that
XEmacs needs. To me this means structures, objects, and generic
functions. (To me it also means a MOP, but I won't get into that.)
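Just to give a feel for the kind of substrate I mean, here is a
throwaway CLOS sketch. The names are invented for illustration; this is
not a proposal for the real XEmacs object layer.

  ;; Hypothetical names, illustration only.
  (defclass text-extent ()
    ((start  :initarg :start  :accessor extent-start)
     (end    :initarg :end    :accessor extent-end)
     (buffer :initarg :buffer :accessor extent-buffer)))

  ;; One protocol, many implementations: a generic function.
  (defgeneric redisplay-object (object)
    (:documentation "Redraw OBJECT on every frame that shows it."))

  (defmethod redisplay-object ((extent text-extent))
    ;; The real work would call down into the C layer.
    (values))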
2. I think that call/cc would be impossible to do in common lisp. I
think that tail recursion could be done in common lisp. I think that
CLOS and the condition system could be done in scheme, but close
attention would have to be paid to Tiny CLOS to make it efficient and
to make conditions objects. You also need to have structures participate in
the type system. There are no standard structure types in scheme, and
everyone who has them has a different syntax. This is mostly because
small/elegant and practical are mutually exclusive. :-) Same with object
systems.
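For what it's worth, here is the sort of thing common lisp gives you
for free (again a throwaway sketch with invented names): a structure is
a type, and CLOS can dispatch on it.

  ;; Illustration only, not proposed XEmacs code.
  (defstruct marker
    (position 0)
    (buffer nil))

  (typep (make-marker :position 10) 'marker)   ; => T

  ;; defstruct types can even be used as method specializers.
  (defmethod describe-object ((m marker) stream)
    (format stream "marker at ~D" (marker-position m)))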
Note that people always seem to make this "small is beautiful" argument
for scheme. When you implement something like XEmacs, there will have to
be a rich library of functions and macros available for people to
implement the XEmacs functionality. It's also going to have to be as well
documented as stuff is now. It seems to me to be contrary to scheme to
have, for example, doc strings in functions like they are in emacs lisp
and common lisp, and it seems a bit weird to implement them if you
really want scheme. Sorry, that just popped into my head. On with the
answers.
3. Contrary to what Michael says, I think that there will be less
emulation going on from emacs lisp to common lisp than from emacs lisp
to scheme. We can always ask Kent Pitman what he thinks. :-) Naively,
one could imagine having an emacs-lisp-user package, like the
standard-defined common-lisp-user package, except that in this package
all variables are special and let acts something like the fluid-let that is defined in
some schemes. In addition to an emacs-lisp mode, you could also have an
emacs-common-lisp mode. Both modes are package-aware, and when you send
regions or definitions to XEmacs, the code gets evaluated in the correct
semantic environment.
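Here is roughly what I have in mind, as an untested sketch; the package
and the example function are invented, and none of the real emulation
work is shown.

  ;; Hypothetical package, illustration only.
  (defpackage :emacs-lisp-user
    (:use :common-lisp))
  (in-package :emacs-lisp-user)

  ;; DEFVAR proclaims the variable special, so LET rebinds it
  ;; dynamically, which is the behavior elisp code expects.
  (defvar case-fold-search t)

  (defun strict-search (string)
    ;; Callees see the rebound value, just as in emacs lisp.
    (let ((case-fold-search nil))
      (search "XEmacs" string)))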
4. I would say that semantically, the way most people use emacs lisp
would change less going from emacs lisp to common lisp than to scheme.
Michael brings up the let issue; I bring up documentation strings,
which are part of defun. I like the documentation being in the
function, as part of the doc string. Javadoc annoys the heck out of me,
and that's probably what you will have to do using scheme unless you
propose to change define to be non-standard or to create a defun macro.
If you create a defun macro, why use scheme? Anyway, people who get
their kicks dusting off obscure corners of languages might complain,
but I would bet that if we made the *scratch* buffer be an inferior
emacs-lisp mode and had the .emacs eval in the emacs-lisp-user
environment, most people probably won't notice. God! That's naive! :-)
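(Just to show what I mean about the doc strings; the function is made
up:)

  (defun frob-region (start end)
    "Frob the text between START and END; illustration only."
    (list start end))

Both emacs lisp and common lisp accept that definition as-is, and the
string is available to C-h f or to (documentation 'frob-region
'function); in scheme you would have to invent the convention yourself.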
While I think that call/cc is way cool, I have to say that I believe
that less than 1% of the people who hack serious lisp will notice its
presence. Quite a lot more will notice your object system. It would be
way cool to be able to point the serious users to Sonya Keene's book,
which provides better documentation on OO programming in CLOS than I
ever could, and to Paul Graham's books. I would think that this
kind of documentation would make it easier for both the serious user and
the end user. On the scheme side, even after reading all of the
wonderful scheme books out there, of which I own several, you still
don't know squat about using the more powerful features in scheme48,
drscheme, or Rscheme. That's not a good thing.
5. I know that both clisp and scheme48 have foreign function APIs that
should be up to the task.
Like I said before, it would be a bad thing for a general purpose
programming language to define things like editing buffers or
multilingual text processing, frames or extents. They are sure to get
them wrong. :-) We want the proper abstraction power tools to allow us
to define what needs to be written at a lower level, what needs to be
at a higher level, and how to make the boundary nice and neat. In scheme I
think that you would have to define your char types to be unicode by
default. I am probably wrong.
Having a standard just reduces the number of non-standard fundamental
language extensions you need, like a low-level macro system,
records/structures, objects, and conditions. You would like the
semantics of these things to have been worked out through blood, sweat,
and usage, and not to have to worry about changing the semantics later
because they turn out not to be powerful enough. Since we are the system
implementors, the boundary between the language and the system (whatever
that means in this context) will be fuzzy anyway. My whole take on this
was that since this is a big project, and lots of people all over the
planet have fun filling the kitchen sink, it would be nice to have a
language that gives you better engineering control over the system.
From an end user's perspective, the language is just an extension
language. I could use scheme just as well. I would NOT use perl. :-)
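Conditions are a good example of semantics that have already been
worked out through usage. A throwaway common lisp sketch, where the
condition name and insert-into-buffer are invented:

  (define-condition buffer-read-only (error)
    ((buffer :initarg :buffer :reader offending-buffer))
    (:report (lambda (condition stream)
               (format stream "Buffer ~A is read-only."
                       (offending-buffer condition)))))

  (defun careful-insert (buffer text)
    ;; insert-into-buffer is hypothetical; handler-case is standard.
    (handler-case (insert-into-buffer buffer text)
      (buffer-read-only (c)
        (warn "Skipping read-only buffer ~A" (offending-buffer c)))))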
I probably didn't answer everything, but since it's 1:30am, you can point
out what I missed and I can answer it another time.
-Reggie
-----Original Message-----
From: owner-xemacs-beta@xemacs.org
[mailto:owner-xemacs-beta@xemacs.org] On Behalf Of Stephen J. Turnbull
Sent: Sunday, July 12, 1998 9:24 PM
To: XEmacs Beta Test
Subject: Re: scheme - my opinion
I think it's time the experts on CL and Scheme sit down and do
feasibility studies. I've seen attractive abstract arguments on both
sides, but not so much that's concrete. Personally I'm biased toward
elegant and minimal; I'd especially like to see more concrete
information about CL to counteract that bias.
Some questions I'd really like to know answers to:
1. Do the respective standards specify things (like behavior of text
objects) that we would prefer to specify ourselves? (Neither camp has
said anything about this AFAIK.) If so, this is harmful to ease of
migration.
2. What constructs in one dialect would be difficult or impossible to
implement efficiently in the other? (Michael Sperber _has_ given at
least two potentially important concrete examples, efficient tail
recursion and call/cc. Hrvoje Niksic and Martin Buchholz worry that
an emulation of CL would be a "poor man's CL". This is important, but
concrete examples of what they are afraid will be missing would be
nice, so that we can check available emulations against them.)
3. Where will it be hard to emulate current elisp efficiently?
4. Where the semantics of common constructs differ among elisp,
Common Lisp, and Scheme, how hard is it going to be for casual elisp
programmers to make the switchover? How hard is it going to be for
core developers and Lisp implementors to make the switchover?
5. Do we or CLisp or any of the suggested Scheme variants have an API
that will allow us to separate the Lisp engine proper from extensions,
like the C-level support for fontification? This is especially
important if we hope to offload Lisp engine maintenance, as Bruno has
offered to do for CLisp.
>>>>> "mb" == Martin Buchholz <martin(a)xemacs.org> writes:
mb> scheme is an interesting language, but ...
mb> Scheme has been designed for programming language research,
mb> rather than practical use. It has such a history of
mb> minimalism that all scheme implementations had to provide
mb> extensions to be useful.
Hmm. Sounds to me like Common Lisp is going to require lots of
extensions to be useful to us. CLtL1, at least, makes no reference to
editing buffers or multilingual text processing, frames or extents.
Also, it seems possible to me that Common Lisp specifies some things
we would rather be free to implement in our own way (doesn't CL
specify the use of bucky bits and fonts in character representation,
or did that go away?---I can't find the passage I'm sure I remember in
CLtL1, sorry if this is misinformed).
Minimal is good, if we want to support different extension languages.
I don't know if that's a good idea in principle. I don't know if we
can do it well, and almost certainly it won't get done well in a
hurry. But it's an interesting idea.
Minimal is good, if we are going to end up having to maintain the Lisp
engine ourselves. Yes, Bruno says that he will help maintain a Common
Lisp engine specific to XEmacs. Does he understand that that will
possibly include things like C-level support for fontification? Do we
or CLisp or any of the suggested Scheme variants have an API that will
allow us to separate the Lisp engine proper from such extensions?
I scanned CLtL1 again. "I am large. I contain multitudes." Or
whatever old WW said. I think he had Perl in mind, though.
mb> It still is missing many things. For example, my beloved
mb> (when ...) is missing (from the standard). Hashtables are
mb> missing.
(when ...) is a trivial macro, isn't it? ("Very useful" and now
"beloved"! What loyalty a trivial extension can engender!)
Hashtables, we can do.
Why would having these (and many similar items, I'm sure) _in the
standard_ be useful to us?
Why can't we do "the underlying engine will be Scheme; where
extensions are desired, Common-Lisp-compatible extensions will be
preferred," as Michael has suggested is possible?
mb> (call-with-current-continuation) is a very interesting
mb> function, but one cannot expect ordinary humans to understand
mb> it. It is only a tool for system implementors.
XEmacs is a system, is it not? _We_ are system implementors. (Sorry
for the implicit self-aggrandizement.) However, I will leave call/cc's
serious usage to Michael and the Mark II version of Karl Hegbloom ;-)
I imagine most package writers and so on will do so, too.
If we can get coroutines and threads as cheaply as Michael suggests,
that is a _big_ deal. I can stop running two (or more) XEmacsen and
cutting and pasting URLs because of the damn blocking gethostbyname().
mb> The scheme community seems to be in a downturn. From casual
mb> visits to the scheme repository, there seems to be little
mb> progress in implementations or standardization.
Hm? "Eye of the beholder," I guess. Dates on the most recent
documents and source code seems recent enough.
What does all the standardization associated with Common Lisp buy
XEmacs? I understand that having a very powerful engine is a good
thing, especially if it's standard. But what are the benefits to the
editing or other core functionality of XEmacs?
--
University of Tsukuba                 Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
Institute of Policy and Planning Sciences     Tel/fax: +1 (298) 53-5091