On 13 May 2001, Hrvoje Niksic stipulated:
Nix <nix@esperi.demon.co.uk> writes:
> That would be very nice; it would obsolete big chunks of mailcrypt,
> jka-compr, uncompress... even term-mode's `term-emulate-terminal',
> come to think of it. In fact, probably by the time you've got that
> generic, `input filter' and `output filter' are better terms for
> this stackable thing than `coding system'.
Exactly. A form of "coding system" would probably remain, because
when you finally import the bytes into XEmacs, you have to convert
them to chars of a certain charset. At this point the coding systems
would still be useful.
... if just to tell Emacs what coding system you've finally *got*.
Hmm.
Suggestion: the input and output of the filters are typed (perhaps by
being a cons cell whose car is a type and whose cdr is a `character';
obviously this is wildly inefficient and wouldn't work for stateful
coding systems, but you get the idea) and you can only stack filters
where the output type of one filter equals the input type of the one
next in the sequence. The name of the output type of the last filter is
what XEmacs currently calls the coding system's name.
Or is a coding system more than `the output of a transformative program
in some language, of a named type'?
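Concretely, the type-checking I have in mind is no more than this
(purely a sketch; `make-typed-filter' and `stack-filters' are invented
names for illustration, not anything that exists in XEmacs):

```lisp
;; Sketch only: each filter carries an input type and an output type,
;; and stacking refuses to compose filters whose types don't line up.
(defun make-typed-filter (in-type out-type fn)
  "Return a filter tagged with its input and output types."
  (list in-type out-type fn))

(defun stack-filters (&rest filters)
  "Compose FILTERS; signal an error if adjacent types don't match."
  (let ((prev (car filters)))
    (dolist (f (cdr filters))
      (unless (eq (nth 1 prev) (nth 0 f)) ; output type = next input type
        (error "Type mismatch: %s vs %s" (nth 1 prev) (nth 0 f)))
      (setq prev f)))
  filters)

;; E.g. a gunzip stage feeding a latin-1 decoder:
;; (stack-filters (make-typed-filter 'gzip-bytes 'bytes 'my-gunzip)
;;                (make-typed-filter 'bytes 'latin-1-chars 'my-decode))
```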
> This is sort of a generalization both of coding systems and of the
> existing asynchronous subprocess filters; almost certainly both of
> these could be reimplemented in terms of the filter stack.
I'm not sure I would generalize to that point, but perhaps it's
possible.
I'm a generalist past sanity, I know. (But then, exactly that
generalizing viewpoint gave us Lisp to start with, and Emacs; there is
some merit in it. :) )
> Hmm. One question, actually. The Golden Rules of Redisplay (no GC,
> no Lisp); are they there because having Lisp touch the *_set
> variables while redisplay is running could be nasty?
I'm not sure about the "no GC" rule, but the no Lisp rule is there
because of efficiency and robustness. Efficiency means that Lisp code
might be too expensive to call within redisplay.
That's a reason for discouraging large amounts of Lisp, not for banning
it completely. (It's also a reason to speed up parts of the Lisp
interpreter, like the funcall path; but I note that things have been
happening on that front :)
(I'm aiming to attack the other nasty Emacs monster, the GC; I hate
GCPRO with a passion and I hate the nonincremental GC almost as much,
especially on small machines...)
Robustness means that there is no good way to handle Lisp errors while
you're in the redisplay "critical section". It's just too dangerous.
Er, why not wrap any invoked Lisp in a condition-case?
(Or do you mean flat-out syntactic errors?)
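i.e. something on the order of this (a sketch, not real XEmacs code;
and yes, logging from inside redisplay is itself problematic):

```lisp
;; Sketch: wrap Lisp invoked from redisplay so an error is reported
;; and a harmless nil returned, instead of escaping the critical
;; section half-done.
(defun redisplay-funcall-safely (fn &rest args)
  (condition-case err
      (apply fn args)
    (error
     (message "Error in redisplay-invoked Lisp: %S" err)
     nil)))
```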
There have been discussions about how to circumvent the Golden Rules.
I'd rather abolish them completely if possible. Redisplay is complex
code, yes, but why should that grant it the right to ban Lisp? (Emacs's
power is due to the pervasiveness of Lispability throughout its core,
after all... IMHO, the ideal Emacs core would be a really, really fast
Lisp engine, glue code to external libraries, and nothing else.
Well, not quite. The *ideal* Emacs core would have pluggable languages,
but that's a *total* pipe-dream :) maybe by 2010...)
One of them is to allow a safe subset of Lisp to be run, sort of like
CCL is currently allowed.
That sounds like a good first step ;P
Remember that a) specifiers are cached, and b) specifiers are resolved
from within redisplay. The Golden Rules strike back. :-)
The caching, of course, changes what it is sane to do. It seems sensible
to make specifiers subject to the rule that the specifiers, and any
functions invoked from them, must not modify global state (agh, but that
wrecks `gensym'); and also that they should not rely upon global state
without care (but of course they can sometimes for user customization;
relying on oft-changing variables can be nasty though).
This is a right tangle :(
> structures. Things like e.g. keymaps should appear to the Lisp layer
> as directly Lisp-manipulable objects, so that `read' and `print' can
> work on them.
We can discuss that. I have personally made steps in that direction
by implementing ways to create arbitrary events and adding read/print
syntax to hash tables. Could you explain why you need a read/print
syntax for keymaps?
I don't. I was just being purist, and I haven't found a nice syntax for
keymaps yet; they were just an example. The only objects I got fully
happy with were, uh, hash tables, which used a read syntax very similar
to yours (and CL's). (I started working on char tables too, but rapidly
found that they had a read syntax, just a totally undocumented one.)
But the read syntaxes don't have to be *nice*; they just have to be
consistent. I happen to think that `read' and `print' are very useful,
and the presence of things like `desktop.el' writing out big chunks of
Emacs state as Lisp and just `load'ing it back in again tends to support
me there :)
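For instance, the round-trip that makes `read'/`print' worthwhile
(a sketch; this works wherever hash tables have a readable print
syntax, the #s(...) form, though exactly how it prints varies between
Emacsen):

```lisp
;; Sketch: any object with a read syntax round-trips through a string,
;; or through a file, which is all desktop.el is really doing.
(let ((h (make-hash-table :test 'equal)))
  (puthash "answer" 42 h)
  (gethash "answer" (read (prin1-to-string h))))
```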
My keymap syntax was a rather horrible one; you could specify an entire
keymap at once via a #k(keyword-args) syntax, where the keyword
arguments could be :parents, :default-binding, :prompt, or :contents,
where :contents took an alist mapping from a key to a def as in
`define-key'.
I defined printing a keymap to print out only this C-level keymap, with
keymap names representing the keymaps used for key sequences
(automatically generated and assigned to them if necessary; yes, this
meant `print' could modify objects, and I hated it). I could
alternatively have had it print out the entire stack of keymaps, but
that would have meant that sub-keymaps would lose their identity.
(Perhaps I should have made it configurable.)
Now it's true that the input side of this could all have been done with
a little `make-entire-keymap' wrapper, but I couldn't have got them
printed that way; at least, not by `print'. And it would have been less
regular, and regularity is the essence of Lisp. (And it was a fun hack.)
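Such a wrapper might have looked like this (a sketch; `make-entire-keymap'
does not exist, and the :parents and :default-binding handling is elided):

```lisp
;; Sketch of the input-side wrapper.  XEmacs's `set-keymap-parents'
;; and `set-keymap-default-binding' would cover the elided keywords.
(defun make-entire-keymap (&rest args)
  "Build a keymap from :prompt, :contents, :parents, :default-binding."
  (let ((map (make-sparse-keymap (plist-get args :prompt))))
    (dolist (pair (plist-get args :contents))
      (define-key map (car pair) (cdr pair)))
    map))
```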
Speaking of keymaps, remember that they are not opaque objects because
of our perversion. When you change key definitions, the keymap code
actually recalculates internal caches that make operations like
`where-is' fast, which is extremely important for menus.
Agreed; this is why I never did anything to let you read in *bits* of
keymaps. I never expected calling `read' on a million keymaps to be fast
--- or sane --- anyway.
> I've got half-completed patches that exploit common-lisp style #x()
> syntax to do this, so you can set and update many opaque objects
> from the lisp reader, with the letter indicating the type of formerly
> opaque object, as in Common Lisp... if you want, I can clean them up
> (and they'll require a good bit of cleanup; the patches were initially
> a proof-of-concept, and clean they are not) and submit them.
I would like to know more about the changes you envisioned. For
instance, could you post a Lisp code snippet of how updating an opaque
object from the reader would work with your patches applied?
I'd like to *find* them again. I just spent three hours looking for them
after you asked and they seem to have vanished. Oh well, all the more
reason for me to rewrite them better. (I was mostly doing this as a way
to get to know the Emacs C core in a fairly general way, so the code
was, er, ugly in places. I can do better than that now.)
(setq message-mode-map
      #k(:contents ((M-m . mml-mode-map)
                    (tab . message-tab)
                    (copy . copy-primary-selection))))
... only an awful lot longer in the general case (and if `print' emitted
it, of course not laid out like that either).
--
`LARTing lusers is supposed to be satisfying. This is just tedious. The
silly shite I'm doing now is like trying to toothpick to death a Black
Knight made of jelly.' --- RDD