Jerry James writes:
> Yow, I had no idea a coding cookie could do so much damage.
They shouldn't be able to. IMO, we shouldn't have enabled coding
cookies at all, because our coding systems are written on the optimistic
assumption that they will get valid encodings. Cf. the David Kastrup
thread on TeX error messages. I just got tired of the push-back I get
when I veto dangerous patches without being able to *prove* that they
suck, and let this one go through.
Technically, the problem manifested here because Ben unified support
for EOL conventions with Mule-style coding systems in no-Mule, which
is defenseless against this kind of thing because it doesn't actually
do any coding beyond EOL conventions. I imagine Ben knew what he was
doing, but who else does? :-)
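For readers who haven't run into one: a coding cookie is a declaration in a file's first line (or in a local-variables block) naming the character coding the file claims to use. Here's a minimal sketch in Python of what a cookie looks like and how a first-line scan might find it; the example header and the simplified regexp are illustrations, not XEmacs internals.

```python
import re

# An example first line carrying a Mule-style coding cookie
# (a generic example, not the actual haskell-mode header):
FIRST_LINE = ";;; foo.el --- example  -*- coding: iso-8859-1; -*-\n"

# Simplified scan for a "coding:" declaration between -*- markers;
# the real scanner is more permissive about whitespace and variables.
COOKIE_RE = re.compile(r"-\*-.*?coding:\s*([^;\s]+).*?-\*-")

def find_cookie(line):
    """Return the declared coding name, or None if the line has no cookie."""
    m = COOKIE_RE.search(line)
    return m.group(1) if m else None

print(find_cookie(FIRST_LINE))  # → iso-8859-1
```

The danger described above is exactly that the declared name is trusted: nothing here (or in a build that can't decode) verifies that the file's bytes actually match the declared coding.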
> Have you, or do you intend to, inform Stefan Monnier about this?
No. It hadn't occurred to me; it's purely our problem. Now that you
mention it, the answer is "Heck, no! This is *embarrassing*!" ;-)
What I suggest is (1) an immediate release of haskell-mode without the
coding cookie, which isn't going to lose anybody data or cause crashes,
although word motion may behave oddly in ISO-8859-X locales; and (2) a
fix to the coding cookie code before the next update to Stefan's
code, at which time the cookie(s) can be restored.
I'm not sure how to fix it yet; probably in no-Mule the coding cookie
code should refuse to load .elcs that use the ;;;coding protocol, and
should simply ignore cookies in source files.
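The proposed guard can be sketched as follows. This is hypothetical pseudocode in Python, not XEmacs internals: the names (`MULE_ENABLED`, `effective_coding`, the EOL-only set) are invented for illustration, and the design point is only that a build limited to EOL conversion should never trust a cookie naming a real character coding.

```python
# Pretend this is a no-Mule build: only EOL conversion is available.
MULE_ENABLED = False

# "Codings" a no-Mule build can actually honor (illustrative set).
EOL_ONLY = {"binary", "raw-text", "raw-text-unix", "raw-text-dos", "raw-text-mac"}

def effective_coding(cookie, is_elc=False):
    """Return the coding to use for a file declaring `cookie`.

    In a no-Mule build, cookies naming a real character coding are
    ignored for source files and rejected outright for .elc files,
    since a wrongly decoded .elc can corrupt data or crash.
    """
    if MULE_ENABLED or cookie is None or cookie in EOL_ONLY:
        return cookie or "raw-text"
    if is_elc:
        raise ValueError("no-Mule build cannot honor coding %r in .elc" % cookie)
    return "raw-text"  # source files: silently fall back to EOL-only handling
```

Under this sketch, refusing the .elc is the conservative choice: ignoring the cookie in a source file merely degrades behavior (as with word motion above), while loading a mis-decoded .elc is the case that can do real damage.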
Or maybe this is a prophetic warning: we should just get rid of
no-Mule, despite the complaints we'll get from a few Western Europeans
and anybody who depends on high performance in true binary modes. :-)
_______________________________________________
XEmacs-Beta mailing list
XEmacs-Beta@xemacs.org
http://calypso.tux.org/cgi-bin/mailman/listinfo/xemacs-beta