>>>> "jwz" == Jamie Zawinski <jwz(a)jwz.org>
writes:
jwz> I reported this bug to Red Hat on the assumption that this was a bug in
jwz> tcsh or readline, but I suppose it's equally likely that it's a bug in
jwz> how xemacs is dealing with ptys (since the bug does not occur when
jwz> running tcsh in gnome-terminal.) I also assume they will completely
jwz> ignore my report without even looking at it, since emacs is involved.
jwz> So I'll report it to you guys as well:
jwz>
jwz> http://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=82610
jwz> Since upgrading from RH 7.2 to RH 8.0, I can no longer compose long
jwz> command lines in tcsh when running in an XEmacs *shell* buffer. I don't
jwz> know whether this is a bug in tcsh, or in readline, or what, but it does
jwz> not happen with bash.
jwz> It's not merely truncating the command line -- it's also apparently
jwz> leaving the un-read characters on the input buffer, and then
jwz> interpreting them as commands afterward! This is potentially
jwz> disastrous, and could easily lead to loss of files, if there were
jwz> redirections or something on the command line.
Historically, all Unix systems had a limit on the number of characters
you could input on a single command line - generally 255.  This is one
of the most stupid system limits imaginable.
/usr/include/sys/param.h:33:#define CANBSIZ MAX_CANON
/usr/include/bits/posix1_lim.h:50:#define _POSIX_MAX_CANON 255
/usr/include/bits/confname.h:29: _PC_MAX_CANON,
/usr/include/bits/confname.h:30:#define _PC_MAX_CANON _PC_MAX_CANON
/usr/include/linux/limits.h:11:#define MAX_CANON 255 /* size of the canonical input queue */
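A quick way to see what your own headers say (just a sketch; the
constants are the glibc/Linux ones from the grep above, guarded with
#ifdef in case a system lacks them):

/* Print the compile-time canonical-input-queue limits.  */
#include <stdio.h>
#include <limits.h>     /* MAX_CANON, _POSIX_MAX_CANON */

int
main (void)
{
#ifdef MAX_CANON
  printf ("MAX_CANON        = %d\n", MAX_CANON);
#endif
#ifdef _POSIX_MAX_CANON
  printf ("_POSIX_MAX_CANON = %d\n", _POSIX_MAX_CANON);
#endif
  return 0;
}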
You can still see this today on a Sun machine. Start /bin/sh in an
xterm, and type some long command line. I get something like this:
$ echo
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: not
found
It depends on your shell, and your system.
On my Linux system, the limit seems to have been raised to about 40k.
To reproduce this in an xterm, you have to get your shell into
canonical mode, e.g. `bash --noediting' or `unset edit' in tcsh.
Try building up a huge `echo' command - you probably won't be able to
input more than 40k.
My guess is that the Linux folks have raised the MAX_CANON limit, but
without changing the system header files. Perhaps because Linus and
Uli don't really talk much.
XEmacs tries to circumvent the system limit.  If the command line is
longer than 200 or so, it sends the command line in chunks of 200,
with a '^D' character in between (!). This insanity has mostly worked
for the past decade. "Usually" the '^D's are discarded.
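Roughly, the trick looks like this (a sketch of the idea only, not the
actual process.c code; send_to_pty, pty_fd, eof_char and the 200-byte
chunk size are stand-ins):

#include <unistd.h>

#define PTY_CHUNK 200

/* Send LINE to the pty in chunks, gluing them with the tty's EOF
   character.  In canonical mode the EOF char pushes the partial line
   through to the reading process without adding a newline, and is
   itself (usually) discarded.  Partial writes are ignored here.  */
static void
send_to_pty (int pty_fd, const char *line, size_t len, char eof_char)
{
  while (len > PTY_CHUNK)
    {
      if (write (pty_fd, line, PTY_CHUNK) < 0
          || write (pty_fd, &eof_char, 1) < 0)
        return;
      line += PTY_CHUNK;
      len -= PTY_CHUNK;
    }
  if (len > 0)
    write (pty_fd, line, len);
}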
csh has always been a little special, because '^D' also does file
completion. I remember that ten years ago,
unset filec
seemed to fix things when some user had a problem just like yours.
Things you could play with, to see whether it perturbs the problem space:
set/unset filec
stty eof '^D'
stty eof '^P' (some other random char)
set/unset autolist
set/unset edit
set term=dumb/vt100
Look at the tcsh binding for '^D'
Perhaps the headers (or fpathconf) now report a value for MAX_CANON
that is too large??  On my system, fpathconf (0, _PC_MAX_CANON) still
returns 255.
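Here is that check, if you want to run it yourself (a sketch; fd 0 is
just the controlling tty, any pty fd works the same way):

#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  /* fpathconf returns -1 both on error and when there is no fixed
     limit, so treat -1 as "no (known) limit".  */
  long n = fpathconf (0, _PC_MAX_CANON);
  if (n == -1)
    printf ("no fixed _PC_MAX_CANON limit reported\n");
  else
    printf ("_PC_MAX_CANON on fd 0 = %ld\n", n);
  return 0;
}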
If you just want to ameliorate the problem for yourself, you can
hack XEmacs' get_pty_max_bytes():

int
get_pty_max_bytes (int fd)
{
  return 40000;   /* or maybe 9999999999, or maybe 200 */
}
Let me know if any of these ideas work for you.
From libc docs:
----------------------------------------------------------------
In canonical input mode, the operating system provides input editing
facilities: some characters are interpreted specially to perform editing
operations within the current line of text, such as ERASE and KILL.
*Note Editing Characters::.
The constants `_POSIX_MAX_CANON' and `MAX_CANON' parameterize the
maximum number of bytes which may appear in a single line of canonical
input. *Note Limits for Files::. You are guaranteed a maximum line
length of at least `MAX_CANON' bytes, but the maximum might be larger,
and might even dynamically change size.
- Macro: int VEOF
This is the subscript for the EOF character in the special control
character array. `TERMIOS.c_cc[VEOF]' holds the character itself.
The EOF character is recognized only in canonical input mode. It
acts as a line terminator in the same way as a newline character,
but if the EOF character is typed at the beginning of a line it
causes `read' to return a byte count of zero, indicating
end-of-file. The EOF character itself is discarded.
Usually, the EOF character is `C-d'.
----------------------------------------------------------------
How can XEmacs work better? Here's an idea: When the input line is
bigger than 200, actually inspect the current settings of the tty to
see whether the shell has put it into canonical mode or not. If not
canonical, don't do the weird C-d glue thing.
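Something along these lines would do it (a sketch only; pty_fd is a
stand-in for the fd XEmacs actually holds, and whether tcgetattr on the
master side reflects the slave's settings may vary by system):

#include <termios.h>

/* Return nonzero if the tty behind PTY_FD is in canonical mode.
   If we can't tell, fall back to the old assumption (canonical).  */
static int
pty_is_canonical (int pty_fd)
{
  struct termios t;
  if (tcgetattr (pty_fd, &t) != 0)
    return 1;
  return (t.c_lflag & ICANON) != 0;
}

Then the C-d glue (and the fpathconf-derived chunk size) would only
apply when pty_is_canonical() says so.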