Stephen J. Turnbull wrote:
> Mike Kupfer writes:
> > The thing that makes me nervous about this approach is that it could
> > leave internal state messed up. Any calls that could cause an Lstream
> > write would have to be done in a place where it's okay if processing is
> > thrown back to top-level.
> That's already true, since the current consequence is a crash due to
> the default fatal handler on SIGPIPE. I don't really see a difference
> here, it's a matter of philosophy whether it's nicer to the user to
> crash or to leave them in an unknown state when we've screwed up
> badly. Either way, we have to admit we've screwed up. ;-)
I don't see a huge difference here either, but then I don't like either
choice. I think we can do better with EPIPE. It's true that unprepared
code won't do all the cleanup that we want, but I expect some sort of
error would be passed back to the user. I'm guessing the worst we
would see is resource leaks (memory and zombie child processes). I'd
rather have some resource leaks than a crash. Or are there nasty side
effects from failing to do proper cleanup that I'm failing to consider?
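For concreteness, here's the POSIX behavior I'm relying on, as a
minimal standalone sketch (plain C, nothing XEmacs-specific): with
SIGPIPE ignored, a write to a pipe whose reader has gone away comes
back as an ordinary EPIPE error instead of killing the process.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main (void)
    {
      int fds[2];

      signal (SIGPIPE, SIG_IGN);   /* default action would kill us */
      pipe (fds);
      close (fds[0]);              /* the reader goes away */

      if (write (fds[1], "x", 1) < 0 && errno == EPIPE)
        fprintf (stderr, "write failed: %s\n", strerror (errno));
      return 0;
    }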
> But we want a specific definition of "bail out" for EPIPE.  I.e.,
> "assume the subprocess died untimely, so politely shut down our end of
> the pipes, finalize the process structures, and then report the error
> to the user."
> We know that that is the current definition of how to handle SIGPIPE.
> I don't see any reason to suppose that is the definition of how to
> handle an unknown error throughout all code that might now see an
> EPIPE. Checking that seems like a pretty hairy task to me.
I'd need to review process-unix.c, process.c, and process-nt.c (not that
I know anything about the Windows API). nas.c sets a SIGPIPE handler,
but it doesn't appear to use Lstreams.
I don't see why I'd have to do an exhaustive code flow analysis. I
think that once the Lstream code starts seeing EPIPE, future calls will
continue to return EPIPE, or possibly some other error indicating that
the stream is dead. Eventually the code that needs to see the error
should see it, and it can clean up then. And if it doesn't, we just
have some resource leaks.
Or at least that's the theory. I grant that there are a lot of
suppositions here. I'm well aware that once I actually start coding and
testing, I could run into issues that would make me reconsider this
whole idea.
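To make the theory a bit more concrete, here's a hypothetical sketch
of the latching I have in mind (the names are invented, not the real
Lstream API). The "bail out" work you describe above -- shut down our
end of the pipe, finalize the process structures -- is packaged in one
function, and once the stream is dead every later write keeps
reporting EPIPE:

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct out_stream
    {
      int fd;
      int dead;                  /* latched on the first EPIPE */
      pid_t child;
    };

    /* One definition of "bail out": close our end of the pipe and
       reap the child; reporting to the user is left to the caller. */
    static void
    out_stream_bail (struct out_stream *s)
    {
      s->dead = 1;
      close (s->fd);
      s->fd = -1;
      waitpid (s->child, NULL, WNOHANG);
    }

    static ssize_t
    out_stream_write (struct out_stream *s, const void *buf, size_t n)
    {
      ssize_t len;

      if (s->dead)
        {
          errno = EPIPE;         /* dead streams keep saying so */
          return -1;
        }
      len = write (s->fd, buf, n);
      if (len < 0 && errno == EPIPE)
        {
          out_stream_bail (s);
          errno = EPIPE;         /* bail may have clobbered errno */
        }
      return len;
    }

If some caller never checks the return value, the worst outcome should
be the leaks I described, not a crash.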
> I'd be happy to
> have an alternative patch to review, but my feeling is that a redesign
> is a hairball that we should avoid right now, unless you think you
> personally will get something out of this (eg, a better understanding
> of parts of XEmacs you want to hack on in the future). I don't see a
> guarantee that you'll get an acceptable patch out of it.
Okay, thanks for the warning. This isn't something I'd be likely to get
to anytime soon, given that I don't know when I'm even going to finish
the package updates (Gnus, MH-E) I've already started on.
> N.B. There are other problems in the Lstream code IMO.  Specifically,
> it's too hard to localize errors and exceptional conditions (such as
> the presence of markers and extents) in streams when there are stacked
> Lstreams (and stacked Lstreams are very common because of the need to
> do Mule-coding in essentially all I/O contexts). If and when we do
> something about that, we could revisit the SIGPIPE vs. EPIPE design
> decision. If you like that option, we can put a comment in the code
> to that effect.
I'd be happy to have this added to the list of issues to consider in any
potential Lstream redesign. I mean, given my current productivity, that
redesign might happen before I get to this.... :-/
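As a toy illustration of the localization problem (again invented
names, nothing like the real Lstream interface): each layer only sees
a -1 from the layer below it, so by the time a failure surfaces at the
top of the stack, its origin is gone.

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct layer
    {
      struct layer *below;       /* NULL for the bottom (fd) layer */
      int fd;
    };

    static ssize_t
    layer_write (struct layer *l, const char *buf, size_t n)
    {
      if (l->below)
        /* A real coding layer would transform buf here (Mule
           coding) before pushing it down the stack. */
        return layer_write (l->below, buf, n);
      return write (l->fd, buf, n);
    }

    int
    main (void)
    {
      int fds[2];
      struct layer bottom, top;

      signal (SIGPIPE, SIG_IGN);    /* see the error, not the signal */
      pipe (fds);
      close (fds[0]);               /* the reader goes away */
      bottom.below = NULL; bottom.fd = fds[1];
      top.below = &bottom; top.fd = -1;

      if (layer_write (&top, "x", 1) < 0)
        /* All we know up here is that the stack failed; which layer
           failed, and what cleanup it needs, has been lost. */
        fprintf (stderr, "stacked write: %s\n", strerror (errno));
      return 0;
    }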
> One thing that the SIGPIPE approach does is to satisfy DRY.
I'll think about that, but I'm not convinced. process-unix.c and nas.c
each has its own handler. Maybe there's some twist related to calling
back into the event loop that I'm missing. But the cleanup that's
currently done after the longjmp in the signal handler could just as
easily be packaged up as a function. That's DRY enough for me.
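Roughly what I have in mind, as a hypothetical sketch (invented
names): both recovery styles funnel into the same cleanup function,
which is all the DRY that matters here.

    #include <setjmp.h>
    #include <signal.h>

    static sigjmp_buf top_level;

    /* The work currently done after the longjmp, packaged up: shut
       down our end of the pipes, finalize the process structures,
       report the error to the user. */
    static void
    cleanup_dead_pipe (void)
    {
      /* ... */
    }

    /* SIGPIPE style: the handler throws back to top level ... */
    static void
    sigpipe_handler (int signo)
    {
      (void) signo;
      siglongjmp (top_level, 1);
    }

    /* ... and top level runs the cleanup after the throw.  The
       EPIPE style instead calls cleanup_dead_pipe () right where the
       failed write is detected -- same function either way. */
    static void
    top_level_loop (void)
    {
      if (sigsetjmp (top_level, 1))
        cleanup_dead_pipe ();
      /* ... normal event loop ... */
    }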
mike