Andy Piper <andyp@parallax.co.uk> writes in xemacs-beta@xemacs.org:
> At 15:41 02/12/98 -0800, Martin Buchholz wrote:
>> I agree with Andy on this. If you don't have enough disk space for 2
>> copies of a package directory, your disk is about to explode anyways.
Even if true, it is no excuse for XEmacs package installation to puke
its guts out.
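
One graceful alternative is to check for headroom before unpacking
anything.  Here is a minimal sketch in Emacs Lisp -- hypothetical
`pui-' names, not the actual package-get code -- that shells out to
`df -k' and refuses to start without enough free space:

(defun pui-free-kbytes (dir)
  "Return free kilobytes on DIR's partition, parsed from `df -k'.
A rough sketch -- df output columns vary between Unices."
  (let ((buf (get-buffer-create " *pui-df*")))
    (save-excursion
      (set-buffer buf)
      (erase-buffer)
      (call-process "df" nil buf nil "-k" (expand-file-name dir))
      (goto-char (point-min))
      (forward-line 1)                  ; skip the df header line
      ;; Fourth whitespace-separated field is "Avail" on most dfs.
      (if (re-search-forward
           "^\\S-+\\s-+\\S-+\\s-+\\S-+\\s-+\\(\\S-+\\)" nil t)
          (string-to-int (match-string 1))
        0))))

(defun pui-check-headroom (dir needed-kbytes)
  "Signal a polite error unless DIR has NEEDED-KBYTES free."
  (if (< (pui-free-kbytes dir) needed-kbytes)
      (error "Not enough room in %s to install safely" dir)))
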
> One could argue that if your disk is likely to explode then installing
> XEmacs will make it do so anyway :)
Packages differ wildly in size.  The top 4 account for about 33% of
total disk usage (23.5MB of 71MB); depending upon what has been
installed, that fraction may be higher.  For me, I see 71MB in a
fully byte-compiled xemacs-package tree and 53MB (39.2MB XEmacs,
13.8MB Mule) for what I have installed[1].
The "Big 4" packages:
Gnus 6.3MB
Leim 6.0MB
Calc 5.6MB
SKK 5.6MB
= 23.5MB
Writing as an expert in filling up disks of any size (:-), I repeat
my contention that we have to take full disks into account and do
something graceful when it happens.  I have blown out the disk I use
for distribution building on numerous occasions.
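
And for what "graceful" could mean once the disk fills mid-install,
another minimal sketch with hypothetical names: copy files one at a
time inside condition-case, and on a file-error (ENOSPC being the
usual culprit) delete the partial copies rather than leaving a
half-installed tree behind:

(defun pui-install-files (files dest)
  "Copy each of FILES into directory DEST, unwinding on failure.
On a file-error (typically disk full) the partial copies are
deleted instead of being left to eat the remaining space."
  (let ((done nil))
    (condition-case nil
        (while files
          (let ((target (expand-file-name
                         (file-name-nondirectory (car files)) dest)))
            (copy-file (car files) target t) ; t = ok if target exists
            (setq done (cons target done)
                  files (cdr files))))
      (file-error
       ;; Unwind: remove whatever we managed to write.
       (while done
         (condition-case nil (delete-file (car done)) (error nil))
         (setq done (cdr done)))
       (error "Install failed (disk full?); partial files removed")))))

The real fix belongs in the installer itself, of course, but the
shape -- record what you wrote, unwind on file-error -- is the point.
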
To complete the summary of disk usage, for building I see the
following:
71MB XEmacs packages as of last night
45MB XEmacs 21.0-pre9
43MB XEmacs 21.2.4
For installation:
21.5MB Core
23.5MB "Steve" XEmacs package installation[2]
63.6MB Full XEmacs package installation
For comparison, XEmacs 20.4:
50.1MB (Compressed Lisp)
67.6MB (Straight Installation)
Given the kind of software I have installed, I don't know whether to
be relieved or disappointed that XEmacs is no longer anywhere close
to being the largest software package on my system.  That record is
held by the Oracle database server, which asks for 700MB just for
the installation, not counting the "at least 3" mount points it
wishes for storing database stuffs.
Footnotes:
[1] I have all of the "Big 4" installed, so that's 44% for me
(23.5MB of 53MB).
[2] I like very much that, even without any Lisp .el compression,
my 21.0 installation (45MB) is 10% smaller than 20.4 with Lisp .el
compression (50.1MB), and has more of the features _I want_.