Rick Campbell wrote:
> If you're distributing an executable over the net, it should be
> statically linked, period. Disk and memory are cheap, throwing out
> program integrity to save disk and memory makes no sense.
IMHO, static linking isn't the panacea that this implies.
You're just changing the layer at which the binary interfaces with the
system, and it's by no means certain that the kernel and protocol
layers are any more fixed than the shared library layer.
Any binary which has been statically linked against libX11 won't run
here unless the builder has used a version which was compiled with
XDM-AUTHORIZATION-1 support (which precludes shipping it from within
the US, legally at least).
Similar considerations apply in many other areas. System database
lookups (e.g. getpwnam, gethostby*) may be implemented using network
maps, PAM, etc. Networking functions may require a libc (or libsocket)
with transparent proxy support. System data files (timezone, locale)
may be in different locations or different formats. Unix-domain
sockets may not be where you expect them to be (moving .X11-unix from
/tmp to somewhere safe is quite high on my todo list).
To my mind, dynamic linking has less to do with efficiency than with
not having to recompile half of the OS distribution when something
needs to be changed.
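That last point can be seen with ldd, which lists the shared objects a
binary will resolve at run time. (/bin/ls and the exact library names
shown are illustrative and vary by system.)

```shell
# List the shared libraries a dynamically linked binary loads at run
# time. Upgrading any one of them (e.g. libc) fixes every binary that
# links against it, with no recompilation.
ldd /bin/ls
```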
--
Glynn Clements <glynn(a)sensei.co.uk>