On Mon, Nov 01, 1999 at 01:32:06AM +0100, Hrvoje Niksic wrote:
> Then why store the hash-tables at all? You could just store a
> hash-table's properties (weakness, hash function, etc.) and a list of
> key-value pairs. After dumping, it is trivial to re-create the
> hash-table -- see e.g. hash_table_instantiate().
It avoids special cases in the enumeration/dump/reload code. What is
actually dumped right now is the whole hentry array, which is a bunch
of key/value pairs. It is only slightly less compact because some of
the entries are empty, the hash table being of course slightly bigger
than the number of values it contains. So this avoids special-casing
at a low cost.
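
For concreteness, the layout being discussed looks roughly like this
(a sketch only: the field names are approximated from this thread, not
copied from elhash.c, and the Lisp types come from the usual headers):

    /* Sketch of the hash-table layout; names are approximate. */
    typedef struct hentry
    {
      Lisp_Object key;      /* an empty key marks an unused slot */
      Lisp_Object value;
    } hentry;

    struct Lisp_Hash_Table
    {
      struct lcrecord_header header;
      size_t size;          /* number of slots in the hentries array */
      size_t count;         /* number of slots actually holding a pair */
      hentry *hentries;     /* the array the dumper currently writes out whole */
      /* ... weakness, hash and test functions, rehash parameters, etc. */
    };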
> Or, am I mistaken in thinking that creating a new hash-table from
> scratch is healthier and faster than "reorganizing" an existing one?
Healthier, I don't know. Faster, I'm not sure the difference is huge
(the only difference lies in the presence of empty hentries, comparing
what specialized code would do with reorganize_hash_table()).
If you want to try doing that, I'll give you a hint: create a new
description type (it's simply an enum in lrecord.h), use it for the
hentry pointer description in hash_table_description (instead of
XD_STRUCT_PTR), and implement your packed dumping in the switches.
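
Concretely, that would look something like this (XD_HENTRY_PACKED is
an invented name, and the description entries are illustrative rather
than the real contents of hash_table_description):

    /* lrecord.h: add a new description type to the enum (name invented
       here for illustration). */
    enum lrecord_description_type
    {
      /* ... existing types ... */
      XD_STRUCT_PTR,
      XD_HENTRY_PACKED  /* hypothetical: dump only the non-empty hentries */
    };

    /* elhash.c: use it for the hentries pointer instead of XD_STRUCT_PTR
       (offsets and neighbouring entries approximated). */
    const struct lrecord_description hash_table_description[] = {
      /* ... other fields ... */
      { XD_HENTRY_PACKED, offsetof (struct Lisp_Hash_Table, hentries) },
      { XD_END }
    };

    /* dumper: handle the new case in the dump and reload switches --
       write only the entries whose key is non-empty, and rebuild (or
       reorganize) the table when reloading. */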
Sincerely, I don't think the difference will be significant.
OG.