Hrvoje Niksic <hniksic(a)srce.hr> writes:
> > Would it be difficult to have a separate "unsigned integer" type?
> > That could still go up to 2^30-1.
>
> That type would have to be an lrecord. Draw your own conclusion.
No. In that case I would want unsigned integers to be the "primitive"
type, and normal integers to be an lrecord. Buffer positions could then
still be represented by the new unsigned integer type. With an lrecord
Lisp type you could have full 32-bit integers.
I am not even sure I want the Lisp code to see the distinction.
Maybe (probably?) such a thing would be extremely ugly to implement, I
don't know. Maybe an even better idea would be to do this for
characters.
> No "scheme" will buy you 31-bit integers except the current one, and
> bignums. And bignums are hard.
lrecord integers would. Of course you could also argue for lrecord
chars; presumably the actual values of chars are used less often than
those of integers. Did anybody ever do profiling on this?
Maybe putting characters in lrecords for Mule is the way to go, maybe
even with the ugly small-character/big-character scheme I proposed for
unsigned/signed integers above.
I think it is less of a crime if the internal representation of things
changes depending on the Mule switch, as long as no Lisp-visible limits
change.
> > Because I think editing a 1 Gbyte buffer in a Mule XEmacs will be so
> > slow that it doesn't matter much anyway.
>
> Have you tried it? I have tried 600M buffers, and things worked
> reasonably fast -- i.e. slow as hell, but workable in case of
> emergency.
OK, we do better than I thought, then.
> I still don't understand your remark about O(n^2) operations, though.

I thought some operations on Mule buffers become O(n^2)? Or am I
confusing things here?
I am not a fan of limiting the integer range, and I hope some other
solution can be found, but if it really makes the implementation that
much easier I won't lose any sleep over it.
Jan