This code in eval.c at l. 6016 gets flagged as 64-bit unsafe
(specifically, on amd64 pointers and EMACS_INTs are 64 bits, while
ints are 32 bits). In fact, in C++ it's an error. Does anybody know
what should be happening here? I would assume that all C integers
that are visible to Lisp are EMACS_INTs, but if they're not, changing
the declaration of val to EMACS_INT below would be a disaster.
static Lisp_Object
restore_int (Lisp_Object cons)
{
  Lisp_Object opaque = XCAR (cons);
  Lisp_Object lval = XCDR (cons);
  int *addr = (int *) get_opaque_ptr (opaque);
  int val;

  if (INTP (lval))
    val = XINT (lval);
  else
    {
      /* this cast truncates a 64-bit pointer to a 32-bit int on amd64 */
      val = (int) get_opaque_ptr (lval);
      free_opaque_ptr (lval);
    }
  *addr = val;
  free_opaque_ptr (opaque);
  free_cons (cons);
  return Qnil;
}
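
For what it's worth, here is a minimal sketch of what a 64-bit-clean
variant might look like. It is not a patch: it assumes exactly what I am
asking about above, namely that every caller pairing with this restore
function is really saving the address of an EMACS_INT, and the name
restore_emacs_int is just a placeholder.

static Lisp_Object
restore_emacs_int (Lisp_Object cons)
{
  Lisp_Object opaque = XCAR (cons);
  Lisp_Object lval = XCDR (cons);
  /* assumes callers recorded the address of an EMACS_INT, not an int */
  EMACS_INT *addr = (EMACS_INT *) get_opaque_ptr (opaque);
  EMACS_INT val;

  if (INTP (lval))
    val = XINT (lval);
  else
    {
      /* EMACS_INT is pointer-sized on amd64, so this cast does not
         truncate the opaque pointer the way the int cast does */
      val = (EMACS_INT) get_opaque_ptr (lval);
      free_opaque_ptr (lval);
    }
  *addr = val;
  free_opaque_ptr (opaque);
  free_cons (cons);
  return Qnil;
}

If the saved locations really are plain ints, though, widening the store
through addr like this would clobber adjacent memory, which is the
disaster I mean.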