Raymond Toy wrote:
>>>>> "Glynn" == Glynn Clements <glynn(a)sensei.co.uk> writes:
Glynn> Raymond Toy wrote:
>> One question, though, on the tail-recursive feature. It's nice to
>> have but is it really, really important? Elisp isn't (right?), and
>> we've been doing fine without proper tail recursion.
Glynn> We've been *surviving* without tail recursion optimisation.
Glynn> Ultimately, tail recursion can always be converted to explicit
Glynn> iteration, which is what you need to do in elisp if you don't want to
Glynn> take a performance hit.
Of course. I'm just curious how bad it really is. I assume that the
performance hit is not the lack of tail recursion, but the
exceedingly slow funcalls in elisp.
Yes and no. If function calls were faster, the tail-recursive
functions would be faster with or without tail recursion optimisation.
OTOH, they would still have O(n) memory complexity, which would rule
out their use for processing long lists.
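For example, here's a toy tail-recursive length function (the name
`my-length' is made up, purely for illustration):

    (defun my-length (list acc)
      "Tail-recursive length of LIST, with ACC accumulating the count."
      (if (null list)
          acc
        (my-length (cdr list) (1+ acc))))

Without tail-call optimisation every recursive call pushes another
frame, so the space used grows with the length of the list, and a
long enough list blows past `max-lisp-eval-depth':

    (my-length (make-list 100000 t) 0)  ; error: nesting exceeds max-lisp-eval-depth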
Some discussions in comp.lang.lisp (or comp.lang.scheme?) mentioned
that even though tail recursion is equivalent to iteration, it still
takes a toll on speed. Maybe 20% or more? I don't remember the
numbers from the tests that were done.
The point about tail recursion is that it is equivalent to iteration.
You should be able to convert a tail-recursive function into a loop.
So certain forms of recursion are just `syntactic sugar' on top of
iteration.
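To make that concrete with the toy example above, the conversion is
mechanical: the self-call becomes a re-binding of the arguments plus
a jump back to the top of a loop:

    (defun my-length-iter (list)
      "Explicitly iterative equivalent of the tail-recursive `my-length'."
      (let ((acc 0))
        (while list
          (setq acc (1+ acc)
                list (cdr list)))
        acc))

This version runs in constant stack space regardless of the length of
the list.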
If writing a function using tail-recursion is slower than using the
equivalent iterative form, then the optimisation isn't complete.
However, provided that it's `close enough', people can write code in
its natural form, knowing that any performance hit is
a) proportionate, rather than affecting the asymptotic time/space
complexity, and
b) likely to diminish over time.
Glynn> However, it's effectively a bug (IMHO) which we've had to work around.
Glynn> It would be nice if algorithms which would naturally be implemented
Glynn> using tail recursion could be written in their natural form.
I guess my question is, if things were written recursively, how big a
stack would we use and would that be big enough to matter?
Not theoretical questions, but real world numbers for xemacs. (No,
I'm not asking anyone to actually do this with xemacs. I wouldn't do
it. :-) )
Well, from a theoretical viewpoint, without optimised tail recursion
you can't write general-purpose list-scanning functions (mapcar, memq,
...) tail-recursively, because eventually you'll hit a list long
enough that they run out of stack, whereas (explicitly) iterative
versions wouldn't.
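As a concrete (again made-up) illustration, the obvious tail-recursive
memq:

    (defun my-memq (elt list)
      "Tail-recursive memq; purely illustrative, not a replacement for `memq'."
      (cond ((null list) nil)               ; not found
            ((eq elt (car list)) list)      ; found: return the tail
            (t (my-memq elt (cdr list)))))  ; tail call on the rest

Without tail-call optimisation, something like

    (my-memq 'x (make-list 1000000 'y))

signals an error once the recursion exceeds the available eval depth,
whereas the built-in (iterative) memq just walks the list in constant
stack space.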
In a sense, you end up with the worst of both worlds. People who don't
like functional languages don't like elisp because it looks like a
functional language. People who do like functional languages don't
like elisp because you can't use it like any other functional
language.
--
Glynn Clements <glynn(a)sensei.co.uk>