> I know that lazy sequences only evaluate the items in the sequence that are asked for, how does it do this?

I think the previously posted answers already do a good job explaining this part. I'll only add that the "forcing" of a lazy sequence is an implicit -- paren-free! :-) -- function call; perhaps this way of thinking about it will make some things clearer. Also note that forcing a lazy sequence involves a hidden mutation -- the thunk being forced needs to produce a value, store it in a cache (mutation!) and throw away its executable code, which will not be required again (mutation again!).
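You can watch both the laziness and the caching at the REPL; here is a minimal sketch (`xs` and the `println` are just placeholders for the example):

```
(def xs
  (lazy-seq
    (println "computing the first cell...") ; stands in for an arbitrary computation
    (cons 1 nil)))

(first xs) ; prints "computing the first cell...", returns 1
(first xs) ; returns 1 again without printing -- the cached value is reused
```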
> I know that lazy sequences only evaluate the items in the sequence that are asked for, how does it do this?
>
> What makes lazy sequences so efficient that they don't consume much stack?
>
> What resources do lazy sequences consume to do what it does?

They don't consume stack, because they consume heap instead. A lazy sequence is a data structure, living on the heap, which contains a small bit of executable code which can be called to produce more of the data structure if/when that is required.

> How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?

Firstly, as mentioned by dbyrne, you can very well get an SO when working with lazy sequences if the thunks themselves need to execute code with a very deeply nested call structure.

However, in a certain sense you can use lazy seqs in place of tail recursion, and to the degree that this works for you, you can say that they help in avoiding SOs. In fact, rather importantly, functions producing lazy sequences *should not* be tail recursive; the conservation of stack space with lazy seq producers arises from the aforementioned stack -> heap transfer, and any attempts to write them in a tail-recursive fashion will only break things.

The key insight is that a lazy sequence is an object which, when first created, doesn't hold any items (as a strict sequence always does); when a function returns a lazy sequence, only this "lazy sequence object" is returned to the caller, before any forcing takes place. Thus the stack frame used up by the call which returned the lazy sequence is popped before any forcing takes place. Let's have a look at an example producer function:

```
(defn foo-producer [] ; not tail recursive...
  (lazy-seq
    (cons :foo             ; because it returns the value of the cons call...
          (foo-producer)))) ; which wraps a non-tail self-call
```

This works because `lazy-seq` returns *immediately*, thus `(cons :foo (foo-producer))` also returns immediately and the stack frame used up by the outer call to `foo-producer` is immediately popped. The inner call to `foo-producer` is hidden in the `rest` part of the sequence, which is a thunk; if/when that thunk is forced, it will briefly use up its own frame on the stack, but then return immediately as described above etc.
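At the REPL, a small sketch along these lines stays at a shallow stack depth no matter how far you walk the sequence, even though `foo-producer` calls itself without being tail recursive:

```
(take 5 (foo-producer))
;; => (:foo :foo :foo :foo :foo)

(nth (foo-producer) 100000)
;; => :foo -- a hundred thousand forcing steps, none of them nested on the stack
```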
Chunking (mentioned by dbyrne) changes this picture very slightly, because a larger number of elements gets produced at each step, but the principle remains the same: each step uses up some stack while the corresponding elements of the lazy seq are being produced, then that stack is reclaimed *before* more forcing takes place.
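A quick way to see the chunking at the REPL (a sketch assuming the usual chunk size of 32):

```
(first (map (fn [x] (println "producing" x) x)
            (range 40)))
;; prints "producing 0" through "producing 31" -- one whole chunk --
;; even though only the first element was asked for; then returns 0
```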
> In what scenarios are lazy sequences inefficient?
>
> In what scenarios are lazy sequences most efficient?

There's no point to being lazy if you need to hold the entire thing at once anyway. A lazy sequence makes a heap allocation at every step when not chunked, or at every chunk -- once every 32 steps -- when chunked; avoiding that can net you a performance gain in some situations.

However, lazy sequences enable a pipelined mode of data processing:

```
(->> (lazy-seq-producer)               ; possibly (->> (range)
     (a-lazy-seq-transformer-function) ;               (filter even?)
     (another-transformer-function))   ;               (map inc))
```

Doing this in a strict way would allocate plenty of heap anyway, because you'd have to keep the intermediate results around to pass them to the next processing stage. Moreover, you'd need to keep the whole thing around, which is actually impossible in the case of `(range)` -- an infinite sequence! -- and when it is possible, it is usually inefficient.
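With the lazy pipeline, by contrast, only as much of the input is realized as the consumer actually asks for (a chunk at a time, given chunking). A small sketch using the concrete pipeline from the comments above:

```
(->> (range)        ; infinite: 0, 1, 2, ...
     (filter even?) ; 0, 2, 4, ...
     (map inc)      ; 1, 3, 5, ...
     (take 5))
;; => (1 3 5 7 9)
```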