Preallocate and fill. This reduced my runtime from about 1000 seconds to 20 seconds:

```python
import numpy

# Allocate the full output array once, then fill one slice per iteration.
data_stack = numpy.empty((321, 720, 1000), dtype=numpy.float32)
for index in range(len(data)):
    data_stack[:, :, index] = data[index]
```

I was doing something similar. dstack is not a good solution to this problem. A colleague reminded me of the dynamic array problem and amortized analysis: http://anh.cs.luc.edu/363/notes/06A_Amortizing.html

When you want to expand a dynamic array, you need to allocate a new array that can hold both the original data and the new data, then copy the old array and the new data into it. That is an expensive operation.

Suppose you have an array of size 10 and you want to add 2 items to it, one at a time. To add the first item you need to expand the array to size 11 and copy in 11 items (the original 10 plus 1 new item). To add the second item you need to expand the array to size 12 and copy in 12 items. If you had known ahead of time that you were adding 2 items, you could have resized the array to 12 from the start and copied only 12 items instead of a total of 23. It turns out that doubling the size of the array every time you run out of space is a much more efficient strategy.

How this applies here: dstack doesn't double the size of the ndarray; it allocates only as much memory as immediately needed. So each time you call dstack, you copy all of the data already in your ndarray into a new ndarray with room for the new data. Notice that the time each dstack call takes grows with every call.
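To make the doubling idea concrete, here is a minimal sketch of a grow-by-doubling buffer for stacking 2-D frames. This is not from the original answer: the class name, the shape handling, and the initial capacity are all illustrative assumptions.

```python
import numpy

class FrameStack:
    """Grows its backing array by doubling, so appends cost amortized O(1) copies."""

    def __init__(self, frame_shape, dtype=numpy.float32, initial_capacity=4):
        # Frames are stacked along the last axis, as with dstack.
        self._buffer = numpy.empty(frame_shape + (initial_capacity,), dtype=dtype)
        self._count = 0

    def append(self, frame):
        capacity = self._buffer.shape[-1]
        if self._count == capacity:
            # Out of space: allocate double the capacity and copy everything once.
            bigger = numpy.empty(self._buffer.shape[:-1] + (2 * capacity,),
                                 dtype=self._buffer.dtype)
            bigger[..., :capacity] = self._buffer
            self._buffer = bigger
        self._buffer[..., self._count] = frame
        self._count += 1

    def as_array(self):
        # Return only the slices actually filled.
        return self._buffer[..., :self._count]
```

With doubling, appending n frames copies O(n) elements in total; reallocating to exactly the new size on every append copies O(n^2) elements in total.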
 
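If you want to see the quadratic blow-up yourself, a rough timing sketch like the following (the frame shape and count are arbitrary choices, not values from the original answer) compares growing with numpy.dstack against filling a preallocated array:

```python
import time
import numpy

frames = [numpy.ones((64, 64), dtype=numpy.float32) for _ in range(500)]

# Growing with dstack: every call copies the whole accumulated stack.
start = time.perf_counter()
stack = frames[0][:, :, numpy.newaxis]
for frame in frames[1:]:
    stack = numpy.dstack((stack, frame))
print("dstack loop:", time.perf_counter() - start)

# Preallocating: each frame is copied exactly once.
start = time.perf_counter()
pre = numpy.empty((64, 64, len(frames)), dtype=numpy.float32)
for index, frame in enumerate(frames):
    pre[:, :, index] = frame
print("preallocate:", time.perf_counter() - start)
```

The first loop performs on the order of n^2 element copies across its n calls, while the second performs on the order of n, which is the same effect behind the 1000-second to 20-second improvement reported above.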
