If you want a simple answer, it is that there is no simple answer. No amount of calling answers (and by implication people) "lazy" is going to help.

> How fast should I expect memory allocations to run (MB/s) on a standard modern CPU?

At the speed at which the JVM can zero memory, assuming that the allocation does not trigger a garbage collection. If it does trigger garbage collection, it is impossible to predict without knowing what GC algorithm is used, the heap size and other parameters, and an analysis of the application's working set of non-garbage objects over the lifetime of the app.

> How does allocation size affect allocation rate?

See above.

> What's the break-even point for number/size of allocations vs. re-use in a pool?

If you want a simple answer, it is that there is no simple answer.

The golden rule is: the bigger your heap is (up to the amount of physical memory available), the smaller the amortized cost of GC'ing a garbage object. With a fast copying garbage collector, the amortized cost of freeing a garbage object approaches zero as the heap gets larger. The cost of the GC is actually determined (in simplistic terms) by the number and size of the non-garbage objects that the GC has to deal with.

Under the assumption that your heap is large, the lifecycle cost of allocating and GC'ing a large object (in one GC cycle) approaches the cost of zeroing the memory when the object is allocated.

**EDIT**: If all you want is some simple numbers, write a simple application that allocates and discards large buffers and run it on your machine with various GC and heap parameters and see what happens. But beware that this is not going to give you a realistic answer, because real GC costs depend on an application's non-garbage objects.

*I'm not going to write a benchmark for you because **I know** that it would give you bogus answers.*

**EDIT 2**: In response to the OP's comments.

> So, I should expect allocations to run about as fast as System.arraycopy, or a fully JITed array initialization loop (about 1GB/s on my last bench, but I'm dubious of the result)?

Theoretically, yes. In practice, it is difficult to measure in a way that separates the allocation costs from the GC costs.

> By heap size, are you saying allocating a larger amount of memory for JVM use will actually reduce performance?

No, I'm saying it is likely to *increase* performance. Significantly. (Provided that you don't run into OS-level virtual memory effects.)

> Allocations are just for arrays, and almost everything else in my code runs on the stack. It should simplify measuring and predicting performance.

Maybe. Frankly, I think that you are not going to get much improvement by recycling buffers.

But if you are intent on going down this path, create a buffer pool *interface* with two implementations. The first is a real thread-safe buffer pool that recycles buffers. The second is a dummy pool which simply allocates a new buffer each time `alloc` is called and treats `dispose` as a no-op.
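A minimal sketch of what that interface and its two implementations might look like (only `alloc` and `dispose` come from the description above; the type names and the `ConcurrentLinkedQueue`-based pooling strategy are illustrative assumptions, not part of the answer):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Abstraction the rest of the code programs against. */
interface BufferPool {
    byte[] alloc(int size);      // hand out a buffer of at least 'size' bytes
    void dispose(byte[] buffer); // return a buffer to the pool (or drop it)
}

/** Dummy pool: allocates every time and lets the GC reclaim buffers. */
class NonPoolingBufferPool implements BufferPool {
    @Override
    public byte[] alloc(int size) {
        return new byte[size];   // plain allocation every time
    }

    @Override
    public void dispose(byte[] buffer) {
        // no-op: the buffer becomes garbage once it is unreachable
    }
}

/** Real pool: recycles fixed-size buffers through a thread-safe queue. */
class RecyclingBufferPool implements BufferPool {
    private final int bufferSize;
    private final Queue<byte[]> free = new ConcurrentLinkedQueue<>();

    RecyclingBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    @Override
    public byte[] alloc(int size) {
        if (size > bufferSize) {
            return new byte[size];          // too big to pool; fall back to plain allocation
        }
        byte[] buffer = free.poll();
        // Note: a recycled buffer still holds its previous contents; it is not zeroed.
        return (buffer != null) ? buffer : new byte[bufferSize];
    }

    @Override
    public void dispose(byte[] buffer) {
        if (buffer.length == bufferSize) {
            free.offer(buffer);             // recycle only buffers of the pooled size
        }
    }
}
```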
Finally, allow the application developer to choose between the pool implementations via a `setBufferPool` method, constructor parameters, and/or runtime configuration properties. The application should also be able to supply a buffer pool class or instance of its own making.
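For example, the wiring could look roughly like this (again only a sketch; `FrameProcessor` is a hypothetical component and the types come from the sketch above, the answer does not prescribe this class):

```java
/** Hypothetical component that allocates its working buffers through a pool. */
class FrameProcessor {
    private volatile BufferPool bufferPool;

    FrameProcessor() {
        // Default: no pooling, let the GC do the work.
        this.bufferPool = new NonPoolingBufferPool();
    }

    /** Lets the application swap in any BufferPool implementation of its choosing. */
    public void setBufferPool(BufferPool pool) {
        this.bufferPool = pool;
    }

    public void process(byte[] input) {
        byte[] scratch = bufferPool.alloc(input.length);
        try {
            System.arraycopy(input, 0, scratch, 0, input.length);
            // ... do the real work on 'scratch' ...
        } finally {
            bufferPool.dispose(scratch);
        }
    }
}
```

Structured this way, switching between the recycling pool, the dummy pool, or a pool the application supplies itself is a one-line configuration change, which also makes it straightforward to measure whether recycling actually helps for your workload.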
 
