I post here relevant excerpts from a very enlightening and encouraging answer by Jon Masamitsu from Oracle, which I got from the HotSpot GC mailing list (hotspot-gc-use@openjdk.java.net) -- he works on HotSpot, so this is very good news indeed.

At any rate, the question remains open for now (I can't credit myself for quoting an email :-) ), so please add your suggestions!

*Formatting: quotes from the original post are more heavily indented than Jon's response.*

> > > > It is our understanding that so long as the maximum free chunk size is larger than the young generation (assuming no humungous object allocation), every object promotion should succeed.
>
> To a very large degree this is correct. There are circumstances under which an object promoted from the young generation into the CMS generation will require more space in the CMS generation than it did in the young generation. I don't think this happens to a significant extent.

The above is very encouraging, since we can definitely dedicate some spare memory to protect against the rare cases he describes, and it sounds like we'd be doing fine otherwise.

<--snip-->

> > > > **My question to you**: assuming this all reflects a prolonged peak workload (workload at any given point in time in production will only be lower), does this sound like a valid approach? To what degree of reliability do you reckon we should be able to count on the maximum free chunk size statistic from the GC log?
>
> The maximum free chunk size is exact at the time GC prints it, but it can be stale by the time you read it and make your decisions.

For our workloads, this metric is on a *very slow* downward spiral, so a little staleness won't hurt us.

<--snip-->

> > > > We are definitely open for suggestions, but request that they be limited to solutions available on HotSpot (No Azul for us, at least for now). Also, G1 by itself is no solution unless we can come up with a similar metric that will give us advance warning before Full GCs, or any GCs that significantly exceed our SLA (and these can occasionally occur).
>
> I think that the use of maximum free chunk size as your metric is a good choice. It is very conservative (which sounds like what you want) and not subject to odd mixtures of object sizes.
>
> For G1 I think you could use the number of completely free regions. I don't know if it is printed in any of the logs currently but it is probably a metric we maintain (or could easily). If the number of completely free regions decreases over time, it could signal that a full GC is coming.
>
> Jon

Thank you Jon!
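
For anyone who wants to act on this metric automatically, here is a minimal sketch (my own, not from Jon's email) of scanning a CMS GC log produced with `-XX:+PrintGCDetails -XX:PrintFLSStatistics=1` for the largest free chunk and warning when it drops below a threshold such as the young generation size. The exact label text ("Max   Chunk Size:") and the fact that the value is reported in heap words are assumptions about the FLS statistics output on our HotSpot version; verify against your own logs before trusting it.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch: scan a GC log written with
 *   -XX:+PrintGCDetails -XX:PrintFLSStatistics=1
 * for "Max   Chunk Size" lines and warn when the largest free chunk in the
 * CMS generation falls below a chosen threshold (e.g. the young-gen size).
 *
 * Assumptions: label text and units (heap words) may differ between HotSpot
 * versions; the default threshold below is a placeholder, not a recommendation.
 */
public class MaxFreeChunkMonitor {

    // Matches lines such as "Max   Chunk Size: 38808526"
    private static final Pattern MAX_CHUNK =
            Pattern.compile("Max\\s+Chunk Size:\\s+(\\d+)");

    public static void main(String[] args) throws IOException {
        String logPath = args.length > 0 ? args[0] : "gc.log";
        // Threshold in heap words; pass your young-generation size here.
        long thresholdWords = args.length > 1
                ? Long.parseLong(args[1])
                : 64L * 1024 * 1024; // placeholder value

        long lastSeen = -1;
        List<String> lines = Files.readAllLines(Paths.get(logPath));
        for (String line : lines) {
            Matcher m = MAX_CHUNK.matcher(line);
            if (m.find()) {
                lastSeen = Long.parseLong(m.group(1));
                if (lastSeen < thresholdWords) {
                    System.err.println("WARNING: max free chunk (" + lastSeen
                            + " words) is below threshold (" + thresholdWords + " words)");
                }
            }
        }
        System.out.println("Last observed max free chunk: " + lastSeen + " words");
    }
}
```

Run it against your gc.log with the young-generation size (in heap words) as the second argument; as Jon notes, the value can be slightly stale, so treat a downward trend across several GCs as the real signal rather than any single reading.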