<h1>Locating code that is filling PermGen with dead Groovy code</h1>
<p>We have had our GlassFish instance go down every two weeks for a while with a <code>java.lang.OutOfMemoryError: PermGen space</code>. I increased the PermGen space to 512 MB and started dumping memory usage with <code>jstat -gc</code>. After two weeks I came up with the following graph, which shows how the PermGen space is steadily increasing (the x-axis is minutes, the y-axis KB). <img src="https://i.stack.imgur.com/j4LVr.png" alt="Graph of increasing PermGen usage"></p>

<p>I tried googling for a profiling tool that could pinpoint the error, and a thread here on SO mentioned jmap, which proved quite helpful. Of the approximately 14000 lines dumped by <code>jmap -permstat $PID</code>, approximately 12500 contained <code>groovy/lang/GroovyClassLoader$InnerLoader</code>, pointing to some kind of memory leak in either our own Groovy code or Groovy itself. I have to point out that Groovy constitutes less than 1% of the relevant codebase.</p>

<p>Example output below:</p>

<pre><code>class_loader        classes  bytes     parent_loader       alive?  type
&lt;bootstrap&gt;         3811     14830264  null                live    &lt;internal&gt;
0x00007f3aa7e19d20  20       164168    0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aa7c850d0  20       164168    0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aa5d15128  21       181072    0x00007f3a9607f010  dead    groovy/lang/GroovyClassLoader$InnerLoader@0x00007f3a7afb4120
0x00007f3aad0b40e8  36       189816    0x00007f3a9d31fbf8  dead    org/apache/jasper/servlet/JasperLoader@0x00007f3a7d0caf00
....
</code></pre>

<p>So how can I proceed to find out more about what code is causing this?</p>

<p>From <a href="http://groovy.dzone.com/news/groovyshell-and-memory-leaks" rel="nofollow noreferrer">this article</a> I infer that our Groovy code is dynamically creating classes somewhere. And from the jmap dump I can see that most of the dead objects/classes(?) have the same parent_loader, although I am unsure what that means in this context. I do not know how to proceed from here.</p>

<h2>Addendum</h2>

<p>For latecomers, it's worth pointing out that the <em>accepted answer does not fix the issue</em>. It merely extends the period between reboots roughly tenfold, by not storing so much class info. What actually fixed our problem was getting rid of the code that generated the classes. We used the validation (design-by-contract) framework <a href="http://oval.sourceforge.net/" rel="nofollow noreferrer">OVal</a>, which lets you script custom constraints in Groovy as annotations on methods and classes. Replacing the annotations with explicit pre- and postconditions in plain Java was tedious, but it got the job done. I suspect that each time an OVal constraint was checked, a new anonymous class was created, and somehow the associated class data caused the memory leak.</p>
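<p>As a first triage step, a dump like the one above can be aggregated by loader type to see which loaders dominate the dead-class count and bytes. This is a sketch, assuming the dump has been saved to a file (the name <code>permstat.txt</code> is illustrative) and that the columns are exactly the six shown in the example output:</p>

```shell
# Aggregate a saved `jmap -permstat` dump: for each loader type,
# count dead loaders and sum their bytes, then sort by count.
# Assumes the dump was captured with: jmap -permstat $PID > permstat.txt
awk '$5 == "dead" { count[$6]++; bytes[$6] += $3 }
     END { for (t in count)
             printf "%8d loaders %12d bytes  %s\n", count[t], bytes[t], t }' \
    permstat.txt | sort -rn
```

<p>If one <code>GroovyClassLoader$InnerLoader</code> parent dominates, grouping by <code>$4</code> (parent_loader) instead narrows the leak to a single loader instance.</p>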
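<p>To make the addendum's fix concrete, here is a hypothetical sketch of the kind of change described: a Groovy-scripted OVal constraint (which compiles a Groovy expression and thus generates classes at runtime) replaced by an explicit precondition check in plain Java, which generates nothing. The <code>Account</code> class and its fields are invented for illustration, not taken from the actual codebase:</p>

```java
// Before (leaks PermGen): an OVal-style Groovy-scripted constraint, e.g.
//   @Pre(expr = "amount >= 0 && amount <= _this.balance", lang = "groovy")
//   public void withdraw(long amount) { ... }
// After: the same precondition written as plain Java.
public class Account {
    private long balance;

    public Account(long balance) {
        this.balance = balance;
    }

    public void withdraw(long amount) {
        // Explicit precondition, formerly a Groovy-scripted annotation.
        if (amount < 0 || amount > balance) {
            throw new IllegalArgumentException(
                "precondition violated: 0 <= amount <= balance");
        }
        balance -= amount;
    }

    public long getBalance() {
        return balance;
    }
}
```

<p>The plain-Java version is more verbose, but nothing is compiled per validation, so no <code>InnerLoader</code> instances pile up in PermGen.</p>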