
# Ways to improve performance consistency
In the following example, one thread is sending "messages" via a ByteBuffer which the consumer is taking. The best performance is very good, but it's not consistent.

```java
import sun.misc.Unsafe;

import java.io.IOException;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {
    public static void main(String... args) throws IOException {
        for (int i = 0; i < 10; i++)
            doTest();
    }

    public static void doTest() {
        final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(64 * 1024);
        final ByteBuffer readBuffer = writeBuffer.slice();
        final AtomicInteger readCount = new PaddedAtomicInteger();
        final AtomicInteger writeCount = new PaddedAtomicInteger();

        for (int i = 0; i < 3; i++)
            performTiming(writeBuffer, readBuffer, readCount, writeCount);
        System.out.println();
    }

    private static void performTiming(ByteBuffer writeBuffer, final ByteBuffer readBuffer,
                                      final AtomicInteger readCount, final AtomicInteger writeCount) {
        writeBuffer.clear();
        readBuffer.clear();
        readCount.set(0);
        writeCount.set(0);

        // Consumer: spins until the writer is ahead, then reads length-prefixed messages.
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                byte[] bytes = new byte[128];
                while (!Thread.interrupted()) {
                    int rc = readCount.get(), toRead;
                    while ((toRead = writeCount.get() - rc) <= 0) ;
                    for (int i = 0; i < toRead; i++) {
                        byte len = readBuffer.get();
                        if (len == -1) {
                            // rewind.
                            readBuffer.clear();
                            // rc++;
                        } else {
                            int num = readBuffer.getInt();
                            if (num != rc)
                                throw new AssertionError("Expected " + rc + " but got " + num);
                            rc++;
                            readBuffer.get(bytes, 0, len - 4);
                        }
                    }
                    readCount.lazySet(rc);
                }
            }
        });
        t.setDaemon(true);
        t.start();
        Thread.yield();

        // Producer: writes length-prefixed messages and publishes progress with lazySet.
        long start = System.nanoTime();
        int runs = 30 * 1000 * 1000;
        int len = 32;
        byte[] bytes = new byte[len - 4];
        int wc = writeCount.get();

        for (int i = 0; i < runs; i++) {
            if (writeBuffer.remaining() < len + 1) {
                // reader has to catch up.
                while (wc - readCount.get() > 0) ;
                // rewind.
                writeBuffer.put((byte) -1);
                writeBuffer.clear();
            }
            writeBuffer.put((byte) len);
            writeBuffer.putInt(i);
            writeBuffer.put(bytes);
            writeCount.lazySet(++wc);
        }
        // reader has to catch up.
        while (wc - readCount.get() > 0) ;
        t.interrupt();
        t.stop();
        long time = System.nanoTime() - start;
        System.out.printf("Message rate was %.1f M/s offsets %d %d %d%n",
                runs * 1e3 / time,
                addressOf(readBuffer) - addressOf(writeBuffer),
                addressOf(readCount) - addressOf(writeBuffer),
                addressOf(writeCount) - addressOf(writeBuffer));
    }

    // assumes -XX:+UseCompressedOops: reads the compressed reference of the first
    // element of the varargs array and scales it by 8 to get an address.
    public static long addressOf(Object... o) {
        long offset = UNSAFE.arrayBaseOffset(o.getClass());
        return UNSAFE.getInt(o, offset) * 8L;
    }

    public static final Unsafe UNSAFE = getUnsafe();

    public static Unsafe getUnsafe() {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            return (Unsafe) field.get(null);
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }

    // AtomicInteger with extra fields so the counters don't share a cache line.
    private static class PaddedAtomicInteger extends AtomicInteger {
        public long p2, p3, p4, p5, p6, p7;

        public long sum() {
            // return 0;
            return p2 + p3 + p4 + p5 + p6 + p7;
        }
    }
}
```
This prints the timings for the same block of data. The numbers at the end are the relative addresses of the objects, which show they are laid out in memory the same way each time. Running longer tests of 10 runs each shows that a given combination produces the same performance repeatedly.

```
Message rate was 63.2 M/s offsets 136 200 264
Message rate was 80.4 M/s offsets 136 200 264
Message rate was 80.0 M/s offsets 136 200 264
Message rate was 81.9 M/s offsets 136 200 264
Message rate was 82.2 M/s offsets 136 200 264
Message rate was 82.5 M/s offsets 136 200 264
Message rate was 79.1 M/s offsets 136 200 264
Message rate was 82.4 M/s offsets 136 200 264
Message rate was 82.4 M/s offsets 136 200 264
Message rate was 34.7 M/s offsets 136 200 264
Message rate was 39.1 M/s offsets 136 200 264
Message rate was 39.0 M/s offsets 136 200 264
```

Each set of buffers and counters is tested three times, and a given set of buffers appears to give similar results. So I believe there is something about the way these buffers are laid out in memory that I am not seeing.

Is there anything which might give the higher performance more often? It looks like a cache collision, but I can't see where this could be happening.

BTW: `M/s` is millions of messages per second and is more than anyone is likely to need, but it would be good to understand how to make it consistently fast.

---

EDIT: Using synchronized with wait and notify makes the result much more consistent, but not faster.

```
Message rate was 6.9 M/s
Message rate was 7.8 M/s
Message rate was 7.9 M/s
Message rate was 6.7 M/s
Message rate was 7.5 M/s
Message rate was 7.7 M/s
Message rate was 7.3 M/s
Message rate was 7.9 M/s
Message rate was 6.4 M/s
Message rate was 7.8 M/s
```
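The post doesn't include the synchronized variant, so the sketch below is only an illustration of the kind of blocking hand-off the edit describes; the class and method names (`BlockingCounter`, `waitForMoreThan`) are mine, not from the original code.

```java
// Hypothetical sketch (not the author's code): a counter that blocks with
// wait/notify instead of busy-spinning on writeCount/readCount.
class BlockingCounter {
    private int value;

    public synchronized void set(int newValue) {
        value = newValue;
        notifyAll();                 // wake the thread waiting for progress
    }

    public synchronized int get() {
        return value;
    }

    // Replaces busy-wait loops like "while (writeCount.get() - rc <= 0);":
    // block until the counter moves past 'threshold', then return its value.
    public synchronized int waitForMoreThan(int threshold) throws InterruptedException {
        while (value <= threshold)
            wait();
        return value;
    }
}
```

The reader would then use something like `int toRead = writeCount.waitForMoreThan(rc) - rc;`. Pacing both threads through the same lock removes the sensitivity to memory layout, but every hand-off now involves a monitor wake-up, which fits the drop to a few M/s shown above.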
---

EDIT: By using `taskset`, I can make the performance consistent if I lock the two threads to the same core.

```
Message rate was 35.1 M/s offsets 136 200 216
Message rate was 34.0 M/s offsets 136 200 216
Message rate was 35.4 M/s offsets 136 200 216
Message rate was 35.6 M/s offsets 136 200 216
Message rate was 37.0 M/s offsets 136 200 216
Message rate was 37.2 M/s offsets 136 200 216
Message rate was 37.1 M/s offsets 136 200 216
Message rate was 35.0 M/s offsets 136 200 216
Message rate was 37.1 M/s offsets 136 200 216
```

If I use any two logical threads on different cores, I get the inconsistent behaviour:

```
Message rate was 60.2 M/s offsets 136 200 216
Message rate was 68.7 M/s offsets 136 200 216
Message rate was 55.3 M/s offsets 136 200 216
Message rate was 39.2 M/s offsets 136 200 216
Message rate was 39.1 M/s offsets 136 200 216
Message rate was 37.5 M/s offsets 136 200 216
Message rate was 75.3 M/s offsets 136 200 216
Message rate was 73.8 M/s offsets 136 200 216
Message rate was 66.8 M/s offsets 136 200 216
```

---

EDIT: It appears that triggering a GC will shift the behaviour. These show repeated tests on the same buffers and counters with a manually triggered GC halfway through.

Faster after the GC:

```
Message rate was 27.4 M/s offsets 136 200 216
Message rate was 27.8 M/s offsets 136 200 216
Message rate was 29.6 M/s offsets 136 200 216
Message rate was 27.7 M/s offsets 136 200 216
Message rate was 29.6 M/s offsets 136 200 216
[GC 14312K->1518K(244544K), 0.0003050 secs]
[Full GC 1518K->1328K(244544K), 0.0068270 secs]
Message rate was 34.7 M/s offsets 64 128 144
Message rate was 54.5 M/s offsets 64 128 144
Message rate was 54.1 M/s offsets 64 128 144
Message rate was 51.9 M/s offsets 64 128 144
Message rate was 57.2 M/s offsets 64 128 144
```

and slower:

```
Message rate was 61.1 M/s offsets 136 200 216
Message rate was 61.8 M/s offsets 136 200 216
Message rate was 60.5 M/s offsets 136 200 216
Message rate was 61.1 M/s offsets 136 200 216
[GC 35740K->1440K(244544K), 0.0018170 secs]
[Full GC 1440K->1302K(244544K), 0.0071290 secs]
Message rate was 53.9 M/s offsets 64 128 144
Message rate was 54.3 M/s offsets 64 128 144
Message rate was 50.8 M/s offsets 64 128 144
Message rate was 56.6 M/s offsets 64 128 144
Message rate was 56.0 M/s offsets 64 128 144
Message rate was 53.6 M/s offsets 64 128 144
```

---

EDIT: Using @BegemoT's library to print the core id used, I get the following on a 3.8 GHz i7 (home PC).

Note: the offsets are incorrect by a factor of 8. As the heap size was small, the JVM doesn't multiply the reference by 8 like it does with a larger heap (but one smaller than 32 GB).

```
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 54.4 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#6]
Message rate was 54.2 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 60.7 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 25.5 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 25.9 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 26.0 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 61.0 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 61.8 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 60.7 M/s offsets 1088 1600 2112
```

You can see that the same logical threads are being used, but the performance varies between runs, though not within a run (within a run the same objects are used).
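Regarding the factor-of-8 note above: with a small heap, HotSpot can use compressed oops without scaling (a shift of 0), while larger heaps up to roughly 32 GB use a shift of 3 (multiply by 8). A variant of `addressOf` that takes the shift explicitly is sketched below; it is meant as a drop-in addition to the `Main` class above (it reuses the `UNSAFE` field defined there), the `oopShift` parameter and the unsigned mask are my additions, and the actual shift for a given run would need to be confirmed, e.g. via `-XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode`.

```java
// Sketch only: the same compressed-oops trick as addressOf(...) above,
// but with the oop shift passed in rather than assumed to be 3 (i.e. * 8).
public static long addressOf(int oopShift, Object... o) {
    long offset = UNSAFE.arrayBaseOffset(o.getClass());
    int compressedRef = UNSAFE.getInt(o, offset);        // compressed reference of o[0]
    return (compressedRef & 0xFFFFFFFFL) << oopShift;    // shift 0 for small heaps, 3 otherwise
}
```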
---

I have found the problem. It was a memory layout issue, but I could see a simple way to resolve it. ByteBuffer cannot be extended, so you can't add padding to it; instead I create an object which I discard.

```java
final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(64 * 1024);
final ByteBuffer readBuffer = writeBuffer.slice();
new PaddedAtomicInteger(); // allocated and discarded purely to change the heap layout
final AtomicInteger readCount = new PaddedAtomicInteger();
final AtomicInteger writeCount = new PaddedAtomicInteger();
```

Without this extra padding (of the object which is not used), the results look like this on a 3.8 GHz i7:

```
Message rate was 38.5 M/s offsets 3392 3904 4416
Message rate was 54.7 M/s offsets 3392 3904 4416
Message rate was 59.4 M/s offsets 3392 3904 4416
Message rate was 54.3 M/s offsets 1088 1600 2112
Message rate was 56.3 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 28.0 M/s offsets 1088 1600 2112
Message rate was 28.1 M/s offsets 1088 1600 2112
Message rate was 28.0 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 54.5 M/s offsets 1088 1600 2112
Message rate was 54.2 M/s offsets 1088 1600 2112
Message rate was 55.1 M/s offsets 1088 1600 2112
Message rate was 25.5 M/s offsets 1088 1600 2112
Message rate was 25.6 M/s offsets 1088 1600 2112
Message rate was 25.6 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 54.7 M/s offsets 1088 1600 2112
Message rate was 54.4 M/s offsets 1088 1600 2112
Message rate was 57.0 M/s offsets 1088 1600 2112
Message rate was 55.9 M/s offsets 1088 1600 2112
Message rate was 56.3 M/s offsets 1088 1600 2112
Message rate was 51.4 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 56.1 M/s offsets 1088 1600 2112
Message rate was 46.4 M/s offsets 1088 1600 2112
Message rate was 46.4 M/s offsets 1088 1600 2112
Message rate was 47.4 M/s offsets 1088 1600 2112
```

With the discarded padded object:

```
Message rate was 54.3 M/s offsets 3392 4416 4928
Message rate was 53.1 M/s offsets 3392 4416 4928
Message rate was 59.2 M/s offsets 3392 4416 4928
Message rate was 58.8 M/s offsets 1088 2112 2624
Message rate was 58.9 M/s offsets 1088 2112 2624
Message rate was 59.3 M/s offsets 1088 2112 2624
Message rate was 59.4 M/s offsets 1088 2112 2624
Message rate was 59.0 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.2 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.9 M/s offsets 1088 2112 2624
Message rate was 60.6 M/s offsets 1088 2112 2624
Message rate was 59.6 M/s offsets 1088 2112 2624
Message rate was 60.3 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.9 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.7 M/s offsets 1088 2112 2624
Message rate was 61.6 M/s offsets 1088 2112 2624
Message rate was 60.8 M/s offsets 1088 2112 2624
Message rate was 60.3 M/s offsets 1088 2112 2624
Message rate was 60.7 M/s offsets 1088 2112 2624
Message rate was 58.3 M/s offsets 1088 2112 2624
```
Unfortunately, there is always the risk that after a GC the objects will not be laid out optimally. The only way to resolve this may be to add padding to the original class. :(
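One way to make the layout less dependent on allocation order and GC behaviour, sketched below rather than taken from the post, is to keep the padding inside the counter itself on both sides of the hot field, using a small class hierarchy (HotSpot lays out superclass fields before subclass fields). On Java 8+, `sun.misc.Contended` (with `-XX:-RestrictContended`) asks the JVM for the same isolation. All class names here are illustrative.

```java
// Double-sided padding via inheritance: ~56 bytes of long fields end up
// before the counter and ~56 bytes after it, so it keeps its own cache
// line(s) regardless of which objects the GC places next to it.
class PadBefore {
    long p01, p02, p03, p04, p05, p06, p07;
}

class CounterValue extends PadBefore {
    volatile int value;
}

class PadAfter extends CounterValue {
    long p11, p12, p13, p14, p15, p16, p17;
}

class PaddedCounter extends PadAfter {
    public void lazySet(int newValue) {
        // A real lazySet would use Unsafe.putOrderedInt (as AtomicInteger does
        // in the original code); a plain volatile write keeps this sketch
        // self-contained at the cost of a slightly stronger memory barrier.
        value = newValue;
    }

    public int get() {
        return value;
    }
}
```

The trade-off is extra footprint per counter, which is negligible here. The ByteBuffer objects themselves still cannot be padded this way, which is why the post falls back to allocating a discarded object to space them out.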
 
