
# How to produce the CPU cache effect in C and Java?
In Ulrich Drepper's paper [What every programmer should know about memory](http://www.akkadia.org/drepper/cpumemory.pdf), part 3 (CPU Caches), he shows a graph of the relationship between working-set size and the CPU cycles consumed per operation (in this case, sequential reading). There are two jumps in the graph, which indicate the sizes of the L1 and L2 caches. I wrote my own program to reproduce the effect in C. It simply reads an `int` array sequentially from head to tail, and I have tried different array sizes (from 1 KB to 1 MB). When I plot the data, there are no jumps; the graph is a straight line.

My questions are:

1. Is there something wrong with my method? What is the right way to produce the CPU cache effect (to see the jumps)?
2. I was thinking: if it is a sequential read, it should work like this. Reading the first element is a cache miss, and the following reads within the same cache line (64 bytes) are hits. With the help of prefetching, the latency of reading the next cache line is hidden. Data keeps being read into the L1 cache, and even when the working-set size exceeds the L1 size, the least recently used lines are evicted and prefetching continues. So most of the cache misses are hidden, the time spent fetching data from L2 overlaps with the reads, and the associativity (8-way in my case) also helps hide the latency of reading from L2. So the behaviour of my program should be expected, or am I missing something?
3. Is it possible to get the same effect in Java? (See the sketch after Edit 1 below.)

By the way, I am doing this on Linux.

---

## Edit 1

Thanks to Stephen C's suggestion, here is some additional information. This is my code:

```c
#include <stdlib.h>     /* malloc, free */
#include <string.h>     /* memset */
#include <time.h>       /* clock_gettime */

int *arrayInt;

void initInt(long len)
{
    arrayInt = (int *)malloc(len * sizeof(int));
    memset(arrayInt, 0, len * sizeof(int));
}

long sreadInt(long len)
{
    long i;                             /* loop variable was missing in the original snippet */
    int sum = 0;
    struct timespec tsStart, tsEnd;

    initInt(len);

    clock_gettime(CLOCK_REALTIME, &tsStart);
    for (i = 0; i < len; i++) {
        sum += arrayInt[i];
    }
    clock_gettime(CLOCK_REALTIME, &tsEnd);

    free(arrayInt);
    /* include tv_sec so runs that cross a second boundary are measured correctly */
    return ((tsEnd.tv_sec - tsStart.tv_sec) * 1000000000L
            + (tsEnd.tv_nsec - tsStart.tv_nsec)) / len;
}
```

In the `main()` function I have tried array sizes from 1 KB to 100 MB, and the result is still the same: the average time per element is 2 nanoseconds. I think this is the access time of L1d.

My cache sizes:

- L1d == 32 KB
- L2 == 256 KB
- L3 == 6144 KB
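Regarding question 3, a minimal Java sketch of the same sequential read might look like the following. The class name, the `System.nanoTime()` timing, and the size range are my assumptions chosen to mirror the C version; in practice the JIT would also need a warm-up pass before the timings mean anything.

```java
public class CacheRead {
    // Sequentially read an int[] of the given length and return nanoseconds per element.
    static double sreadInt(int len) {
        int[] arr = new int[len];            // zero-initialized, like the memset in the C version
        long sum = 0;

        long start = System.nanoTime();
        for (int i = 0; i < len; i++) {
            sum += arr[i];
        }
        long end = System.nanoTime();

        if (sum != 0) {                      // keep the loop from being optimized away
            System.out.println(sum);
        }
        return (double) (end - start) / len;
    }

    public static void main(String[] args) {
        // working-set sizes from 1 KB up to roughly 100 MB, doubling each time
        for (long bytes = 1024; bytes <= 100L * 1024 * 1024; bytes *= 2) {
            double nsPerElement = sreadInt((int) (bytes / 4));   // 4 bytes per int
            System.out.printf("%8d KB: %.2f ns/element%n", bytes / 1024, nsPerElement);
        }
    }
}
```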
---

## Edit 2

I've changed my code to use a linked list.

```c
// element type
struct l {
    struct l *n;
    long int pad[NPAD];     // NPAD can be changed; in my case I set it to 1
};

struct l *array;
long globalSum;

// initialize the list
void init(long len)
{
    long i, j;
    struct l *ptr;

    array = (struct l *)malloc(sizeof(struct l));
    ptr = array;
    for (j = 0; j < NPAD; j++) {
        ptr->pad[j] = j;
    }
    ptr->n = NULL;

    for (i = 1; i < len; i++) {
        ptr->n = (struct l *)malloc(sizeof(struct l));
        ptr = ptr->n;
        for (j = 0; j < NPAD; j++) {
            ptr->pad[j] = i + j;
        }
        ptr->n = NULL;
    }
}

// free the list when the measurement is done
void release()
{
    struct l *ptr = array;
    struct l *tmp = NULL;
    while (ptr) {
        tmp = ptr;
        ptr = ptr->n;
        free(tmp);
    }
}

double sread(long len)
{
    int i;
    long sum = 0;
    struct l *ptr;
    struct timespec tsStart, tsEnd;

    init(len);

    ptr = array;

    clock_gettime(CLOCK_REALTIME, &tsStart);
    while (ptr) {
        for (i = 0; i < NPAD; i++) {
            sum += ptr->pad[i];
        }
        ptr = ptr->n;
    }
    clock_gettime(CLOCK_REALTIME, &tsEnd);

    release();

    globalSum += sum;

    /* include tv_sec, as in Edit 1 */
    return (double)((tsEnd.tv_sec - tsStart.tv_sec) * 1000000000L
                    + (tsEnd.tv_nsec - tsStart.tv_nsec)) / (double)len;
}
```

At the end I `printf` `globalSum` in order to prevent the compiler from optimizing the loop away. As you can see, it is still a sequential read. I have even tried array sizes up to 500 MB, and the average time per element is approximately 4 nanoseconds (perhaps because each element needs two accesses, to the data `pad` and to the pointer `n`), the same as for a 1 KB array. So I think it is because cache optimizations such as prefetching hide the latency very well. Am I right? I will try random access and post the result later.

---

## Edit 3

I've tried random access to the linked list. This is the result:

![randomly access a linked list](https://i.stack.imgur.com/z1nYl.jpg)

The first red line is my L1 cache size, the second is L2. So we can see a little jump there, but sometimes the latency is still hidden well.
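Edit 3 does not show the random-access code; one way to build such a randomly ordered chain is sketched below. It reuses the `struct l` layout from Edit 2 but allocates all nodes in one block (so the working-set size stays well defined) and links them in a Fisher-Yates-shuffled order. The function name `initRandom` and the single-block allocation are my assumptions, not the code the measurements above were produced with.

```c
#include <stdlib.h>     /* malloc, free, rand, srand */
#include <time.h>       /* time */

#define NPAD 1

struct l {
    struct l *n;
    long int pad[NPAD];
};

/* Base of the single allocation, kept so it can be released with free(block). */
static struct l *block;

/* Allocate len nodes in one contiguous block and link them in a random order.
 * Chasing ->n from the returned head then visits memory in a shuffled order,
 * which defeats the hardware prefetcher and should expose the cache-size jumps. */
struct l *initRandom(long len)
{
    long *order = malloc(len * sizeof(long));
    struct l *head;
    long i, j;

    block = malloc(len * sizeof(struct l));

    for (i = 0; i < len; i++) {
        order[i] = i;
        for (j = 0; j < NPAD; j++)
            block[i].pad[j] = i + j;
    }

    /* Fisher-Yates shuffle of the visiting order */
    srand((unsigned)time(NULL));
    for (i = len - 1; i > 0; i--) {
        long k = rand() % (i + 1);
        long tmp = order[i];
        order[i] = order[k];
        order[k] = tmp;
    }

    /* link node order[0] -> order[1] -> ... -> order[len-1] */
    for (i = 0; i < len - 1; i++)
        block[order[i]].n = &block[order[i + 1]];
    block[order[len - 1]].n = NULL;

    head = &block[order[0]];
    free(order);
    return head;
}
```

The timing loop from `sread()` above can then be pointed at the returned head; after the measurement, a single `free(block)` releases all nodes.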