<p>I'm not sure I'm understanding your question correctly, but I'll try to answer.</p>

<p>Cache misses are caused by the processor requiring data that is scattered all over memory.</p>

<p>One very common way of increasing cache hits is organizing your data so that everything accessed sequentially is in the same region of memory. Judging by your explanation, I think this is most likely your problem: your <code>A</code> objects are scattered all over the place.</p>

<p>If you're calling regular <code>new</code> every single time you need to allocate an <code>A</code>, you'll probably end up with all of your <code>A</code> objects being scattered.</p>

<p>You can create a custom allocator for objects that will be created many times and accessed sequentially. This custom allocator could allocate a large number of objects up front and hand them out as requested. This may be similar to what you meant by reordering your data.</p>

<p>It can take a bit of work to implement this, however, because you have to consider cases such as what happens when it runs out of objects, how it knows which objects have been handed out, and so on.</p>

<pre><code>// This example is very simple. Instead of using new to create an Object,
// the code can just call Allocate() and use the pointer returned.
// This ensures that all Object instances reside in the same region of memory.
struct CustomAllocator
{
    CustomAllocator() : nextObject(cache) { }

    // Note: no bounds checking; a real allocator must handle
    // running out of objects in the cache.
    Object* Allocate()
    {
        return nextObject++;
    }

    Object* nextObject;
    Object cache[1024];
};
</code></pre>

<p>Another method involves caching operations that work on sequential data but aren't performed sequentially. I think this is what you meant by having a separate vector.</p>

<p>However, it's important to understand that your CPU doesn't just keep one section of memory in cache at a time. It keeps multiple sections of memory cached.</p>

<p>If you're jumping back and forth between operations on data in one section and operations on data in another section, this most likely will not cause many cache misses; your CPU can and should keep both sections cached at the same time.</p>

<p>If you're jumping between operations on 50 different sets of data, you'll probably encounter many cache misses. In this scenario, caching operations would be beneficial.</p>

<p>In your case, I don't think caching operations will give you much benefit. Ensuring that all of your <code>A</code> objects reside in the same section of memory, however, probably will.</p>

<p>Another thing to consider is threading, but this can get pretty complicated. If your thread is doing a lot of context switches, you may encounter a lot of cache misses.</p>
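<p>For what it's worth, you can get the same contiguous layout without writing an allocator at all: a <code>std::vector</code> guarantees its elements live in one contiguous block, so sequential iteration uses each fetched cache line fully. The <code>A</code> struct and <code>SumValues</code> function below are hypothetical stand-ins for your types, a minimal sketch rather than your actual code:</p>

```cpp
#include <vector>
#include <cassert>

// Hypothetical stand-in for the question's A objects.
struct A {
    float value;
};

// Iterating a std::vector touches memory strictly in order:
// elements sit back-to-back, so each cache line the CPU fetches
// is fully used before the next one is needed.
float SumValues(const std::vector<A>& items) {
    float total = 0.0f;
    for (const A& a : items) {
        total += a.value;
    }
    return total;
}
```

<p>The contiguity is easy to verify: adjacent elements are exactly <code>sizeof(A)</code> apart, i.e. <code>&amp;items[1] == &amp;items[0] + 1</code>. The trade-off versus a fixed pool is that a growing vector may reallocate and invalidate pointers, so reserve capacity up front if other code holds pointers into it.</p>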