It spends 50% of its time in that function because you call that function a lot. Optimizing that one function will only yield incremental improvements to performance.

Alternatively, just call the function less!

You've already started down that path by setting up a spatial partitioning scheme (look up [Quadtrees](http://en.wikipedia.org/wiki/Quadtree) to see a more advanced form of your technique; a minimal sketch is included at the end of this answer).

A second approach is to break your N*N loop into an incremental form and to use **a CPU budget**.

You can allocate a CPU budget to each of the modules that want to do work during frame time (during Update). Collision is one of these modules; AI might be another. (A sketch of splitting the frame between several modules is also included at the end of this answer.)

Let's say you want to run your game at 60 fps. That gives you about 1/60 s = 0.0167 s of CPU time to burn between frames. Now we can split those 0.0167 s between our modules. Let's give **collision** 30% of the budget: **0.005 s**.

Now your collision algorithm knows it can only spend 0.005 s working. If it runs out of time, it postpones the remaining tasks until later - you make the algorithm incremental. Code for achieving this can be as simple as:

```csharp
const double CollisionBudget = 0.005;

Collision[] _allPossibleCollisions;
int _lastCheckedCollision;

void HandleCollisions()
{
    // HighPerformanceCounter stands in for any high-resolution timer that
    // reports elapsed seconds (e.g. System.Diagnostics.Stopwatch).
    var startTime = HighPerformanceCounter.Now;

    if (_allPossibleCollisions == null ||
        _lastCheckedCollision >= _allPossibleCollisions.Length)
    {
        // Start a new series
        _allPossibleCollisions = GenerateAllPossibleCollisions();
        _lastCheckedCollision = 0;
    }

    for (var i = _lastCheckedCollision; i < _allPossibleCollisions.Length; i++)
    {
        // Don't go over the budget
        if (HighPerformanceCounter.Now - startTime > CollisionBudget)
        {
            break;
        }

        // Remember where to resume on the next frame
        _lastCheckedCollision = i + 1;

        if (CheckCollision(_allPossibleCollisions[i]))
        {
            HandleCollision(_allPossibleCollisions[i]);
        }
    }
}
```

There, now it doesn't matter how fast the collision code is: it does as much work as it can each frame **without affecting the user's perceived performance**.

Benefits include:

- The algorithm is designed to run out of time; it simply resumes on the next frame, so running over budget is not an edge case you need to worry about.
- CPU budgeting becomes more and more important as the number of advanced/time-consuming algorithms grows. Think AI. So it's a good idea to implement such a system early on.
- Human response is much slower than 30 Hz, and your frame loop runs at 60 Hz. That gives the algorithm several frames to complete a full pass, so it's OK that it doesn't finish all its work in a single frame.
- Doing it this way gives **stable**, **data-independent** frame rates.
- It still benefits from performance optimizations to the collision algorithm itself.
- Good collision algorithms are designed to track down the "sub-frame" moment at which a collision happened. That is, *you will never* be so lucky as to sample a collision *just* as it happens - thinking you do is lying to yourself. (See the swept-circle sketch at the end of this answer.)
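For reference, here is a minimal sketch of the quadtree mentioned above, assuming entities are stored by a single point position. `Rect`, `QuadTree`, the capacity and the depth limit are illustrative choices, not part of any particular engine:

```csharp
using System.Collections.Generic;

// Minimal point quadtree: stores items by position so each entity only needs
// to be tested against nearby candidates instead of every other entity.
public struct Rect
{
    public float X, Y, W, H;
    public Rect(float x, float y, float w, float h) { X = x; Y = y; W = w; H = h; }

    public bool Contains(float px, float py) =>
        px >= X && px < X + W && py >= Y && py < Y + H;

    public bool Intersects(Rect o) =>
        X < o.X + o.W && o.X < X + W && Y < o.Y + o.H && o.Y < Y + H;
}

public class QuadTree<T>
{
    private const int Capacity = 8;   // max items per leaf before splitting
    private const int MaxDepth = 8;   // stops pathological splitting

    private readonly Rect _bounds;
    private readonly int _depth;
    private readonly List<(float X, float Y, T Item)> _items = new List<(float, float, T)>();
    private QuadTree<T>[] _children;  // null while this node is a leaf

    public QuadTree(Rect bounds, int depth = 0) { _bounds = bounds; _depth = depth; }

    public void Insert(float x, float y, T item)
    {
        if (!_bounds.Contains(x, y)) return;              // not in this node

        if (_children == null)
        {
            if (_items.Count < Capacity || _depth == MaxDepth)
            {
                _items.Add((x, y, item));
                return;
            }
            Split();                                      // too crowded: subdivide
        }

        foreach (var child in _children) child.Insert(x, y, item);
    }

    public void Query(Rect area, List<T> results)
    {
        if (!_bounds.Intersects(area)) return;

        foreach (var (x, y, item) in _items)
            if (area.Contains(x, y)) results.Add(item);

        if (_children != null)
            foreach (var child in _children) child.Query(area, results);
    }

    private void Split()
    {
        float hw = _bounds.W / 2, hh = _bounds.H / 2;
        _children = new[]
        {
            new QuadTree<T>(new Rect(_bounds.X,      _bounds.Y,      hw, hh), _depth + 1),
            new QuadTree<T>(new Rect(_bounds.X + hw, _bounds.Y,      hw, hh), _depth + 1),
            new QuadTree<T>(new Rect(_bounds.X,      _bounds.Y + hh, hw, hh), _depth + 1),
            new QuadTree<T>(new Rect(_bounds.X + hw, _bounds.Y + hh, hw, hh), _depth + 1),
        };

        // Push existing items down into the new children.
        foreach (var (x, y, item) in _items)
            foreach (var child in _children) child.Insert(x, y, item);
        _items.Clear();
    }
}
```

Each frame you rebuild (or incrementally update) the tree, then for each entity `Query` a small rectangle around it and hand only those candidates to `CheckCollision`. That is what actually shrinks the N*N pair count, while the budget above caps how much of that work runs per frame.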
 
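To make the budget arithmetic concrete, here is a rough sketch of how the Update loop might hand each module its slice of the 0.0167 s frame. The module list, the shares and the `Action<double>` callback shape are assumptions; `System.Diagnostics.Stopwatch` stands in for `HighPerformanceCounter`:

```csharp
using System;
using System.Diagnostics;

// Rough sketch of splitting the frame's CPU time between modules.
public class FrameBudgeter
{
    private const double TargetFrameTime = 1.0 / 60.0;            // ~0.0167 s at 60 fps

    private readonly (string Name, double Share, Action<double> Update)[] _modules;

    public FrameBudgeter(params (string Name, double Share, Action<double> Update)[] modules)
    {
        _modules = modules;
    }

    public void RunFrame()
    {
        foreach (var (name, share, update) in _modules)
        {
            double budget = TargetFrameTime * share;               // e.g. 30% -> 0.005 s
            var timer = Stopwatch.StartNew();

            // Each module is expected to stop itself once its budget is spent,
            // exactly like HandleCollisions above.
            update(budget);

            if (timer.Elapsed.TotalSeconds > budget)
                Console.WriteLine($"{name} overran its {budget:F4} s budget");
        }
    }
}

// Hypothetical usage, assuming HandleCollisions/UpdateAI accept a budget:
//
// var budgeter = new FrameBudgeter(
//     ("Collision", 0.30, b => HandleCollisions(b)),
//     ("AI",        0.20, b => UpdateAI(b)));
//
// // inside the game loop, once per frame:
// budgeter.RunFrame();
```

If a module is written incrementally, as the collision loop above is, overruns should be rare; the warning is just there to show where the time went when the frame rate dips.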

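Finally, on that last bullet: finding the sub-frame moment means sweeping the shapes through the frame rather than testing positions once per frame. As a worked example, here is a sketch for two moving circles; it solves |dp + dv*t| = r1 + r2, i.e. a*t^2 + 2*b*t + c = 0, for the earliest t in [0, dt]. Plain floats are used so it doesn't depend on any particular math library:

```csharp
using System;

// Sketch: earliest time of impact (within one frame of length dt) for two
// circles moving at constant velocity. Returns true and the sub-frame time
// t in [0, dt] if they touch during the frame.
public static class SweptCircles
{
    public static bool TimeOfImpact(
        float p1x, float p1y, float v1x, float v1y, float r1,
        float p2x, float p2y, float v2x, float v2y, float r2,
        float dt, out float t)
    {
        // Work with the relative position and velocity of circle 2 w.r.t. circle 1.
        float dpx = p2x - p1x, dpy = p2y - p1y;
        float dvx = v2x - v1x, dvy = v2y - v1y;
        float r = r1 + r2;

        // |dp + dv*t|^2 = r^2  =>  a*t^2 + 2*b*t + c = 0
        float a = dvx * dvx + dvy * dvy;
        float b = dpx * dvx + dpy * dvy;
        float c = dpx * dpx + dpy * dpy - r * r;

        t = 0f;
        if (c <= 0f) return true;                 // already overlapping at t = 0
        if (a <= 1e-12f || b >= 0f) return false; // no relative motion, or moving apart

        float disc = b * b - a * c;
        if (disc < 0f) return false;              // paths never come close enough

        t = (-b - MathF.Sqrt(disc)) / a;          // earliest root of the quadratic
        return t <= dt;
    }
}
```

Once you have t, you can advance both bodies by t, resolve the contact there, and integrate the remaining dt - t with the post-collision velocities, instead of pretending the hit happened exactly on the frame boundary.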