# *In general*, there is no way to determine the complexity of a given function

Warning! Wall of text incoming!

## 1. There are [very simple](https://en.wikipedia.org/wiki/Collatz_conjecture) algorithms that no one knows whether they even halt or not

There is [no algorithm](http://en.wikipedia.org/wiki/Halting_Problem) that can decide whether a given program halts or not when given a certain input. Calculating the computational complexity is an even harder problem, since we need to prove not only that the algorithm halts but also *how fast* it does so.

```c
// The Collatz conjecture states that the sequence generated by the following
// algorithm always reaches 1, for any initial positive integer. It has been
// an open problem for 70+ years now.
int col(int n) {
    if (n == 1) {
        return 0;
    } else if (n % 2 == 0) { // even
        return 1 + col(n / 2);
    } else {                 // odd
        return 1 + col(3 * n + 1);
    }
}
```

## 2. [Some algorithms](https://en.wikipedia.org/wiki/Ackermann_function) have weird and off-beat complexities

A general "complexity determining scheme" would easily get too complicated because of these guys:

```c
// The Ackermann function. One of the first examples of a computable
// function that is not primitive recursive.
int ack(int m, int n) {
    if (m == 0) {
        return n + 1;
    } else if (n == 0) {
        return ack(m - 1, 1);
    } else {
        return ack(m - 1, ack(m, n - 1));
    }
}

int f(int n) {
    return ack(n, n);
}

// f(1) = 3
// f(2) = 7
// f(3) = 61
// f(4) takes longer than your wildest dreams to terminate.
```

## 3. [Some functions](https://en.wikipedia.org/wiki/McCarthy_91_function) are very simple but will confuse lots of kinds of static analysis attempts

```c
// McCarthy's 91 function. Try guessing what it does without
// running it or reading the Wikipedia page ;)
int f91(int n) {
    if (n > 100) {
        return n - 10;
    } else {
        return f91(f91(n + 11));
    }
}
```

---

That said, we still need a way to find the complexity of stuff, right? For loops are a simple and common pattern. Take your initial example:

```c
for (i = 0; i < N; i++) {
    for (j = 0; j < i; j++) {
        printf("something\n");
    }
}
```

Since each `printf` is O(1), the time complexity of the algorithm is determined by how many times we run that line. Well, as your TA mentioned, we do this by counting the iterations: the inner body runs i times for each value of i, so it runs (0 + 1 + ... + (N-1)) times in total, which is N\*(N-1)/2.

Since we disregard constant factors and lower-order terms, we get O(N^2).

Now for the trickier cases we can get more mathematical. Try to define a function whose value represents how long the algorithm takes to run, given the size N of the input. **Often we can construct a recursive version of this function directly from the algorithm itself, so calculating the complexity becomes the problem of putting bounds on that function.** We call this function a **recurrence**.

For example:

```c
int fib_like(int n) {
    if (n <= 1) {
        return 17;
    } else {
        return 42 + fib_like(n - 1) + fib_like(n - 2);
    }
}
```

It is easy to see that the running time, in terms of N, is given by (ignoring the constant amount of work per call):

```none
T(N) = 1                  if N <= 1
T(N) = T(N-1) + T(N-2)    otherwise
```

Well, T(N) is just the good old Fibonacci function, and we can use induction to put some bounds on it.

For example, **let's prove, by induction, that T(N) <= 2^N for all N (i.e., that T(N) is O(2^N))**:

- base case (N = 0 or N = 1):

```none
T(0) = 1 <= 1 = 2^0
T(1) = 1 <= 2 = 2^1
```

- inductive case (N > 1):

```none
T(N) = T(N-1) + T(N-2)

applying the inductive hypothesis to T(N-1) and T(N-2)...

T(N) <= 2^(N-1) + 2^(N-2)

so...

T(N) <= 2^(N-1) + 2^(N-1) = 2^N
```

(We can try something similar to prove a lower bound too.)

**In most cases, having a good guess at the final runtime of the function will allow you to solve recurrence problems easily with an induction proof.** Of course, this requires you to be able to guess first; only lots of practice can help you here.

As a final note, I would like to point out the **[Master theorem](https://en.wikipedia.org/wiki/Master_theorem)**, the most commonly used rule for the more difficult recurrence problems that I can think of. Use it when you have to deal with a tricky divide-and-conquer algorithm.
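For instance, here is how it plays out on the classic merge sort (a sketch of the textbook version, not code from your question): the algorithm solves two half-sized subproblems and then does linear work to merge them, so the recurrence is T(N) = 2T(N/2) + O(N), and case 2 of the Master theorem gives T(N) = O(N log N).

```c
#include <stdio.h>
#include <string.h>

// Merge sort, a classic divide-and-conquer algorithm.
// Its running time satisfies the recurrence
//     T(N) = 2*T(N/2) + O(N)
// (two half-sized recursive calls plus a linear-time merge),
// so the Master theorem (a = 2, b = 2, f(N) = O(N), case 2)
// yields T(N) = O(N log N).
static void merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo <= 1) return;      // base case: 0 or 1 elements
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);   // T(N/2)
    merge_sort(a, tmp, mid, hi);   // T(N/2)
    int i = lo, j = mid, k = lo;   // O(N) merge step follows
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

int main(void) {
    int a[] = {5, 1, 4, 2, 3};
    int tmp[5];
    merge_sort(a, tmp, 0, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]); // prints: 1 2 3 4 5
    return 0;
}
```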
---

Also, in your "if case" example, I would solve it by cheating and splitting it into two separate loops that don't have an `if` inside. This:

```c
for (int i = 0; i < n; i++) {
    if (i % 2 == 0) {
        for (int j = i; j < n; j++) { /* ... */ }
    } else {
        for (int j = 0; j < i; j++) { /* ... */ }
    }
}
```

has the same runtime as

```c
for (int i = 0; i < n; i += 2) {
    for (int j = i; j < n; j++) { /* ... */ }
}

for (int i = 1; i < n; i += 2) {
    for (int j = 0; j < i; j++) { /* ... */ }
}
```

Each of the two parts is easily seen to be O(N^2), for a total that is also O(N^2).

Note that I used a trick here to get rid of the `if`. **There is no general rule for doing so, as the Collatz example above shows.**
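If you want to double-check that the split really preserves the running time, a small counting harness (my own sanity check, not part of your question) can compare how many times the inner body executes in each version:

```c
#include <stdio.h>

// Compare the number of inner-body executions of the "if" version
// and of the split version for a few values of n; the counts match,
// so the two forms do the same amount of work.
int main(void) {
    for (int n = 1; n <= 6; n++) {
        long original = 0, split = 0;

        // Original version, with the if inside the outer loop:
        for (int i = 0; i < n; i++) {
            if (i % 2 == 0) {
                for (int j = i; j < n; j++) original++;
            } else {
                for (int j = 0; j < i; j++) original++;
            }
        }

        // Split version, no if:
        for (int i = 0; i < n; i += 2)
            for (int j = i; j < n; j++) split++;
        for (int i = 1; i < n; i += 2)
            for (int j = 0; j < i; j++) split++;

        printf("n = %d: original = %ld, split = %ld\n", n, original, split);
    }
    return 0;
}
```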