<p>I don't think your approach will work in general.</p>
<p>The problem is that "big O" complexity is based on a limit as some scaling variable tends to infinity. For smaller values of that variable, the performance behaviour can appear to fit a different curve entirely.</p>
<p>With an empirical approach, you can never know whether the scaling variable is large enough for the limiting behaviour to show up in the results.</p>
<p>Another problem is that if you implement this in Java / Scala, you have to go to considerable lengths to eliminate distortions and "noise" in your timings due to things like JVM warmup (e.g. class loading, JIT compilation, heap resizing) and garbage collection.</p>
<p>Finally, nobody is going to place much trust in empirical estimates of complexity. Or at least, they wouldn't if they understood the mathematics of complexity analysis.</p>
<hr>
<p><strong>FOLLOWUP</strong></p>
<p>In response to this comment:</p>
<blockquote>
<p>Your estimate's significance will improve drastically the more and larger samples you use.</p>
</blockquote>
<p>This is true, though my point is that you (Daniel) haven't factored this in.</p>
<blockquote>
<p>Also, runtime functions typically have special characteristics which can be exploited; for example, algorithms tend to not change their behaviour at some huge n.</p>
</blockquote>
<p>For simple cases, yes.</p>
<p>For complicated cases and real-world cases, that is a dubious assumption. For example:</p>
<ul>
<li><p>Suppose some algorithm uses a hash table with a large but fixed-size primary hash array, and uses external lists to deal with collisions. For N (== number of entries) less than the size of the primary hash array, the behaviour of most operations will appear to be <code>O(1)</code>. The true <code>O(N)</code> behaviour can only be detected by curve fitting when N gets much larger than that.</p></li>
<li><p>Suppose that the algorithm uses a lot of memory or network bandwidth. Typically, it will work well until you hit the resource limit, and then performance will tail off badly. How do you account for this? If it is part of the "empirical complexity", how do you make sure that you get to the transition point? If you want to exclude it, how do you do that?</p></li>
</ul>
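The first bullet point can be sketched in Java. This is an illustrative toy, not code from the question: the class name <code>FixedBucketMap</code> and the <code>CAPACITY</code> constant are made up. It counts probes per lookup as a proxy for cost, so you can see the apparent <code>O(1)</code> regime flip to <code>O(N)</code> once N greatly exceeds the fixed array size.

```java
import java.util.LinkedList;

/**
 * Hypothetical hash map with a fixed-size primary array and external
 * collision lists. While N <= CAPACITY, chains have length ~1 and lookups
 * look O(1); once N >> CAPACITY, average chain length is ~N / CAPACITY,
 * so lookups are really O(N). Curve fitting on small N would miss this.
 */
public class FixedBucketMap {
    private static final int CAPACITY = 1024; // fixed primary array size

    @SuppressWarnings("unchecked")
    private final LinkedList<long[]>[] buckets = new LinkedList[CAPACITY];

    public void put(long key, long value) {
        int i = (int) Math.floorMod(key, (long) CAPACITY);
        if (buckets[i] == null) buckets[i] = new LinkedList<>();
        for (long[] e : buckets[i]) {
            if (e[0] == key) { e[1] = value; return; }
        }
        buckets[i].add(new long[] { key, value });
    }

    /** Number of entries examined during a lookup -- a proxy for its cost. */
    public int probeCost(long key) {
        int i = (int) Math.floorMod(key, (long) CAPACITY);
        if (buckets[i] == null) return 0;
        int cost = 0;
        for (long[] e : buckets[i]) {
            cost++;
            if (e[0] == key) return cost;
        }
        return cost;
    }

    public static void main(String[] args) {
        // Average probe cost stays ~1 while N <= CAPACITY,
        // then grows roughly as N / CAPACITY.
        for (int n : new int[] { 512, 1024, 16384, 131072 }) {
            FixedBucketMap m = new FixedBucketMap();
            for (long k = 0; k < n; k++) m.put(k, k);
            long total = 0;
            for (long k = 0; k < n; k++) total += m.probeCost(k);
            System.out.printf("N=%d avg probes=%.1f%n", n, (double) total / n);
        }
    }
}
```

An empirical run over only the small values of N would fit a constant-time curve almost perfectly; the linear term only becomes visible well past the 1024-entry threshold, which is exactly the scaling-variable problem described above.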