If I understand the question, you are asking for algorithms that are theoretically better but practically worse in all situations. Therefore, one would not expect them to actually be used unless by mistake.

One possible example is universal [memoization](http://en.wikipedia.org/wiki/Memoization). Theoretically, all deterministic function calls should be memoized for all possible inputs. That way, complex calculations could be replaced by simple table lookups. For a wide range of problems, this technique productively trades time for storage space. But suppose there were a central repository of the results of all possible inputs for all possible functions used by all of humanity's computers. The first time anyone anywhere did a calculation, it would be the last time. All subsequent tries would result in a table lookup.

But there are several reasons I can think of for not doing this:

1. The memory space required to store all results would likely be impossibly large. It seems likely that the number of bits needed would exceed the number of particles in the universe. (But even the task of estimating that number is daunting.)
2. It would be difficult to construct an efficient algorithm for memoizing that huge a problem space.
3. The cost of communication with the central repository would likely exceed the benefit as the number of clients increases.

I'm sure you can think of other problems.

In fact, this sort of time/space trade-off is incredibly common in practice. Ideally, all data would be stored in L1 cache, but because of size limitations you always need to put some data on disk or (horrors!) tape. Advancing technology reduces some of the pain of these trade-offs, but as I suggested above there are limits.

---

In response to J.F. Sebastian's comment:

Suppose that instead of a universal memoization repository, we consider a factorial repository. And it won't hold the results for all possible inputs. Rather, it will be limited to results from `1` to `N!`. Now it's easy to see that any computer that did factorials would benefit from looking up the result rather than doing the calculation. Even for calculating `(N+1)!` the lookup would be a huge win, since that calculation would reduce to `N!(N+1)`.

Now, to make this "better" algorithm worse, we could either increase N or increase the number of computers using the repository.

But I'm probably not understanding some subtlety of the question. The way I'm thinking of it, I keep coming up with examples that scale well until they don't.
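As a minimal sketch of the factorial-repository idea, here is a hypothetical local stand-in (the class name, the cutoff, and the fallback behavior are my assumptions for illustration, not part of the original answer): results up to `N!` are precomputed once and answered by table lookup, and anything past the table is extended from the largest stored value, which is exactly the `(N+1)! = N!(N+1)` reduction described above.

```python
class FactorialRepository:
    """Hypothetical stand-in for a shared factorial repository:
    precompute 0! .. N! once, then answer queries by table lookup."""

    def __init__(self, n_max: int):
        # The storage side of the time/space trade-off: one big table up front.
        self.table = [1] * (n_max + 1)
        for k in range(1, n_max + 1):
            self.table[k] = self.table[k - 1] * k

    def factorial(self, n: int) -> int:
        if n < len(self.table):
            # Within the repository: a simple O(1) lookup.
            return self.table[n]
        # Beyond the table: extend from the largest stored result,
        # e.g. (N+1)! reduces to N! * (N+1).
        result = self.table[-1]
        for k in range(len(self.table), n + 1):
            result *= k
        return result


if __name__ == "__main__":
    repo = FactorialRepository(n_max=1000)  # assumed cutoff for illustration
    print(repo.factorial(10))    # 3628800, answered by lookup
    print(repo.factorial(1001))  # one extra multiplication past N!
```

Scaling this sketch up is where the answer's objections bite: growing `n_max` blows up the table, and sharing it across many clients replaces a cheap local multiplication with a round trip to the repository.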