`lexical_cast` is more general than the specific code you're using in Java and Python. It's not surprising that a general approach that works in many scenarios (`lexical_cast` is little more than streaming the value out to a temporary stream and then back in) ends up being slower than specific routines.

(BTW, you may get better performance out of Java by using the static version, `Integer.toString(int)`. [1])

Finally, string parsing and formatting are usually not that performance-sensitive, unless one is writing a compiler, in which case `lexical_cast` is probably too general-purpose anyway, and integers etc. will be computed as each digit is scanned.

[1] Commenter "stepancheg" doubted my hint that the static version may give better performance. Here's the source I used:

```java
public class Test {
    static int instanceCall(int i) {
        String s = new Integer(i).toString();
        return s == null ? 0 : 1;
    }

    static int staticCall(int i) {
        String s = Integer.toString(i);
        return s == null ? 0 : 1;
    }

    public static void main(String[] args) {
        // count used to avoid dead code elimination
        int count = 0;

        // *** instance
        // Warmup calls
        for (int i = 0; i < 100; ++i)
            count += instanceCall(i);

        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; ++i)
            count += instanceCall(i);
        long finish = System.currentTimeMillis();
        System.out.printf("10MM Time taken: %d ms\n", finish - start);

        // *** static
        // Warmup calls
        for (int i = 0; i < 100; ++i)
            count += staticCall(i);

        start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; ++i)
            count += staticCall(i);
        finish = System.currentTimeMillis();
        System.out.printf("10MM Time taken: %d ms\n", finish - start);

        if (count == 42)
            System.out.println("bad result"); // prevent elimination of count
    }
}
```

The runtimes, using JDK 1.6.0-14, server VM:

```
10MM Time taken: 688 ms
10MM Time taken: 547 ms
```

And in the client VM:

```
10MM Time taken: 687 ms
10MM Time taken: 610 ms
```

In theory, escape analysis might permit the temporary object to be allocated on the stack, and inlining might pull all of the code (including the copying) into the calling method, allowing the redundant copying to be eliminated. In practice, such analysis can take quite a lot of time and produce quite a bit of extra code, which has costs in the code cache that don't justify themselves in real code, as opposed to microbenchmarks like the one above.
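For readers unfamiliar with the mechanism, here is a minimal sketch of the temporary-stream round trip described at the top of this answer. This is not Boost's actual implementation (which is considerably more specialized and optimized); the function name `my_lexical_cast` is made up for illustration:

```cpp
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

// Simplified sketch of the lexical_cast idea: stream the source value into a
// temporary std::stringstream, then stream it back out as the target type.
template <typename Target, typename Source>
Target my_lexical_cast(const Source& value) {
    std::stringstream interpreter;    // the temporary stream
    Target result;
    if (!(interpreter << value)       // write Source as text
        || !(interpreter >> result))  // read the text back as Target
        throw std::runtime_error("bad lexical cast");
    return result;
}

int main() {
    // Both directions go through the same generic code path.
    std::string s = my_lexical_cast<std::string>(12345);
    int n = my_lexical_cast<int>(std::string("678"));
    std::cout << s << " " << n << "\n";
}
```

Every conversion pays for constructing a stream and for generic formatted I/O, which is exactly the overhead that a purpose-built int-to-string or string-to-int routine avoids.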