
Let's get some data on this. I benchmarked everybody's techniques...

```perl
#!/usr/bin/env perl

sub flag_in_loop {
    my $file = shift;
    open my $fh, $file;

    my $first = 1;
    while(<$fh>) {
        if( $first ) {
            $first = 0;
        }
        else {
            my $line = $_;
        }
    }

    return;
}

sub strip_before_loop {
    my $file = shift;
    open my $fh, $file;

    my $header = <$fh>;
    while(<$fh>) {
        my $line = $_;
    }

    return;
}

sub line_number_in_loop {
    my $file = shift;
    open my $fh, $file;

    while(<$fh>) {
        next if $. < 2;

        my $line = $_;
    }

    return;
}

sub inc_in_loop {
    my $file = shift;
    open my $fh, $file;

    my $first;
    while(<$fh>) {
        $first++ or next;

        my $line = $_;
    }

    return;
}

sub slurp_to_array {
    my $file = shift;
    open my $fh, $file;

    my @array = <$fh>;
    shift @array;

    return;
}

my $Test_File = "/usr/share/dict/words";
print `wc $Test_File`;

use Benchmark;

timethese shift || -10, {
    flag_in_loop        => sub { flag_in_loop($Test_File); },
    strip_before_loop   => sub { strip_before_loop($Test_File); },
    line_number_in_loop => sub { line_number_in_loop($Test_File); },
    inc_in_loop         => sub { inc_in_loop($Test_File); },
    slurp_to_array      => sub { slurp_to_array($Test_File); },
};
```

Since this is I/O, which can be affected by forces beyond the ability of Benchmark.pm to adjust for, I ran the benchmarks several times and checked that I got the same results.

`/usr/share/dict/words` is a 2.4 MB file with about 240k very short lines. Since we're not processing the lines, line length shouldn't matter.

I did only a tiny amount of work in each routine, to emphasize the difference between the techniques. I wanted to do *some* work so as to produce a realistic upper limit on how much performance you're going to gain or lose by changing how you read files.

I did this on a laptop with an SSD, but it's still a laptop. As I/O speed increases, CPU time becomes more significant, so technique matters even more on a machine with fast I/O.

Here's how many times each routine read the file per second:

```
slurp_to_array:       4.5/s
line_number_in_loop: 13.0/s
inc_in_loop:         15.5/s
flag_in_loop:        15.8/s
strip_before_loop:   19.9/s
```

I'm shocked to find that `my @array = <$fh>` is slowest by a huge margin. I would have thought it would be the fastest, given that all the work happens inside the Perl interpreter. However, it's the only technique that allocates memory to hold all the lines, and that probably accounts for the performance lag.

Using `$.` is another surprise. Perhaps that's the cost of accessing a magic global, or perhaps it's doing a numeric comparison on every line.

And, as predicted by algorithmic analysis, putting the header check outside the loop is the fastest. But not by much: probably not enough to worry about if you're using either of the next two fastest techniques.
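For real code, the winning pattern (read the header once, before the loop) is usually written with a three-argument `open` and error checking, which the benchmark routines above omit for brevity. Here is a minimal sketch; `process_file` and the line counting are illustrative placeholders, not part of the benchmark:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Consume the header line outside the loop, then treat every
# remaining line as data. No per-line flag or $. check needed.
sub process_file {
    my $file = shift;
    open my $fh, '<', $file or die "Can't open $file: $!";

    my $header = <$fh>;   # read and discard (or parse) the header
    chomp $header;

    my $count = 0;
    while (my $line = <$fh>) {
        chomp $line;
        $count++;         # placeholder for real per-line work
    }

    close $fh;
    return $count;
}
```
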
 
