<p>Why are so many people invoking <code>grep</code> in the pipeline and either using <code>bash</code>'s <code>for</code> loops or <code>xargs</code> on the back end when they have bloody <strong><code>awk</code></strong> right in the middle of things?</p>

<p>First let's get rid of the whimsical use of grep:</p>

<pre><code>svn st | awk '/^\?/ { print $2 }'
</code></pre>

<p>Since <code>awk</code> lets you filter lines based on regular expressions, grep is entirely unnecessary. (Note that <code>?</code> is a regex metacharacter, so it has to be escaped as <code>\?</code> to match the literal <code>?</code> that <code>svn st</code> prints for untracked files.) <code>awk</code>'s regular expressions aren't that different from <code>grep</code>'s (depending on which implementations of <code>awk</code> and <code>grep</code> you're using), so why add a whole new process and a whole new bottleneck to the pipeline?</p>

<p>From there you already have two choices that would be shorter and more readable:</p>

<pre><code># option 1
svn st | awk '/^\?/ { print $2 }' | xargs rm -f

# option 2
rm -f $(svn st | awk '/^\?/ { print $2 }')
</code></pre>

<p>The second option only works if your file list doesn't exceed the maximum command line length, so I recommend the xargs version.</p>

<p>Or, perhaps even better, use <code>awk</code> again:</p>

<pre><code>svn st | awk '/^\?/ { system("rm -f " $2) }'
</code></pre>

<p>This is the functional equivalent of what you did above with the <code>for</code> loop. (Note that <code>$2</code> has to be concatenated outside the quotes: <code>awk</code> doesn't expand variables inside strings, so <code>"rm -f $2"</code> would pass the literal text <code>$2</code> to the shell.) It's grotesquely inefficient, seeing as it executes <code>rm</code> once per file, but it's at least more readable than your <code>for</code> loop example. It can be improved still further, however. I won't go into full details here, but will instead give you comments as clues to what the final solution would look like:</p>

<pre><code>svn st | awk '
  BEGIN {
    # set up an accumulator array
  }
  /^\?/ {
    # add $2 to the array
  }
  END {
    # system("rm -f ...") safe-sized chunks of the array
  }'
</code></pre>

<p>OK, so that last one is a bit of a mouthful and too much to type as a routine one-liner. Since you have to do this "often", however, it shouldn't be too bad to put it into a script:</p>

<pre><code>#!/usr/bin/awk -f

BEGIN {
  # set up your accumulator array
}

/^\?/ {
  # add $2 to the array
}

END {
  # invoke system("rm -f ...") on safe-sized chunks of the accumulator array
}
</code></pre>

<p>Now your command line will be <code>svn st | myawkscript</code>.</p>

<p>Now I'll warn you I'm not equipped to check all this (since I avoid SVN and CVS like I avoid MS-DOS -- and for much the same reason). You might have to monkey with the <code>#!</code> line in the script, for example (a plain <code>#!/usr/bin/env awk</code> won't do, incidentally, because <code>awk</code> needs the <code>-f</code> flag to read its program from a file), or with the regular expression you use to filter, but the general principle remains about the same. And me personally? I'd use <code>svn st | awk '/^\?/ { print $2 }' | xargs rm -f</code> for something I do infrequently. I'd only do the full script for something I do several times a week or more.</p>
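<p>For what it's worth, here's one way the commented skeleton above might be fleshed out. This is only a sketch: the array name <code>files</code> and the sample input are mine, the chunk size of 2 is purely for demonstration (in practice you'd use a much larger chunk, staying under the OS argument-length limit), and for safety it prints the <code>rm</code> commands instead of executing them.</p>

```shell
# Demonstration input standing in for "svn st"; pipe the real thing in instead.
printf '%s\n' '?       junk1.txt' '?       junk2.txt' 'M       kept.txt' |
awk '
  /^\?/ {
    files[n++] = $2            # accumulate untracked paths
  }
  END {
    chunk = ""
    for (i = 0; i < n; i++) {
      chunk = chunk " " files[i]
      # flush every 2 files here for demonstration only; use a much
      # larger chunk in practice
      if ((i + 1) % 2 == 0 || i == n - 1) {
        print "rm -f" chunk    # swap print for system("rm -f" chunk) to delete
        chunk = ""
      }
    }
  }'
```

<p>With the sample input this prints <code>rm -f junk1.txt junk2.txt</code>; once the dry-run output looks right, replace the <code>print</code> with the <code>system()</code> call.</p>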