
# Using named pipes with bash - Problem with data loss
I did some searching online and found simple 'tutorials' on using named pipes. However, when I do anything with background jobs I seem to lose a lot of data.

[[Edit: found a much simpler solution, see reply to post. So the question I put forward is now academic - in case one might want a job server]]

Using Ubuntu 10.04 with Linux 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux

GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu).

My bash function is:

```bash
function jqs
{
  pipe=/tmp/__job_control_manager__
  trap "rm -f $pipe; exit" EXIT SIGKILL

  if [[ ! -p "$pipe" ]]; then
      mkfifo "$pipe"
  fi

  while true
  do
    if read txt <"$pipe"
    then
      echo "$(date +'%Y'): new text is [[$txt]]"

      if [[ "$txt" == 'quit' ]]
      then
        break
      fi
    fi
  done
}
```

I run this in the background:

```
> jqs&
[1] 5336
```

And now I feed it:

```bash
for i in 1 2 3 4 5 6 7 8
do
  (echo aaa$i > /tmp/__job_control_manager__ && echo success$i &)
done
```

The output is inconsistent. I frequently don't get all the success echoes. I get at most as many new-text echoes as success echoes, sometimes fewer.

If I remove the '&' from the feed loop, it seems to work, but I am blocked until the output is read. Hence my wanting to let sub-processes get blocked, but not the main process.

The aim is to write a simple job-control script so I can run, say, 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run.

Full job manager below:

```bash
function jq_manage
{
  export __gn__="$1"

  pipe=/tmp/__job_control_manager_"$__gn__"__

  trap "rm -f $pipe" EXIT
  trap "break" SIGKILL

  if [[ ! -p "$pipe" ]]; then
      mkfifo "$pipe"
  fi

  while true
  do
    date
    jobs
    if (($(jobs | egrep "Running.*echo '%#_Group_#%_$__gn__'" | wc -l) < $__jN__))
    then
      echo "Waiting for new job"
      if read new_job <"$pipe"
      then
        echo "new job is [[$new_job]]"

        if [[ "$new_job" == 'quit' ]]
        then
          break
        fi

        echo "In group $__gn__, starting job $new_job"
        eval "(echo '%#_Group_#%_$__gn__' > /dev/null; $new_job) &"
      fi
    else
      sleep 3
    fi
  done
}

function jq
{
  # __gn__ = first parameter to this function: the job group name (the pool within which to allocate __jN__ jobs)
  # __jN__ = second parameter to this function: the maximum number of jobs to run concurrently

  export __gn__="$1"
  shift
  export __jN__="$1"
  shift

  export __jq__=$(jobs | egrep "Running.*echo '%#_GroupQueue_#%_$__gn__'" | wc -l)
  if (($__jq__ < 1))
  then
    eval "(echo '%#_GroupQueue_#%_$__gn__' > /dev/null; jq_manage $__gn__) &"
  fi

  pipe=/tmp/__job_control_manager_"$__gn__"__

  echo $@ >$pipe
}
```

Calling

```
jq <name> <max processes> <command>
jq abc 2 sleep 20
```

will start one process. That part works fine. Starting a second one: fine. One by one by hand they seem to work fine. But starting 10 in a loop seems to lose some of the jobs, as in the simpler example above.

Any hints as to what I can do to solve this apparent loss of IPC data would be greatly appreciated.

Regards,
Alain.
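
For readers hitting the same symptom: the likely culprit is that `read txt <"$pipe"` opens and closes the FIFO on every loop iteration, so lines that concurrent writers had already queued in the pipe can be discarded when the reading end closes. A minimal sketch of the usual remedy is to hold one file descriptor open for the lifetime of the loop (fd 3 is an arbitrary choice here, not something from the original post):

```bash
pipe=/tmp/__job_control_manager__
[[ -p "$pipe" ]] || mkfifo "$pipe"

# Open the FIFO once, read-write, so the descriptor stays open
# across iterations and pending lines from concurrent writers
# are not dropped between reads.
exec 3<> "$pipe"

while read -r txt <&3
do
  echo "$(date +'%Y'): new text is [[$txt]]"
  [[ "$txt" == 'quit' ]] && break
done

exec 3<&-   # close the descriptor when done
```

Opening the FIFO read-write (`<>`) has the side benefit that the `exec` does not block waiting for a writer, and the loop never sees EOF when the last writer disconnects.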
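On the broader aim (at most N jobs in parallel, the rest queued), a simpler sketch can poll bash's own job table instead of going through a FIFO at all; `MAX_JOBS`, the batch size of 100, and the `sleep 20` payload are illustrative values, not from the original post:

```bash
MAX_JOBS=10

for i in $(seq 1 100)
do
  # Wait until a slot frees up before launching the next job.
  while (( $(jobs -pr | wc -l) >= MAX_JOBS ))
  do
    sleep 1
  done
  ( sleep 20; echo "job $i done" ) &
done

wait   # block until the remaining jobs finish
```

For one-shot batches, GNU `xargs -P` provides the same concurrency cap with no shell machinery at all.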