
# combine/average multiple data files
I have a set of data files (say, e.g., `data####.dat`, where #### = 0001, ..., 9999) which all share a common structure: the same x-values in the first column, followed by a number of columns with different y-values.

**data0001.dat:**

```
#A < comment line with unique identifier 'A'
#B1 < this is a comment line that can/should be dropped
1 11 21
2 12 22
3 13 23
```

**data0002.dat:**

```
#A < comment line with unique identifier 'A'
#B2 < this is a comment line that can/should be dropped
1 13 23
2 12 22
3 11 21
```

They originate from different runs of my program with different seeds, and I now want to combine these partial results into one common histogram such that comment lines starting with `#A` (which are identical for all files) are retained and all other comment lines are dropped. The first column stays as-is, and all other columns should be averaged over all data files:

**dataComb.dat:**

```
#A < comment line with unique identifier 'A'
1 12 22
2 12 22
3 12 22
```

where `12 = (11+13)/2 = (12+12)/2 = (13+11)/2` and `22 = (21+23)/2 = (22+22)/2 = (23+21)/2`.

I already have a bash script (probably horrible code, but I'm not so experienced...) that does this job when run as `./merge.sh data* > dataComb.dat` on the command line. It also checks that all data files have the same number of columns and the same values in the first column.

**merge.sh:**

```bash
#!/bin/bash
if [ $# -lt 2 ]; then
    echo "at least two files please"
    exit 1
fi

# determine the number of columns in each file (-1 on internal mismatch)
i=1
for file in "$@"; do
    cols[$i]=$(awk '
        BEGIN {cols=0}
        $1 !~ /^#/ {
            if (cols==0) {cols=NF} else {if (cols!=NF) {cols=-1}}
        }
        END {print cols}' ${file})
    i=$((${i}+1))
done

# all files must agree on the column count
ncol=${cols[1]}
for i in ${cols[@]}; do
    if [ $i -ne $ncol ]; then
        echo "mismatch in the number of columns"
        exit 1
    fi
done

echo "#combined $# files"
grep "^#A" $1    # keep the identifier comment from the first file

# paste all files side by side and average columns 2..ncol per row
paste "$@" | awk "
    \$1 !~ /^#/ && NF>0 {
        flag=0
        x=\$1
        for (c=1; c<${ncol}; c++) { y[c]=0. }
        i=1
        while (i <= NF) {
            if (\$i==x) {
                for (c=1; c<${ncol}; c++) { y[c] += \$(i+c) }
                i += ${ncol}
            } else {
                flag=1; i=NF+1
            }
        }
        if (flag==0) {
            printf(\"%e \", x)
            for (c=1; c<${ncol}; c++) { printf(\"%e \", y[c]/$#) }
            printf(\"\n\")
        } else {
            printf(\"# x-coordinate mismatch\n\")
        }
    }"
exit 0
```

My problem is that for a large number of data files the script quickly becomes slow, and at some point it throws a "Too many open files" error. I see that pasting all data files in one go (`paste "$@"`) is the issue here, but doing it in batches and somehow introducing temp files doesn't seem to be the ideal solution either. I'd appreciate any help to make this more scalable while retaining the way the script is called, i.e., with all data files passed as command-line arguments.

I decided to also post this in the python section, since I am often told that it's very handy for this kind of problem. I, however, have almost no experience with python, but maybe this is the occasion to finally start learning it ;)
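For reference, a minimal one-pass sketch in Python of the kind of solution being asked for (the name `merge.py` and all details are hypothetical, not part of the original script; it assumes whitespace-separated numeric columns exactly as in the samples above). Because it opens only one file at a time and accumulates running sums, it avoids the "Too many open files" limit entirely:

```python
#!/usr/bin/env python3
# merge.py -- hypothetical one-pass averager; a sketch, not a drop-in tool.
# Usage: ./merge.py data*.dat > dataComb.dat
import sys

def main():
    files = sys.argv[1:]
    if len(files) < 2:
        sys.exit("at least two files please")

    sums = None    # per-row accumulated y-values
    xs = None      # reference x-column, taken from the first file
    header = None  # the "#A" comment line, assumed identical in all files

    for fname in files:
        rows = []
        with open(fname) as fh:            # one file open at a time
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                if line.startswith("#A"):
                    header = line          # keep the unique-identifier comment
                elif line.startswith("#"):
                    continue               # drop all other comment lines
                else:
                    rows.append([float(v) for v in line.split()])
        if not rows:
            sys.exit(f"{fname}: no data")

        if sums is None:                   # the first file fixes the layout
            xs = [r[0] for r in rows]
            ncol = len(rows[0])
            sums = [[0.0] * (ncol - 1) for _ in rows]

        # same consistency checks as the bash script
        if [r[0] for r in rows] != xs:
            sys.exit(f"{fname}: x-coordinate mismatch")
        if any(len(r) != ncol for r in rows):
            sys.exit(f"{fname}: mismatch in the number of columns")

        for srow, row in zip(sums, rows):  # accumulate the y-columns
            for c, y in enumerate(row[1:]):
                srow[c] += y

    n = len(files)
    print(f"#combined {n} files")
    if header:
        print(header)
    for x, srow in zip(xs, sums):          # divide sums by the file count
        print("%e" % x, " ".join("%e" % (s / n) for s in srow))

if __name__ == "__main__":
    main()
```

It is called the same way as the bash version, `./merge.py data*.dat > dataComb.dat`, and its memory footprint is a single table of running sums, independent of the number of input files.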