Data structures and algorithms for adaptive "uniform" mesh?

I need a data structure for storing float values on a uniformly sampled 3D mesh:

    x = x0 + ix*dx where 0 <= ix < nx
    y = y0 + iy*dy where 0 <= iy < ny
    z = z0 + iz*dz where 0 <= iz < nz

Up to now I have used my Array class:

```cpp
Array3D<float> A(nx, ny, nz);
A(0,0,0) = 0.0f;  // ix = iy = iz = 0
```

Internally it stores the float values as a 1D array with nx * ny * nz elements.

However, I now need to represent a mesh with more values than fit in RAM, e.g. nx = ny = nz = 2000.

I expect many neighbouring nodes in such a mesh to have similar values, so I was wondering whether there is some simple way to "coarsen" the mesh adaptively. For instance, if the 8 (ix,iy,iz) nodes of a cell in this mesh have values that are less than 5% apart, they are "removed" and replaced by just one value: the mean of the 8 values.

How could I implement such a data structure in a simple and efficient way?

EDIT: thanks Ante for suggesting lossy compression. I think it could work the following way:

```cpp
#define BLOCK_SIZE 64

struct CompressedArray3D {
    CompressedArray3D(int ni, int nj, int nk)
        : ni(ni), nj(nj), nk(nk)
    {
        // Number of blocks per axis, rounded up.
        NI = (ni + BLOCK_SIZE - 1) / BLOCK_SIZE;
        NJ = (nj + BLOCK_SIZE - 1) / BLOCK_SIZE;
        NK = (nk + BLOCK_SIZE - 1) / BLOCK_SIZE;
        blocks = new float*[NI * NJ * NK];
        compressedSize = new unsigned int[NI * NJ * NK];
        // (destructor and freeing of the per-block buffers omitted for brevity)
    }

    // Compress one BLOCK_SIZE^3 block and store it at block index (I,J,K).
    void setBlock(int I, int J, int K, float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE]) {
        unsigned int csize;
        blocks[I*NJ*NK + J*NK + K] = compress(values, csize);
        compressedSize[I*NJ*NK + J*NK + K] = csize;
    }

    // Look up a single sample: decompress the whole containing block, then index into it.
    float getValue(int i, int j, int k) {
        int I = i / BLOCK_SIZE;
        int J = j / BLOCK_SIZE;
        int K = k / BLOCK_SIZE;
        int ii = i - I*BLOCK_SIZE;
        int jj = j - J*BLOCK_SIZE;
        int kk = k - K*BLOCK_SIZE;

        float* compressedBlock = blocks[I*NJ*NK + J*NK + K];
        unsigned int csize = compressedSize[I*NJ*NK + J*NK + K];
        float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE];
        decompress(compressedBlock, csize, values);
        return values[ii][jj][kk];
    }

    // Compression backend, to be provided by whichever codec is chosen:
    // compress() returns an owned buffer and its size, decompress() restores the block.
    float* compress(float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE], unsigned int& csize);
    void decompress(const float* data, unsigned int csize,
                    float values[BLOCK_SIZE][BLOCK_SIZE][BLOCK_SIZE]);

    int NI, NJ, NK;   // number of blocks per axis
    int ni, nj, nk;   // number of samples per axis
    float** blocks;
    unsigned int* compressedSize;
};
```

For this to be useful I need a lossy compression that is:

- extremely fast, also on small datasets (e.g. 64x64x64)
- compresses quite hard (> 3x); never mind if it loses quite a bit of information

Any good candidates?
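
For reference, the dense representation the question starts from can be sketched as follows. The actual Array3D class is not shown in the question, so the member names and layout below are assumptions; the only point is the flat nx * ny * nz indexing it describes.

```cpp
#include <cstddef>
#include <vector>

// Minimal dense 3D array: one contiguous buffer of nx*ny*nz floats,
// indexed so that ix varies fastest. A sketch of the kind of class the
// question refers to, not the asker's actual Array3D.
template <typename T>
class Array3D {
public:
    Array3D(std::size_t nx, std::size_t ny, std::size_t nz)
        : nx_(nx), ny_(ny), nz_(nz), data_(nx * ny * nz) {}

    T& operator()(std::size_t ix, std::size_t iy, std::size_t iz) {
        return data_[ix + nx_ * (iy + ny_ * iz)];
    }
    const T& operator()(std::size_t ix, std::size_t iy, std::size_t iz) const {
        return data_[ix + nx_ * (iy + ny_ * iz)];
    }

private:
    std::size_t nx_, ny_, nz_;
    std::vector<T> data_;
};
```

At nx = ny = nz = 2000 this single buffer is 2000^3 * 4 bytes, about 32 GB, which is why the dense layout stops being an option.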
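
The adaptive coarsening idea (merge a cell's 8 node values into their mean when they are within 5% of each other) is usually realized as an octree: each node either holds one averaged value for its whole region or splits into 8 children. The sketch below is illustrative only; the node layout, the canMerge/tryCoarsen names, and the reading of "less than 5% apart" as the spread relative to the midpoint of the range are all assumptions.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <memory>

// Sketch of an octree node for adaptively coarsened data. A leaf stores one
// value for its whole region; an interior node splits the region into 8 octants.
struct OctreeNode {
    float value = 0.0f;                               // used when leaf == true
    bool leaf = true;
    std::array<std::unique_ptr<OctreeNode>, 8> child; // used when leaf == false
};

// Merge test: the 8 candidate values are close enough if their spread is
// below `tol` relative to the midpoint of their range (one possible reading
// of "less than 5% apart").
inline bool canMerge(const std::array<float, 8>& v, float tol = 0.05f) {
    const auto [lo, hi] = std::minmax_element(v.begin(), v.end());
    const float mid = (*lo + *hi) * 0.5f;
    return std::fabs(*hi - *lo) <= tol * std::fabs(mid);
}

// If all 8 children of `node` are leaves and their values pass the merge
// test, collapse them into a single averaged leaf.
inline void tryCoarsen(OctreeNode& node) {
    if (node.leaf) return;
    std::array<float, 8> v{};
    for (int i = 0; i < 8; ++i) {
        if (!node.child[i] || !node.child[i]->leaf) return;
        v[i] = node.child[i]->value;
    }
    if (!canMerge(v)) return;
    float sum = 0.0f;
    for (float x : v) sum += x;
    node.value = sum / 8.0f;
    node.leaf = true;
    for (auto& c : node.child) c.reset();
}
```

Building such a tree bottom-up (coarsen wherever the test passes, then recurse upwards) keeps memory proportional to how much the field actually varies, at the cost of pointer-chasing on every lookup.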
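
As for fast lossy candidates for the block scheme in the edit: one very cheap option is per-block linear quantization, i.e. store each block's min and max once and every sample as an 8-bit code, which is close to 4x smaller than float32 and costs only a couple of arithmetic operations per sample. The functions below are a sketch with made-up names and a plain-buffer interface, not the compress/decompress signatures used in the question.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Per-block linear quantization to 8 bits: ~4x smaller than float32, with
// absolute error per sample bounded by (max - min) / 255 / 2 within the block.
// Compressed layout: [float min][float max][uint8_t codes...].
std::vector<std::uint8_t> compressBlock(const float* values, std::size_t n) {
    const float lo = *std::min_element(values, values + n);
    const float hi = *std::max_element(values, values + n);
    const float scale = (hi > lo) ? 255.0f / (hi - lo) : 0.0f;

    std::vector<std::uint8_t> out(2 * sizeof(float) + n);
    std::memcpy(out.data(), &lo, sizeof(float));
    std::memcpy(out.data() + sizeof(float), &hi, sizeof(float));
    for (std::size_t i = 0; i < n; ++i)
        out[2 * sizeof(float) + i] =
            static_cast<std::uint8_t>((values[i] - lo) * scale + 0.5f);
    return out;
}

void decompressBlock(const std::uint8_t* data, std::size_t n, float* values) {
    float lo, hi;
    std::memcpy(&lo, data, sizeof(float));
    std::memcpy(&hi, data + sizeof(float), sizeof(float));
    const float step = (hi - lo) / 255.0f;
    for (std::size_t i = 0; i < n; ++i)
        values[i] = lo + step * data[2 * sizeof(float) + i];
}
```

If harder compression is needed, the same idea works with fewer bits per code or by quantizing differences from a per-block mean; purpose-built lossy floating-point compressors (e.g. ZFP, which has a fixed-rate mode for 3D arrays) may also be worth evaluating.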