
You need to do a self join on that data set. In Hive that would look, more or less, like:

```sql
SELECT dist(P1.x, P1.y, P2.x, P2.y)
FROM points P1 JOIN points P2 ON (true)
WHERE P1.x < P2.x OR (P1.x = P2.x AND P1.y < P2.y)
```

The function `dist` would need to be implemented using other Hive functions, or written in Java and added as a UDF. Also, I am not sure about the `true` constant, but you can write `0 = 0` to the same effect. The WHERE clause avoids computing the same distance twice, as well as zero distances. The question is: would Hive optimize this the way you can with careful programming in Hadoop? I am not sure. Here is a sketch in Hadoop:

```
map(x, y):
    for i in 1:N              # N is the number of points
        emit(i, (x, y))

reduce(i, X):
    for j in i:N
        emit(dist(X[i], X[j]))
```

For this to work, X needs to reach the reducer sorted in some order, for instance by x and then by y, using secondary sort keys (which do not affect the grouping). This way every reducer gets a copy of all the points and works on one column of the distance matrix you are trying to generate. The memory requirements are minimal.

You could trade some communication for memory by reorganizing the computation so that every reducer computes a square submatrix of the final matrix, knowing only two subsets of the points and calculating the distances among all of them. To achieve this, you need to make the order of your points explicit; say you are storing i, x, y:

```
map(i, x, y):
    for j in 1:N/k            # k is the size of a submatrix
        emit((i/k, j), ("row", (x, y)))
        emit((j, i/k), ("col", (x, y)))

reduce((a, b), Z):
    split Z into rows X and cols Y
    for x in X:
        for y in Y:
            emit(dist(x, y))
```

In this case you can see that the map phase emits only 2·N·N/k points, whereas the previous algorithm emitted N². Here we have (N/k)² reducers vs. N for the other one. Each reducer has to hold k values in memory (using the secondary-key technique to have all the rows reach the reducer before all the columns), vs. only 2 before. So there are tradeoffs, and for the second algorithm you can use the parameter k for performance tuning.
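As a sanity check outside Hadoop, the first sketch can be simulated in plain Python. This is only an illustration: Euclidean distance and the sample points are assumptions, and in Hive `dist` would be the UDF mentioned above.

```python
import math

def dist(p, q):
    # Euclidean distance; stands in for the dist UDF
    return math.hypot(p[0] - q[0], p[1] - q[1])

def naive_pairwise(points):
    """Every 'reducer' i sees the full, sorted list of points and
    computes one column of the distance matrix: pairs (i, j), j > i."""
    X = sorted(points)            # secondary sort: by x, then by y
    n = len(X)
    out = {}
    for i in range(n):            # one reducer per point
        for j in range(i + 1, n):
            out[(i, j)] = dist(X[i], X[j])
    return out

points = [(3, 4), (0, 0), (0, 4)]
d = naive_pairwise(points)
print(d[(0, 1)])  # sorted order is (0,0), (0,4), (3,4); prints 4.0
```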
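The block-partition version can be sketched the same way and checked against a direct O(N²) computation. Again, Euclidean distance and the sample data are assumptions made for illustration:

```python
import math
from collections import defaultdict

def dist(p, q):
    # Euclidean distance; stands in for the dist UDF
    return math.hypot(p[0] - q[0], p[1] - q[1])

def block_pairwise(points, k):
    """Each 'reducer', keyed by (a, b), receives row block a and column
    block b of the points and computes all distances between them."""
    nblocks = (len(points) + k - 1) // k          # ceil(N / k)
    groups = defaultdict(lambda: {"row": [], "col": []})
    # Map phase: point i is sent to its whole block row and block column
    for i, p in enumerate(points):
        for j in range(nblocks):
            groups[(i // k, j)]["row"].append((i, p))
            groups[(j, i // k)]["col"].append((i, p))
    # Reduce phase: keep each unordered pair once (mirrors the WHERE clause)
    out = {}
    for z in groups.values():
        for i, x in z["row"]:
            for j, y in z["col"]:
                if i < j:
                    out[(i, j)] = dist(x, y)
    return out

points = [(0, 0), (3, 4), (0, 4), (6, 8), (1, 1)]
blocked = block_pairwise(points, k=2)
# Direct O(N^2) reference computation, as the self-join would produce
direct = {(i, j): dist(points[i], points[j])
          for i in range(len(points)) for j in range(i + 1, len(points))}
assert blocked == direct
print(blocked[(0, 1)])  # prints 5.0
```

Note that each reducer here only ever sees two blocks of k points, which is the memory/communication tradeoff described above.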