<p>Cassandra contains a few classes which are sufficient to integrate it with Hadoop:</p>
<ul>
<li><code>ColumnFamilyInputFormat</code> - This is the input for a Map function. It can read all rows of a single CF when Cassandra's random partitioner is used, or a row range when used with Cassandra's ordered partitioner. A Cassandra cluster forms a ring, where each part of the ring is responsible for a concrete key range. The main task of an Input Format is to divide the Map input into chunks that can be processed in parallel - these are called <code>InputSplits</code>. In Cassandra's case this is simple: each ring range has one master node, so the Input Format creates one <code>InputSplit</code> per ring element, and each of those results in one Map task. Now we would like to execute each Map task on the host where its data is stored. Each <code>InputSplit</code> remembers the IP address of its ring part - the IP address of the Cassandra node responsible for that particular key range. The <code>JobTracker</code> creates Map tasks from the <code>InputSplits</code> and assigns them to <code>TaskTracker</code>s for execution, trying to find a <code>TaskTracker</code> with the same IP address as the <code>InputSplit</code> - so we have to start a <code>TaskTracker</code> on each Cassandra host, and this guarantees data locality.</li>
<li><code>ColumnFamilyOutputFormat</code> - This configures the context for the Reduce function, so that its results can be stored in Cassandra.</li>
<li>The results from all Map functions have to be combined before they can be passed to the Reduce function - this is called the shuffle. It uses the local file system; from Cassandra's perspective nothing has to be done here, we just need to configure the path to a local temp directory. There is also no need to replace this mechanism with something else (like persisting in Cassandra) - this data does not have to be replicated, since Map tasks are idempotent.</li>
</ul>
<p>Basically, the provided Hadoop integration gives us the possibility to execute Map jobs on the hosts where the data resides, and the Reduce function can store its results back into Cassandra - which is all that I need.</p>
<p>There are two possibilities for executing Map-Reduce:</p>
<ul>
<li><code>org.apache.hadoop.mapreduce.Job</code> - This class runs Hadoop in a single process. It executes the Map-Reduce task and does not require any additional services or dependencies; it only needs access to a temp directory to store the Map output for the shuffle. Basically we have to call a few setters on the <code>Job</code> class, providing things like the classes for the Map task, the Reduce task and the input format, plus the Cassandra connection; once the setup is done, <code>job.waitForCompletion(true)</code> has to be called - it starts the Map-Reduce task and waits for the results (see the configuration sketch after this list). This solution can be used to quickly get into the Hadoop world, and for testing. It will not scale (single process) and it will fetch data over the network, but for a start it is fine.</li>
<li>Real Hadoop cluster - I have not set it up yet, but as far as I understand, the Map-Reduce jobs from the previous example will work just fine. Additionally we need HDFS, which will be used to distribute the jars containing the Map-Reduce classes across the Hadoop cluster.</li>
</ul>
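<p>To make the <code>Job</code> setup concrete, here is a minimal configuration sketch for reading from one column family and writing results to another. It assumes the pre-CQL <code>ConfigHelper</code> API from the <code>org.apache.cassandra.hadoop</code> package; the keyspace and column family names, the addresses, and the <code>MyMapper</code>/<code>MyReducer</code> classes are placeholders (the Map and Reduce implementations are not shown), and the exact helper method names differ slightly between Cassandra versions.</p>
<pre><code>import java.util.Arrays;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraMapReduceRunner {

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "cassandra-map-reduce");
        job.setJarByClass(CassandraMapReduceRunner.class);

        // Placeholder Map and Reduce implementations - supply your own classes here.
        job.setMapperClass(MyMapper.class);
        job.setReducerClass(MyReducer.class);

        // Read the input rows from Cassandra, write the Reduce results back to Cassandra.
        job.setInputFormatClass(ColumnFamilyInputFormat.class);
        job.setOutputFormatClass(ColumnFamilyOutputFormat.class);

        // Input side: any node of the ring, its Thrift port, the partitioner and the CF to scan.
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputPartitioner(job.getConfiguration(),
                "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "MyKeyspace", "MyInputCF");

        // Which columns each Map task should receive for every row.
        SlicePredicate predicate = new SlicePredicate()
                .setColumn_names(Arrays.asList(ByteBufferUtil.bytes("text")));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

        // Output side: where the Reduce function stores its results.
        ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setOutputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setOutputPartitioner(job.getConfiguration(),
                "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "MyKeyspace", "MyOutputCF");

        // Starts the Map-Reduce task and blocks until it has finished.
        job.waitForCompletion(true);
    }
}
</code></pre>
<p>Run with the default (local) Hadoop configuration this executes in a single JVM, exactly as described above; submitted to a real cluster, the same setup lets the <code>JobTracker</code> place Map tasks next to the Cassandra nodes that own the corresponding <code>InputSplits</code>.</p>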