
here is the kicker:

> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

i'd been having similar issues. i used stop-all.sh to shut down hadoop. i guess it was foolish of me to think this would properly save the data in my HDFS.

but as far as i can tell from what appears to be the appropriate code chunk in the hadoop-daemon.sh script, this is not the case - it just kills the processes:

```sh
(stop)

    if [ -f $pid ]; then
      # kill -0 only tests whether the process is still alive...
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo stopping $command
        # ...and this is a plain SIGTERM; nothing gets flushed
        # or checkpointed first
        kill `cat $pid`
      else
        echo no $command to stop
      fi
    else
      echo no $command to stop
    fi
```

did you look to see if the directory it's complaining about existed? i checked and mine did not, although there was an (empty!) data folder in there where I imagine data might have once lived.

so my guess was that what we need to do is configure Hadoop such that our namenode and datanode are NOT stored in a tmp directory. there is some possibility that the OS is doing maintenance and getting rid of these files. either that, or hadoop figures you don't care about them anymore, because you wouldn't have left them in a tmp directory if you did, and you wouldn't be restarting your machine in the middle of a map-reduce job. I don't really think this *should* happen (i mean, that's not how *i* would design things) but it seemed like a good guess.

so, based on this site [http://wiki.datameer.com/display/DAS11/Hadoop+configuration+file+templates](http://wiki.datameer.com/display/DAS11/Hadoop+configuration+file+templates) i edited my conf/hdfs-site.xml file to point to the following paths (obviously, make your own directories as you see fit):

```xml
<property>
  <name>dfs.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoopstorage/data/</value>
</property>
```

Did this, formatted the new namenode (sadly, data loss seems inevitable in this situation), stopped and started hadoop with the shell scripts, restarted the machine, and my files were still there... (exact commands in the p.s. below.)

YMMV...hope this works for you! i'm on OS X but i don't think you should have dissimilar results.

J
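p.s. in case it's useful, here's the end-to-end sequence i'm describing. treat it as a sketch: it assumes a 1.x-era layout where the control scripts live in `bin/`, and (again) `namenode -format` throws away whatever metadata is left:

```sh
# stop the daemons (as shown above, this just kills them)
bin/stop-all.sh

# create the storage directories named in hdfs-site.xml and make sure
# the user running hadoop owns them
sudo mkdir -p /hadoopstorage/name /hadoopstorage/data
sudo chown -R $USER /hadoopstorage

# format a fresh namenode in the new location - this DESTROYS old metadata
bin/hadoop namenode -format

# bring everything back up
bin/start-all.sh
```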
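p.p.s. if you're wondering why the files landed in /tmp to begin with: as far as i can tell (going from the 1.x-era defaults, so double-check core-default.xml and hdfs-default.xml for your version), both dfs directories hang off hadoop.tmp.dir, which defaults to a path under /tmp:

```xml
<!-- from core-default.xml (1.x-era defaults; verify against your version) -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>

<!-- from hdfs-default.xml -->
<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.tmp.dir}/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data</value>
</property>
```

so instead of overriding the two dfs dirs separately, you could also just point hadoop.tmp.dir somewhere permanent in core-site.xml and get the same effect.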
 
