<p>As I'm sure you know, <em>all</em> files have at least one hard link (in the parent directory).</p> <p>To answer the question in your first paragraph (finding files that don't have <em>additional</em> hard links), you'll need to distinguish between directories and everything else. Assuming you have GNU Coreutils, you can use:</p> <pre><code>stat -c '%h' filename </code></pre> <p>to determine the number of hard links for a given file name. Otherwise you can parse the output of <code>ls -ld filename</code> -- which should work, but <code>ls</code> output isn't really meant to be machine-readable.</p> <p>For anything other than a directory, if the number of links is greater than 1, there's a hard link to it somewhere.</p> <p>A directory, on the other hand, will always have the usual one link from its parent, plus one for its own <code>.</code> entry, plus one for the <code>..</code> entry of <em>each</em> of its immediate subdirectories. So you'll have to determine how many links it would have in the absence of any additional hard links, and compare that to the number it actually has.</p> <p>You can avoid doing this if you happen to know that you're on a system that forbids hard links to directories. (I'm not sure whether that restriction is typically imposed by the OS or by each filesystem.)</p> <p>But that doesn't solve the problem in your second paragraph, creating a list of unique files within a directory. Knowing that the plain file <code>foo</code> has a link count greater than 1 <em>doesn't</em> tell you whether it's unique in the current directory; the other hard links could be in different directories (they merely have to be in the same filesystem).</p> <p>To do that, you can do something like:</p> <pre><code>stat -c '%i %n' * </code></pre> <p>which prints the inode number and name for each file in the current directory. You can then filter out duplicate inode numbers to get unique entries. 
This is basically what <a href="https://stackoverflow.com/a/16283363/827263">glenn jackman's answer</a> says. Of course <code>*</code> doesn't actually match <em>everything</em> in the current directory; it skips files whose names start with <code>.</code>, and it can cause problems if some files have special characters (like space) in their names. That may not matter to you, but if it does (assuming GNU find):</p> <pre><code>find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 stat -c '%i %n' </code></pre> <p>(That will still cause problems if any file names contain newline characters, which is actually legal.)</p>
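The inode-based filtering described above can be sketched as follows. This is a minimal sketch assuming GNU coreutils `stat`; the sandbox directory and the file names `a`, `b`, and `a_link` are made up for illustration:

```shell
# Sketch, assuming GNU coreutils: build a sandbox containing a hard
# link, then keep only the first name seen for each inode number.
sandbox=$(mktemp -d)
cd "$sandbox"
echo one > a
echo two > b
ln a a_link                       # a and a_link now share an inode

# Print "inode name" pairs, sort by inode, and let awk drop every
# line whose inode number ($1) has already been seen.
unique=$(stat -c '%i %n' -- * | sort -n | awk '!seen[$1]++')
count=$(printf '%s\n' "$unique" | wc -l)
echo "$count"                     # three names, but only two unique inodes
```

For directories with awkward file names you would swap the glob for the `find ... -print0 | xargs -0 stat -c '%i %n'` pipeline shown above; the `awk` deduplication step stays the same.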