Ideally there'd be a utility somewhere to do this, but I'm not aware of one. However, it's not too hard to do this "by hand" in a reasonably efficient way. I'll assume that you already have a `Query` and/or `Filter` object that you can use to define the subset of interest.

First, build a list in memory of all of the document IDs in your index subset. You can use `IndexSearcher.search(Query, Filter, HitCollector)` to do this very quickly; the `HitCollector` [documentation](http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/HitCollector.html) includes an example that seems like it ought to work, or you can use some other container to store your doc IDs.

Next, initialize an empty `HashMap` (or whatever) to map terms to total frequency, and populate the map by invoking one of the `IndexReader.getTermFreqVector` methods for every document and field of interest. The three-argument form seems simpler, but either should be just fine. For the three-argument form, you'd make a `TermVectorMapper` whose `map` method checks whether `term` is in the map, associates it with `frequency` if not, or adds `frequency` to the existing value if so. Be sure to use the same `TermVectorMapper` object across all of the calls to `getTermFreqVector` in this pass, rather than instantiating a new one for each document in the loop. You can also speed things up quite a bit by overriding `isIgnoringPositions()` and `isIgnoringOffsets()`; your object should return `true` for both of those. Your `TermVectorMapper` will also be forced to define a `setExpectations` method (it's abstract in the base class), but that one doesn't need to do anything.

Once you've built your map, just sort the map entries by descending frequency and read off however many top terms you like. If you know in advance how many terms you want, you might prefer a selection- or bounded-heap-based algorithm that finds the top *k* items in (near-)linear time instead of an O(*n* log *n*) sort, but I imagine the plain old sort will be plenty fast in practice. It's up to you.

If you prefer, you can combine the first two stages by having your `HitCollector` invoke `getTermFreqVector` directly. This should certainly produce equally correct results, and intuitively seems like it would be simpler and better, but the docs warn (on the same page as the `HitCollector` example above) that doing so is likely to be quite a bit slower than the two-pass approach. Or I could be misinterpreting their warning. If you're feeling ambitious, you could try it both ways, compare, and let us know.
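For concreteness, here's a rough, untested sketch of the two-pass approach against the Lucene 2.4 API discussed above. The class name `FrequencyMapper` and the method name `topTerms` are made up for illustration; substitute your own field name for whatever you pass as `field`:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermVectorMapper;
import org.apache.lucene.index.TermVectorOffsetInfo;
import org.apache.lucene.search.Filter;
import org.apache.lucene.search.HitCollector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

/** Accumulates term -> total frequency across every document it is fed. */
class FrequencyMapper extends TermVectorMapper {
    final Map<String, Integer> totals = new HashMap<String, Integer>();

    public void setExpectations(String field, int numTerms,
                                boolean storeOffsets, boolean storePositions) {
        // Required by the abstract base class, but nothing to do here.
    }

    public void map(String term, int frequency,
                    TermVectorOffsetInfo[] offsets, int[] positions) {
        Integer seen = totals.get(term);
        totals.put(term, seen == null ? frequency : seen + frequency);
    }

    // Tell Lucene not to load positions/offsets even if they were stored.
    public boolean isIgnoringPositions() { return true; }
    public boolean isIgnoringOffsets()   { return true; }
}

class TopTerms {
    static List<Map.Entry<String, Integer>> topTerms(
            IndexSearcher searcher, Query query, Filter filter, String field)
            throws IOException {
        // Pass 1: gather the doc IDs of the subset.
        final List<Integer> docIds = new ArrayList<Integer>();
        searcher.search(query, filter, new HitCollector() {
            public void collect(int doc, float score) {
                docIds.add(doc);
            }
        });

        // Pass 2: feed every hit through one shared mapper.
        FrequencyMapper mapper = new FrequencyMapper();
        IndexReader reader = searcher.getIndexReader();
        for (int doc : docIds) {
            reader.getTermFreqVector(doc, field, mapper);
        }

        // Sort the accumulated entries by descending total frequency.
        List<Map.Entry<String, Integer>> ranked =
            new ArrayList<Map.Entry<String, Integer>>(mapper.totals.entrySet());
        Collections.sort(ranked, new Comparator<Map.Entry<String, Integer>>() {
            public int compare(Map.Entry<String, Integer> a,
                               Map.Entry<String, Integer> b) {
                return b.getValue().compareTo(a.getValue());
            }
        });
        return ranked;
    }
}
```

Note that all of this relies on the field having been indexed with term vectors enabled (e.g. `Field.TermVector.YES`); if they weren't stored, `getTermFreqVector` will have nothing to map.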