<p>At some point your query analysis doesn't match your document analysis.</p>

<p>Most likely you are using Lucene's <a href="http://lucene.apache.org/core/3_6_0/api/core/org/apache/lucene/analysis/standard/StandardAnalyzer.html" rel="nofollow">StandardAnalyzer</a> at query-parsing time but not at index time, as denoted by:</p>

<pre><code>@SearchableMetaData(name="ordering_name", index=Index.NOT_ANALYZED)
</code></pre>

<p>The <a href="http://lucene.apache.org/core/3_6_0/api/all/org/apache/lucene/analysis/standard/StandardTokenizer.html" rel="nofollow">StandardTokenizer</a> used inside this analyzer treats the character <code>/</code> as a word boundary (just as a space would be), producing the tokens <code>n</code> and <code>a</code>. The token <code>a</code> is then removed by a <a href="http://lucene.apache.org/core/3_6_0/api/all/org/apache/lucene/analysis/StopFilter.html" rel="nofollow">StopFilter</a>.</p>

<p>The following code demonstrates this behavior (the input is <code>"c/d e/f n/a"</code>):</p>

<pre><code>Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_36);
TokenStream tokenStream = analyzer.tokenStream("content", new StringReader("c/d e/f n/a"));
CharTermAttribute term = tokenStream.getAttribute(CharTermAttribute.class);
PositionIncrementAttribute position = tokenStream.getAttribute(PositionIncrementAttribute.class);

int pos = 0;
while (tokenStream.incrementToken()) {
    String termStr = term.toString();
    int incr = position.getPositionIncrement();
    if (incr == 0) {
        System.out.print(" [" + termStr + "]");
    } else {
        pos += incr;
        System.out.println(" " + pos + ": [" + termStr + "]");
    }
}
</code></pre>

<p>You'll see the following extracted tokens:</p>

<pre><code> 1: [c]
 2: [d]
 3: [e]
 4: [f]
 5: [n]
</code></pre>

<p>Notice that the expected position 6 with token <code>a</code> is missing. As you can see, Lucene's <a href="http://lucene.apache.org/core/3_6_0/api/all/org/apache/lucene/queryParser/QueryParser.html" rel="nofollow">QueryParser</a> performs the same tokenization:</p>

<pre><code>QueryParser parser = new QueryParser(Version.LUCENE_36, "content", new StandardAnalyzer(Version.LUCENE_36));
System.out.println(parser.parse("+n/a*"));
</code></pre>

<p>The output is:</p>

<pre><code>+content:n
</code></pre>

<p>EDIT: The solution is to use a <a href="http://lucene.apache.org/core/3_6_0/api/core/org/apache/lucene/analysis/WhitespaceAnalyzer.html" rel="nofollow">WhitespaceAnalyzer</a>, which splits only on whitespace, and to set the field to <code>ANALYZED</code>. The following code is a proof of concept in plain Lucene:</p>

<pre><code>IndexWriter writer = new IndexWriter(new RAMDirectory(),
        new IndexWriterConfig(Version.LUCENE_36, new WhitespaceAnalyzer(Version.LUCENE_36)));

Document doc = new Document();
doc.add(new Field("content", "Temp 0 New n/a", Store.YES, Index.ANALYZED));
writer.addDocument(doc);
writer.commit();

IndexReader reader = IndexReader.open(writer, true);
IndexSearcher searcher = new IndexSearcher(reader);
QueryParser parser = new QueryParser(Version.LUCENE_36, "content", new WhitespaceAnalyzer(Version.LUCENE_36));
TopDocs docs = searcher.search(parser.parse("+n/a"), 10);
System.out.println(docs.totalHits);

writer.close();
</code></pre>

<p>The output is: <code>1</code>.</p>
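<p>If you don't have a Lucene project at hand, the difference between the two analysis chains can be sketched with plain JDK code. This is only an illustration, not real Lucene: <code>standardLike</code> roughly mimics StandardTokenizer plus a StopFilter (split on non-alphanumerics, drop stop words), while <code>whitespaceLike</code> mimics WhitespaceAnalyzer (split on spaces only). The stop-word set here is a tiny made-up subset of Lucene's English defaults.</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustration only: shows why "n/a" survives whitespace tokenization
// but collapses to just "n" under standard-style tokenization.
public class TokenizerSketch {

    // Tiny stand-in for Lucene's English stop-word list (not the real set).
    static final Set<String> STOP_WORDS = Set.of("a", "an", "the");

    // Standard-like: '/' (any non-alphanumeric) is a word boundary,
    // and stop words are removed afterwards.
    static List<String> standardLike(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.toLowerCase().split("[^\\p{Alnum}]+")) {
            if (!t.isEmpty() && !STOP_WORDS.contains(t)) {
                tokens.add(t);
            }
        }
        return tokens;
    }

    // Whitespace-like: only spaces separate tokens; nothing is removed.
    static List<String> whitespaceLike(String text) {
        List<String> tokens = new ArrayList<>();
        for (String t : text.split("\\s+")) {
            if (!t.isEmpty()) {
                tokens.add(t);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(standardLike("c/d e/f n/a"));   // [c, d, e, f, n]
        System.out.println(whitespaceLike("c/d e/f n/a")); // [c/d, e/f, n/a]
    }
}
```

<p>The first list loses <code>a</code> entirely (it becomes a separate token and is then stopped), which is why a query for <code>n/a</code> against a whitespace-tokenized index, or vice versa, never matches: the two sides simply disagree on what the terms are.</p>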