### Assumptions

Given your example "French", I will work under the assumption that the feature is a word that appears in the document. Also, since you mention that "French kiss" is not relevant, I will further assume that in your case a feature is a word used in a particular sense. For example, if "pool" is a feature, you may say that documents mentioning swimming pools are relevant, but those talking about pool the sport (like snooker or billiards) are not.

- **Note**: Although *word sense disambiguation* (WSD) methods would work, they require too much effort and are overkill for this purpose.

### Suggestion: *localized* language model + bootstrapping

Think of it this way: you don't have an incomplete training set, you have a small training set. The idea is to use this small training data to build bigger training data. This is ***bootstrapping***.

For each occurrence of your feature in the training data, build a language model based only on the words surrounding it. You don't need to build a model for the entire document; ideally, just the sentences containing the feature should suffice. This is what I am calling a ***localized language model*** (LLM).

Build two such LLMs from your training data (let's call it T_0): one for pertinent documents, say M1, and another for irrelevant documents, say M0. Now, to build a bigger training set, classify documents using M1 and M0. For every new document *d*: if *d* does not contain the feature word, it is automatically added as a "bad" (irrelevant) document. If *d* does contain the feature word, take a local window around this word in *d* (the same window size that you used to build the LLMs) and compute the perplexity of that sequence of words under M0 and M1. Assign the document to the class whose model gives the lower perplexity.

To formalize, the pseudo-code is:

```
T_0 := initial training set (consisting of relevant/irrelevant documents)
D_0 := additional data to be bootstrapped
N   := number of bootstrapping iterations

for i = 0 to N-1
    T_{i+1} := empty training set
    Build M0 and M1 from T_i as discussed above, using window size w
    for d in D_0
        if feature-word not in d
            add d to the irrelevant documents of T_{i+1}
        else
            compute perplexity scores P0 and P1 under M0 and M1,
            using window size w around the feature-word in d
            if P0 < P1 - delta
                add d to the irrelevant documents of T_{i+1}
            else if P1 < P0 - delta
                add d to the relevant documents of T_{i+1}
            else
                do not use d in T_{i+1}
            end
        end
    end
    Select a small random sample from the relevant and irrelevant documents
    in T_{i+1}, and (re)classify them manually if required.
end
```

- T_N is your final training set. In the bootstrapping above, the parameter *delta* needs to be determined by experiments on some *held-out* data (also called *development* data).
- The manual reclassification of a small sample is done so that the noise introduced during bootstrapping does not accumulate over all *N* iterations.
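Below is a minimal Python sketch of this approach, under some simplifying assumptions: the "localized language models" are plain unigram models with add-one smoothing, the tokenizer is a crude regex, and the names (`build_llm`, `bootstrap`, `local_window`) and default values (`w=5`, `delta=0.5`, `n_iter=3`) are illustrative placeholders rather than anything prescribed above.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Crude word tokenizer; swap in a proper tokenizer for real use."""
    return re.findall(r"[a-z']+", text.lower())

def local_window(tokens, feature, w):
    """Tokens within w positions of each occurrence of the feature word."""
    ctx = []
    for i, tok in enumerate(tokens):
        if tok == feature:
            ctx.extend(tokens[max(0, i - w): i + w + 1])
    return ctx

def build_llm(docs, feature, w):
    """'Localized language model': unigram counts over the local windows only."""
    counts = Counter()
    for doc in docs:
        counts.update(local_window(tokenize(doc), feature, w))
    return counts

def perplexity(tokens, counts, vocab_size):
    """Per-word perplexity under an add-one-smoothed unigram model."""
    total = sum(counts.values())
    log_prob = 0.0
    for tok in tokens:
        p = (counts[tok] + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))

def bootstrap(relevant, irrelevant, unlabeled, feature, w=5, delta=0.5, n_iter=3):
    """Grow the training set by classifying unlabeled documents with M0/M1."""
    feature = feature.lower()  # the tokenizer lowercases, so match on lowercase
    for _ in range(n_iter):
        m1 = build_llm(relevant, feature, w)    # LLM of pertinent documents
        m0 = build_llm(irrelevant, feature, w)  # LLM of irrelevant documents
        vocab_size = len(set(m0) | set(m1)) or 1
        new_rel, new_irr = [], []
        for d in unlabeled:
            tokens = tokenize(d)
            if feature not in tokens:
                new_irr.append(d)               # no feature word -> "bad" document
                continue
            ctx = local_window(tokens, feature, w)
            p0 = perplexity(ctx, m0, vocab_size)
            p1 = perplexity(ctx, m1, vocab_size)
            if p0 < p1 - delta:
                new_irr.append(d)
            elif p1 < p0 - delta:
                new_rel.append(d)
            # otherwise the document is too ambiguous; skip it this round
        # In practice, manually spot-check a small random sample of
        # new_rel / new_irr here before folding them into the training set.
        relevant = relevant + new_rel
        irrelevant = irrelevant + new_irr
        unlabeled = [d for d in unlabeled if d not in new_rel and d not in new_irr]
    return relevant, irrelevant
```

A hypothetical call such as `bootstrap(rel_docs, irr_docs, d0_docs, "pool")` would return the enlarged relevant/irrelevant sets (T_N). As noted above, `delta` should be tuned on held-out data rather than left at the placeholder default used here, and a small random sample of the newly labeled documents should still be checked manually at each iteration.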
 
