# How to efficiently feed FAST ESP with gigabytes of data with .NET

This will be a tricky question, but I will try anyway: our task is to feed Microsoft FAST ESP with gigabytes of data. The final amount of indexed data is somewhere in the neighborhood of 50-60 GB.

FAST has a .NET API, but its core components are written in Python (the processing pipelines that index documents). The challenge is to communicate reliably with the system while feeding it gigabytes of data for indexing.

**The problems that arise with FAST here are:**

> 1. The system is quirky when it is fed too much data at once, because it wants to reindex its data, during which the system remains unreachable for hours. Unacceptable.
>
> 2. It is not an option to queue up all the data and serially feed one item at a time, since this would take too long (several days).
>
> 3. When an item cannot be indexed by FAST, the client has to re-feed it. For this to work, the system is supposed to invoke a callback method to inform the client about the failure. However, whenever the system times out, the feeding client cannot react, because that callback is never called. Hence the client starves: data sits in the queue but cannot be passed along to the system, the queue collapses, and data is lost. You get the idea.

**Notes:**

> 1. Feeding an item can take seconds for a small item and up to 5-8 hours for a single large one.
> 2. The items being indexed are both binary and text based.
> 3. The goal is for the full indexing to take "only" 48-72 h, i.e. it must happen over the weekend.
> 4. **The FAST document processing pipelines (Python code) have around 30 stages each. There are 27 pipelines in total as of this writing.**

**In summary:**

> The major challenge is to feed the system with items, big and small, at just the right speed (not too fast, because it might collapse or run into memory issues; not too slow, because that would take too long), simultaneously and in parallel, like asynchronously running threads. In my opinion there has to be an algorithm that decides when to feed which items and how many at once. Parallel programming comes to mind.
>
> There could also be multiple "queues", where each queue (process) is dedicated to items of a certain size; the items are loaded into a queue and then fed one by one (by worker threads). A rough sketch of this idea is included at the end of the question.

I am curious whether anyone has ever done anything like this, or how you would go about a problem like this.

> **EDIT: Again, I am not looking to "fix" FAST ESP or improve its inner workings. The challenge is to use it effectively!**
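To make the queue idea concrete, here is a minimal sketch of size-partitioned feeding with bounded parallelism and a client-side timeout, so that a missing failure callback cannot starve the feeder. Note that `IFeeder`, `FeedAsync`, the size threshold, and the worker counts are all hypothetical placeholders, not part of the actual FAST ESP .NET API; they only illustrate the shape of the approach.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for the FAST ESP .NET feeding API;
// the real API and its callback contract differ.
public interface IFeeder
{
    Task FeedAsync(Document doc, CancellationToken ct);
}

public record Document(string Id, long SizeBytes);

public class SizePartitionedFeeder
{
    private readonly IFeeder _feeder;

    public SizePartitionedFeeder(IFeeder feeder) => _feeder = feeder;

    public async Task FeedAllAsync(IEnumerable<Document> docs)
    {
        // Partition by size so a single 5-8 hour item cannot block small ones.
        var small = new BlockingCollection<Document>();
        var large = new BlockingCollection<Document>();
        foreach (var doc in docs)
            (doc.SizeBytes < 10_000_000 ? small : large).Add(doc); // threshold: tune empirically
        small.CompleteAdding();
        large.CompleteAdding();

        // More workers for small items, few for large ones; the counts are the
        // knobs for finding the feed rate FAST tolerates without collapsing.
        var workers = Enumerable.Range(0, 8)
            .Select(_ => Task.Run(() => RunWorkerAsync(small, TimeSpan.FromMinutes(10))))
            .Concat(Enumerable.Range(0, 2)
                .Select(_ => Task.Run(() => RunWorkerAsync(large, TimeSpan.FromHours(9)))))
            .ToArray();
        await Task.WhenAll(workers);
    }

    private async Task RunWorkerAsync(BlockingCollection<Document> queue, TimeSpan timeout)
    {
        foreach (var doc in queue.GetConsumingEnumerable())
        {
            for (var attempt = 1; attempt <= 3; attempt++)
            {
                // Client-side timeout: never rely on the server's failure
                // callback, which is exactly what goes missing on timeouts.
                using var cts = new CancellationTokenSource(timeout);
                try
                {
                    await _feeder.FeedAsync(doc, cts.Token);
                    break; // fed successfully, move on to the next item
                }
                catch (OperationCanceledException)
                {
                    Console.Error.WriteLine($"Timed out feeding {doc.Id} (attempt {attempt}); re-feeding.");
                    await Task.Delay(TimeSpan.FromSeconds(30 * attempt)); // back off before retry
                }
            }
        }
    }
}
```

The essential point is that the per-attempt timeout and retry loop live entirely on the client, so progress never depends on the server's callback arriving, and the size partitioning keeps the handful of multi-hour items from monopolizing the workers that drain the small-item queue.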