I think you could approach this problem differently.

You do not need to scan the file that many times. You could create a database, for example in [mongo](http://www.mongodb.org/) or [mysql](http://www.mysql.com/), and for each word you find, fetch its record and increment a "counter" field.

You might object: "but then I will have to query the database a lot, and that could take even longer." It won't, because databases are built for exactly that kind of IO, and you can always [index](http://en.wikipedia.org/wiki/Index_%28database%29) the word column.

---

**EDIT:** There is no way to delimit at all? Let's say that where you have the `Word.name` string you really hold a (not so simple) regex. Could the regex contain `\n`? If the regex can match arbitrary text, you should estimate the maximum length of string any regex can match, double it, and scan the file in windows of that doubled size while advancing the cursor by the original estimate.

Say your estimate of the longest possible match is 20 chars and your file runs from char 0 to char 30000. You run each regex over chars 0 to 40, then 20 to 60, then 40 to 80, and so on.

You should also record the positions where each regex has already matched, so the overlapping windows do not count the same match twice.

Finally, this solution may not be worth the effort; depending on what those regexes actually are, your problem may have a better solution, but it will still be faster than calling `scan` `Words.count` times over your 300 MB string.
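A minimal sketch of the counter-field idea above, assuming the `mongo` gem and a local MongoDB instance. The database, collection, and field names (`word_counts`, `counts`, `word`, `count`) and the file name are placeholders I made up for illustration, not anything from the original setup:

```ruby
require 'mongo'

client = Mongo::Client.new(['127.0.0.1:27017'], database: 'word_counts')
counts = client[:counts]
counts.indexes.create_one({ word: 1 }, unique: true)   # index so per-word lookups stay cheap

# One pass over the file: read it line by line instead of loading 300 MB into memory,
# and bump a counter document per word ($inc with upsert creates it on first sight).
File.foreach('big_file.txt') do |line|
  line.scan(/\w+/) do |w|
    counts.update_one({ word: w.downcase },
                      { '$inc' => { count: 1 } },
                      upsert: true)
  end
end

# Afterwards each word count is a single indexed lookup, e.g.:
# counts.find(word: 'example').first&.fetch('count', 0)
```

A bulk write (or MySQL with `INSERT ... ON DUPLICATE KEY UPDATE`) would cut down the per-word round trips, but the one-pass structure is the same.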
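And a rough sketch of the overlapping-window scan from the **EDIT**, in plain Ruby. It assumes no single match is longer than `MAX_MATCH` bytes (the estimate discussed above) and ASCII-only patterns; the window is twice that bound and the cursor advances by the bound, so a match cut off at one window edge is seen whole in the next window. The regexes and file name are placeholders:

```ruby
require 'set'

MAX_MATCH = 20                      # assumed upper bound on the length of any single match
WINDOW    = MAX_MATCH * 2           # scan twice that many bytes at a time
regexes   = [/foo\s+bar/, /baz+/]   # placeholder patterns standing in for the real regexes

counts = Hash.new(0)
seen   = Hash.new { |h, k| h[k] = Set.new }  # absolute match positions already counted, per regex

File.open('big_file.txt') do |f|
  offset = 0
  loop do
    f.seek(offset)
    chunk = f.read(WINDOW)          # byte-oriented read; fine for ASCII data
    break if chunk.nil? || chunk.empty?

    regexes.each do |re|
      pos = 0
      while (m = re.match(chunk, pos))
        abs_start = offset + m.begin(0)              # position of the match in the whole file
        counts[re] += 1 if seen[re].add?(abs_start)  # skip matches already seen in the previous window
        pos = m.end(0) > m.begin(0) ? m.end(0) : m.begin(0) + 1  # guard against zero-width matches
      end
    end

    break if chunk.length < WINDOW   # short read: end of file reached
    offset += MAX_MATCH              # step by half a window so consecutive windows overlap
  end
end
```

The `seen` sets are the "hold the position you found" step above; for a large file you would only need to remember positions that fall inside the overlap region, not all of them.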