Ruby, Mongodb, Anemone: web crawler with possible memory leak?
I began to learn about web crawlers recently and built a sample crawler with Ruby, [Anemone](http://anemone.rubyforge.org/), and [MongoDB](http://www.mongodb.org/) for storage. I'm testing the crawler on a massive public website with possibly billions of links.

The crawler is indexing the correct information, but when I check memory use in Activity Monitor it shows the memory constantly growing. I have only run the crawler for about 6-7 hours and the memory is at 1.38 GB for mongod and 1.37 GB for the Ruby process; it seems to grow by about 100 MB every hour.

It seems that I might have a memory leak. Is there a more optimal way to achieve the same crawl without the memory escalating out of control, so that it can run longer?

```ruby
# Sample web_crawler.rb with Anemone, MongoDB and Ruby.
require 'anemone'

# Do not store the page's body.
module Anemone
  class Page
    def to_hash
      { 'url'     => @url.to_s,
        'links'   => links.map(&:to_s),
        'code'    => @code,
        'visited' => @visited,
        'depth'   => @depth,
        'referer' => @referer.to_s,
        'fetched' => @fetched }
    end

    def self.from_hash(hash)
      page = self.new(URI(hash['url']))
      { '@links'   => hash['links'].map { |link| URI(link) },
        '@code'    => hash['code'].to_i,
        '@visited' => hash['visited'],
        '@depth'   => hash['depth'].to_i,
        '@referer' => hash['referer'],
        '@fetched' => hash['fetched']
      }.each do |var, value|
        page.instance_variable_set(var, value)
      end
      page
    end
  end
end

Anemone.crawl("http://www.example.com/",
              :discard_page_bodies => true,
              :threads => 1,
              :obey_robots_txt => true,
              :user_agent => "Example - Web Crawler",
              :large_scale_crawl => true) do |anemone|
  anemone.storage = Anemone::Storage.MongoDB

  # Only follow links whose URL contains /example.
  anemone.focus_crawl do |page|
    page.links.delete_if do |link|
      (link.to_s =~ /example/).nil?
    end
  end

  # Only process pages in the /example directory.
  anemone.on_pages_like(/example/) do |page|
    regex = /some type of regex/
    example = page.doc.css('#example_div').inner_html.gsub(regex, '') rescue next

    # Append the extracted text to a file.
    if !example.nil? and example != ""
      open('example.txt', 'a') { |f| f.puts example }
    end
    page.discard_doc!
  end
end
```
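To pin down whether the growth is collectable garbage or genuinely retained objects, I figured something like the diagnostic sketch below could be dropped into the `on_pages_like` block. It is only an illustration, not part of the crawler above: the 500-page interval is an arbitrary choice, and the `ps` invocation assumes macOS/Linux (which holds here, since I'm reading memory from Activity Monitor).

```ruby
# Diagnostic sketch: sample the Ruby process's resident set size (RSS) and
# force a GC pass every N pages. If RSS levels off after GC.start, the growth
# was uncollected garbage; if it keeps climbing, something is retaining the
# objects (a real leak).
PAGE_INTERVAL = 500   # arbitrary sampling interval
$pages_seen   = 0

def log_rss(label)
  # `ps -o rss=` reports resident memory in KB on macOS/Linux.
  rss_kb = `ps -o rss= -p #{Process.pid}`.to_i
  puts "#{Time.now} #{label}: RSS #{rss_kb / 1024} MB"
end

# Inside anemone.on_pages_like(/example/) do |page| ... end one would add:
#   $pages_seen += 1
#   if ($pages_seen % PAGE_INTERVAL).zero?
#     GC.start
#     log_rss("after #{$pages_seen} pages")
#   end
```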