Link Scraping with requests, bs4. Getting Warning: unresponsive script
I'm trying to collect all links from a webpage using requests, BeautifulSoup4, and SoupStrainer in Python 3.3. For writing my code I'm using Komodo Edit 8.0, and I also run my scripts in Komodo Edit. So far everything works fine, but on some webpages I get a popup with the following warning:

```
Warning: unresponsive script
A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete.
Script: viewbufferbase:797
```

I can then choose whether to continue or stop the script.

Here is a little code snippet:

```python
import requests
from bs4 import BeautifulSoup, SoupStrainer

try:
    r = requests.get(address, headers=headers)
    soup = BeautifulSoup(r.text, parse_only=SoupStrainer('a', href=True))
    for link in soup.find_all('a'):
        # some code
        pass
except requests.exceptions.RequestException as e:
    print(e)
```

My question is: what is causing this warning? Is it my Python script taking too long on a webpage, or is it a script on the webpage I'm scraping? I can't see how it could be the latter, because technically I'm not executing the page's scripts, right? Or could it be my bad internet connection?

And another small question: with the above code snippet, am I downloading pictures or just the plain HTML code? Sometimes, when I look at my connection status, I seem to receive far too much data for a plain-HTML request. If so, how can I avoid downloading such content, and how is it possible in general to avoid downloads with requests? Sometimes my program ends up on a download page.

Many thanks!
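On the last point, one common approach (not from the original question; the helper names `is_html` and `fetch_html_only` are hypothetical) is to request with `stream=True`, which makes requests download only the response headers at first, and then inspect the `Content-Type` header before reading the body. A minimal sketch:

```python
import requests


def is_html(content_type):
    """Return True if a Content-Type header value denotes an HTML document."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in ("text/html", "application/xhtml+xml")


def fetch_html_only(url, headers=None):
    """Fetch a URL, but skip the body if the response is not HTML.

    With stream=True, requests reads only the status line and headers up
    front; the body is transferred only when .text or .content is accessed.
    A non-HTML response (image, archive, PDF download) can therefore be
    dropped without paying for the transfer.
    """
    r = requests.get(url, headers=headers, stream=True, timeout=10)
    if not is_html(r.headers.get("Content-Type", "")):
        r.close()  # release the connection without reading the body
        return None
    return r.text  # body is downloaded and decoded here
```

This is a sketch under the assumption that the sites set `Content-Type` correctly; a plain `requests.get` without `stream=True` fetches only the single document at the URL (never the images it references), so oversized transfers usually mean the URL itself points at a large file.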
 
