# Downloading a picture via urllib and python
So I'm trying to make a Python script that downloads webcomics and puts them in a folder on my desktop. I've found a few programs on here that do something similar, but nothing quite like what I need. The one I found most similar is here: http://bytes.com/topic/python/answers/850927-problem-using-urllib-download-images. I tried using this code:

```python
>>> import urllib
>>> image = urllib.URLopener()
>>> image.retrieve("http://www.gunnerkrigg.com//comics/00000001.jpg", "00000001.jpg")
('00000001.jpg', <httplib.HTTPMessage instance at 0x1457a80>)
```

I then searched my computer for a file "00000001.jpg", but all I found was the cached picture of it. I'm not even sure it saved the file to my computer. Once I understand how to get the file downloaded, I think I know how to handle the rest. Essentially just use a for loop, split the string at '00000000.jpg', and increment the '00000000' up to the largest number, which I would have to somehow determine. Any recommendations on the best way to do this or how to download the file correctly?

Thanks!

EDIT 6/15/10

Here is the completed script; it saves the files to any directory you choose. For some odd reason the files weren't downloading at first, and then they just did. Any suggestions on how to clean it up would be much appreciated. I'm currently working out how to find out how many comics exist on the site, so I can get just the latest one rather than having the program quit after a certain number of exceptions are raised.

```python
import urllib
import os

comicCounter = len(os.listdir('/file')) + 1  # reads the number of files in the folder to start downloading at the next comic
errorCount = 0

def download_comic(url, comicName):
    """
    download a comic in the form of
    url = http://www.example.com
    comicName = '00000000.jpg'
    """
    image = urllib.URLopener()
    image.retrieve(url, comicName)  # download comicName at URL

while comicCounter <= 1000:  # not the most elegant solution
    os.chdir('/file')  # set where files download to
    try:
        if comicCounter < 10:  # needed to break into 10^n segments because comic names are a set of zeros followed by a number
            comicNumber = str('0000000' + str(comicCounter))  # string containing the eight digit comic number
            comicName = str(comicNumber + ".jpg")  # string containing the file name
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)  # creates the URL for the comic
            comicCounter += 1  # increments the comic counter to go to the next comic; must happen before the download in case the download raises an exception
            download_comic(url, comicName)  # uses the function defined above to download the comic
            print url
        if 10 <= comicCounter < 100:
            comicNumber = str('000000' + str(comicCounter))
            comicName = str(comicNumber + ".jpg")
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)
            comicCounter += 1
            download_comic(url, comicName)
            print url
        if 100 <= comicCounter < 1000:
            comicNumber = str('00000' + str(comicCounter))
            comicName = str(comicNumber + ".jpg")
            url = str("http://www.gunnerkrigg.com//comics/" + comicName)
            comicCounter += 1
            download_comic(url, comicName)
            print url
        else:  # quit the program if any number outside this range shows up
            quit
    except IOError:  # urllib raises an IOError for a 404 error, when the comic doesn't exist
        errorCount += 1  # add one to the error count
        if errorCount > 3:  # if more than three errors occur during downloading, quit the program
            break
        else:
            print str("comic" + ' ' + str(comicCounter) + ' ' + "does not exist")  # otherwise say that this comic number doesn't exist

print "all comics are up to date"  # prints if all comics are downloaded
```
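On the original question: `retrieve` with a bare filename writes into the current working directory, which is why the file seemed to disappear. A minimal sketch (Python 2, like the snippets above) that passes an explicit destination path instead; the `/file` folder is just the placeholder directory used by the script above:

```python
import os
import urllib

save_dir = '/file'  # placeholder folder, matching the script above
url = "http://www.gunnerkrigg.com//comics/00000001.jpg"
destination = os.path.join(save_dir, "00000001.jpg")  # full path, so the file lands somewhere predictable

# same call as in the question, but with an absolute target path
urllib.URLopener().retrieve(url, destination)
print destination
```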
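On the clean-up question: the three zero-padding branches can collapse into one with `str.zfill`, which pads a number with leading zeros to a fixed width. This is one possible restructuring, not the only one, and it keeps the same assumptions as the script above: the `/file` placeholder folder, and quitting after a few missing comics in a row.

```python
import os
import urllib

save_dir = '/file'  # same placeholder folder as the original script
comic_counter = len(os.listdir(save_dir)) + 1  # resume after the comics already downloaded
error_count = 0
opener = urllib.URLopener()  # plain URLopener raises IOError on a 404, per the original script's comment

while error_count <= 3:  # stop after a few consecutive missing comics
    comic_name = str(comic_counter).zfill(8) + ".jpg"  # e.g. 17 -> '00000017.jpg'
    url = "http://www.gunnerkrigg.com//comics/" + comic_name
    try:
        opener.retrieve(url, os.path.join(save_dir, comic_name))
        print url
        error_count = 0  # only consecutive misses count toward quitting
    except IOError:  # raised for a 404, i.e. the comic doesn't exist
        error_count += 1
        print "comic " + str(comic_counter) + " does not exist"
    comic_counter += 1

print "all comics are up to date"
```

The padding could equally be written as `'%08d' % comic_counter`; `zfill` just keeps it to one call and removes the need to test which power of ten the counter has reached.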
 
