
I'm working through the book "Automate the Boring Stuff with Python" and trying to create a program that downloads multiple comics from http://xkcd.com at the same time, but I've run into some problems. I'm copying the program exactly as it appears in the book (the chapter on multithreading in Python).

Here is my code:

# multidownloadXkcd.py - Downloads XKCD comics using multiple threads.

import requests, os, bs4, threading

os.chdir('c:\\users\\patty\\desktop')
os.makedirs('xkcd', exist_ok=True)    # store comics in ./xkcd

def downloadXkcd(startComic, endComic):
    for urlNumber in range(startComic, endComic):
        # Download the page.
        print('Downloading page http://xkcd.com/%s...' % (urlNumber))
        res = requests.get('http://xkcd.com/%s' % (urlNumber))
        res.raise_for_status()

        soup = bs4.BeautifulSoup(res.text, "html.parser")

        # Find the URL of the comic image.
        comicElem = soup.select('#comic img')
        if comicElem == []:
            print('Could not find comic image.')
        else:
            comicUrl = comicElem[0].get('src')
            # Download the image.
            print('Downloading image %s...' % (comicUrl))
            res = requests.get(comicUrl, "html.parser")
            res.raise_for_status()

            # Save the image to ./xkcd.
            imageFile = open(os.path.join('xkcd', os.path.basename(comicUrl)), 'wb')
            for chunk in res.iter_content(100000):
                imageFile.write(chunk)
            imageFile.close()

downloadThreads = []             # a list of all the Thread objects
for i in range(0, 1400, 100):    # loops 14 times, creates 14 threads
    downloadThread = threading.Thread(target=downloadXkcd, args=(i, i + 99))
    downloadThreads.append(downloadThread)
    downloadThread.start()

# Wait for all threads to end.
for downloadThread in downloadThreads:
    downloadThread.join()
print('Done.')

When I run it, I get the following exceptions:

Exception in thread Thread-1: 
Traceback (most recent call last): 
    File "C:\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner 
    self.run() 
    File "C:\Python\Python35\lib\threading.py", line 862, in run 
    self._target(*self._args, **self._kwargs) 
    File "C:\Users\PATTY\PycharmProjects\CH15_TASKS\practice.py", line 13, in downloadXkcd 
    res.raise_for_status() 
    File "C:\Python\Python35\lib\site-packages\requests\models.py", line 862, in raise_for_status 
    raise HTTPError(http_error_msg, response=self) 
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://xkcd.com/0 
Exception in thread Thread-2: 
Traceback (most recent call last): 
    File "C:\Python\Python35\lib\threading.py", line 914, in _bootstrap_inner 
    self.run() 
    File "C:\Python\Python35\lib\threading.py", line 862, in run 
    self._target(*self._args, **self._kwargs) 
    File "C:\Users\PATTY\PycharmProjects\CH15_TASKS\practice.py", line 25, in downloadXkcd 
    res = requests.get(comicUrl, "html.parser") 
    File "C:\Python\Python35\lib\site-packages\requests\api.py", line 70, in get 
    return request('get', url, params=params, **kwargs) 
    File "C:\Python\Python35\lib\site-packages\requests\api.py", line 56, in request 
    return session.request(method=method, url=url, **kwargs) 
    File "C:\Python\Python35\lib\site-packages\requests\sessions.py", line 461, in request 
    prep = self.prepare_request(req) 
    File "C:\Python\Python35\lib\site-packages\requests\sessions.py", line 394, in prepare_request 
    hooks=merge_hooks(request.hooks, self.hooks), 
    File "C:\Python\Python35\lib\site-packages\requests\models.py", line 294, in prepare 
    self.prepare_url(url, params) 
    File "C:\Python\Python35\lib\site-packages\requests\models.py", line 354, in prepare_url 
    raise MissingSchema(error) 
requests.exceptions.MissingSchema: Invalid URL '//imgs.xkcd.com/comics/family_circus.jpg': No schema supplied. Perhaps you meant http:////imgs.xkcd.com/comics/family_circus.jpg? 

It says the URL is invalid, but whenever I copy and paste that URL into a web browser it seems to work. Does anyone know how I can fix this problem? Thanks.


Fix your URL. Just because your browser fixes it for you doesn't mean it's valid. – spectras


The problem is that the 'src' attribute of the '<img>' tag doesn't specify 'http://' or 'https://'. That works in a browser, but not with 'requests'. See http://stackoverflow.com/questions/30770213/no-schema-supplied-and-other-errors-with-using-requests-get. – charlierproctor
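
That comment matches the second traceback exactly. As a standalone illustration (my own snippet, not part of the original thread), a protocol-relative URL like the one in the error makes requests raise MissingSchema immediately, while a browser silently resolves '//host/path' against the scheme of the current page:

import requests

# '//imgs.xkcd.com/...' has no scheme; a browser fills it in from the current
# page, but requests has no such context and rejects the URL outright.
url = '//imgs.xkcd.com/comics/family_circus.jpg'
try:
    requests.get(url)
except requests.exceptions.MissingSchema as err:
    print(err)    # Invalid URL '//imgs.xkcd.com/comics/family_circus.jpg': No schema supplied. ...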


Thanks, it's working now. – tadm123

Answer


Yes, as @spectras said, just because your browser fixes the URL for you doesn't mean it's valid. Try putting 'http:' in front of it so the URL has a scheme, and see whether it works.
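
A minimal sketch of that fix (my own code, not the book's), applied to the image-download step from the question; it assumes the src value always comes back protocol-relative, like the '//imgs.xkcd.com/...' URL in the traceback:

comicUrl = comicElem[0].get('src')      # e.g. '//imgs.xkcd.com/comics/family_circus.jpg'
if comicUrl.startswith('//'):
    comicUrl = 'http:' + comicUrl       # supply the missing scheme
res = requests.get(comicUrl)            # drop the stray "html.parser" argument; it belongs to bs4.BeautifulSoup, not requests.get
res.raise_for_status()

Separately, the 404 for http://xkcd.com/0 in the first traceback is unrelated to the URL scheme: the outer loop range(0, 1400, 100) starts at 0 and there is no comic number 0, so skipping that number (or starting the loop at 1) avoids it.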