2016-07-05

I have written a program that downloads images from soup.io, but I can't get it to exit an infinite loop.

from bs4 import BeautifulSoup as soup 
import urllib.request 
import requests 
import os.path 

login = "test-site" #input('Enter soup login:') 
website = "http://" + login + ".soup.io" 
path = 'images' 

if not os.path.exists(path): 
    os.makedirs(path) 

openWebsite = soup(urllib.request.urlopen(website), 'html.parser') 
imageLink = openWebsite.find_all(name="div", attrs={'class': 'imagecontainer'}) 

i = 1 
for src in imageLink: 
    temp = src.find('img')['src'] 
    img_data = requests.get(temp).content 
    if temp.find('.gif') != -1:
        filename = os.path.join(path, str(i) + '.gif')
        with open(filename, 'wb') as handler:
            handler.write(img_data)
        i += 1
    elif temp.find('.jpeg') != -1:
        filename = os.path.join(path, str(i) + '.jpeg')
        with open(filename, 'wb') as handler:
            handler.write(img_data)
        i += 1
    else:
        filename = os.path.join(path, str(i) + '.png')
        with open(filename, 'wb') as handler:
            handler.write(img_data)
        i += 1

nextPage = openWebsite.find_all(name="a", attrs={'class': 'more keephash'}) 

while str(nextPage):
    for item in nextPage:
        nextPageLink = website + item['href']

        for j in nextPageLink:
            openWebsite = soup(urllib.request.urlopen(nextPageLink), "html.parser")
            imageLink = openWebsite.find_all(name="div", attrs={'class': 'imagecontainer'})
            nextPage = openWebsite.find_all(name="a", attrs={'class': 'more keephash'})

            for g in nextPage:
                nextPageLink = website + g['href']

            for src in imageLink:
                temp = src.find('img')['src']
                img_data = requests.get(temp).content
                if temp.find('.gif') != -1:
                    filename = os.path.join(path, str(i) + '.gif')
                    with open(filename, 'wb') as handler:
                        handler.write(img_data)
                    i += 1
                elif temp.find('.jpeg') != -1:
                    filename = os.path.join(path, str(i) + '.jpeg')
                    with open(filename, 'wb') as handler:
                        handler.write(img_data)
                    i += 1
                else:
                    filename = os.path.join(path, str(i) + '.png')
                    with open(filename, 'wb') as handler:
                        handler.write(img_data)
                    i += 1
Each page shows 20 images. On every page I scrape the "More" link, which points to the next (older) page (nextPageLink), and open it after each image on the current page has been saved in the loop. My problem is that my program keeps looping on the last page (where there is no "More" link) and downloads the same images from it over and over. I tried assigning nextPageLink to a new variable called previousPage and comparing the two with an if statement: when the links are the same, I wanted to set nextPage = False, but it doesn't work. nextPageLink never updates on the last page, because there is no link there, so I can't compare it properly.
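(As an aside, the three near-identical save branches above can be collapsed by taking the extension straight from the image URL. A minimal sketch; `image_filename` is a hypothetical helper, and it assumes the URLs end in a bare extension with no query string:)

```python
import os.path

def image_filename(url, index, path='images'):
    # Hypothetical helper: take the extension from the URL itself;
    # fall back to .png when the URL has none (mirrors the else branch).
    ext = os.path.splitext(url)[1] or '.png'
    return os.path.join(path, str(index) + ext)

print(image_filename('http://example.com/cat.gif', 1))
print(image_filename('http://example.com/dog.jpeg', 2))
print(image_filename('http://example.com/noext', 3))
```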


`if (scrape('more') == not found) { break }` — basically, pseudocode. If you fail to find a "More" link, break out of the loop. – Marc B

Answer


As @Marc B suggested, my problem was that I wasn't checking whether nextPage was empty. So the solution is simple:

if openWebsite.find_all(name="a", attrs={'class': 'more keephash'}) == []: 
    break
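This works because `find_all` returns an empty list when no matching tag exists, which can be checked with `== []` (or just used as a falsy value). A minimal sketch exercising that check against two small HTML snippets, with no network involved (`has_next` is a hypothetical helper):

```python
from bs4 import BeautifulSoup

# A page that still has a "More" link, and a final page without one.
page_with_more = BeautifulSoup(
    '<div><a class="more keephash" href="/since/123">More</a></div>',
    'html.parser')
last_page = BeautifulSoup('<div>no pagination here</div>', 'html.parser')

def has_next(page):
    # find_all returns [] when nothing matches, so this is False on the last page
    return page.find_all(name="a", attrs={'class': 'more keephash'}) != []

print(has_next(page_with_more))  # True
print(has_next(last_page))       # False
```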