2017-07-01 36 views

Scrapy: not collecting data from all pages

Please help me understand what the error is. When going through the pages /?start=0, /?start=25, /?start=50, data is collected only from the last page (start=50). My code:

from scrapy import FormRequest
from scrapy import Request
import scrapy
from scrapy.spiders import CrawlSpider

from ..items import GetDomainsItem


def pages_range(start, step):
    stop = 50
    r = start
    while r <= stop:
        yield r
        r += step


class GetUrlDelDomSpider(CrawlSpider):
    name = 'get_domains'

    allowed_domains = ["member.expireddomains.net"]

    paginate = pages_range(0, 25)

    start_urls = list(map(lambda i: 'https://member.expireddomains.net/domains/expiredcom201612/?start=%s' % i, paginate))

    def start_requests(self):
        for start_url in self.start_urls:
            yield Request(start_url, dont_filter=True)

    def parse(self, response):
        yield FormRequest.from_response(response,
                                        formnumber=1,
                                        formdata={'login': 'xxx', 'password': '*****', 'rememberme': '1'},
                                        callback=self.parse_login,
                                        dont_filter=True)

    def parse_login(self, response):
        if b'The supplied login information are unknown.' not in response.body:
            item = GetDomainsItem()
            for each in response.selector.css('table.base1 tbody '):
                item['domain'] = each.xpath('tr/td[@class="field_domain"]/a/text()').extract()
                return item

Thanks for your help.

Answer


The return item in the parse_login method breaks out of the loop on its first iteration:

for each in response.selector.css('table.base1 tbody '):
    item['domain'] = each.xpath('tr/td[@class="field_domain"]/a/text()').extract()
    return item
    ^

So you should create a new item and yield it on each iteration of the loop:

for each in response.selector.css('table.base1 tbody '):
    item = GetDomainsItem()
    item['domain'] = each.xpath('tr/td[@class="field_domain"]/a/text()').extract()
    yield item
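To see why the original callback only ever produced one result per page, here is a minimal sketch in plain Python, with no Scrapy involved. The function names and the rows list are hypothetical stand-ins for the parsed table rows; the point is only the return-vs-yield behavior inside the loop:

```python
rows = ["domain-a.com", "domain-b.com", "domain-c.com"]  # stand-ins for table rows

def parse_with_return(rows):
    # Mirrors the original parse_login: return fires on the first
    # iteration, so the loop never reaches the remaining rows.
    results = []
    for each in rows:
        results.append(each)
        return results

def parse_with_yield(rows):
    # Mirrors the fixed version: the generator keeps producing values
    # until the rows are exhausted.
    for each in rows:
        yield each

print(parse_with_return(rows))       # only the first row
print(list(parse_with_yield(rows)))  # all rows
```

The same logic applies inside Scrapy: a callback that yields items hands every one of them to the pipeline, while a return ends the callback immediately.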