2015-06-19 39 views

Limit how many elements Scrapy can collect

I am using Scrapy to collect some data. My Scrapy program collects 100 elements in one session. I need to limit this to 50, or to some arbitrary number. How can I do that? Any solution is welcome. Thanks in advance.

# -*- coding: utf-8 -*-
import re

import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["raleigh.craigslist.org"]
    start_urls = [
        "http://raleigh.craigslist.org/search/bab"
    ]

    BASE_URL = 'http://raleigh.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/ral/bab/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            # extract()[0] is already a string, so the "".join() wrapper is redundant
            item["tag"] = response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0]
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

Answers


This is exactly what the CloseSpider extension and its CLOSESPIDER_ITEMCOUNT setting are made for:

An integer which specifies a number of items. If the spider scrapes more than that amount and those items are passed by the item pipeline, the spider will be closed with the reason closespider_itemcount. If zero (or not set), spiders won't be closed by the number of passed items.
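A minimal sketch of how the setting could be applied to the spider above, assuming the stock CloseSpider extension is enabled (it is by default, and activates whenever a CLOSESPIDER_* setting is non-zero):

```python
# settings.py -- sketch: close the spider once roughly 50 items have
# passed through the item pipeline. The count is checked per item, so a
# few requests already in flight may still finish before shutdown.
CLOSESPIDER_ITEMCOUNT = 50
```

The same value can also be supplied ad hoc on the command line, e.g. `scrapy crawl dmoz -s CLOSESPIDER_ITEMCOUNT=50`, which makes it easy to vary the limit per run without editing the project settings.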
