2015-06-19

Scrapy not collecting data

I am using Scrapy to collect some email addresses from Craigslist. When I run it, it returns blank rows in the .csv file. I can extract the title, tag, and link; only the email is the problem. Here is the code:
# -*- coding: utf-8 -*-
import re
import scrapy
from scrapy.http import Request


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://raleigh.craigslist.org/bab/5038434567.html"
    ]

    BASE_URL = 'http://raleigh.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0])
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

The default callback for 'start_urls' is 'parse()', not 'parse_contact()'. Also, the URL defined in 'start_urls' contains no email, so your xpath does not match anything. Have you read the [Scrapy tutorial](http://doc.scrapy.org/en/latest/intro/tutorial.html)? These things are explained there. – bosnjak


This code has worked for me until now, but it seems something changed on Craigslist in the last two days. Could you add working code? Thanks in advance –


@ArkanKalu You need to provide the complete code of your spider. – alecxe

Answer


First of all, you meant to have start_urls pointing to the listing page: http://raleigh.craigslist.org/search/bab

Also, as far as I understand, the extra request to get an email should go to reply/ral/bab/ instead of reply/nos/vgm/.
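As a side note, since the hdrlnk hrefs are site-relative, concatenating them onto BASE_URL (which ends in a slash) can produce a double slash. A minimal sketch of joining them safely with the standard library's urljoin instead, using the posting id from the question (the href value itself is an assumption for illustration):

```python
from urllib.parse import urljoin

BASE_URL = "http://raleigh.craigslist.org/"

# A site-relative href, as the //a[@class="hdrlnk"]/@href xpath might return it:
href = "/bab/5038434567.html"

# Naive concatenation doubles the slash:
print(BASE_URL + href)          # http://raleigh.craigslist.org//bab/5038434567.html

# urljoin resolves the path correctly:
print(urljoin(BASE_URL, href))  # http://raleigh.craigslist.org/bab/5038434567.html
```

The same call also handles hrefs without a leading slash, so it works for the reply/ral/bab/ URLs too.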

Also, you would get an error on the following line if there is no attr group:

item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0]) 

Replace it with:

item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()) 
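A quick illustration of why: xpath(...).extract() returns a plain Python list, which is empty when nothing matches, so indexing it with [0] raises IndexError, while "".join() simply yields an empty string (the tag text below is a hypothetical example):

```python
# What extract() returns when the posting has no attribute group:
no_tags = []
# ...and when it has one (hypothetical tag text):
one_tag = ["size: newborn"]

# "".join() tolerates the empty result:
assert "".join(no_tags) == ""
assert "".join(one_tag) == "size: newborn"

# Indexing the empty result is what crashed the original line:
try:
    no_tags[0]
except IndexError:
    print("IndexError on postings without an attr group")
```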

The complete code that works for me:

# -*- coding: utf-8 -*-
import re
import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["raleigh.craigslist.org"]
    start_urls = [
        "http://raleigh.craigslist.org/search/bab"
    ]

    BASE_URL = 'http://raleigh.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/ral/bab/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract())
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

Thanks, the code works fine! How can I limit scrapy to extracting 50 rows? –


@ArkanKalu You are welcome. Please don't raise new problems in the comments; consider creating a separate question if you get stuck. Thank you. – alecxe
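For the limiting question in the comment above: Scrapy's CloseSpider extension can stop a crawl after a fixed number of scraped items via the CLOSESPIDER_ITEMCOUNT setting. A sketch of the per-spider form (a config fragment, not a tested crawl; note the count may slightly overshoot because in-flight requests are still processed):

```python
class DmozSpider(scrapy.Spider):
    name = "dmoz"
    # Close the spider after roughly 50 scraped items (CloseSpider extension);
    # equivalently on the command line:
    #   scrapy crawl dmoz -s CLOSESPIDER_ITEMCOUNT=50 -o items.csv
    custom_settings = {"CLOSESPIDER_ITEMCOUNT": 50}
```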