Initializing a CrawlSpider in Scrapy

2012-08-30

I wrote a spider in Scrapy that basically works fine and does exactly what it is supposed to do. The problem is that I need to make a few small changes to it, and I have tried several approaches without success (for example, modifying InitSpider). Here is what the script should do now:

  • Crawl the start URL http://www.example.de/index/search?method=simple
  • Then request the URL http://www.example.de/index/search?filter=homepage
  • From there, crawl following the patterns defined in the rules

So basically, all that needs to change is that one URL gets requested in between. I would rather not rewrite the whole thing with BaseSpider, so I am hoping someone has an idea of how to achieve this :)
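
One approach that might avoid a full rewrite is to override CrawlSpider's parse_start_url hook, which is called for the responses to the start URLs: from there you can yield a Request for the filter URL without an explicit callback, so that CrawlSpider's own parse method (and therefore the rules) handles the filter page. Below is a minimal sketch of that idea; FilteredExampleSpider and filter_url are names introduced here for illustration, and the rules are abbreviated from the full spider further down.

#!/usr/bin/python
# -*- coding: utf-8 -*-

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request

class FilteredExampleSpider(CrawlSpider):
    name = "example_filtered"
    allowed_domains = ["example.de"]
    start_urls = ["http://www.example.de/index/search?method=simple"]
    filter_url = "http://www.example.de/index/search?filter=homepage"

    rules = (
        Rule(SgmlLinkExtractor(allow=('\/index\/search\?page=\d*$',)), callback='parse_item', follow=True),
    )

    def parse_start_url(self, response):
        # Called for the response to the start URL. Yielding a Request without
        # a callback sends it back through CrawlSpider.parse, so the rules are
        # applied to the filter page (duplicate requests are filtered out).
        yield Request(self.filter_url)

    def parse_item(self, response):
        # item extraction as in the original spider below
        pass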

If you need any further information, please let me know. You can find the current script below.

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor 
from scrapy.selector import HtmlXPathSelector 
from scrapy.http import Request 
from example.items import ExampleItem 
from scrapy.contrib.loader.processor import TakeFirst 
import re 
import urllib 

take_first = TakeFirst() 

class ExampleSpider(CrawlSpider): 
    name = "example" 
    allowed_domains = ["example.de"] 

    start_url = "http://www.example.de/index/search?method=simple" 
    start_urls = [start_url] 

    rules = (
        # http://www.example.de/index/search?page=2
        # http://www.example.de/index/search?page=1&tab=direct
        Rule(SgmlLinkExtractor(allow=('\/index\/search\?page=\d*$',)), callback='parse_item', follow=True),
        Rule(SgmlLinkExtractor(allow=('\/index\/search\?page=\d*&tab=direct',)), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch all company entries
        companies = hxs.select("//ul[contains(@class, 'directresults')]/li[contains(@id, 'entry')]")
        items = []

        for company in companies:
            item = ExampleItem()
            item['name'] = take_first(company.select(".//span[@class='fn']/text()").extract())
            item['address'] = company.select(".//p[@class='data track']/text()").extract()
            item['website'] = take_first(company.select(".//p[@class='customurl track']/a/@href").extract())

            # we try to fetch the number directly from the page (only works for premium entries)
            item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/text()").extract())

            if not item['telephone']:
                # if we cannot fetch the number it has been encoded on the client and hidden in the rel=""
                item['telephone'] = take_first(company.select(".//p[@class='numericdata track']/a/@rel").extract())

            items.append(item)
        return items

EDIT

Here is my attempt with InitSpider: https://gist.github.com/150b30eaa97e0518673a I got the idea from this question: Crawling with an authenticated session in Scrapy

As you can see, it still inherits from CrawlSpider, but I had to make some changes to Scrapy's core files (not my favourite approach): I made CrawlSpider inherit from InitSpider instead of BaseSpider (source).

So far this approach works, but the spider just stops after the first page instead of picking up all the others.

Besides, this approach seems completely unnecessary to me :)
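
For reference, the plain InitSpider pattern from the linked answer looks roughly like the sketch below when used on its own, i.e. without CrawlSpider rules; the class name and the post_init callback are placeholders. Because InitSpider does not apply rules, pagination would still have to be followed manually in parse, which is part of why this route felt unnecessary here.

from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request

class InitExampleSpider(InitSpider):
    name = "example_init"
    allowed_domains = ["example.de"]
    start_urls = ["http://www.example.de/index/search?filter=homepage"]

    def init_request(self):
        # Issued before any of the start_urls are requested.
        return Request(url="http://www.example.de/index/search?method=simple",
                       callback=self.post_init)

    def post_init(self, response):
        # Signal that initialization is finished; normal crawling then begins.
        return self.initialized()

    def parse(self, response):
        # extract items and follow pagination manually here
        pass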

Answer


OK, I found the solution myself, and it is actually much simpler than I initially thought :)

Here is the simplified script:

#!/usr/bin/python 
# -*- coding: utf-8 -*- 

from scrapy.spider import BaseSpider 
from scrapy.http import Request 
from scrapy import log 
from scrapy.selector import HtmlXPathSelector 
from example.items import ExampleItem 
from scrapy.contrib.loader.processor import TakeFirst 
import re 
import urllib 

take_first = TakeFirst() 

class ExampleSpider(BaseSpider): 
    name = "ExampleNew" 
    allowed_domains = ["www.example.de"] 

    start_page = "http://www.example.de/index/search?method=simple" 
    direct_page = "http://www.example.de/index/search?page=1&tab=direct" 
    filter_page = "http://www.example.de/index/search?filter=homepage" 

    def start_requests(self):
        """This function is called before crawling starts."""
        return [Request(url=self.start_page, callback=self.request_direct_tab)]

    def request_direct_tab(self, response):
        return [Request(url=self.direct_page, callback=self.request_filter)]

    def request_filter(self, response):
        return [Request(url=self.filter_page, callback=self.parse_item)]

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # fetch the items you need and yield them like this:
        # yield item

        # fetch the next pages to scrape
        for url in hxs.select("//div[@class='limiter']/a/@href").extract():
            absolute_url = "http://www.example.de" + url
            yield Request(absolute_url, callback=self.parse_item)

As you can see, I am now using BaseSpider and simply yield new Requests at the end. At the beginning, I just work through all the different requests that have to be made before the actual crawling can start.
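
If it helps, the item extraction from the original CrawlSpider can be dropped straight into the placeholder in parse_item, reusing ExampleItem and take_first from the imports above; a sketch:

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)

        # extract company entries exactly as in the original spider
        for company in hxs.select("//ul[contains(@class, 'directresults')]/li[contains(@id, 'entry')]"):
            item = ExampleItem()
            item['name'] = take_first(company.select(".//span[@class='fn']/text()").extract())
            item['address'] = company.select(".//p[@class='data track']/text()").extract()
            item['website'] = take_first(company.select(".//p[@class='customurl track']/a/@href").extract())
            yield item

        # fetch the next pages to scrape
        for url in hxs.select("//div[@class='limiter']/a/@href").extract():
            yield Request("http://www.example.de" + url, callback=self.parse_item)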

I hope this helps someone :) If you have any questions, I will be happy to answer them.