Scrapy `ReactorNotRestartable`: one class to run two (or more) spiders

I am using a two-stage crawl to aggregate daily data with Scrapy. The first stage generates a list of URLs from an index page, and the second stage writes the HTML of each URL in that list to a Kafka topic.

[Diagram: Kafka cluster for the Scrapy crawler]
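For context, the Kafka hand-off in the second stage could look roughly like the item pipeline sketched below. This is only an illustration: it assumes the kafka-python client, a broker on localhost:9092, a hypothetical "somesite_html" topic, and an item field named "html", none of which appear in the question.

from kafka import KafkaProducer

class KafkaHtmlPipeline(object):
    """Hypothetical Scrapy pipeline: publish each crawled page's HTML to a Kafka topic."""

    def open_spider(self, spider):
        # assumed broker address; in practice this would come from settings
        self.producer = KafkaProducer(bootstrap_servers="localhost:9092")

    def process_item(self, item, spider):
        # item['html'] is assumed to hold the raw page body
        self.producer.send("somesite_html", item["html"].encode("utf-8"))
        return item

    def close_spider(self, spider):
        self.producer.flush()
        self.producer.close()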

Although the two components of the crawl are related, I want them to be independent: the url_generator will run as a scheduled task once a day, and the page_requester will run continuously, processing URLs as they become available. To be "polite", I will tune DOWNLOAD_DELAY so that the crawler finishes within 24 hours while putting minimal load on the site.
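For illustration only, the politeness tuning might live in scrapy_somesite/settings.py; the numbers below are hypothetical and would be adjusted so the daily batch of URLs finishes in roughly 24 hours:

DOWNLOAD_DELAY = 2                   # seconds between requests (illustrative value)
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # fetch one page at a time from the site
RANDOMIZE_DOWNLOAD_DELAY = True      # add jitter so requests are less regular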

I created a CrawlerRunner class with functions to generate the URLs and to retrieve the HTML:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy_somesite.spiders.create_urls_spider import CreateSomeSiteUrlList
from scrapy_somesite.spiders.crawl_urls_spider import SomeSiteRetrievePages
from scrapy.utils.project import get_project_settings
import os
import sys

class CrawlerRunner:

    def __init__(self):
        sys.path.append(os.path.join(os.path.curdir, "crawl/somesite"))
        os.environ['SCRAPY_SETTINGS_MODULE'] = 'scrapy_somesite.settings'
        self.settings = get_project_settings()
        log.start()

    def create_urls(self):
        spider = CreateSomeSiteUrlList()
        crawler_create_urls = Crawler(self.settings)
        crawler_create_urls.signals.connect(reactor.stop, signal=signals.spider_closed)
        crawler_create_urls.configure()
        crawler_create_urls.crawl(spider)
        crawler_create_urls.start()
        reactor.run()

    def crawl_urls(self):
        spider = SomeSiteRetrievePages()
        crawler_crawl_urls = Crawler(self.settings)
        crawler_crawl_urls.signals.connect(reactor.stop, signal=signals.spider_closed)
        crawler_crawl_urls.configure()
        crawler_crawl_urls.crawl(spider)
        crawler_crawl_urls.start()
        reactor.run()

When I instantiate the class, I can successfully execute either function on its own, but unfortunately I cannot execute them together:

from crawl.somesite import crawler_runner 

cr = crawler_runner.CrawlerRunner() 

cr.create_urls() 
cr.crawl_urls() 

The second function call raises twisted.internet.error.ReactorNotRestartable when it tries to execute reactor.run() in the crawl_urls function.
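The limitation comes from Twisted itself rather than from Scrapy; as a minimal sketch (independent of the crawler code above), a reactor that has been stopped cannot be started again:

from twisted.internet import reactor, error

reactor.callWhenRunning(reactor.stop)   # stop as soon as the reactor starts
reactor.run()                           # first run starts and stops cleanly

try:
    reactor.run()                       # attempting a second run fails
except error.ReactorNotRestartable:
    print("the reactor cannot be restarted")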

I would like to know whether there is a simple fix for this code (for example, some way to run two separate Twisted reactors), or whether there is a better way to structure the project.

Answer


You can run multiple spiders inside a single reactor by keeping the reactor open until all spiders have stopped running. This is achieved by keeping a list of all running spiders and not executing reactor.stop() until that list is empty:

import sys
import os
from scrapy.utils.project import get_project_settings
from scrapy_somesite.spiders.create_urls_spider import Spider1
from scrapy_somesite.spiders.crawl_urls_spider import Spider2

from scrapy import signals, log
from twisted.internet import reactor
from scrapy.crawler import Crawler

class CrawlRunner:

    def __init__(self):
        self.running_crawlers = []

    def spider_closing(self, spider):
        log.msg("Spider closed: %s" % spider, level=log.INFO)
        self.running_crawlers.remove(spider)
        # stop the reactor only once the last spider has closed
        if not self.running_crawlers:
            reactor.stop()

    def run(self):
        sys.path.append(os.path.join(os.path.curdir, "crawl/somesite"))
        os.environ['SCRAPY_SETTINGS_MODULE'] = 'scrapy_somesite.settings'
        settings = get_project_settings()
        log.start(loglevel=log.DEBUG)

        to_crawl = [Spider1, Spider2]

        # start all spiders under the same reactor
        for spider in to_crawl:
            crawler = Crawler(settings)
            crawler_obj = spider()
            self.running_crawlers.append(crawler_obj)

            crawler.signals.connect(self.spider_closing, signal=signals.spider_closed)
            crawler.configure()
            crawler.crawl(crawler_obj)
            crawler.start()

        reactor.run()

The class is invoked with:

from crawl.somesite.crawl import CrawlRunner 

cr = CrawlRunner() 
cr.run() 

This solution is based on a blogpost by Kiran Koduru.
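As a side note, this code is written against the older Crawler API (Crawler(settings) plus configure()). In Scrapy 1.0 and later, the same multi-spider pattern is handled by scrapy.crawler.CrawlerProcess, which manages the reactor itself; a rough sketch, assuming the same two spider classes:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy_somesite.spiders.create_urls_spider import Spider1
from scrapy_somesite.spiders.crawl_urls_spider import Spider2

process = CrawlerProcess(get_project_settings())
process.crawl(Spider1)   # schedule both spiders in the same process
process.crawl(Spider2)
process.start()          # starts the reactor and blocks until both crawls finish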


Is there a way to add crawlers to the reactor while it is running? How would that work, given that reactor.run() blocks?


Thanks for the credit :)