2014-09-05

Scrapy - get a spider variable inside a downloader middleware's __init__

I am working on a Scrapy project in which I wrote a DOWNLOADER MIDDLEWARE to avoid making requests to URLs that are already in the database.

DOWNLOADER_MIDDLEWARES = { 
    'imobotS.utilities.RandomUserAgentMiddleware': 400, 
    'imobotS.utilities.DupFilterMiddleware': 500, 
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None, 
} 

The idea is to connect in __init__ and load a list of all the URLs currently stored in the database, and to raise IgnoreRequest if the scraped item is already in the DB.

import pymongo

from scrapy.exceptions import IgnoreRequest


class DuplicateFilterMiddleware(object): 

    def __init__(self): 
        connection = pymongo.Connection('localhost', 12345) 
        self.db = connection['my_db'] 
        self.db.authenticate('scott', '*****') 

        self.url_set = self.db.ad.find({'site': 'WEBSITE_NAME'}).distinct('url') 

    def process_request(self, request, spider): 
        print "%s - process Request URL: %s" % (spider._site_name, request.url) 
        if request.url in self.url_set: 
            raise IgnoreRequest("Duplicate --db-- item found: %s" % request.url) 
        else: 
            return None 

So, since I want to restrict the URL list loaded at initialization to the current WEBSITE_NAME, is there a way to identify the current spider's name inside the downloader middleware's __init__ method?

Answer


You can move the URL-set retrieval into process_request and check whether you have already fetched it before querying again.

import pymongo

from scrapy.exceptions import IgnoreRequest


class DuplicateFilterMiddleware(object): 

    def __init__(self): 
        connection = pymongo.Connection('localhost', 12345) 
        self.db = connection['my_db'] 
        self.db.authenticate('scott', '*****') 

        self.url_sets = {} 

    def process_request(self, request, spider): 
        if not self.url_sets.get(spider._site_name): 
            self.url_sets[spider._site_name] = self.db.ad.find({'site': spider._site_name}).distinct('url') 

        print "%s - process Request URL: %s" % (spider._site_name, request.url) 
        if request.url in self.url_sets[spider._site_name]: 
            raise IgnoreRequest("Duplicate --db-- item found: %s" % request.url) 
        else: 
            return None
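
The answer's lazy, per-spider lookup boils down to caching one set of known URLs per key and loading each set on first use. A minimal Scrapy-free sketch of that pattern (the `PerSiteUrlCache` class and the `loader` callable are hypothetical, standing in for the MongoDB query above):

```python
class PerSiteUrlCache(object):
    """Cache one set of known URLs per site name, loading each set only once."""

    def __init__(self, loader):
        # loader is any callable mapping a site name to an iterable of URLs,
        # e.g. the distinct() query used in the middleware above.
        self.loader = loader
        self.url_sets = {}

    def is_duplicate(self, site_name, url):
        if site_name not in self.url_sets:
            # Store as a set: membership tests are O(1), whereas the list
            # returned by distinct() makes `url in ...` a linear scan.
            self.url_sets[site_name] = set(self.loader(site_name))
        return url in self.url_sets[site_name]
```

As a side note, in later Scrapy versions the classmethod from_crawler(cls, crawler) is the idiomatic place for a middleware to receive the crawler, and connecting a handler to the spider_opened signal hands it the actual spider object; that is another way to defer the per-site query until the spider name is known.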