Scrapy request.meta not updated correctly

I am trying to record the path of crawled links in the request's meta attribute:

import scrapy
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["www.iana.org"]
    start_urls = ['http://www.iana.org/']
    request_path_css = dict(
        main_menu=r'#home-panel-domains > h2',
        domain_names=r'#main_right > p',
    )

    def links(self, response, restrict_css=None):
        lex = LinkExtractor(
            allow_domains=self.allowed_domains,
            restrict_css=restrict_css)
        return lex.extract_links(response)

    def requests(self, response, css, cb, append=True):
        links = [link for link in self.links(response, css)]
        for link in links:
            request = scrapy.Request(
                url=link.url,
                callback=cb)
            if append:
                request.meta['req_path'] = response.meta['req_path']
                request.meta['req_path'].append(dict(txt=link.text, url=link.url))
            else:
                request.meta['req_path'] = [dict(txt=link.text, url=link.url)]
            yield request

    def parse(self, response):
        #self.logger.warn('## Request path: %s', response.meta['req_path'])
        css = self.request_path_css['main_menu']
        return self.requests(response, css, self.domain_names, False)

    def domain_names(self, response):
        #self.logger.warn('## Request path: %s', response.meta['req_path'])
        css = self.request_path_css['domain_names']
        return self.requests(response, css, self.domain_names_parser)

    def domain_names_parser(self, response):
        self.logger.warn('## Request path: %s', response.meta['req_path'])

Output:

$ scrapy crawl -L WARN example 
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 
2017-02-13 11:06:38 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 

This is not what I expect, as I want only the last url in response.meta['req_path'][1], yet all the urls from the last page somehow find their way into the list.

In other words, the expected output would be, for example:

[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}] 
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}] 
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}] 
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}] 
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}] 
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}] 

Answer

After your second request, when you are parsing http://www.iana.org/domains and calling self.requests() with append=True (since that is the default), this line:

request.meta['req_path'] = response.meta['req_path'] 

does not copy the list. Instead, it stores a reference to the original list. You then append to it (the original list!) with the next line:

request.meta['req_path'].append(dict(txt=link.text, url=link.url)) 

On the next loop iteration you get a reference to the very same original list again (which by now already has two items), append to it again, and so on.
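The underlying issue is ordinary Python list aliasing and can be reproduced without Scrapy; the variable names below are illustrative only:

shared = [{'txt': 'Domain Names', 'url': 'http://www.iana.org/domains'}]

# Both "requests" receive a reference to the same list object,
# not independent copies of it
meta_a = shared
meta_b = shared

meta_a.append({'txt': 'The DNS Root Zone', 'url': 'http://www.iana.org/domains/root'})
meta_b.append({'txt': '.INT', 'url': 'http://www.iana.org/domains/int'})

print(meta_a is meta_b)  # True -- there is only one list
print(len(shared))       # 3  -- every append landed in it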

What you want to do is create a new list for each request. You can do that by adding .copy() to the first line, e.g.:

request.meta['req_path'] = response.meta['req_path'].copy()

or you can save a line by building the new list in one step:

request.meta['req_path'] = response.meta['req_path'] + [dict(txt=link.text, url=link.url)]
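Applied to the spider above, requests() would then look like this (a minimal sketch using the one-line variant; behavior is otherwise unchanged):

def requests(self, response, css, cb, append=True):
    for link in self.links(response, css):
        request = scrapy.Request(url=link.url, callback=cb)
        step = dict(txt=link.text, url=link.url)
        if append:
            # Build a fresh list per request instead of mutating the shared one
            request.meta['req_path'] = response.meta['req_path'] + [step]
        else:
            request.meta['req_path'] = [step]
        yield request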