
Variables in Scrapy: can I use a variable in start_urls? Please see the script below.

This script works fine:

from scrapy.spider import Spider 
from scrapy.selector import Selector 
from example.items import ExampleItem 

class ExampleSpider(Spider): 
    name = "example" 
    allowed_domains = ["example.com"] 
    start_urls = [ 
        "http://www.example.com/search-keywords=['0750692995']", 
        "http://www.example.com/search-keywords=['0205343929']", 
        "http://www.example.com/search-keywords=['0874367379']", 
    ] 

    def parse(self, response): 
        hxs = Selector(response) 
        item = ExampleItem() 
        item['url'] = response.url 
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract() 
        item['title'] = hxs.select("//span[@class='title L']/text()").extract() 
        return item 

However, I would like something like this:

from scrapy.spider import Spider 
from scrapy.selector import Selector 
from example.items import ExampleItem 

class ExampleSpider(Spider): 
    name = "example" 
    allowed_domains = ["example.com"] 
    pro_id = ["0750692995", "0205343929", "0874367379"]  # ***(I added this line) 
    start_urls = [ 
        "http://www.example.com/search-keywords=['pro_id']",  # ***(and I changed this line) 
    ] 

    def parse(self, response): 
        hxs = Selector(response) 
        item = ExampleItem() 
        item['url'] = response.url 
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract() 
        item['title'] = hxs.select("//span[@class='title L']/text()").extract() 
        return item 

I want to run this script so that it pulls the pro_id numbers into start_urls one by one. Is there a way to do that? When I run the script, the URL is still literally "http://www.example.com/search-keywords=['pro_id']" instead of "http://www.example.com/search-keywords=0750692995". What should the script look like? Thanks for your help.
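(For reference, a name written inside a string literal is never substituted in Python; the URLs have to be built explicitly with string formatting. A minimal sketch using the question's own pro_id list:

# Names inside a string literal are not substituted, so build the URLs explicitly: 
pro_id = ["0750692995", "0205343929", "0874367379"] 
start_urls = ["http://www.example.com/search-keywords=%s" % p for p in pro_id] 
# -> "http://www.example.com/search-keywords=0750692995", ... 

The answers below show how to wire this into the spider properly.)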

EDIT: After making the changes suggested by @paul t., the following error occurs:

2014-03-02 08:39:44+0700 [example] ERROR: Obtaining request from start requests 
    Traceback (most recent call last): 
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1192, in run 
        self.mainLoop() 
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1201, in mainLoop 
        self.runUntilCurrent() 
      File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent 
        call.func(*call.args, **call.kw) 
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\utils\reactor.py", line 41, in __call__ 
        return self._func(*self._a, **self._kw) 
    --- <exception caught here> --- 
      File "C:\Python27\lib\site-packages\scrapy-0.22.2-py2.7.egg\scrapy\core\engine.py", line 111, in _next_request 
        request = next(slot.start_requests) 
      File "C:\Users\S\desktop\example\example\spiders\example_spider.py", line 13, in start_requests 
        yield Request(self.start_urls_base % pro_id, dont_filter=True) 
    exceptions.NameError: global name 'Request' is not defined 

Add 'from scrapy.http.request import Request' to fix the error that occurs after making @paul t.'s suggested changes. – Talvalin

Answers


One way to do this is to override the spider's start_requests() method:

from scrapy.http import Request  # needed for the Request class used below 

class ExampleSpider(Spider): 
    name = "example" 
    allowed_domains = ["example.com"] 
    pro_ids = ["0750692995", "0205343929", "0874367379"] 
    start_urls_base = "http://www.example.com/search-keywords=['%s']" 

    def start_requests(self): 
        # Yield one request per product ID, substituting it into the URL template 
        for pro_id in self.pro_ids: 
            yield Request(self.start_urls_base % pro_id, dont_filter=True) 

First, you have to import Request:

from scrapy.http import Request 

After that, you can follow @paul's suggestion:

def start_requests(self): 
    for pro_id in self.pro_ids: 
        yield Request(self.start_urls_base % pro_id, dont_filter=True) 
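For completeness, combining the import with the overridden method gives a full spider along these lines (a sketch, reusing the parse() callback and URL format from the question):

from scrapy.spider import Spider 
from scrapy.selector import Selector 
from scrapy.http import Request 
from example.items import ExampleItem 

class ExampleSpider(Spider): 
    name = "example" 
    allowed_domains = ["example.com"] 
    pro_ids = ["0750692995", "0205343929", "0874367379"] 
    start_urls_base = "http://www.example.com/search-keywords=['%s']" 

    def start_requests(self): 
        # One request per product ID; dont_filter=True keeps the 
        # duplicate filter from dropping any of them 
        for pro_id in self.pro_ids: 
            yield Request(self.start_urls_base % pro_id, dont_filter=True) 

    def parse(self, response): 
        hxs = Selector(response) 
        item = ExampleItem() 
        item['url'] = response.url 
        item['price'] = hxs.select("//li[@class='mpbold']/a/text()").extract() 
        item['title'] = hxs.select("//span[@class='title L']/text()").extract() 
        return item 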

I think you can solve it with a for loop, as sketched below:

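A minimal sketch of the for-loop idea, assuming the same pro_id list and URL pattern as above, builds start_urls at class definition time:

from scrapy.spider import Spider 

class ExampleSpider(Spider): 
    name = "example" 
    allowed_domains = ["example.com"] 
    pro_ids = ["0750692995", "0205343929", "0874367379"] 
    start_urls = [] 
    # Build start_urls up front with a plain for loop 
    for pro_id in pro_ids: 
        start_urls.append("http://www.example.com/search-keywords=['%s']" % pro_id) 
    # parse() stays the same as in the question 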