
scrapy: passing an item through several parse callbacks while collecting data

This is my first attempt at populating a single item while passing it along from page to page.

Every loop runs, and the gender information also arrives correctly in parse_3, but g2 does not match the category of the response URL, and g1 (the first category level) is always the last element of the list from parse_sub...

I'm surely doing something wrong, but I can't find the problem. It would be great if someone could explain to me how this works.

Best, Jack

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

from myproject.items import catItem  # hypothetical import path for the item class


class xspider(BaseSpider):
    name = 'x'
    allowed_domains = ['x.com']
    start_urls = ['http://www.x.com']

    def parse(self, response):
        # Pick the men/women entries from the main navigation.
        hxs = HtmlXPathSelector(response)
        maincats = hxs.select('//ul[@class="Nav"]/li/a/@href').extract()[1:3]
        for maincat in maincats:
            item = catItem()
            if 'men' in maincat:
                item['gender'] = 'men'
                maincat = 'http://www.x.com' + maincat
                request = Request(maincat, callback=self.parse_sub)
                request.meta['item'] = item
            if 'woman' in maincat:
                item['gender'] = 'woman'
                maincat = 'http://www.x.com' + maincat
                request = Request(maincat, callback=self.parse_sub)
                request.meta['item'] = item
            yield request

    def parse_sub(self, response):
        # First category level: store the link text in g1.
        i = 0
        hxs = HtmlXPathSelector(response)
        subcats = hxs.select('//ul[@class="sub Sprite"]/li/a/@href').extract()[0:5]
        text = hxs.select('//ul[@class="sub Sprite"]/li/a/span/text()').extract()[0:5]
        for item in text:
            item = response.meta['item']
            subcat = 'http://www.x.com' + subcats[i]
            request = Request(subcat, callback=self.parse_subcat)
            item['g1'] = text[i]
            item['gender'] = response.request.meta['item']
            i = i + 1
            request.meta['item'] = item
            yield request

    def parse_subcat(self, response):
        # Second category level: store the active category name in g2.
        hxs = HtmlXPathSelector(response)
        test = hxs.select('//ul[@class="sub"]/li/a').extract()
        for s in test:
            item = response.meta['item']
            item['g2'] = hxs.select('//span[@class="Active Sprite"]/text()').extract()[0]
            # Pull the href out of the raw <a> markup by string slicing.
            s = s.encode('utf-8', 'ignore')
            link = s[s.find('href="')+6:][:s[s.find('href="')+6:].find('/"')]
            link = 'http://www.x.com/' + str(link) + '/'
            request = Request(link, callback=self.parse_3)
            request.meta['item'] = item
            yield request

    def parse_3(self, response):
        item = response.meta['item']
        print item

It would be helpful to know which domain you are actually crawling. Are you really crawling x.com? – 2013-02-10 18:38:59


No :), but that shouldn't matter. The problem seems to be that the last element of the URL list wins and overwrites the information from before :( – 2013-02-10 23:16:54
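
That symptom is typical of shared mutable state: every queued request holds a reference to the same item object, so the last assignment wins. A tiny plain-Python illustration of the effect (no Scrapy involved, values are made up):

item = {'g1': None}
queued = []
for g1 in ['a', 'b', 'c']:
    item['g1'] = g1       # mutates the one shared dict
    queued.append(item)   # every entry points at the same object
print(queued)             # [{'g1': 'c'}, {'g1': 'c'}, {'g1': 'c'}]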


Try 'yield item' instead of 'print item' and see; when you run scrapy, the yielded items are printed on screen – user2134226 2013-02-11 01:33:39
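
For reference, that suggestion amounts to replacing the body of parse_3 (a minimal sketch; yielded items go through Scrapy's item pipeline and show up in the log, while print only writes to stdout):

    def parse_3(self, response):
        item = response.meta['item']
        yield item  # scrapy logs and collects yielded items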

Answer

def parse_subcat(self, response):
    hxs = HtmlXPathSelector(response)
    test = hxs.select('//ul[@class="sub"]/li/a').extract()
    for s in test:
        item = response.request.meta['item']  # changed: read the item from the request's meta
        item['g2'] = hxs.select('//span[@class="Active Sprite"]/text()').extract()[0]
        s = s.encode('utf-8', 'ignore')
        link = s[s.find('href="')+6:][:s[s.find('href="')+6:].find('/"')]
        link = 'http://www.x.com/' + str(link) + '/'
        request = Request(link, callback=self.parse_3)
        request.meta['item'] = item
        yield request

The response does not carry the meta itself, but the request does, so instead of item = response.meta['item'] it should be item = response.request.meta['item'].
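
To see the whole pattern in one place, here is a minimal self-contained sketch of handing an item from one callback to the next (spider name, item fields, and URLs are made up; in later Scrapy versions response.meta is a shortcut for response.request.meta). It also gives every request its own item instance, which avoids the "last element overwrites everything" effect described in the question:

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.item import Item, Field


class DemoItem(Item):
    gender = Field()
    g1 = Field()


class demospider(BaseSpider):
    name = 'demo'
    allowed_domains = ['x.com']
    start_urls = ['http://www.x.com']

    def parse(self, response):
        for path in ['/men/', '/women/']:
            # One fresh item per request, so later loop
            # iterations cannot overwrite earlier ones.
            item = DemoItem()
            item['gender'] = path.strip('/')
            request = Request('http://www.x.com' + path,
                              callback=self.parse_sub)
            request.meta['item'] = item
            yield request

    def parse_sub(self, response):
        # The same item that was attached to the request above.
        item = response.request.meta['item']
        item['g1'] = response.url
        yield item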
