2013-07-31

I am trying to deploy a crawler with four spiders. One spider uses XMLFeedSpider and runs fine both from the shell and from scrapyd, but the others use BaseSpider and, while they run fine from the shell, they all give this error when run under scrapyd:

TypeError: __init__() got an unexpected keyword argument '_job'

From what I have read, this points to a problem with the __init__ function in my spiders, but I can't seem to solve it. I don't need an __init__ function, and if I remove it completely I still get the error!
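For context, when scrapyd starts a crawl it passes an extra `_job` keyword argument to the spider, so any spider whose `__init__` has a fixed signature fails before the crawl even begins. A minimal sketch of the mechanism, using plain classes as stand-ins rather than real Scrapy spiders:

```python
class RigidSpider(object):
    """Mimics a spider whose __init__ has a fixed signature."""
    def __init__(self, domain_name):
        self.domain_name = domain_name


class FlexibleSpider(object):
    """Mimics a spider that tolerates extra keyword arguments."""
    def __init__(self, domain_name, **kwargs):
        self.domain_name = domain_name


# scrapyd instantiates the spider with an extra _job keyword argument
try:
    RigidSpider(domain_name='myhome.com', _job='some-job-id')
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument '_job'

# This one accepts the extra argument without complaint
FlexibleSpider(domain_name='myhome.com', _job='some-job-id')
```

The shell never passes `_job`, which is why the same spiders run fine outside scrapyd.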

My spider looks like this:

from scrapy import log
from scrapy.spider import BaseSpider
from scrapy.selector import XmlXPathSelector
from betfeeds_master.items import Odds

# Parameters
MYGLOBAL = 39


class homeSpider(BaseSpider):
    name = "home"
    #con = None

    allowed_domains = ["www.myhome.com"]
    start_urls = [
        "http://www.myhome.com/oddxml.aspx?lang=en&subscriber=mysubscriber",
    ]

    def parse(self, response):
        items = []
        traceCompetition = ""

        xxs = XmlXPathSelector(response)
        oddsobjects = xxs.select("//OO[OddsType='3W' and Sport='Football']")
        for oddsobject in oddsobjects:
            item = Odds()
            item['competition'] = ''.join(oddsobject.select('Tournament/text()').extract())
            if traceCompetition != item['competition']:
                log.msg('Processing %s' % (item['competition']))
                traceCompetition = item['competition']
            item['matchDate'] = ''.join(oddsobject.select('Date/text()').extract())
            item['homeTeam'] = ''.join(oddsobject.select('OddsData/HomeTeam/text()').extract())
            item['awayTeam'] = ''.join(oddsobject.select('OddsData/AwayTeam/text()').extract())
            item['lastUpdated'] = ''
            item['bookie'] = MYGLOBAL
            item['home'] = ''.join(oddsobject.select('OddsData/HomeOdds/text()').extract())
            item['draw'] = ''.join(oddsobject.select('OddsData/DrawOdds/text()').extract())
            item['away'] = ''.join(oddsobject.select('OddsData/AwayOdds/text()').extract())

            items.append(item)

        return items

I can put a bare __init__ function into the spider, but I get exactly the same error:

def __init__(self, *args, **kwargs): 
    super(homeSpider, self).__init__(*args, **kwargs) 
    pass 

Why is this happening, and how can I solve it?


Did you define an `__init__` method in your other spiders? The problem is probably that you are not accepting `**kwargs`. – alecxe


`XMLFeedSpider` doesn't override `BaseSpider`, so I don't see why those spiders would trigger this error (https://github.com/scrapy/scrapy/blob/master/scrapy/contrib/spiders/feed.py). Could you post a more complete stack trace? –

Answer


The right answer was given by alecx:

My init function was:

def __init__(self, domain_name): 

In order to work inside a scrapyd egg, it should be:

def __init__(self, domain_name, **kwargs): 

given that you pass domain_name as a mandatory argument.
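The general pattern is to accept `**kwargs` and forward them to the base class, which in Scrapy copies the spider arguments onto the instance. A sketch using a stand-in base class so it runs without Scrapy installed (in the real spider the base is `BaseSpider`):

```python
class BaseSpiderStandIn(object):
    """Stand-in for Scrapy's BaseSpider, which stores arbitrary spider
    arguments on the instance."""
    def __init__(self, name=None, **kwargs):
        # Scrapy's spider base class does the same dict update with kwargs
        self.__dict__.update(kwargs)


class HomeSpider(BaseSpiderStandIn):
    name = "home"

    def __init__(self, domain_name=None, **kwargs):
        # Forward unexpected kwargs (such as scrapyd's _job) to the base class
        super(HomeSpider, self).__init__(**kwargs)
        self.domain_name = domain_name


# scrapyd can now instantiate the spider without a TypeError
spider = HomeSpider(domain_name='myhome.com', _job='some-job-id')
```

Because `**kwargs` swallows `_job` and hands it to the base class, the spider no longer needs to know which extra arguments scrapyd (or anything else) might pass.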


Thanks! This solved my problem. – Pullie