2016-08-04

I want to run Scrapy on a server-side terminal to do some crawling, but my Python isn't that good...

My code works fine when I run it from my laptop's terminal, but it throws an error when I run it on the server.

Here is my source code:

from scrapy.spider import BaseSpider 
from scrapy.selector import HtmlXPathSelector 
from thehack.items import NowItem 
import time 

class MySpider(BaseSpider): 
    name = "nowhere" 
    allowed_domains = ["n0where.net"] 
    start_urls = ["https://n0where.net/"] 

    def parse(self, response): 
        for article in response.css('.loop-panel'): 
            item = NowItem() 
            item['title'] = article.css('.article-title::text').extract_first() 
            item['link'] = article.css('.loop-panel>a::attr(href)').extract_first() 
            item['body'] = ''.join(article.css('.excerpt p::text').extract()).strip() 
            # date not used 
            #item['date'] = article.css('[itemprop="datePublished"]::attr(content)').extract_first() 
            yield item 
            time.sleep(5) 

The error output says:

ERROR: Spider error processing <GET https://n0where.net/> 
Traceback (most recent call last): 
    File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 824, in runUntilCurrent 
    call.func(*call.args, **call.kw) 
    File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 638, in _tick 
    taskObj._oneWorkUnit() 
    File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 484, in _oneWorkUnit 
    result = next(self._iterator) 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 57, in <genexpr> 
    work = (callable(elem, *args, **named) for elem in iterable) 
--- <exception caught here> --- 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 96, in iter_errback 
    yield next(it) 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/offsite.py", line 26, in process_spider_output 
    for x in result: 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/referer.py", line 22, in <genexpr> 
    return (_set_referer(r) for r in result or()) 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/urllength.py", line 33, in <genexpr> 
    return (r for r in result or() if _filter(r)) 
    File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spidermiddleware/depth.py", line 50, in <genexpr> 
    return (r for r in result or() if _filter(r)) 
    File "/home/admin/nowhere/thehack/spiders/thehack_spider.py", line 14, in parse 
    item['title'] = article.css('.article-title::text').extract_first() 
exceptions.AttributeError: 'SelectorList' object has no attribute 'extract_first' 

Does anyone know how to fix it, mate? Thanks a lot :)

Answer


It looks like your Scrapy version is outdated. The Selector method .extract_first() was only added in Scrapy 1.1, so you will want to upgrade the Scrapy package on the server.
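
If upgrading the server right away is not an option, a minimal fallback sketch for a pre-1.1 Scrapy (where SelectorList only exposes .extract()) is to take the first extracted value by hand; the helper name extract_first_compat below is made up for illustration:

def extract_first_compat(selector_list, default=None): 
    # Pre-1.1 Scrapy: SelectorList has no .extract_first(), so take the 
    # first extracted string manually and fall back to `default` if empty. 
    values = selector_list.extract() 
    return values[0] if values else default 

# usage inside parse(), e.g.: 
# item['title'] = extract_first_compat(article.css('.article-title::text')) 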


Mate, I tried _sudo pip install --upgrade scrapy_ and it ended with **Rolling back uninstall of lxml Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-WJUVpy/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-yn9nU9-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-WJUVpy/lxml/** Can you give me another suggestion, mate? – jethow
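
That lxml build failure usually points to missing development headers rather than to Scrapy itself; a hedged guess at the usual fix on Ubuntu is to install them first, e.g. 'sudo apt-get install libxml2-dev libxslt1-dev python-dev zlib1g-dev', and then retry 'sudo pip install --upgrade scrapy'.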


@jethow What distribution is your server running? On Ubuntu you can try 'apt install python-scrapy'; version 1.1 should be in Ubuntu's repositories. – Granitosaurus


I'm using Ubuntu 14.04.4, mate, and I tried it... it says _E: Unable to locate package Scrapy_ – jethow