Confusion about running Scrapy from within a Python script: I can run Scrapy from a Python script, but I cannot get the scraped results back.

Here is my spider:
```python
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from items import DmozItem

class DmozSpider(BaseSpider):
    name = "douban"
    allowed_domains = ["example.com"]
    start_urls = [
        "http://www.example.com/group/xxx/discussion"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        rows = hxs.select("//table[@class='olt']/tr/td[@class='title']/a")
        items = []
        # print sites
        for row in rows:
            item = DmozItem()
            item["title"] = row.select('text()').extract()[0]
            item["link"] = row.select('@href').extract()[0]
            items.append(item)
        return items
```
Note the last line: I try to use the parse results returned there. If I run:
```
scrapy crawl douban
```
the terminal prints the returned results. But I cannot get the returned results from my Python script. Here is the script:
```python
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from spiders.dmoz_spider import DmozSpider
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = DmozSpider(domain='www.douban.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg("------------>Running reactor")
result = reactor.run()
print result
log.msg("------------>Running stoped")
I tried to get the result from `reactor.run()`, but it returns nothing. How can I get the results?
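The underlying issue is that `reactor.run()` does not return the scraped items: it just starts the Twisted event loop and returns `None` once the loop stops. Items therefore have to be collected as they are emitted, for example by connecting a handler to Scrapy's `signals.item_scraped` signal before starting the reactor. Below is a minimal, standard-library-only sketch of that collect-via-callback pattern; `fake_crawl` and the sample items are stand-ins for illustration, not Scrapy API:

```python
# Sketch of the "collect results via a signal handler" pattern.
# In Scrapy, the same idea means connecting a handler to
# signals.item_scraped: the run call itself yields nothing useful,
# so items must be captured as they are emitted.

collected = []  # shared list the handler appends to

def on_item_scraped(item):
    """Stand-in for an item_scraped handler: record each item."""
    collected.append(item)

def fake_crawl(handlers):
    """Stand-in for the crawl: emits items to the connected handlers."""
    for item in [{"title": "t1", "link": "/a"},
                 {"title": "t2", "link": "/b"}]:
        for handler in handlers:
            handler(item)
    return None  # like reactor.run(), the run call returns nothing

result = fake_crawl([on_item_scraped])
print(result)     # None - just like reactor.run()
print(collected)  # the items arrive through the handler instead
```

With real Scrapy, the handler (which receives `item`, `response`, and `spider` arguments for `item_scraped`) would be connected with `dispatcher.connect(..., signal=signals.item_scraped)` before `reactor.run()`; after the reactor stops, the list holds all scraped items.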
Where did you put the script? Inside the Scrapy project, in the spiders folder, or somewhere else? –
Cross-referencing [this answer](http://stackoverflow.com/a/27744766/771848) - it should give you a detailed overview of how to run Scrapy from a script. – alecxe