Can't see Scrapy dump its stats

When I run the example provided in the Scrapy tutorial, I can see the stats log printed to standard output:

2014-07-10 16:08:21+0100 [pubs] INFO: Spider opened 
2014-07-10 16:08:21+0100 [pubs] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2014-07-10 16:08:21+0100 [pubs] INFO: Closing spider (finished) 
2014-07-10 16:08:21+0100 [pubs] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 471, 
'downloader/request_count': 2, 
'downloader/request_method_count/GET': 2, 
'downloader/response_bytes': 3897, 
'downloader/response_count': 2, 
'downloader/response_status_count/200': 1, 
'downloader/response_status_count/302': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2014, 7, 10, 15, 8, 21, 970741), 
'item_scraped_count': 1, 
'response_received_count': 1, 
'scheduler/dequeued': 2, 
'scheduler/dequeued/memory': 2, 
'scheduler/enqueued': 2, 
'scheduler/enqueued/memory': 2, 
'start_time': datetime.datetime(2014, 7, 10, 15, 8, 21, 584373)} 
2014-07-10 16:08:21+0100 [pubs] INFO: Spider closed (finished) 

However, when I change the FEED_URI setting to export the results file to S3, I don't see the stats anywhere. I tried printing crawler.stats.spider_stats, but it is still empty. Any ideas?
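For reference, a minimal sketch (the spider name, URL, and item field are placeholders, not the tutorial's actual code) of one way to see the stats no matter where the feed is exported: read them from the crawler inside the spider's closed() hook, which Scrapy calls when the spider finishes. As far as I can tell, the default MemoryStatsCollector only fills spider_stats when each spider closes, which would explain why printing it mid-run shows an empty dict.

import pprint

import scrapy


class PubsSpider(scrapy.Spider):
    # Placeholder spider standing in for the tutorial's "pubs" spider.
    name = "pubs"
    start_urls = ["http://example.com"]

    def parse(self, response):
        yield {"title": response.css("title::text").get()}

    def closed(self, reason):
        # self.crawler.stats is the live stats collector for this crawl;
        # get_stats() returns the same dict shown in the "Dumping Scrapy stats" log.
        self.logger.info("Stats:\n%s", pprint.pformat(self.crawler.stats.get_stats()))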


See the various LOG_ settings: http://doc.scrapy.org/en/latest/topics/settings.html#std:setting-LOG_FILE
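Following up on that comment, a sketch of the relevant settings.py entries (the filename is only an example) that send Scrapy's log, including the stats dump, to a file instead of standard output:

LOG_ENABLED = True       # default; logging must be on for the stats dump to appear
LOG_LEVEL = "INFO"       # the "Dumping Scrapy stats" block is logged at INFO level
LOG_FILE = "crawl.log"   # example path; omit this to keep logging on the console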

Answer


I couldn't get Scrapy to dump the stats even with 'LOG_ENABLED' and 'DUMP_STATS' set to true. However, I found a workaround: dump the stats manually by adding this line at the end of my reactor run:

log.msg("Dumping Scrapy stats:\n" + pprint.pformat(crawler.stats.get_stats()))
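For context, a minimal sketch of where such a manual dump can sit in a script-driven crawl. The line above assumes import pprint and the old from scrapy import log helper (removed in favour of standard logging in later Scrapy releases); the sketch below uses the current CrawlerProcess API instead, and the spider, URL, and feed path are placeholders, not the asker's actual project:

import pprint

import scrapy
from scrapy.crawler import CrawlerProcess


class PubsSpider(scrapy.Spider):
    # Placeholder spider standing in for the tutorial's "pubs" spider.
    name = "pubs"
    start_urls = ["http://example.com"]

    def parse(self, response):
        yield {"title": response.css("title::text").get()}


process = CrawlerProcess(settings={
    "FEED_URI": "items.json",   # the question used an s3:// URI here
    "FEED_FORMAT": "json",      # newer Scrapy versions use the FEEDS setting instead
})
crawler = process.create_crawler(PubsSpider)
process.crawl(crawler)
process.start()  # starts the Twisted reactor and blocks until the crawl finishes

# Manual equivalent of the workaround above, run after the reactor has stopped:
print("Dumping Scrapy stats:")
pprint.pprint(crawler.stats.get_stats())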