Asked 2016-11-18

When I run my code, Scrapy dumps stats like the ones below. How do I get the number of URLs already fetched (`request_count`) from Scrapy?

2016-11-18 06:41:38 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 656, 
'downloader/request_count': 2, 
'downloader/request_method_count/GET': 2, 
'downloader/response_bytes': 2661, 
'downloader/response_count': 2, 
'downloader/response_status_count/200': 2, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 11, 18, 14, 41, 38, 759760), 
'item_scraped_count': 2, 
'log_count/DEBUG': 5, 
'log_count/INFO': 7, 
'response_received_count': 2, 
'scheduler/dequeued': 2, 
'scheduler/dequeued/memory': 2, 
'scheduler/enqueued': 2, 
'scheduler/enqueued/memory': 2, 
'start_time': datetime.datetime(2016, 11, 18, 14, 41, 37, 807590)} 

My goal is to access `response_count` or `request_count` from `process_response`, or from any method of the spider.

I want to close the spider once N URLs have been crawled by it.

Answer


If you want to close the spider based on the number of completed requests, I suggest using [`CLOSESPIDER_PAGECOUNT`](https://doc.scrapy.org/en/latest/topics/extensions.html#closespider-pagecount) in settings.py:

settings.py

CLOSESPIDER_PAGECOUNT = 20  # close the spider after 20 pages have been crawled

However, if you want to access the Scrapy stats from inside the spider, you can do:

self.crawler.stats.get_value('my_stat_name') # change it to `response_count` or `request_count`
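The stats-based shutdown described above can be sketched as follows. To keep the example self-contained, `StatsCollector` below is a minimal stand-in for the `self.crawler.stats` object (whose real `get_value`/`inc_value` methods have the same shape), `CloseSpider` stands in for `scrapy.exceptions.CloseSpider`, and the limit `MAX_RESPONSES` and the loop driving `parse` are illustrative:

```python
# Sketch: stop processing after N responses by checking a stats counter.
# StatsCollector and CloseSpider are stand-ins for Scrapy's
# crawler.stats object and scrapy.exceptions.CloseSpider.

class StatsCollector:
    """Minimal dict-backed stats object with Scrapy-like methods."""

    def __init__(self):
        self._stats = {}

    def inc_value(self, key, count=1, start=0):
        # Increment a counter, creating it at `start` if missing.
        self._stats[key] = self._stats.get(key, start) + count

    def get_value(self, key, default=None):
        return self._stats.get(key, default)


class CloseSpider(Exception):
    """Stand-in for scrapy.exceptions.CloseSpider."""


MAX_RESPONSES = 3  # illustrative N


def parse(stats):
    """Called once per response; raises once the limit is reached."""
    stats.inc_value('response_received_count')
    if stats.get_value('response_received_count', 0) >= MAX_RESPONSES:
        raise CloseSpider('reached response limit')


# Simulate a crawl of up to 10 responses.
stats = StatsCollector()
closed = False
for _ in range(10):
    try:
        parse(stats)
    except CloseSpider:
        closed = True
        break

print(closed, stats.get_value('response_received_count'))  # True 3
```

In a real spider the same check would live in a callback, with `self.crawler.stats.get_value('response_received_count')` and `raise CloseSpider(...)` doing the work that `stats` and the local `CloseSpider` class simulate here.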