2016-04-20 89 views
3

I have a set of 25,000+ URLs that I need to crawl. I consistently see that after roughly 22,000 URLs the crawl rate drops off dramatically.

Take a look at these log lines for some perspective:

2016-04-18 00:14:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:15:06 [scrapy] INFO: Crawled 5324 pages (at 5324 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:16:06 [scrapy] INFO: Crawled 9475 pages (at 4151 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:17:06 [scrapy] INFO: Crawled 14416 pages (at 4941 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:18:07 [scrapy] INFO: Crawled 20575 pages (at 6159 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:19:06 [scrapy] INFO: Crawled 22036 pages (at 1461 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:20:06 [scrapy] INFO: Crawled 22106 pages (at 70 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:21:06 [scrapy] INFO: Crawled 22146 pages (at 40 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:22:06 [scrapy] INFO: Crawled 22189 pages (at 43 pages/min), scraped 0 items (at 0 items/min) 
2016-04-18 00:23:06 [scrapy] INFO: Crawled 22229 pages (at 40 pages/min), scraped 0 items (at 0 items/min) 

Here are my settings as well:

# -*- coding: utf-8 -*- 

BOT_NAME = 'crawler' 

SPIDER_MODULES = ['crawler.spiders'] 
NEWSPIDER_MODULE = 'crawler.spiders' 

CONCURRENT_REQUESTS = 10 
REACTOR_THREADPOOL_MAXSIZE = 100 
LOG_LEVEL = 'INFO' 
COOKIES_ENABLED = False 
RETRY_ENABLED = False 
DOWNLOAD_TIMEOUT = 15 
DNSCACHE_ENABLED = True 
DNSCACHE_SIZE = 1024000 
DNS_TIMEOUT = 10 
DOWNLOAD_MAXSIZE = 1024000 # ~1 MB 
DOWNLOAD_WARNSIZE = 819200 # ~800 KB 
REDIRECT_MAX_TIMES = 3 
METAREFRESH_MAXDELAY = 10 
ROBOTSTXT_OBEY = True 
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36' #Chrome 41 

DEPTH_PRIORITY = 1 
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue' 
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue' 

#DOWNLOAD_DELAY = 1 
#AUTOTHROTTLE_ENABLED = True 
HTTPCACHE_ENABLED = True 
HTTPCACHE_EXPIRATION_SECS = 604800 # 7 days 
COMPRESSION_ENABLED = True 

DOWNLOADER_MIDDLEWARES = { 
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100, 
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300, 
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350, 
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400, 
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550, 
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580, 
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590, 
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600, 
    'crawler.middlewares.RandomizeProxies': 740, 
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750, 
    'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830, 
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850, 
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900, 
} 

PROXY_LIST = '/etc/scrapyd/proxy_list.txt' 
Some additional observations:

  • Memory and CPU consumption stay below 10%
  • tcptrack shows no unusual network activity
  • iostat shows negligible disk I/O

What can I look at to debug this?
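One quick first check (a sketch; the helper name and toy URLs are illustrative, not from the question) is to tally the pending URLs by domain. If a handful of domains own most of the tail of the list, the crawl will serialize behind the per-domain concurrency limit once the other domains are exhausted:

```python
from collections import Counter
from urllib.parse import urlparse

def domains_by_count(urls):
    """Count how many URLs point at each domain, most common first."""
    return Counter(urlparse(u).netloc for u in urls).most_common()

# Toy list for illustration; in practice read the 25,000-URL file instead.
urls = [
    "http://a.example/1", "http://a.example/2",
    "http://a.example/3", "http://b.example/1",
]
print(domains_by_count(urls))  # [('a.example', 3), ('b.example', 1)]
```

A heavily skewed tally would explain high throughput early on (many domains in parallel) followed by a crawl that crawls, so to speak, at the end.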

+0

Have you tried changing your logging level to see whether anything unexpected is going on? – DuckPuncher

+4

Are these URLs on the same website? Maybe you're being rate-limited by that site (or those sites) after 22,000 hits? Try crawling from several different IP addresses and see whether it's faster. Try getting those sites to whitelist your IP. (I assume your own ISP or the network itself isn't rate-limiting you.) – smci

+0

Are the TCP connections being closed after use? – ozOli

Answer

0

It turned out the problem was one specific domain causing a backlog. The URL queue would fill up with requests waiting on responses from that domain, and since only one concurrent request per IP/domain was allowed, those requests were processed one at a time.
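The one-request-per-domain cap described above maps to Scrapy's per-slot concurrency settings; a settings-file sketch (the values are illustrative, not from the original config):

```python
# settings.py fragment (illustrative values)

# Default is 8; setting it to 1 reproduces the one-at-a-time
# behaviour described above.
CONCURRENT_REQUESTS_PER_DOMAIN = 4

# When non-zero this overrides the per-domain limit and groups
# download slots by IP instead, which matters when many domains
# sit behind the same proxy.
CONCURRENT_REQUESTS_PER_IP = 1
```

Raising the per-domain limit trades politeness toward the slow domain for overall throughput, so it is worth checking the target's tolerance before doing so.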

I started logging my proxies against each domain and tailing the output, which made the problem plain as day.
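The kind of logging described here can be approximated with a small in-flight tracker (a sketch; the class and method names are my own, not from the original middleware): record when each request goes out and comes back, then report domains whose requests have been outstanding too long.

```python
import time
from collections import defaultdict
from urllib.parse import urlparse

class InFlightTracker:
    """Tracks outstanding requests per domain so a backlog on one
    slow domain shows up immediately in the report."""

    def __init__(self):
        self._sent = defaultdict(dict)  # domain -> {url: send_time}

    def mark_sent(self, url, now=None):
        domain = urlparse(url).netloc
        self._sent[domain][url] = time.time() if now is None else now

    def mark_received(self, url):
        domain = urlparse(url).netloc
        self._sent[domain].pop(url, None)

    def stragglers(self, older_than, now=None):
        """Map each domain to its count of requests outstanding
        longer than `older_than` seconds."""
        now = time.time() if now is None else now
        report = {}
        for domain, urls in self._sent.items():
            late = [u for u, t in urls.items() if now - t > older_than]
            if late:
                report[domain] = len(late)
        return report
```

Calling `mark_sent`/`mark_received` from a downloader middleware's request/response hooks and periodically printing `stragglers(...)` would surface the offending domain without wading through full debug logs.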

I wouldn't have figured this out without the comment discussion above – thanks @DuckPuncher and @smci.