2016-02-16 109 views

Scrapy LinkExtractor links come back mangled

When using a Scrapy Rule with a LinkExtractor, the links found on the page that match my regular expression are not coming back quite right. I may be missing something obvious, but I'm not seeing it...

All of the links matching my regex are pulled from the correct page, but there is what appears to be an '=' sign appended to the end of each link. What am I doing wrong?

URL being scraped:

http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00 

An example of a link I want to extract:

<a href="playrh.cgi?3986">Durant, Kevin</a> 

My rule / LinkExtractor / regex:

rules = [  # <a href="playrh.cgi?3986">Durant, Kevin</a>
    Rule(
        LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}$'),
        callback='parse_player',
        follow=False,
    ),
]

The scraped URL (taken from the response object in parse_player):

'http://rotoguru1.com/cgi-bin/playrh.cgi?4496=' 

Notice the extra '=' appended to the end of the URL!

謝謝!

Can you add the code of your `parse_player` def? Your rule seems correct. – Jan

Sure; since the link doesn't come back correctly, `parse_player` just prints out the URL. See my answer below. I'll also add that I find this strange, since there is no '=' anywhere the scraper should be looking. It appears out of nowhere... – BigWinston

Can you add the log? Maybe it's being redirected? – Granitosaurus

Answer


The following snippet works to trim the rogue '=' from the links:

... 

rules = [
    Rule(
        LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}'),
        process_links='process_links',
        callback='parse_player',
        follow=False,
    ),
]

... 

def process_links(self, links):
    # strip the stray '=' appended to the end of each extracted URL
    for link in links:
        link.url = link.url.replace('=', '')
    return links
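One caveat about the snippet above: `replace('=', '')` removes every `=` in the URL, which would also mangle any link that legitimately contained one (e.g. a real `key=value` query). Since only the trailing character is unwanted here, a trailing-only strip is a safer variant (my sketch, not part of the original answer):

```python
def trim_trailing_equals(url):
    # remove only trailing '=' characters, leaving any legitimate
    # '=' inside the query string untouched
    return url.rstrip('=')

print(trim_trailing_equals('http://rotoguru1.com/cgi-bin/playrh.cgi?4496='))
# http://rotoguru1.com/cgi-bin/playrh.cgi?4496
```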

... 
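For what it's worth, a likely root cause (my reading, not confirmed in this thread): Scrapy 1.0's LinkExtractor canonicalizes extracted URLs by default, and w3lib's `canonicalize_url` re-serializes the query string as key/value pairs. A bare query like `?3986` is parsed as a key with an empty value and comes back as `?3986=`. The same round-trip can be reproduced with the standard library alone:

```python
from urllib.parse import parse_qsl, urlencode

# a bare query string like '3986' parses as a key with an empty value...
pairs = parse_qsl('3986', keep_blank_values=True)
print(pairs)             # [('3986', '')]

# ...and re-serializing it appends the '=' seen in the crawled URLs
print(urlencode(pairs))  # 3986=
```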

Sure, here is my log...

As far as I can tell, no redirect is happening, but that pesky '=' is getting onto the end of the request URL one way or another...

I'm now exploring 'process_links' as a workaround, but would like to get to the bottom of this.
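One likely culprit (my reading, not confirmed in the thread) is the LinkExtractor's default URL canonicalization, which re-serializes a bare query like `?3986` as `?3986=`. If so, turning canonicalization off at the extractor would fix this at the source rather than patching URLs afterwards. A sketch, assuming Scrapy 1.0's LinkExtractor, where the `canonicalize` flag exists and defaults to True:

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

rules = [
    Rule(
        # canonicalize=False leaves hrefs like 'playrh.cgi?3986' untouched,
        # so no '=' is appended and the '$'-anchored regex still matches
        LinkExtractor(allow=r'playrh\.cgi\?[0-9]{4}$', canonicalize=False),
        callback='parse_player',
        follow=False,
    ),
]
```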

謝謝!

Testing started at 10:24 AM ... 
pydev debugger: process 1352 is connecting 

Connected to pydev debugger (build 143.1919) 
2016-02-17 10:24:57,789: INFO >> Scrapy 1.0.3 started (bot: Scraper) 
2016-02-17 10:24:57,789: INFO >> Optional features available: ssl, http11 
2016-02-17 10:24:57,790: INFO >> Overridden settings: {'NEWSPIDER_MODULE': 'Scraper.spiders', 'LOG_ENABLED': False, 'SPIDER_MODULES': ['Scraper.spiders'], 'CONCURRENT_REQUESTS': 128, 'BOT_NAME': 'Scraper'} 
2016-02-17 10:24:57,904: INFO >> Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState 
2016-02-17 10:24:58,384: INFO >> Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2016-02-17 10:24:58,388: INFO >> Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2016-02-17 10:24:58,417: INFO >> Enabled item pipelines: MongoOutPipeline 
2016-02-17 10:24:58,420: INFO >> Spider opened 
2016-02-17 10:24:58,424: INFO >> Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-02-17 10:24:58,427: DEBUG >> spider_opened (NbaRotoGuruDfsPerformanceSpider) : 'NbaRotoGuruDfsPerformanceSpider' 
2016-02-17 10:24:58,428: DEBUG >> Telnet console listening on 127.0.0.1:6023 
2016-02-17 10:24:59,957: DEBUG >> Crawled (200) <GET http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00> (referer: None) 
2016-02-17 10:25:01,130: DEBUG >> Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4496=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00) 
**********************************>> CUT OUT ABOUT 550 LINES HERE FOR BREVITY (Just links same as directly above/below) *********************************>> 
2016-02-17 10:25:28,983: DEBUG >> Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4632=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00) 
2016-02-17 10:25:28,987: DEBUG >> Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?3527=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00) 
2016-02-17 10:25:29,400: DEBUG >> Crawled (200) <GET http://rotoguru1.com/cgi-bin/playrh.cgi?4564=> (referer: http://rotoguru1.com/cgi-bin/hstats.cgi?pos=0&sort=1&game=k&colA=0&daypt=0&xavg=3&show=1&fltr=00) 
2016-02-17 10:25:29,581: INFO >> Closing spider (finished) 
2016-02-17 10:25:29,585: INFO >> Dumping Scrapy stats: 
{'downloader/request_bytes': 194884, 
'downloader/request_count': 570, 
'downloader/request_method_count/GET': 570, 
'downloader/response_bytes': 5886991, 
'downloader/response_count': 570, 
'downloader/response_status_count/200': 570, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 2, 17, 15, 25, 29, 582000), 
'log_count/DEBUG': 572, 
'log_count/INFO': 7, 
'request_depth_max': 1, 
'response_received_count': 570, 
'scheduler/dequeued': 570, 
'scheduler/dequeued/memory': 570, 
'scheduler/enqueued': 570, 
'scheduler/enqueued/memory': 570, 
'start_time': datetime.datetime(2016, 2, 17, 15, 24, 58, 424000)} 
2016-02-17 10:25:29,585: INFO >> Spider closed (finished) 
Process finished with exit code 0