2016-03-28

Scrapy works with my proxy IP for HTTP requests, but not for HTTPS requests: HTTPS requests sent through the proxy fail with 'ssl handshake failure'.

I know my proxy IP works with HTTP because I tested it by sending a request to http://ipinfo.io/ip:

2016-03-28 12:10:42 [scrapy] DEBUG: Crawled (200) <GET http://ipinfo.io/ip> (referer: http://www.google.com) 
2016-03-28 12:10:42 [root] INFO: *** TEST, WHAT IS MY IP: *** 
107.183.7.XX 

I know it is not working with HTTPS requests because of this error message:

2016-03-28 12:10:55 [scrapy] DEBUG: Gave up retrying <GET https://www.my-company-url.com> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'ssl23_read', 'ssl handshake failure')]>] 

settings.py contains:

DOWNLOADER_MIDDLEWARES = { 
    'crystalball.middlewares.ProxyMiddleware': 100, 
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110 
} 

crystalball.middlewares.ProxyMiddleware contains:

import base64

class ProxyMiddleware(object):

    def process_request(self, request, spider):
        request.meta['proxy'] = "https://107.183.X.XX:55555"
        proxy_user_pass = "hXXbp3:LitSwDXX99"
        encoded_user_pass = base64.encodestring(proxy_user_pass)
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass

Any suggestions on what I should try next?

Note: the solution in this SO post did not work: Scrapy and proxies

Answer


The culprit is base64.encodestring(), which appends an unwanted newline (\n) character to the value of the request's Proxy-Authorization header.
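You can see this in isolation. A minimal sketch in Python 3, where the legacy encodestring() lives on as base64.encodebytes() with the same newline behaviour:

```python
import base64

creds = b"user:pass"

# encodebytes (the Python 3 name for the legacy encodestring)
# appends a trailing newline to the encoded output
with_newline = base64.encodebytes(creds)
print(repr(with_newline))             # b'dXNlcjpwYXNz\n'

# b64encode produces the bare encoding, with no trailing newline
print(repr(base64.b64encode(creds)))  # b'dXNlcjpwYXNz'
```

A header value containing that stray \n is what breaks the proxy handshake.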

The solution is simply to strip() that \n off.

Change this line:

request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass 

to this:

request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass.strip()
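Alternatively, base64.b64encode() never appends a newline, so no strip() is needed at all. A sketch of the middleware rewritten that way (host, port, and credentials are the placeholder values from the question):

```python
import base64

class ProxyMiddleware(object):
    """Scrapy downloader middleware that routes requests through an
    authenticated proxy."""

    def process_request(self, request, spider):
        request.meta['proxy'] = "https://107.183.X.XX:55555"
        user_pass = "hXXbp3:LitSwDXX99"
        # b64encode returns the bare encoding with no trailing newline,
        # so the header value is safe as-is
        encoded = base64.b64encode(user_pass.encode('ascii')).decode('ascii')
        request.headers['Proxy-Authorization'] = 'Basic ' + encoded
```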