I am scraping an XML sitemap that contains special characters such as é, which causes Scrapy to percent-encode the URLs containing those characters:
ERROR: Spider error processing <GET [URL with '%C3%A9' instead of 'é']>
How can I get Scrapy to keep the original URL unchanged, i.e. with its special characters?
Scrapy == 1.3.3
Python == 3.5.2 (I need to stick with these versions)
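For context, the setup is along these lines (a minimal sketch; the spider name and sitemap URL are placeholders, not the real ones):

from scrapy.spiders import SitemapSpider

class WinesSpider(SitemapSpider):
    name = 'wines'
    # Placeholder sitemap; the real one has <loc> entries containing é, ù, etc.
    sitemap_urls = ['https://example.com/sitemap.xml']

    def parse(self, response):
        # The URL arrives percent-encoded, e.g. '...ros%C3%A9' instead of '...rosé'
        self.logger.info(response.url)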
Update: per https://stackoverflow.com/a/17082272/6170115, I was able to get the URL with the correct characters using unquote.
Example usage:
>>> from urllib.parse import unquote
>>> unquote('ros%C3%A9')
'rosé'
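As far as I can tell, though, the unquoted URL gets re-escaped as soon as it is passed to a new Request, because Request runs the URL through w3lib's safe_url_string. A quick demonstration (the example.com URL is just a placeholder):

from urllib.parse import unquote
from scrapy import Request

url = unquote('https://example.com/ros%C3%A9')   # 'https://example.com/rosé'
# Request re-escapes the URL via safe_url_string, so the %C3%A9 comes back:
print(Request(url).url)                          # 'https://example.com/ros%C3%A9'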
I also tried my own Request subclass, working around safe_url_string,
but I ended up with:
UnicodeEncodeError: 'ascii' codec can't encode character '\xf9' in position 25: ordinal not in range(128)
Full traceback:
[scrapy.core.scraper] ERROR: Error downloading <GET [URL with characters like ù]>
Traceback (most recent call last):
File "/usr/share/anaconda3/lib/python3.5/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/share/anaconda3/lib/python3.5/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
result = f(*args, **kw)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
return handler.download_request(request, spider)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 61, in download_request
return agent.download_request(request)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 260, in download_request
agent = self._get_agent(request, timeout)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/handlers/http11.py", line 241, in _get_agent
scheme = _parse(request.url)[0]
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 37, in _parse
return _parsed_url_args(parsed)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 19, in _parsed_url_args
path = b(path)
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/core/downloader/webclient.py", line 17, in <lambda>
b = lambda s: to_bytes(s, encoding='ascii')
File "/usr/share/anaconda3/lib/python3.5/site-packages/scrapy/utils/python.py", line 120, in to_bytes
return text.encode(encoding, errors)
UnicodeEncodeError: 'ascii' codec can't encode character '\xf9' in position 25: ordinal not in range(128)
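The kind of Request subclass I mean is along these lines (a rough sketch, not my exact code): it stores the URL as-is instead of passing it through safe_url_string, so the raw characters survive until the downloader calls to_bytes(s, encoding='ascii') in webclient.py and fails as above.

from scrapy import Request

class RawUrlRequest(Request):
    # Hypothetical subclass: keep the URL exactly as given, skipping
    # the safe_url_string / escape_ajax step of the stock Request.
    def _set_url(self, url):
        if not isinstance(url, str):
            raise TypeError('Request url must be str, got %s' % type(url).__name__)
        self._url = url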
Any hints?
Please take a look at my [answer](https://stackoverflow.com/questions/42445087/force-python-scrapy-not-to-encode-url) to a similar question. Maybe you can apply that technique to your use case. –
The real issue is here: https://stackoverflow.com/questions/47563095/json-url-sometimes-returns-a-null-response, with the answer here: https://stackoverflow.com/a/47564798/6170115 – happyspace