
HTTP 403 error retrieving robots.txt with mechanize

This shell command succeeds:

$ curl -A "Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)" http://fifa-infinity.com/robots.txt 

and prints the robots.txt. Omitting the user-agent option results in a 403 error from the server. Examining the robots.txt file shows that crawling content under http://www.fifa-infinity.com/board is allowed. However, the following fails (Python code):

import logging 
import mechanize 
from mechanize import Browser 

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = Browser() 
br.addheaders = [('User-Agent', ua)]  # same UA that works for curl 
br.set_debug_http(True)       # log the raw HTTP request and response 
br.set_debug_responses(True) 
logging.getLogger('mechanize').setLevel(logging.DEBUG) 
br.open('http://www.fifa-infinity.com/robots.txt') 

The output on my console is:

No handlers could be found for logger "mechanize.cookies" 
send: 'GET /robots.txt HTTP/1.1\r\nAccept-Encoding: identity\r\nHost: www.fifa-infinity.com\r\nConnection: close\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)\r\n\r\n' 
reply: 'HTTP/1.1 403 Bad Behavior\r\n' 
header: Date: Wed, 13 Feb 2013 15:37:16 GMT 
header: Server: Apache 
header: X-Powered-By: PHP/5.2.17 
header: Vary: User-Agent,Accept-Encoding 
header: Connection: close 
header: Transfer-Encoding: chunked 
header: Content-Type: text/html 
Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 203, in open 
    return self._mech_open(url, data, timeout=timeout) 
    File "/home/moshev/Projects/forumscrawler/lib/python2.7/site-packages/mechanize/_mechanize.py", line 255, in _mech_open 
    raise response 
mechanize._response.httperror_seek_wrapper: HTTP Error 403: Bad Behavior 

Curiously, using curl without setting a user-agent results in "403: Forbidden", rather than "403: Bad Behavior".

Am I somehow doing something wrong, or is this a bug in mechanize/urllib2? I don't see how simply fetching robots.txt can be "bad behavior"?


And another example of header sniffing gone wrong. The server is looking at more than just the UA; check what headers curl sends, compare them with what `mechanize` is using, adjust, repeat. This is *not* a python problem. – 2013-02-13 15:48:58
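
(A sketch of that compare-and-adjust loop, not from the thread, assuming the same Python 2 / mechanize setup as the question: set_debug_http dumps the raw request, which can be diffed field by field against `curl -v` output, and catching the HTTPError keeps the script alive between header tweaks.)

import mechanize 

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = mechanize.Browser() 
br.set_debug_http(True)  # dumps the raw request, directly comparable to `curl -v` output 
br.addheaders = [('User-Agent', ua)]  # adjust this list between runs 
try: 
    br.open('http://www.fifa-infinity.com/robots.txt') 
except mechanize.HTTPError as e: 
    print 'still blocked:', e.code, e.msg 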


This question is very similar to [urllib2.HTTPError: HTTP Error 403: Forbidden](https://stackoverflow.com/questions/13303449/urllib2-httperror-http-error-403-forbidden/46213623#46213623) – djinn 2017-11-06 16:31:30

Answer


Verified by experiment: you need to add an Accept header to specify the acceptable content types (any type is fine, as long as an "Accept" header is present). For example, it works after changing:

br.addheaders = [('User-Agent', ua)] 

to:

br.addheaders = [('User-Agent', ua), ('Accept', '*/*')] 
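
For reference, a complete version of the script with the fix applied (a minimal sketch, assuming the same Python 2 / mechanize setup as the question; the "403 Bad Behavior" status line suggests an anti-bot filter that rejects requests whose headers do not look browser-like, and a missing Accept header is evidently one of its triggers):

import mechanize 

ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0 (compatible;)' 
br = mechanize.Browser() 
# Both headers matter: per the question, the User-Agent alone was not enough. 
br.addheaders = [('User-Agent', ua), ('Accept', '*/*')] 
response = br.open('http://www.fifa-infinity.com/robots.txt') 
print response.read() 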

Thanks, that was it! – Moshev 2013-02-14 07:35:23


I wish I had seen something like this earlier... it would have saved me hours of work! Thanks Hui! – 2015-06-11 02:31:34