2014-08-29
import mechanize 
br = mechanize.Browser() 
url = 'http://nseindia.com' 
br.open(url) 

And the error is that mechanize, bs4, urllib, and urllib2 all fail to open nseindia.com:

Traceback (most recent call last): 
    File "<input>", line 1, in <module> 
    File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 203, in 
open 
    return self._mech_open(url, data, timeout=timeout) 
    File "/usr/local/lib/python2.7/dist-packages/mechanize/_mechanize.py", line 255, in 
_mech_open 
    raise response 
httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt 

I have tried things like:

br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')] 

br.set_handle_equiv(False) 



Did any of the answers help solve the problem? If so, consider accepting an answer, thanks. – alecxe 2015-02-12 08:31:56

Answer


You need to pass an Accept header as well:

import mechanize 

br = mechanize.Browser() 

br.addheaders = [ 
    ('User-Agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.143 Safari/537.36'), 
    ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8') 
] 

url = 'http://nseindia.com' 
br.open(url) 

Then, to prove it is working, parse the response with BeautifulSoup and get the page title:

from bs4 import BeautifulSoup 

soup = BeautifulSoup(br.response()) 
print soup.title.text 

Which prints:

NSE - National Stock Exchange of India Ltd.
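For context, the 403 is raised because mechanize honours robots.txt by default and checks it against the request's User-Agent, which is why changing the header makes a difference. The standard library's robotparser sketches how such rules are matched; the rules below are hypothetical, not nseindia.com's actual robots.txt:

```python
try:
    from urllib import robotparser  # Python 3
except ImportError:
    import robotparser              # Python 2

rp = robotparser.RobotFileParser()
# Hypothetical rules that block a default Python client;
# the site's real robots.txt may differ.
rp.parse([
    'User-agent: Python-urllib',
    'Disallow: /',
])

# The default urllib agent is disallowed; a browser-like one is not.
print(rp.can_fetch('Python-urllib/2.7', 'http://nseindia.com/'))  # False
print(rp.can_fetch('Mozilla/5.0', 'http://nseindia.com/'))        # True
```

If you would rather skip the check entirely, mechanize also provides `br.set_handle_robots(False)`, though respecting robots.txt is the polite default.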