2012-04-12

python, urllib2, crashes on 404 error

I have a program that fetches the contents of URLs stored in a database. I am using BeautifulSoup and urllib2 to grab the content. While printing the results, I found that the program crashes when it hits (what looks like) a 403 error. How can I keep my program from crashing on 403/404 and similar errors?

Relevant output:

    Traceback (most recent call last):
      File "web_content.py", line 29, in <module>
        grab_text(row)
      File "web_content.py", line 21, in grab_text
        f = urllib2.urlopen(row)
      File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
        return _opener.open(url, data, timeout)
      File "/usr/lib/python2.7/urllib2.py", line 400, in open
        response = meth(req, response)
      File "/usr/lib/python2.7/urllib2.py", line 513, in http_response
        'http', request, response, code, msg, hdrs)
      File "/usr/lib/python2.7/urllib2.py", line 438, in error
        return self._call_chain(*args)
      File "/usr/lib/python2.7/urllib2.py", line 372, in _call_chain
        result = func(*args)
      File "/usr/lib/python2.7/urllib2.py", line 521, in http_error_default
        raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
    urllib2.HTTPError: HTTP Error 403: Forbidden

You probably want to use exceptions – Asterisk 2012-04-12 05:29:14


@Asterisk I see, thanks! I'm new to Python. – yayu 2012-04-12 05:31:15

Answer
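As the comment suggests, wrap the `urllib2.urlopen` call in a `try`/`except` so that a 403/404 on one URL is logged and skipped instead of crashing the whole loop. A minimal sketch of what `grab_text` from the traceback might look like (the function body here is a hypothetical reconstruction; the import fallback to `urllib.request`/`urllib.error` is only there so the snippet also runs on Python 3):

```python
try:
    # Python 2, as in the question
    from urllib2 import urlopen, HTTPError, URLError
except ImportError:
    # Python 3 fallback so the sketch stays runnable
    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError


def grab_text(row):
    """Fetch the page body for one URL, or None if the fetch fails."""
    try:
        f = urlopen(row)
        return f.read()
    except HTTPError as e:
        # Server answered with an error status (403, 404, ...): skip this URL.
        print("skipping %s: HTTP %d" % (row, e.code))
        return None
    except URLError as e:
        # No HTTP response at all (DNS failure, connection refused, ...).
        print("skipping %s: %s" % (row, e.reason))
        return None
```

Note that `HTTPError` is a subclass of `URLError`, so the `HTTPError` clause must come first; catching only `URLError` would also work if you do not need the status code.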