2009-01-17

I'm working on something that pulls URLs from delicious and then uses those URLs to discover the associated feeds. What's the best way to handle bad links that get passed to BeautifulSoup?

However, some of the delicious bookmarks are not HTML links and cause BS to barf. Basically, I want to throw a link away if, when BS fetches it, it doesn't look like HTML.

Right now, this is what I get:

trillian:Documents jauderho$ ./d2o.py "green data center" 
processing http://www.greenm3.com/ 
processing http://www.eweek.com/c/a/Green-IT/How-to-Create-an-EnergyEfficient-Green-Data-Center/?kc=rss 
Traceback (most recent call last): 
    File "./d2o.py", line 53, in <module> 
    get_feed_links(d_links) 
    File "./d2o.py", line 43, in get_feed_links 
    soup = BeautifulSoup(html) 
    File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1499, in __init__ 
    BeautifulStoneSoup.__init__(self, *args, **kwargs) 
    File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1230, in __init__ 
    self._feed(isHTML=isHTML) 
    File "/Library/Python/2.5/site-packages/BeautifulSoup.py", line 1263, in _feed 
    self.builder.feed(markup) 
    File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 108, in feed 
    self.goahead(0) 
    File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 150, in goahead 
    k = self.parse_endtag(i) 
    File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 314, in parse_endtag 
    self.error("bad end tag: %r" % (rawdata[i:j],)) 
    File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/HTMLParser.py", line 115, in error 
    raise HTMLParseError(message, self.getpos()) 
HTMLParser.HTMLParseError: bad end tag: u'</b />', at line 739, column 1 

Update:

Jehiah's answer did the trick. For reference, here's some code to get the content type:

def check_for_html(link): 
    out = urllib.urlopen(link) 
    return out.info().getheader('Content-Type') 
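The snippet above is Python 2. For reference, a roughly equivalent sketch in Python 3 (where `urllib.urlopen` moved to `urllib.request.urlopen` and the header accessors changed; the `None`-on-failure behaviour is my addition, not part of the original answer):

```python
import urllib.request

def check_for_html(link):
    # Return the MIME type of the response (e.g. 'text/html'),
    # or None if the URL could not be fetched at all.
    try:
        out = urllib.request.urlopen(link)
    except OSError:  # urllib.error.URLError is a subclass of OSError
        return None
    return out.info().get_content_type()
```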

Answer

I simply wrap my BeautifulSoup processing and look for the HTMLParser.HTMLParseError exception:

import HTMLParser, BeautifulSoup
try:
    soup = BeautifulSoup.BeautifulSoup(raw_html)
    for a in soup.findAll('a'):
        href = a['href']
        ....
except HTMLParser.HTMLParseError:
    print "failed to parse", url

But beyond that, you can check the response's Content-Type when you fetch the page and make sure it is `text/html` or `application/xhtml+xml` (or something similar) before you even try to parse it. That should head off most of the errors.
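That check can be done without touching the network once you have the header value. A small helper might look like this (a sketch; the function name and the accepted-type list are mine, and note that the Content-Type header often carries a charset parameter that has to be stripped first):

```python
def looks_like_html(content_type):
    """Return True if a Content-Type header value indicates parseable markup."""
    if not content_type:
        return False
    # Strip parameters such as '; charset=utf-8' before comparing.
    mime = content_type.split(';')[0].strip().lower()
    return mime in ('text/html', 'application/xhtml+xml')
```

Combined with `check_for_html` above, a bookmark whose header fails this test can be skipped before BeautifulSoup ever sees it.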
