2013-08-21

Python web scraper: counting identical links that have different text

So I made a web scraper with Python and a few of its libraries... It goes to a given website and collects all the links on that site along with each link's text. I filtered the results so that only the external links found on that site are printed.

The code looks like this:

import urllib 
import re 
import mechanize 
from bs4 import BeautifulSoup 
import urlparse 
import cookielib 
from urlparse import urlsplit 
from publicsuffix import PublicSuffixList 

link = "http://www.ananda-pur.de/23.html" 

newesturlDict = {} 
baseAdrInsArray = [] 

br = mechanize.Browser() 
cj = cookielib.LWPCookieJar() 
br.set_cookiejar(cj) 
br.set_handle_robots(False) 
br.set_handle_equiv(False) 
br.set_handle_redirect(True) 
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) 
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')] 
page = br.open(link, timeout=10) 

for linkins in br.links(): 

    newesturl = urlparse.urljoin(linkins.base_url, linkins.url) 

    linkTxt = linkins.text 
    baseAdrIns = linkins.base_url 

    if baseAdrIns not in baseAdrInsArray: 
        baseAdrInsArray.append(baseAdrIns) 

    netLocation = urlsplit(baseAdrIns) 
    psl = PublicSuffixList() 
    publicAddress = psl.get_public_suffix(netLocation.netloc) 

    if publicAddress not in newesturl: 

        if newesturl not in newesturlDict: 
            newesturlDict[newesturl, linkTxt] = 1 
        if newesturl in newesturlDict: 
            newesturlDict[newesturl, linkTxt] += 1 

newesturlCount = sorted(newesturlDict.items(), key=lambda (k, v): (v, k), reverse=True) 
for newesturlC in newesturlCount: 
    print baseAdrInsArray[0], " - ", newesturlC[0], "- count: ", newesturlC[1] 
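The external-link filter above relies on `PublicSuffixList` to extract the base page's public suffix and checks whether it appears in each resolved URL. A simpler, stdlib-only way to express the same idea is to compare hostnames directly (a Python 3 sketch with hard-coded sample URLs, not the author's exact method):

```python
from urllib.parse import urlsplit

base = "http://www.ananda-pur.de/23.html"
base_host = urlsplit(base).netloc  # "www.ananda-pur.de"

# Sample resolved URLs standing in for the crawl results.
candidates = [
    "http://www.ananda-pur.de/24.html",
    "http://www.yogibhajan.com/",
]

# A link counts as "external" when its host differs from the base page's host.
external = [u for u in candidates if urlsplit(u).netloc != base_host]
print(external)
```

Note this treats subdomains (e.g. `blog.ananda-pur.de`) as external, which is exactly the case the public-suffix comparison in the original code is meant to handle differently.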

and the printed result looks like this:

http://www.ananda-pur.de/23.html - ('http://www.yogibhajan.com/', 'http://www.yogibhajan.com') - count: 1 
http://www.ananda-pur.de/23.html - ('http://www.kundalini-yoga-zentrum-berlin.de/', 'http://www.kundalini-yoga-zentrum-berlin.de') - count: 1 
http://www.ananda-pur.de/23.html - ('http://www.kriteachings.org/', 'http://www.sat-nam-rasayan.de') - count: 1 
http://www.ananda-pur.de/23.html - ('http://www.kriteachings.org/', 'http://www.kriteachings.org') - count: 1 
http://www.ananda-pur.de/23.html - ('http://www.kriteachings.org/', 'http://www.gurudevsnr.com') - count: 1 
http://www.ananda-pur.de/23.html - ('http://www.kriteachings.org/', 'http://www.3ho.de') - count: 1 

My problem is those identical links with different text. Judging from the printed example, the site has 4 links to http://www.kriteachings.org/, but as you can see each of the 4 links has different text: the 1st is http://www.sat-nam-rasayan.de, the 2nd is http://www.kriteachings.org, the 3rd is http://www.gurudevsnr.com, and the 4th is http://www.3ho.de.

I would like a printed result that shows how many times each link appears on the given page, and when the same link has different link texts, those texts should simply be appended to the other texts for that link. For this example, I would like to get a printout like this:

http://www.ananda-pur.de/23.html - http://www.yogibhajan.com/ - http://www.yogibhajan.com - count: 1 
http://www.ananda-pur.de/23.html - http://www.kundalini-yoga-zentrum-berlin.de - http://www.kundalini-yoga-zentrum-berlin.de - count: 1 
http://www.ananda-pur.de/23.html - http://www.kriteachings.org/ - http://www.sat-nam-rasayan.de, http://www.kriteachings.org, http://www.gurudevsnr.com, http://www.3ho.de - count: 4 

Explanation:

(The 1st item is the given page, the 2nd is the found link, the 3rd lists the actual texts of that found link, and the 4th is how many times the link appears on the given site.)

My main problem is that I don't know how to compare, sort, or tell the program that these are the same link and that it should append the different texts.

Is something like this even possible without too much code? I'm a Python noob, so I'm a bit lost..

Any help or advice is welcome.

Answer


Collect the links into a dictionary, gathering the link texts and maintaining the counts as you go:

import cookielib 
import mechanize 


base_url = "http://www.ananda-pur.de/23.html" 

br = mechanize.Browser() 
cj = cookielib.LWPCookieJar() 
br.set_cookiejar(cj) 
br.set_handle_robots(False) 
br.set_handle_equiv(False) 
br.set_handle_redirect(True) 
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) 
br.addheaders = [('User-agent', 
                  'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')] 
page = br.open(base_url, timeout=10) 

links = {} 
for link in br.links(): 
    if link.url not in links: 
        links[link.url] = {'count': 1, 'texts': [link.text]} 
    else: 
        links[link.url]['count'] += 1 
        links[link.url]['texts'].append(link.text) 

# printing 
for link, data in links.iteritems(): 
    print "%s - %s - %s - %d" % (base_url, link, ",".join(data['texts']), data['count']) 

This prints:

http://www.ananda-pur.de/23.html - index.html - Zadekstr 11,12351 Berlin, - 2 
http://www.ananda-pur.de/23.html - 28.html - Das Team - 1 
http://www.ananda-pur.de/23.html - http://www.yogibhajan.com/ - http://www.yogibhajan.com - 1 
http://www.ananda-pur.de/23.html - 24.html - Kontakt - 1 
http://www.ananda-pur.de/23.html - 25.html - Impressum - 1 
http://www.ananda-pur.de/23.html - http://www.kriteachings.org/ - http://www.kriteachings.org,http://www.gurudevsnr.com,http://www.sat-nam-rasayan.de,http://www.3ho.de - 4 
http://www.ananda-pur.de/23.html - http://www.kundalini-yoga-zentrum-berlin.de/ - http://www.kundalini-yoga-zentrum-berlin.de - 1 
http://www.ananda-pur.de/23.html - 3.html - Ergo Oranien 155 - 1 
http://www.ananda-pur.de/23.html - 2.html - Physio Bänsch 36 - 1 
http://www.ananda-pur.de/23.html - 13.html - Stellenangebote - 1 
http://www.ananda-pur.de/23.html - 23.html - Links - 1 
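The same grouping idiom can be written a little more compactly with `collections.defaultdict`, which removes the need for the explicit membership check; the count is then just the length of the text list. A Python 3 sketch with static sample `(url, text)` pairs standing in for the live `br.links()` results (the real page's contents may have changed since):

```python
from collections import defaultdict

# Sample (url, text) pairs in place of a live mechanize crawl.
found_links = [
    ("http://www.kriteachings.org/", "http://www.sat-nam-rasayan.de"),
    ("http://www.kriteachings.org/", "http://www.kriteachings.org"),
    ("http://www.kriteachings.org/", "http://www.gurudevsnr.com"),
    ("http://www.kriteachings.org/", "http://www.3ho.de"),
    ("http://www.yogibhajan.com/", "http://www.yogibhajan.com"),
]

# Key by URL; every text for the same URL lands in the same list.
texts_by_url = defaultdict(list)
for url, text in found_links:
    texts_by_url[url].append(text)

for url, texts in texts_by_url.items():
    print("%s - %s - count: %d" % (url, ", ".join(texts), len(texts)))
```

Because a missing key is created automatically with an empty list, the `if`/`else` branches of the answer collapse into a single `append`.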

Yeah, that looks like the solution... The only thing I still need is to ignore the internal links. But that's not a problem, I'm sure I can fit your example into my code... will try it now – dzordz


Sure, you can check whether `link.url.startswith('http://')` and continue the loop if it doesn't. – alecxe
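That check can be sketched like this (Python 3, with a hypothetical list of URL strings in place of the real `link.url` values from the loop; the `https://` prefix is added to the suggested check as a small extension):

```python
# Sample link targets as mechanize would report them: relative
# internal pages mixed with absolute external URLs.
urls = [
    "index.html",
    "28.html",
    "http://www.yogibhajan.com/",
    "24.html",
    "http://www.kriteachings.org/",
]

# Keep only absolute links; relative internal pages are skipped.
external = [u for u in urls if u.startswith(("http://", "https://"))]
print(external)
```

Inside the answer's `for link in br.links():` loop the equivalent would be `if not link.url.startswith('http://'): continue` placed before the dictionary update.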


I've managed to get it working! Thanks! – dzordz
