Hi, I've made a non-recursive version of this that appears to get all the links on the same domain.
I've tested the code below using the `problem` list included in it. Once that was working, the next issue with the recursive version was hitting the recursion depth limit, so I rewrote it to run iteratively.
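For context, the recursive version dies because each page crawled adds a stack frame, and Python caps the stack at sys.getrecursionlimit() frames (typically 1000). A hypothetical sketch of that failure mode, where fetch_links is a stand-in of mine for the requests/BeautifulSoup code in the real script:

    import sys
    print sys.getrecursionlimit()  # typically 1000

    def crawl(link, visited):
        if link in visited:
            return
        visited.add(link)
        for href in fetch_links(link):  # hypothetical stand-in, not a real function here
            crawl(href, visited)  # one stack frame per page: a long chain of links blows the limit

The iterative version below replaces that call stack with an explicit list of pages still to visit. The code and results are as follows: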
from bs4 import BeautifulSoup
import requests
import tldextract

def print_domain_info(d):
    print "Main Domain:{0} \nSub Domain:{1} \nSuffix:{2}".format(d.domain, d.subdomain, d.suffix)

SEARCHED_URLS = []
problem = ["http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]

while problem:
    # Get a link from the stack of links
    link = problem.pop()
    # Check we haven't been to this address before
    if link in SEARCHED_URLS:
        continue
    # We don't want to come back here again after this point
    SEARCHED_URLS.append(link)
    # Try to get the website
    try:
        req = requests.get(link)
    except requests.exceptions.RequestException:
        # If it's not working, I don't care for it
        print "borked website found: {0}".format(link)
        continue
    # Now that we've got this far, it's worth printing something
    print "Trying to parse: {0}".format(link)
    print "Status Code: {0} That's: {1}".format(req.status_code, "A-OK" if req.status_code == 200 else "SOMETHING'S UP")
    # Get the domain info
    dInfo = tldextract.extract(link)
    print_domain_info(dInfo)
    # I like utf-8
    data = req.text.encode("utf-8")
    print "Length Of Data Retrieved: {0}".format(len(data))  # More info
    soup = BeautifulSoup(data)  # This was here before, so I left it.
    anchors = soup.find_all('a')
    print "Found {0} link{1}".format(len(anchors), "s" if len(anchors) > 1 else "")
    FOUND_THIS_ITERATION = []  # Getting the same links over and over was boring
    # Find all the links we haven't already searched
    found_links = [x for x in anchors if x.get('href') not in SEARCHED_URLS]
    for anchor in found_links:
        href = anchor.get('href')  # You wrote this; it seems to work well
        if not href:
            continue
        link_domain = tldextract.extract(href)
        if link_domain.domain == dInfo.domain:  # JUST FINDING STUFF ON THE SAME DOMAIN, RIGHT?!
            if href not in FOUND_THIS_ITERATION:  # I'ma check you out next time
                print "Check out this link: {0}".format(href)
                print_domain_info(link_domain)
                FOUND_THIS_ITERATION.append(href)
                problem.append(href)
            else:  # I got you already
                print "DUPE LINK!"
        else:
            print "Not on the same domain, moving on"
    # Count down
    print "We have {0} more sites to search".format(len(problem))

print "It's been fun"
print "Let's see the URLs we've visited:"
for url in SEARCHED_URLS:
    print url
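As a side note, tldextract splits a URL into subdomain, registered domain and public suffix, which is what print_domain_info reports. For example:

    import tldextract

    ext = tldextract.extract("http://forums.bbc.co.uk/")
    print ext.subdomain  # forums
    print ext.domain     # bbc
    print ext.suffix     # co.uk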
Running the full script prints, after a lot of other logging, loads of neocities sites!
What's happening is that the script pops a value off the list of sites that haven't been visited yet, then fetches every link on that page that sits on the same domain. If any of those links point to pages we haven't visited, they're added to the list of links to visit. After that, we pop the next page and do the same thing again, until there are no pages left to visit.
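Stripped of the logging, that loop is just a depth-first search with an explicit stack. A minimal sketch of the same idea, with two assumptions of mine: visited is a set rather than a list (set membership tests are O(1)), and fetch_same_domain_links is a hypothetical stand-in for the requests/BeautifulSoup/tldextract code above:

    to_visit = ["http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]
    visited = set()
    while to_visit:
        link = to_visit.pop()  # take the most recently found page off the stack
        if link in visited:
            continue
        visited.add(link)
        for href in fetch_same_domain_links(link):  # hypothetical stand-in
            if href not in visited:
                to_visit.append(href)  # save it for a later iteration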
I think this is what you're looking for. If it doesn't do what you want, or if anyone can improve on it, please leave a comment and let us know.
Please don't call your list 'list'. That's a built-in name: http://docs.python.org/2/library/functions.html – NPE
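To illustrate NPE's point, rebinding that name shadows the built-in type for the rest of the module:

    list = ["http://example.com"]  # shadows the built-in list type
    list("abc")  # TypeError: 'list' object is not callable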
Yes, I changed it, but there's no output. – Alisha
What happens when you put some debugging output into your 'scrap()' [*sic.*] function? – Johnsyweb