2016-06-13

I'm trying to get all of a team's game results from every page of its profile. So far I can get opponent 1, opponent 2, and the score for each result, but I don't know how to move to the next page to fetch the rest of the data. Should I find the next page and put it in a while loop? Here is the link to the team I'm working with. How do I get text from the next page with BeautifulSoup in Python 3?

http://www.gosugamers.net/counterstrike/teams/7397-natus-vincere/matches

This is what I have so far; it gets all the matches the team has played, and the scores, but only from the first page.

def all_match_outcomes():

    for match_outcomes in match_history_url():
        rest_server(True)
        page = requests.get(match_outcomes).content
        soup = BeautifulSoup(page, 'html.parser')

        team_name_element = soup.select_one('div.teamNameHolder')
        team_name = team_name_element.find('h1').text.replace('- Team Overview', '')

        for match_outcome in soup.select('table.simple.gamelist.profilelist tr'):
            opp1 = match_outcome.find('span', {'class': 'opp1'}).text
            opp2 = match_outcome.find('span', {'class': 'opp2'}).text

            opp1_score = match_outcome.find('span', {'class': 'hscore'}).text
            opp2_score = match_outcome.find('span', {'class': 'ascore'}).text

            if match_outcome(True):  # if the teams have past matches
                print(team_name, '%s %s:%s %s' % (opp1, opp1_score, opp2_score, opp2))

Answer


Get the last page number and iterate page by page until you hit the last one.

Complete working code:

import re 

import requests 
from bs4 import BeautifulSoup 

url = "http://www.gosugamers.net/counterstrike/teams/7397-natus-vincere/matches" 

with requests.Session() as session:
    response = session.get(url)
    soup = BeautifulSoup(response.content, "html.parser")

    # locate the last page link
    last_page_link = soup.find("span", text="Last").parent["href"]
    # extract the last page number
    last_page_number = int(re.search(r"page=(\d+)$", last_page_link).group(1))

    print("Processing page number 1")
    # TODO: extract data

    # iterate over all pages starting from page 2 (since we are already on page 1)
    for page_number in range(2, last_page_number + 1):
        print("Processing page number %d" % page_number)

        link = "http://www.gosugamers.net/counterstrike/teams/7397-natus-vincere/matches?page=%d" % page_number
        response = session.get(link)

        soup = BeautifulSoup(response.content, "html.parser")

        # TODO: extract data
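One way to fill in the `# TODO: extract data` steps is to reuse the selectors from the question. Below is a minimal sketch of a per-page extraction helper; the HTML fragment it parses is a made-up stand-in for the site's real markup, so treat the class names as assumptions carried over from the question's code:

```python
from bs4 import BeautifulSoup

def extract_matches(soup):
    """Pull (opp1, score1, score2, opp2) tuples from one page's soup."""
    results = []
    for row in soup.select('table.simple.gamelist.profilelist tr'):
        opp1 = row.find('span', {'class': 'opp1'})
        opp2 = row.find('span', {'class': 'opp2'})
        hscore = row.find('span', {'class': 'hscore'})
        ascore = row.find('span', {'class': 'ascore'})
        if not all((opp1, opp2, hscore, ascore)):
            continue  # skip header rows or rows without a score
        results.append((opp1.text.strip(), hscore.text.strip(),
                        ascore.text.strip(), opp2.text.strip()))
    return results

# made-up fragment mimicking the match table
html = """
<table class="simple gamelist profilelist">
  <tr><th>Match</th></tr>
  <tr><td><span class="opp1">Na'Vi</span>
      <span class="hscore">16</span>:<span class="ascore">9</span>
      <span class="opp2">Fnatic</span></td></tr>
</table>
"""
print(extract_matches(BeautifulSoup(html, "html.parser")))
```

Calling `extract_matches(soup)` inside the page loop would then collect results across all pages; the `None` checks also avoid crashing on header rows, which the question's version would.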
What happens when there are no more pages to go through, will it crash? – DJRodrigue

@DJRodrigue Nope, we bound it by looping `page_number` over `range(2, last_page_number + 1)`, from the smallest to the largest page. – alecxe

It seems to give me an error: `last_page_link = soup.find("span", text="Last").parent['href']` raises `AttributeError: 'NoneType' object has no attribute 'parent'` – DJRodrigue
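Regarding the `AttributeError` in the last comment: if a profile has no "Last" pager link (for example, a team with only one page of matches), `soup.find(...)` returns `None` and `.parent` fails. A hedged sketch of a guard that falls back to a single page, using made-up pager markup for illustration:

```python
import re
from bs4 import BeautifulSoup

def last_page_number(soup, default=1):
    """Return the highest page number, or `default` if no 'Last' link exists."""
    last_span = soup.find("span", string="Last")  # `text=` is the older alias
    if last_span is None or last_span.parent is None:
        return default
    match = re.search(r"page=(\d+)$", last_span.parent.get("href", ""))
    return int(match.group(1)) if match else default

# one-page profile: no pager at all, falls back to 1
print(last_page_number(BeautifulSoup("<div>no pager</div>", "html.parser")))

# multi-page profile (made-up pager markup), finds page 7
print(last_page_number(BeautifulSoup('<a href="?page=7"><span>Last</span></a>', "html.parser")))
```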
