
Download all CSV files from a URL: I want to download all of the CSV files linked from this page. Any idea how I can do this?

from bs4 import BeautifulSoup
import requests

url = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(url, 'html.parser')

# this lists every link on the page, but how do I download just the .csv ones?
for link in soup.findAll("a"):
    print link.get("href")

Do you mean you want to download all of the CSV files that are linked from one page? I think looping over all of the links and checking the file extension is not a bad idea. – martijnn2008
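A minimal sketch of that suggestion, covering just the filtering step: collect every href on the page and keep only the ones ending in .csv. Nothing below comes from the thread itself, and the 'html.parser' backend is an assumption.

from bs4 import BeautifulSoup
import requests

html = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(html, 'html.parser')

# keep only anchors that have an href and whose href ends in .csv
csv_links = [a['href'] for a in soup.find_all('a', href=True)
             if a['href'].endswith('.csv')]
print(csv_links)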

Answers


Something like this should work:

from bs4 import BeautifulSoup
from time import sleep
import requests


if __name__ == '__main__':
    url = requests.get('http://www.football-data.co.uk/englandm.php').text
    soup = BeautifulSoup(url, 'html.parser')
    for link in soup.findAll("a"):
        current_link = link.get("href")
        # skip anchors without an href and anything that is not a CSV
        if current_link and current_link.endswith('csv'):
            print('Found CSV: ' + current_link)
            print('Downloading %s' % current_link)
            sleep(10)  # pause between downloads to be polite to the server
            response = requests.get('http://www.football-data.co.uk/%s' % current_link, stream=True)
            # the hrefs have three path segments; join them into a flat local filename
            fn = current_link.split('/')[0] + '_' + current_link.split('/')[1] + '_' + current_link.split('/')[2]
            with open(fn, "wb") as handle:
                for data in response.iter_content():
                    handle.write(data)
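One caveat with the write loop above: response.iter_content() with no arguments yields the body one byte at a time, which is very slow for files of this size. A hedged variant as a small helper function; the 8192-byte chunk size is just a common choice, not something from the original answer.

import requests

def download_file(url, fn, chunk_size=8192):
    # stream the response to a local file in chunks instead of byte-by-byte
    response = requests.get(url, stream=True)
    response.raise_for_status()
    with open(fn, 'wb') as handle:
        for chunk in response.iter_content(chunk_size=chunk_size):
            if chunk:  # filter out keep-alive chunks
                handle.write(chunk)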

You just need to filter the hrefs, which you can do with the CSS selector a[href$=".csv"]; that finds every href ending in .csv. Then join each one to the base URL, request it, and finally write out the content:

from bs4 import BeautifulSoup
import requests
from urlparse import urljoin  # Python 2; on Python 3 this lives in urllib.parse
from os.path import basename

base = "http://www.football-data.co.uk/"
url = requests.get('http://www.football-data.co.uk/englandm.php').text
soup = BeautifulSoup(url, 'html.parser')

# select only anchors whose href ends in .csv and resolve each against the base URL
for link in (urljoin(base, a["href"]) for a in soup.select('a[href$=".csv"]')):
    # name the local file after the last path segment and write the response body
    with open(basename(link), "wb") as f:
        f.write(requests.get(link).content)
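For reference, urljoin is what turns the relative hrefs on the page into absolute URLs before requesting them. Assuming an href of the three-segment form that the first answer's filename code also expects (the exact path below is made up for illustration), it resolves like this:

from urlparse import urljoin  # Python 2; on Python 3 use urllib.parse

print(urljoin("http://www.football-data.co.uk/", "mmz4281/1617/E0.csv"))
# -> http://www.football-data.co.uk/mmz4281/1617/E0.csv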

That will give you five files, E0.csv, E1.csv, E2.csv, E3.csv and E4.csv, with all of the data inside.
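If you want a quick sanity check on one of the downloaded files, here is a small sketch using the standard csv module; it assumes E0.csv ended up in the current directory, as the script above would leave it.

import csv

with open('E0.csv') as f:
    reader = csv.reader(f)
    header = next(reader)  # first row holds the column names
    rows = list(reader)

print(header[:5])
print(len(rows), 'rows')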