Local HTML file scraping with urllib and BeautifulSoup

I am very new to Python and have spent the past two weeks, close to a hundred hours, working on the code below to scrape local files, learning as much Python as I can from scratch: versions, and packages such as lxml, bs4, requests, urllib, os, and glob.

I am hopelessly stuck on the first step: loading the 12,000 oddly-named HTML files sitting in one directory into BeautifulSoup for parsing. I want to get all of that data into a single CSV file, or even just printed as output, so I can copy it into a file with the clipboard.
```python
import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

#THIS LOCAL FILE WORKS PERFECTLY. I HAVE 12,000 HTML FILES IN THIS DIRECTORY TO PROCESS. HOW?
#my_url = 'file://127.0.0.1/C:\\My Web Sites\\BioFachURLS\\www.organic-bio.com\\en\\company\\1-SUNRISE-FARMS.html'
my_url = 'http://www.organic-bio.com/en/company/23694-MARTHOMI-ALLERGY-FREE-FOODS-GMBH'
uClient = uReq(my_url)
page_html = uClient.read()
uClient.close()

# html parsing
page_soup = soup(page_html, "html.parser")

# grabs each field
contactname = page_soup.findAll("td", {"itemprop": "name"})
contactstreetaddress = page_soup.findAll("td", {"itemprop": "streetAddress"})
contactpostalcode = page_soup.findAll("td", {"itemprop": "postalCode"})
contactaddressregion = page_soup.findAll("td", {"itemprop": "addressRegion"})
contactaddresscountry = page_soup.findAll("td", {"itemprop": "addressCountry"})
contactfax = page_soup.findAll("td", {"itemprop": "faxNumber"})
contactemail = page_soup.findAll("td", {"itemprop": "email"})
contactphone = page_soup.findAll("td", {"itemprop": "telephone"})
contacturl = page_soup.findAll("a", {"itemprop": "url"})

#Outputs as text without tags
Company = contactname[0].text
Address = contactstreetaddress[0].text
Zip = contactpostalcode[0].text
Region = contactaddressregion[0].text
Country = contactaddresscountry[0].text
Fax = contactfax[0].text
Email = contactemail[0].text
Phone = contactphone[0].text
URL = contacturl[0].text

#Prints with comma delimiters
print(Company + ', ' + Address + ', ' + Zip + ', ' + Region + ', ' + Country + ', ' + Fax + ', ' + Email + ', ' + Phone + ', ' + URL)
```
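One way to extend the single-page logic above to a directory of local files is to loop with `glob` and write rows with the `csv` module instead of printing. The sketch below is a suggestion, not the asker's code: the directory path is an assumption taken from the commented-out example, the output filename `companies.csv` and the helper `extract_field` are made up for illustration, and missing fields fall back to an empty string so one incomplete page cannot crash the whole run with an `IndexError`.

```python
import csv
import glob
from bs4 import BeautifulSoup

def extract_field(page_soup, tag, itemprop):
    """Return the text of the first tag with the given itemprop, or '' if absent."""
    cells = page_soup.find_all(tag, {"itemprop": itemprop})
    return cells[0].text.strip() if cells else ""

# (tag, itemprop) pairs matching the fields scraped in the question
FIELDS = [
    ("td", "name"), ("td", "streetAddress"), ("td", "postalCode"),
    ("td", "addressRegion"), ("td", "addressCountry"), ("td", "faxNumber"),
    ("td", "email"), ("td", "telephone"), ("a", "url"),
]

with open("companies.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["Company", "Address", "Zip", "Region", "Country",
                     "Fax", "Email", "Phone", "URL"])
    # Assumed directory from the commented-out local path; adjust to your layout.
    pattern = r"C:\My Web Sites\BioFachURLS\www.organic-bio.com\en\company\*.html"
    for path in glob.glob(pattern):
        # Local files are opened directly; no urlopen needed.
        with open(path, encoding="utf-8") as f:
            page_soup = BeautifulSoup(f.read(), "html.parser")
        writer.writerow([extract_field(page_soup, tag, prop)
                         for tag, prop in FIELDS])
```

Because each row is written as it is parsed, 12,000 files stream through without holding everything in memory, and the resulting CSV opens directly in a spreadsheet instead of going through the clipboard.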
Welcome to Stack Overflow! You can learn [how to ask a good question](http://stackoverflow.com/help/how-to-ask) and create a [Minimal, Complete, and Verifiable](http://stackoverflow.com/help/mcve) example. That makes it easier for us to help you. –