I want to scrape a website with BeautifulSoup like this, and save the result as JSON:

1. From the home page, just the names of the 40 categories.
2. Then go to each category, e.g. (startupstash.com/ideageneration/), where there will be some subcategories.
3. Now go to each subcategory, say the first one, startupstash.com/resource/milanote/, and take the content details.
4. Do the same for all 40 categories + however many subcategories + the details of each subcategory.

Could someone please give me an idea of how to approach this, or a method with BeautifulSoup, or maybe some code? I tried the following:
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}
base_url = "http://startupstash.com/"

# Home page: collect every category link and category name.
req_home_page = requests.get(base_url, headers=headers)
soup = BeautifulSoup(req_home_page.text, "html5lib")
links_tag = soup.find_all('li', {'class': 'categories-menu-item'})
titles_tag = soup.find_all('span', {'class': 'name'})

links, titles = [], []
for link in links_tag:
    links.append(link.a.get('href'))
for title in titles_tag:
    titles.append(title.getText())
print("HOME PAGE TITLES ARE \n", titles)
# HOME PAGE RESULT TITLES FINISH HERE

for i in range(len(links)):
    # Category page: collect the link of every listing in this category.
    req_inside_page = requests.get(links[i], headers=headers)
    page_store = BeautifulSoup(req_inside_page.text, "html5lib")
    jump_to_next = page_store.find_all('div', {'class': 'company-listing more'})
    nextlinks = []
    for div in jump_to_next:
        nextlinks.append(div.a.get("href"))
    print("DETAIL OF THE LINKS IN EVERY CATEGORY SCRAPED HERE \n", nextlinks)

    for j in range(len(nextlinks)):
        # Listing page: pull out the description text.
        req_final_page = requests.get(nextlinks[j], headers=headers)
        page_stored = BeautifulSoup(req_final_page.text, 'html5lib')
        detail_content = page_stored.find('div', {'class': 'company-page-body body'})
        details, website = [], []
        for content in detail_content:
            details.append(content.string)
        print("DESCRIPTION ABOUT THE WEBSITE \n", details)

        # The external link lives in the contact-details table.
        detail_website = page_stored.find('div', {'id': "company-page-contact-details"})
        table = detail_website.find('table')
        for tr in table.find_all('tr')[2:]:
            tds = tr.find_all('td')[1:]
            for td in tds:
                website.append(td.a.get('href'))
        print("VISIT THE WEBSITE \n", website)
What exactly is your question? Please describe what you tried and what you couldn't achieve. Nobody is going to write the whole scraper for you. – VeGABAU
@VeGABAU ..I just need an approach for this whole site.. from the first page I need all the category names, second go into each category, and third take the details section from the third page..... – pupu
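For the approach being asked about, one option is to factor the three page levels into small functions that each fetch one page and return plain data, then compose them into a single nested dict and dump it as JSON. This is a sketch under the assumption that the CSS classes from the attempt above (categories-menu-item, company-listing more, company-page-body body) still match the live markup; get_soup and the other function names are made up here:

import json
import time

import requests
from bs4 import BeautifulSoup

HEADERS = {'User-Agent': 'Mozilla/5.0'}

def get_soup(url):
    # One fetch-and-parse helper shared by all three levels.
    resp = requests.get(url, headers=HEADERS)
    return BeautifulSoup(resp.text, "html5lib")

def scrape_home(base_url):
    # Level 1: map category name -> category URL from the home page menu.
    soup = get_soup(base_url)
    return {li.get_text(strip=True): li.a.get('href')
            for li in soup.find_all('li', {'class': 'categories-menu-item'})}

def scrape_category(cat_url):
    # Level 2: the listing links inside one category page.
    soup = get_soup(cat_url)
    return [div.a.get('href')
            for div in soup.find_all('div', {'class': 'company-listing more'})]

def scrape_resource(res_url):
    # Level 3: the description text of one listing page.
    soup = get_soup(res_url)
    body = soup.find('div', {'class': 'company-page-body body'})
    return body.get_text(strip=True) if body else None

result = {}
for name, cat_url in scrape_home("http://startupstash.com/").items():
    result[name] = {}
    for res_url in scrape_category(cat_url):
        result[name][res_url] = scrape_resource(res_url)
        time.sleep(1)  # small delay between requests so the crawl stays polite

with open('startupstash.json', 'w') as f:
    json.dump(result, f, indent=2, ensure_ascii=False)

Keeping each level in its own function makes it easy to test one page at a time before running the full 40-category crawl.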