I am writing a Scrapy spider in Python that traverses a travel website. The site is structured as follows:
Continents
    North America
        USA
            lat: 123
            long: 456
        Canada
            lat: 123
            long: 456
    South America
        Brazil
            lat: 456
            long: 789
        Peru
            lat: 123
            long: 456
I have figured out how to crawl to each country page using the script below, but I am having difficulty storing the scraped information, specifically the latitude/longitude data.
import scrapy


class WorldSpider(scrapy.Spider):
    name = "world"

    def start_requests(self):
        # Scrapy requires an absolute URL with a scheme.
        urls = [
            'http://www.world.com'
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # CONTINENT_SELECTOR is a placeholder for the site's real CSS selector.
        for link in response.css(CONTINENT_SELECTOR):
            continent = link.css('a::attr(href)').extract_first()
            if continent is not None:
                # response.follow() resolves relative URLs itself,
                # so an explicit urljoin() is unnecessary.
                yield response.follow(continent, callback=self.parse_continent)

    def parse_continent(self, continent_response):
        country_urls = continent_response.css(COUNTRY_SELECTOR)
        if len(country_urls) == 0:
            # This branch is entered when the spider is on a country page (e.g. USA, Canada).
            # TODO figure out how to store this to a text file or append to a JSON object
            yield {
                'country': continent_response.css(TITLE_SELECTOR).extract_first(),
                'latitude': continent_response.css(LATITUDE_SELECTOR).extract_first(),
                'longitude': continent_response.css(LONGITUDE_SELECTOR).extract_first()
            }
        for link in country_urls:
            country = link.css('a::attr(href)').extract_first()
            if country is not None:
                yield continent_response.follow(country, callback=self.parse_continent)
How do I write this information to a file or a JSON object? Ideally, the data structure would capture the structure of the website.
For example:
{
    "continents": [
        {"North America": [
            {"country": {"title": "USA", "latitude": 123, "longitude": 456}},
            {"country": {"title": "Canada", "latitude": 123, "longitude": 456}}
        ]},
        {"South America": [
            {"country": {"title": "Brazil", "latitude": 456, "longitude": 789}},
            {"country": {"title": "Peru", "latitude": 123, "longitude": 456}}
        ]}
    ]
}
How should I modify my spider to achieve this?
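One approach, assuming the spider is changed to yield flat items that also carry the continent name (e.g. passed down via `cb_kwargs`), is to rebuild the nested shape above in a post-processing step. This is a sketch; `group_by_continent` is a hypothetical helper, not part of Scrapy:

```python
from collections import defaultdict

def group_by_continent(items):
    # Group flat items such as
    # {'continent': 'North America', 'country': 'USA', 'latitude': 123, 'longitude': 456}
    # into the nested {"continents": [...]} structure shown in the question.
    grouped = defaultdict(list)
    for item in items:
        grouped[item['continent']].append({
            'country': {
                'title': item['country'],
                'latitude': item['latitude'],
                'longitude': item['longitude'],
            }
        })
    # One {continent_name: [countries]} object per continent, in first-seen order.
    return {'continents': [{name: countries} for name, countries in grouped.items()]}
```

The result can then be dumped once with `json.dump()` after the crawl finishes.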
You need an [**item pipeline**](https://doc.scrapy.org/zh/latest/topics/item-pipeline.html#write-items-to-a-json-file). – Jan
@Jan Thanks for the input. I'm still learning Scrapy, so knowing what's in the docs helps. Thanks! – GobiasKoffi
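Along the lines of the docs page linked above, a minimal pipeline might look like this (a sketch; the class name and output filename are illustrative). It writes each scraped item as one JSON object per line (JSON Lines), which avoids holding the whole result in memory:

```python
import json

class JsonWriterPipeline:
    # Writes each scraped item as a single JSON line.
    # Enabled by registering it in ITEM_PIPELINES in settings.py, e.g.
    # ITEM_PIPELINES = {'myproject.pipelines.JsonWriterPipeline': 300}
    # (the module path here is an assumption about your project layout).

    def open_spider(self, spider):
        self.file = open('countries.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item)) + '\n')
        return item
```

Note that for a plain "dump everything to one JSON file" requirement, Scrapy's built-in feed exports (`scrapy crawl world -o countries.json`) may be enough without a custom pipeline.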