I wrote a spider whose sole purpose is to extract a single number from http://www.funda.nl/koop/amsterdam/, namely the maximum page number from the pager at the bottom (for example, the number 255 in the example below). However, the Scrapy feed output contains the desired output multiple times instead of once.
I managed to extract the number using a LinkExtractor with a regex that matches the URLs of these pager pages. The spider is shown below:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess
from Funda.items import MaxPageItem


class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]
    # Link to a page containing thumbnails of several houses,
    # such as http://www.funda.nl/koop/amsterdam/p10/
    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])
    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        page_numbers = []
        for link in links:
            # Select only pages with a link depth of 3
            if link.url.count('/') == 6 and link.url.endswith('/'):
                # For example, get the number 10 out of the string
                # 'http://www.funda.nl/koop/amsterdam/p10/'
                page_number = int(link.url.split("/")[-2].strip('p'))
                page_numbers.append(page_number)
        max_page_number = max(page_numbers)
        print("The maximum page number is %s" % max_page_number)
        yield {'max_page_number': max_page_number}
If I run this with feed output by entering scrapy crawl Funda_max_pages -o funda_max_pages.json on the command line, the resulting JSON file looks like this:
[
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257}
]
What I find strange is that the dictionary is output 7 times instead of once. After all, the yield statement is outside the for loop. Can anyone explain this behavior?
You are still crawling 7 URLs; you just write the same 'max_page_number: 257' item to the file 7 times... – Granitosaurus
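As the comment suggests, the rule matches seven pagination URLs, each one is downloaded, and the callback (including its yield) runs once per response. Every response sees roughly the same set of pager links, so each invocation computes and yields the same maximum. A minimal sketch of that per-response logic, outside Scrapy (the sample URLs below are illustrative, not actual crawl results):

```python
def extract_page_number(url):
    """Return N from a pager URL ending in '/pN/'.

    For example 'http://www.funda.nl/koop/amsterdam/p10/' -> 10.
    """
    return int(url.rstrip('/').rsplit('/', 1)[-1].lstrip('p'))


# Hypothetical set of pager links found on one response; every
# crawled pager page contains a similar set, so running this once
# per response yields the same maximum each time.
links = [
    'http://www.funda.nl/koop/amsterdam/p2/',
    'http://www.funda.nl/koop/amsterdam/p10/',
    'http://www.funda.nl/koop/amsterdam/p255/',
]

max_page = max(extract_page_number(u) for u in links)
print(max_page)  # -> 255
```

Because the callback fires once per crawled response rather than once per crawl, one way to get a single item is to parse the pager only on the start page (e.g. with a plain scrapy.Spider and a parse method), so the yield executes exactly once.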