I'm learning Scrapy. I followed the tutorial at https://realpython.com/blog/python/web-scraping-with-scrapy-and-mongodb/ and everything went fine. Then I started a new, simple project to extract data from Wikipedia, and Scrapy crawls 0 pages. This is the output:
C:\Users\Leo\Documenti\PROGRAMMAZIONE\SORGENTI\Python\wikiScraper>scrapy crawl wiki
2015-09-07 02:28:59 [scrapy] INFO: Scrapy 1.0.3 started (bot: wikiScraper)
2015-09-07 02:28:59 [scrapy] INFO: Optional features available: ssl, http11
2015-09-07 02:28:59 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'wikiScraper.spiders', 'SPIDER_MODULES': ['wikiScraper.spiders'], 'BOT_NAME': 'wikiScraper'}
2015-09-07 02:28:59 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-09-07 02:28:59 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-09-07 02:28:59 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-09-07 02:28:59 [scrapy] INFO: Enabled item pipelines:
2015-09-07 02:28:59 [scrapy] INFO: Spider opened
2015-09-07 02:28:59 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-09-07 02:28:59 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-09-07 02:29:00 [scrapy] DEBUG: Crawled (200) <GET https://it.wikipedia.org/wiki/Serie_A_2015-2016> (referer: None)
[]
2015-09-07 02:29:00 [scrapy] INFO: Closing spider (finished)
2015-09-07 02:29:00 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 236,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 55474,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2015, 9, 7, 0, 29, 0, 355000),
 'log_count/DEBUG': 2,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2015, 9, 7, 0, 28, 59, 671000)}
2015-09-07 02:29:00 [scrapy] INFO: Spider closed (finished)
This is my wiki_spider.py:
# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.selector import Selector

from wikiScraper.items import WikiItem


class WikiSpider(Spider):
    name = "wiki"
    allowed_domains = ["wikipedia.it"]
    start_urls = [
        "http://it.wikipedia.org/wiki/Serie_A_2015-2016",
    ]

    def parse(self, response):
        questions = Selector(response).xpath(
            '//*[@id="mw-content-text"]/center/table/tbody/tr')
        print(questions)
        for question in questions:
            item = WikiItem()
            item['position'] = question.xpath('td[2]/text()').extract()
            item['team'] = question.xpath('td[3]/a/text()').extract()
            item['point'] = question.xpath('td[4]/b/text()').extract()
            yield item
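For reference, WikiItem lives in wikiScraper/items.py. A minimal sketch matching the three keys the spider fills in (assuming plain scrapy.Field() declarations, as in the standard project layout):

# wikiScraper/items.py -- minimal sketch; field names match the keys
# assigned in parse() above.
import scrapy


class WikiItem(scrapy.Item):
    position = scrapy.Field()  # ranking column (td[2])
    team = scrapy.Field()      # team name link (td[3]/a)
    point = scrapy.Field()     # points column (td[4]/b)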
In Chrome's developer tools I can successfully select all the data I want to extract with this XPath selector. But if I try:
response.xpath('//*[@id="mw-content-text"]/center/table/tbody/tr')
at the command prompt, or if I print(questions), it gives me an empty list:
[]
Thanks! Any help is appreciated!
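For context, the check described above looks roughly like this in scrapy shell. The tbody-free variant at the end is only a hypothetical comparison I haven't verified: browsers are known to insert <tbody> elements while rendering tables, so an XPath copied from devtools may not match the raw HTML that Scrapy downloads.

# Illustrative scrapy shell session, started with:
#   scrapy shell "https://it.wikipedia.org/wiki/Serie_A_2015-2016"

# The XPath copied from Chrome devtools (what I actually tried):
rows = response.xpath('//*[@id="mw-content-text"]/center/table/tbody/tr')
print(rows)  # -> [] in my case

# Hypothetical comparison: same path without the browser-inserted /tbody/
rows = response.xpath('//*[@id="mw-content-text"]/center/table/tr')
print(rows)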