
I am learning Scrapy. I worked through the tutorial at https://realpython.com/blog/python/web-scraping-with-scrapy-and-mongodb/ and everything went smoothly. Then I started a new, simple project to extract data from Wikipedia, but Scrapy crawls 0 pages. This is the output:

C:\Users\Leo\Documenti\PROGRAMMAZIONE\SORGENTI\Python\wikiScraper>scrapy crawl wiki
2015-09-07 02:28:59 [scrapy] INFO: Scrapy 1.0.3 started (bot: wikiScraper)
2015-09-07 02:28:59 [scrapy] INFO: Optional features available: ssl, http11
2015-09-07 02:28:59 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'wikiScraper.spiders', 'SPIDER_MODULES': ['wikiScraper.spiders'], 'BOT_NAME': 'wikiScraper'}
2015-09-07 02:28:59 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-09-07 02:28:59 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-09-07 02:28:59 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-09-07 02:28:59 [scrapy] INFO: Enabled item pipelines:
2015-09-07 02:28:59 [scrapy] INFO: Spider opened
2015-09-07 02:28:59 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-09-07 02:28:59 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-09-07 02:29:00 [scrapy] DEBUG: Crawled (200) <GET https://it.wikipedia.org/wiki/Serie_A_2015-2016> (referer: None)
[] 
2015-09-07 02:29:00 [scrapy] INFO: Closing spider (finished) 
2015-09-07 02:29:00 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 236, 
'downloader/request_count': 1, 
'downloader/request_method_count/GET': 1, 
'downloader/response_bytes': 55474, 
'downloader/response_count': 1, 
'downloader/response_status_count/200': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2015, 9, 7, 0, 29, 0, 355000), 
'log_count/DEBUG': 2, 
'log_count/INFO': 7, 
'response_received_count': 1, 
'scheduler/dequeued': 1, 
'scheduler/dequeued/memory': 1, 
'scheduler/enqueued': 1, 
'scheduler/enqueued/memory': 1, 
'start_time': datetime.datetime(2015, 9, 7, 0, 28, 59, 671000)} 
2015-09-07 02:29:00 [scrapy] INFO: Spider closed (finished) 

Here is my wiki_spider.py:

# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.selector import Selector
from wikiScraper.items import WikiItem


class WikiSpider(Spider):

    name = "wiki"
    allowed_domains = ["wikipedia.it"]
    start_urls = [
        "http://it.wikipedia.org/wiki/Serie_A_2015-2016",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//*[@id="mw-content-text"]/center/table/tbody/tr')

        print(questions)

        for question in questions:
            item = WikiItem()
            item['position'] = question.xpath('td[2]/text()').extract()
            item['team'] = question.xpath('td[3]/a/text()').extract()
            item['point'] = question.xpath('td[4]/b/text()').extract()
            yield item
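The question does not include wikiScraper/items.py; a minimal definition consistent with the three fields the spider fills in would look like this (a reconstruction, not the actual file):

# items.py -- sketch; the question does not show the real file
import scrapy


class WikiItem(scrapy.Item):
    position = scrapy.Field()
    team = scrapy.Field()
    point = scrapy.Field()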

In Chrome's developer tools I can successfully select all the data I want to extract with this XPath selector.

But if I try this at the command prompt:

response.xpath('//*[@id="mw-content-text"]/center/table/tbody/tr') 

and print(questions), it gives me an empty list:

[] 
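For reference, the same check can be reproduced interactively with the Scrapy shell; a minimal session (the empty list matches the result above):

scrapy shell "https://it.wikipedia.org/wiki/Serie_A_2015-2016"
>>> questions = response.xpath('//*[@id="mw-content-text"]/center/table/tbody/tr')
>>> questions
[]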

Thanks! Any help is appreciated!

Answer


The actual problem is the tbody in the XPath expression: that element is added by the browser and does not exist in the HTML that Scrapy receives. I would also anchor on the "Classifica" heading text to locate the table with the current Serie A standings, rather than relying on a brittle absolute path. Updated code:

def parse(self, response):
    # find the table that follows the "Classifica" heading; [1:] skips the header row
    questions = response.xpath('//h2[span = "Classifica"]/following-sibling::center/table//tr')[1:]

    for question in questions:
        item = WikiItem()
        item['position'] = question.xpath('td[2]/text()').extract()[0]
        item['team'] = question.xpath('td[3]/a/text()').extract()[0]
        item['point'] = question.xpath('td[4]/b/text()').extract()[0]
        yield item
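If cleaner values are wanted (integers instead of strings like u'1.'), a small item pipeline can post-process each item. A minimal sketch; the class name WikiScraperPipeline and the conversions are illustrative, not part of the answer above:

# pipelines.py -- hypothetical cleanup step; enable it via ITEM_PIPELINES in settings.py
class WikiScraperPipeline(object):
    def process_item(self, item, spider):
        # u'1.' -> 1: strip the trailing dot from the position column
        item['position'] = int(item['position'].rstrip('.'))
        # u'6' -> 6: points are plain digits in this table
        item['point'] = int(item['point'])
        return item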

It produces:

{'position': u'1.', 'point': u'6', 'team': u'Chievo'} 
{'position': u'1.', 'point': u'6', 'team': u'Torino'} 
{'position': u'1.', 'point': u'6', 'team': u'Inter'} 
{'position': u'1.', 'point': u'6', 'team': u'Sassuolo'} 
{'position': u'1.', 'point': u'6', 'team': u'Palermo'} 
{'position': u'6.', 'point': u'4', 'team': u'Sampdoria'} 
{'position': u'6.', 'point': u'4', 'team': u'Roma'} 
{'position': u'8.', 'point': u'3', 'team': u'Atalanta'} 
{'position': u'8.', 'point': u'3', 'team': u'Genoa'} 
{'position': u'8.', 'point': u'3', 'team': u'Fiorentina'} 
{'position': u'8.', 'point': u'3', 'team': u'Udinese'} 
{'position': u'8.', 'point': u'3', 'team': u'Milan'} 
{'position': u'8.', 'point': u'3', 'team': u'Lazio'} 
{'position': u'14.', 'point': u'1', 'team': u'Napoli'} 
{'position': u'14.', 'point': u'1', 'team': u'Verona'} 
{'position': u'16.', 'point': u'0', 'team': u'Bologna'} 
{'position': u'16.', 'point': u'0', 'team': u'Juventus'} 
{'position': u'16.', 'point': u'0', 'team': u'Empoli'} 
{'position': u'16.', 'point': u'0', 'team': u'Frosinone'} 
{'position': u'16.', 'point': u'0', 'team': u'Carpi'}
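To persist these items instead of just logging them, Scrapy's built-in feed export can be used from the command line; for example (the output filename is arbitrary):

scrapy crawl wiki -o standings.json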