
I am trying to figure out how Scrapy works and to use it to pull information from a forum. Scrapy produces no results (it crawls 0 pages).

items.py

import scrapy 


class BodybuildingItem(scrapy.Item): 
    # define the fields for your item here like: 
    title = scrapy.Field() 
    pass 

spider.py

from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from bodybuilding.items import BodybuildingItem

class BodyBuildingSpider(BaseSpider):
    name = "bodybuilding"
    allowed_domains = ["forum.bodybuilding.nl"]
    start_urls = [
        "https://forum.bodybuilding.nl/fora/supplementen.22/"
    ]

    def parse(self, response):
        responseSelector = Selector(response)
        for sel in responseSelector.css('li.past.line.event-item'):
            item = BodybuildingItem()
            item['title'] = sel.css('a.data-previewUrl::text').extract()
            yield item
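For reference, the spider is started from the project directory via its name attribute (assuming the usual scrapy startproject layout):

scrapy crawl bodybuilding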

The forum I am trying to get the titles from in this example is: https://forum.bodybuilding.nl/fora/supplementen.22/

But I keep getting no results:

2017-10-07 00:42:28 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: bodybuilding)
2017-10-07 00:42:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'bodybuilding.spiders', 'SPIDER_MODULES': ['bodybuilding.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'bodybuilding'}
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled extensions: ['scrapy.extensions.memusage.MemoryUsage', 'scrapy.extensions.logstats.LogStats', 'scrapy.extensions.corestats.CoreStats']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled downloader middlewares: ['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware', 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 'scrapy.downloadermiddlewares.retry.RetryMiddleware', 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled spider middlewares: ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 'scrapy.spidermiddlewares.referer.RefererMiddleware', 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-07 00:42:28 [scrapy.middleware] INFO: Enabled item pipelines: []
2017-10-07 00:42:28 [scrapy.core.engine] INFO: Spider opened
2017-10-07 00:42:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-07 00:42:28 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://forum.bodybuilding.nl/robots.txt> (referer: None)
2017-10-07 00:42:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://forum.bodybuilding.nl/fora/supplementen.22/> (referer: None)
2017-10-07 00:42:29 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-07 00:42:29 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'downloader/request_bytes': 469, 'downloader/request_count': 2, 'downloader/request_method_count/GET': 2, 'downloader/response_bytes': 22878, 'downloader/response_count': 2, 'downloader/response_status_count/200': 1, 'downloader/response_status_count/404': 1, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2017, 10, 6, 22, 42, 29, 223305), 'log_count/DEBUG': 2, 'log_count/INFO': 7, 'memusage/max': 31735808, 'memusage/startup': 31735808, 'response_received_count': 2, 'scheduler/dequeued': 1, 'scheduler/dequeued/memory': 1, 'scheduler/enqueued': 1, 'scheduler/enqueued/memory': 1, 'start_time': datetime.datetime(2017, 10, 6, 22, 42, 28, 816043)}
2017-10-07 00:42:29 [scrapy.core.engine] INFO: Spider closed (finished)

I have been following this guide: http://blog.florian-hopf.de/2014/07/scrapy-and-elasticsearch.html

Update 1:

As someone pointed out, I needed to update my code to the new standard. I did, but it did not change the result:

from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from bodybuilding.items import BodybuildingItem

class BodyBuildingSpider(BaseSpider):
    name = "bodybuilding"
    allowed_domains = ["forum.bodybuilding.nl"]
    start_urls = [
        "https://forum.bodybuilding.nl/fora/supplementen.22/"
    ]

    def parse(self, response):
        for sel in response.css('li.past.line.event-item'):
            item = BodybuildingItem()
            item['title'] = sel.css('a.data-previewUrl::text').extract_first()
            yield item

Latest update, with the fix

After some great help I finally got it working with this spider:

import scrapy

class BlogSpider(scrapy.Spider):
    name = 'bodybuilding'
    start_urls = ['https://forum.bodybuilding.nl/fora/supplementen.22/']

    def parse(self, response):
        for title in response.css('h3.title'):
            yield {'title': title.css('a::text').extract_first()}

        next_page_url = response.xpath("//a[text()='Volgende >']/@href").extract_first()
        if next_page_url:
            yield response.follow(next_page_url, callback=self.parse)

You should use response.css('li.past.line.event-item'), and there is no need for responseSelector = Selector(response). Also, the CSS you are using is not valid anymore, so you need to update it based on the latest web page. –


I think I have updated everything now, but I still get no results. See the update. – Nerotix


The problem is that nothing on the page matches li.past.line.event-item. –

Answer


You should use response.css('li.past.line.event-item'), and there is no need for responseSelector = Selector(response).

Also, the CSS you are using, li.past.line.event-item, is not valid anymore, so you first need to update it based on the latest web page.
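A quick way to verify this is the scrapy shell, where the old selector comes back empty (illustrative session):

scrapy shell "https://forum.bodybuilding.nl/fora/supplementen.22/"
>>> response.css('li.past.line.event-item')
[]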

Then, to get the next page URL, you can use:

>>> response.css("a.text::attr(href)").extract_first() 
'fora/supplementen.22/page-2' 

And then use response.follow to follow this relative URL.

Edit 2: next page handling, corrected

The previous edit did not work, because on the next page the selector matched the previous page's URL, so you need to use the following instead:

next_page_url = response.xpath("//a[text()='Volgende >']/@href").extract_first() 
if next_page_url: 
    yield response.follow(next_page_url, callback=self.parse) 
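Anchoring the XPath on the literal link text 'Volgende >' ('Next' in Dutch) pins it to the one link that always points forward, avoiding the ambiguity that let the earlier a.text CSS selector match the previous-page link as well.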

Edit 1: next page handling

next_page_url = response.css("a.text::attr(href)").extract_first() 
if next_page_url: 
    yield response.follow(next_page_url, callback=self.parse) 

This is what it looks like now:
for next_page in response.css("a.text::attr(href)").extract_first():
    yield response.follow(next_page, self.parse)
But I get the error "TypeError: 'NoneType' object is not iterable". It also tells me there is a problem on line 11, which is the for loop I just showed. – Nerotix
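The TypeError in that comment comes from iterating over .extract_first(), which returns a single string, or None when nothing matches; None is not iterable. A minimal sketch of the two working patterns (both belong inside parse):

# follow a single next-page link: test the one value from .extract_first()
next_page = response.css("a.text::attr(href)").extract_first()
if next_page is not None:
    yield response.follow(next_page, self.parse)

# or iterate over .extract(), which always returns a list (possibly empty)
for href in response.css("a.text::attr(href)").extract():
    yield response.follow(href, self.parse)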


@Nerotix, please check the edit. –


Hmm, something weird happens when I add that... it does the first page again first, then the second, then the first again, but it never gets past page 2. – Nerotix