2017-10-28 116 views

Can you help me? I am stuck trying to understand why Splash is not rendering the HTML response: the Splash response does not come back as rendered HTML.

  • First, I log in successfully with a scrapy FormRequest.
  • Then I make a SplashRequest against the render.html endpoint. However, when I print response.body, the page has not been rendered.

Additional information:
  • The page loads more results as you scroll down.
  • page.com is not the real site.

Thanks in advance!

    import scrapy
    from scrapy_splash import SplashRequest, SplashFormRequest

    class LoginSpider(scrapy.Spider):
        name = 'page'
        start_urls = ['https://www.page.com']

        def parse(self, response):
            return scrapy.FormRequest(
                'https://www.page.com/login/loginInitAction.do?method=processLogin',
                formdata={'username': 'userid', 'password': 'key', 'remember': 'on'},
                callback=self.after_login,
            )

        def after_login(self, response):
            yield SplashRequest(
                "https://www.page.com/search/all/simple?typeaheadTermType=&typeaheadTermId=&searchType=21&keywords=&pageValue=22",
                self.parse_page2,
                meta={
                    'splash': {
                        'endpoint': 'render.html',
                        'args': {'wait': 10, 'render_all': 1, 'html': 1},
                    }
                })

        def parse_page2(self, response):
            print(response.body)
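For completeness, the log below shows Splash running at http://192.168.0.20:8050. A minimal scrapy-splash setup in settings.py (the middleware entries and orders come from the scrapy-splash README; the Splash URL is taken from the log) looks roughly like this:

```python
# settings.py -- minimal scrapy-splash wiring (illustrative)
SPLASH_URL = 'http://192.168.0.20:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Needed so requests with the same Splash arguments are deduplicated correctly
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
```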

Console output:

2017-10-28 11:53:43 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot) 
2017-10-28 11:53:43 [scrapy.utils.log] INFO: Overridden settings: {'SPIDER_LOADER_WARN_ONLY': True} 
2017-10-28 11:53:43 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.corestats.CoreStats', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.logstats.LogStats'] 
2017-10-28 11:53:43 [scrapy.middleware] INFO: Enabled downloader 
middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy_splash.SplashCookiesMiddleware', 
'scrapy_splash.SplashMiddleware', 
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
2017-10-28 11:53:43 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy_splash.SplashDeduplicateArgsMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-10-28 11:53:43 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-10-28 11:53:43 [scrapy.core.engine] INFO: Spider opened 
2017-10-28 11:53:43 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2017-10-28 11:53:43 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023 
2017-10-28 11:53:44 [scrapy.downloadermiddlewares.redirect] DEBUG: 
Redirecting (301) to <GET https://www.page.com/technology/home.jsp> from 
<GET https://www.page.com> 
2017-10-28 11:53:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.page.com/technology/home.jsp> (referer: None) 
2017-10-28 11:53:45 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.page.com/login/loginInitAction.do?method=processLogin> (referer: https://www.page.com/technology/home.jsp) 
2017-10-28 11:53:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.page.com/search/all/simple?typeaheadTermType=&typeaheadTermId=&searchType=21&keywords=&pageValue=1 via http://192.168.0.20:8050/render.html> (referer: None) 

Answer


To stay logged in you need to send the session cookie, but scrapy-splash does not handle cookies when using the render.html endpoint. Try the following to make cookies work:

import scrapy 
from scrapy_splash import SplashRequest 

script = """ 
function main(splash) 
    splash:init_cookies(splash.args.cookies) 
    assert(splash:go(splash.args.url)) 
    assert(splash:wait(0.5)) 

    return { 
    url = splash:url(), 
    cookies = splash:get_cookies(), 
    html = splash:html(), 
    } 
end 
""" 

class MySpider(scrapy.Spider):

    # ...
    def parse(self, response):
        # ...
        yield SplashRequest(url, self.parse_result,
            endpoint='execute',
            cache_args=['lua_source'],
            args={'lua_source': script},
        )

This example is adapted from the scrapy-splash README; see here for a better explanation of why this is needed.
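Separately, since the question mentions that the page only loads more results as you scroll, the Lua script can be extended to scroll before grabbing the HTML. The number of scroll steps and the wait times below are guesses and would need tuning against the real page:

```python
# Variant of the script above that scrolls to trigger lazy loading
# (scroll count and waits are illustrative, not tested against the real site)
script = """
function main(splash)
    splash:init_cookies(splash.args.cookies)
    assert(splash:go(splash.args.url))
    assert(splash:wait(1))

    -- scroll down a few times so lazily loaded results render
    for _ = 1, 5 do
        splash:runjs("window.scrollTo(0, document.body.scrollHeight)")
        assert(splash:wait(1))
    end

    return {
        url = splash:url(),
        cookies = splash:get_cookies(),
        html = splash:html(),
    }
end
"""
```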