2017-02-09 171 views

Scrapy spider closes prematurely

I wrote a Scrapy spider to scrape a few thousand URL links that I have stored in a database. The spider calls scrapy.Request, passing in URLs fetched from the database. However, after crawling 1-2 pages, the spider closes prematurely (with no errors). I don't understand why this is happening. Code:

# -*- coding: utf-8 -*-
import scrapy
import olsDBUtil
import tokopediautil
from datetime import datetime
import time

import logging
from scrapy.utils.log import configure_logging


class DataproductSpider(scrapy.Spider):

    dbObj = olsDBUtil.olsDBUtil()
    name = "dataProduct"
    allowed_domains = ["tokopedia.com"]
    newProductLink = list(dbObj.getNewProductLinks(10))
    start_urls = list(newProductLink.pop())
    # start_urls = dbObj.getNewProductLinks(NumOfLinks=2)

    tObj = tokopediautil.TokopediaUtil()

    configure_logging(install_root_handler=False)
    logging.basicConfig(
        filename='log.txt',
        format='%(levelname)s: %(message)s',
        level=logging.INFO
    )

    def parse(self, response):

        if response.status == 200:
            thisIsProductPage = response.selector.xpath(
                "/html/head/meta[@property='og:type']/@content").extract()[0] == 'product'
            if thisIsProductPage:
                vProductID = self.dbObj.getProductIDbyURL(response.url)
                vProductName = response.selector.xpath(
                    "//input[@type='hidden'][@name='product_name']/@value").extract()[0]
                vProductDesc = response.selector.xpath(
                    "//p[@itemprop='description']/text()").extract()[0]
                vProductPrice = response.selector.xpath(
                    "/html/head/meta[@property='product:price:amount']/@content").extract()[0]
                vSiteProductID = response.selector.xpath(
                    "//input[@type='hidden'][@name='product_id']/@value").extract()[0]
                vProductCategory = response.selector.xpath(
                    "//ul[@itemprop='breadcrumb']//text()").extract()[1:-1]
                vProductCategory = ' - '.join(vProductCategory)
                vProductUpdated = response.selector.xpath(
                    "//small[@class='product-pricelastupdated']/i/text()").extract()[0][26:36]
                vProductUpdated = datetime.strptime(vProductUpdated, '%d-%M-%Y')
                vProductVendor = response.selector.xpath(
                    "//a[@id='shop-name-info']/text()").extract()[0]

                vProductStats = self.tObj.getItemSold(vSiteProductID)
                vProductSold = vProductStats['item_sold']
                vProductViewed = self.tObj.getProductView(vSiteProductID)
                vSpecificPortalData = "item-sold - %s , Transaction Success - %s , Transaction Rejected - %s " % (
                    vProductStats['item_sold'], vProductStats['success'], vProductStats['reject'])

                print "productID      : " + str(vProductID)
                print "product Name   : " + vProductName
                print "product Desc   : " + vProductDesc
                print "Product Price  : " + str(vProductPrice)
                print "Product SiteID : " + str(vSiteProductID)
                print "Category       : " + vProductCategory
                print "Product Updated: " + vProductUpdated.strftime('%Y-%m-%d')
                print "Product Vendor : " + vProductVendor
                print "Product Sold   : " + str(vProductSold)
                print "Product Viewed : " + str(vProductViewed)
                print "Site Specific Info: " + vSpecificPortalData

                self.dbObj.storeNewProductData(
                    productID=vProductID,
                    productName=vProductName,
                    productPrice=vProductPrice,
                    productSiteProdID=vSiteProductID,
                    productVendor=vProductVendor,
                    productDesc=vProductDesc,
                    productQtyDilihat=vProductViewed,
                    productTerjual=vProductSold,
                    productCategory=vProductCategory,
                    productSiteSpecificInfo=vSpecificPortalData
                )

                self.dbObj.storeProductRunningData(
                    productID=vProductID,
                    productDilihat=str(vProductViewed),
                    productTerjual=str(vProductSold)
                )

        else:
            print "Error Logged : Page Call Error"

        LinkText = str(self.newProductLink.pop())
        print "LinkText : %s" % LinkText
        print "Total newProductLink is %s" % str(len(self.newProductLink))

        yield scrapy.Request(url=LinkText, callback=self.parse)

Here is the scrapy log:

INFO: Scrapy 1.3.0 started (bot: tokopedia) 
INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tokopedia.spiders', 'HTTPCACHE_EXPIRATION_SECS': 1800, 'SPIDER_MODULES': ['tokopedia.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'tokopedia', 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'} 
INFO: Enabled extensions: 
['scrapy.extensions.logstats.LogStats', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats', 
'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware'] 
INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
INFO: Enabled item pipelines: 
[] 
INFO: Spider opened 
INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
DEBUG: Telnet console listening on 127.0.0.1:6023 
DEBUG: Crawled (200) <GET https://www.tokopedia.com/karmedia/penjelasan-pembatal-keislaman> (referer: None) 
DEBUG: Starting new HTTPS connection (1): js.tokopedia.com 
DEBUG: https://js.tokopedia.com:443 "GET /productstats/check?pid=27455429 HTTP/1.1" 200 61 
DEBUG: Starting new HTTPS connection (1): www.tokopedia.com 
DEBUG: https://www.tokopedia.com:443 "GET /provi/check?pid=27455429&callback=show_product_view HTTP/1.1" 200 31 
INFO: Closing spider (finished) 
INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 333, 
'downloader/request_count': 1, 
'downloader/request_method_count/GET': 1, 
'downloader/response_bytes': 20815, 
'downloader/response_count': 1, 
'downloader/response_status_count/200': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2017, 2, 10, 18, 4, 10, 355000), 
'httpcache/firsthand': 1, 
'httpcache/miss': 1, 
'httpcache/store': 1, 
'log_count/DEBUG': 6, 
'log_count/INFO': 7, 
'offsite/filtered': 1, 
'request_depth_max': 1, 
'response_received_count': 1, 
'scheduler/dequeued': 1, 
'scheduler/dequeued/memory': 1, 
'scheduler/enqueued': 1, 
'scheduler/enqueued/memory': 1, 
'start_time': datetime.datetime(2017, 2, 10, 18, 4, 8, 922000)} 
INFO: Spider closed (finished) 

Can you share the log and the stats from the end of the run? –


Hello Paul... sorry, I'm new to scrapy.. how do I view the log in scrapy? – Zhermanus


Your spider only had 1 URL to crawl, https://www.tokopedia.com/toko388/tile-8x17. And your 'parse' callback does not yield a new 'scrapy.Request'. Check your 'start_urls', or better yet, implement a 'start_requests()' method that loops over the URLs from your database, like [in the docs example](https://docs.scrapy.org/en/latest/intro/tutorial.html#our-first-spider) –

Answer


I changed the scrapy.Request call to use the absolute URL link of the next product.. and it worked. I don't understand why this happened.. somehow the list.pop() statement didn't work, even though I had converted it to a string.


When I print the fetched product list it shows: [('https://www.tokopedia.com/ladybeautysmart/htmh-005-pesona-asli-htmh-005-theraskin-cream-malam-htmh',), ('https://www.tokopedia.com/supplierkosmetik/tahap-1-been-pink-cream-bpom-original-new-baby-pink',), ('https://www.tokopedia.com/marieshop/nomor-16-limited-kissproofkisprofkiss-profkis-proof',)] – Zhermanus


Is this a list of tuples? – Zhermanus
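Yes — Python DB-API cursors (sqlite3, MySQLdb, etc.) return each row as a tuple even when only one column is selected, so str(newProductLink.pop()) yields the tuple's repr rather than a plain URL, which is not a crawlable link. A small self-contained sketch using an in-memory sqlite3 table to simulate the product-link query:

```python
import sqlite3

# Simulate the product-link table with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE links (url TEXT)")
conn.execute("INSERT INTO links VALUES ('https://www.tokopedia.com/example-product')")

rows = conn.execute("SELECT url FROM links").fetchall()
print(rows)      # [('https://www.tokopedia.com/example-product',)]

row = rows.pop()
print(str(row))  # "('https://www.tokopedia.com/example-product',)" - tuple repr, not a URL
print(row[0])    # https://www.tokopedia.com/example-product - the actual URL string
```

Indexing into the row with row[0] (or unpacking with `url, = row`) is what the spider needed before passing the value to scrapy.Request.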