
How can I use a proxy in a Python web-scraping script that scrapes data from Amazon? I need to learn how to use a proxy with the script below. The script is here:

import scrapy 
from urls import start_urls 
import re  # only used by the commented-out ASIN regex below 


class BbbSpider(scrapy.Spider): 

    name = 'bbb_spider' 
    # A bare class attribute is ignored by Scrapy; per-spider settings 
    # belong in custom_settings. 
    custom_settings = { 
        'AUTOTHROTTLE_ENABLED': True, 
    } 
    # start_urls = ['http://www.bbb.org/chicago/business-reviews/auto-repair-and-service-equipment-and-supplies/c-j-auto-parts-in-chicago-il-88011126'] 

    def start_requests(self): 
        for x in start_urls: 
            yield scrapy.Request(x, self.parse) 

    def parse(self, response): 
        brickset = str(response)  # only used by the commented-out ASIN line 
        NAME_SELECTOR = 'normalize-space(.//div[@id="titleSection"]/h1[@id="title"]/span[@id="productTitle"]/text())' 
        #PAGELINK_SELECTOR = './/div[@class="info"]/h3[@class="n"]/a/@href' 
        ASIN_SELECTOR = './/table/tbody/tr/td/div[@class="content"]/ul/li[./b[text()="ASIN: "]]//text()' 
        #LOCALITY = 'normalize-space(.//div[@class="info"]/div/p/span[@class="locality"]/text())' 
        #PRICE_SELECTOR = './/div[@id="price"]/table/tbody/tr/td/span[@id="priceblock_ourprice"]//text()' 
        PRICE_SELECTOR = '#priceblock_ourprice' 
        STOCK_SELECTOR = 'normalize-space(.//div[@id="availability"]/span/text())' 
        PRODUCT_DETAIL_SELECTOR = './/table//div[@class="content"]/ul/li//text()' 
        PRODUCT_DESCR_SELECTOR = 'normalize-space(.//div[@id="productDescription"]/p/text())' 
        IMAGE_URL_SELECTOR = './/div[@id="imgTagWrapperId"]/img/@src' 

        yield { 
            # extract_first(default='') avoids calling .encode() on None 
            # when a selector matches nothing on the page 
            'name': response.xpath(NAME_SELECTOR).extract_first(default='').encode('utf8'), 
            'pagelink': response.url, 
            #'asin': str(re.search("<li><b>ASIN: </b>([A-Z0-9]+)</li>", brickset).group(1).strip()), 
            'price': response.css(PRICE_SELECTOR).extract_first(default='').encode('utf8'), 
            'stock': str(response.xpath(STOCK_SELECTOR).extract_first()), 
            'product_detail': str(response.xpath(PRODUCT_DETAIL_SELECTOR).extract()), 
            'product_description': str(response.xpath(PRODUCT_DESCR_SELECTOR).extract()), 
            'img_url': str(response.xpath(IMAGE_URL_SELECTOR).extract_first()), 
        } 

The start_urls file (urls.py) is here:

start_urls = [ 
    'https://www.amazon.co.uk/d/Hair-Care/Loreal-Majirel-Hair-Colour-Tint-Golden-Mahogany/B0085L50QU', 
    'https://www.amazon.co.uk/d/Hair-Care/Michel-Mercier-Ultimate-Detangling-Wooden-Brush-Normal/B00TE1WH7U', 
] 

Possible duplicate of http://stackoverflow.com/questions/4710483/scrapy-and-proxies ? – Kelvin

Answers


As far as I know, there are two ways to use a proxy with Python code:

  • Set the environment variables http_proxy and https_proxy; this is probably the simplest way to use a proxy (a Python variant is sketched below, after this list).

    Windows:

    set http_proxy=http://proxy.myproxy.com 
    set https_proxy=https://proxy.myproxy.com 
    python get-pip.py 
    

    Linux/OS X:

    export http_proxy=http://proxy.myproxy.com 
    export https_proxy=https://proxy.myproxy.com 
    sudo -E python get-pip.py 
    
  • Scrapy supports HTTP proxies, provided since Scrapy 0.8 through the HTTP proxy downloader middleware; you can check out HttpProxyMiddleware.

    This middleware sets the HTTP proxy to use for a request by setting the proxy meta value on the Request object.

    Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:

    http_proxy 
    https_proxy 
    no_proxy 
    

Hope this helps.
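
For completeness, here is a minimal sketch of the first approach done from inside Python rather than the shell. The proxy address is a placeholder, not a real endpoint; HttpProxyMiddleware reads these variables (via urllib's getproxies()) when the crawler starts, so they must be set before the crawl begins:

import os 

# Hypothetical proxy address -- replace with your own. 
PROXY = 'http://proxy.myproxy.com:8080' 

# HttpProxyMiddleware picks these up when the crawler starts, 
# so set them at the top of your script (or in the shell) 
# before launching the spider. 
os.environ['http_proxy'] = PROXY 
os.environ['https_proxy'] = PROXY 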


I need to use an HTTP proxy in the above script –


@McGrady gave you a solution for your script; the middleware is the way to go –


If you want to do it inside the code, do it like this:

def start_requests(self): 
    for x in start_urls: 
        req = scrapy.Request(x, self.parse) 
        # use a full proxy URL here, e.g. 'http://1.2.3.4:8080' 
        req.meta['proxy'] = 'your_proxy_ip_here' 
        yield req 
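
If the proxy requires authentication, behaviour depends on your Scrapy version: recent releases extract user:password credentials embedded in the meta['proxy'] URL and set the Proxy-Authorization header for you, while older ones require setting that header yourself. A sketch with hypothetical credentials and proxy address:

import base64 
import scrapy 
from urls import start_urls 

class ProxyAuthSpider(scrapy.Spider): 
    # hypothetical spider, only to illustrate an authenticated proxy 
    name = 'proxy_auth_spider' 

    def start_requests(self): 
        for x in start_urls: 
            req = scrapy.Request(x, self.parse) 
            # Recent Scrapy versions parse the credentials out of the URL: 
            req.meta['proxy'] = 'http://user:password@1.2.3.4:8080'  # hypothetical 
            # Older versions need the header set manually instead: 
            # req.headers['Proxy-Authorization'] = ( 
            #     b'Basic ' + base64.b64encode(b'user:password')) 
            yield req 

    def parse(self, response): 
        self.logger.info('fetched %s', response.url) 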

And don't forget to put this in your settings.py file:

DOWNLOADER_MIDDLEWARES = { 
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 1, 
}
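
One caveat: the scrapy.contrib path is from pre-1.0 releases. Since Scrapy 1.0 the middleware lives under scrapy.downloadermiddlewares and is already enabled by default (at priority 750 in the default settings), so on modern versions this entry is only needed if you want to change its position in the middleware chain:

# settings.py for Scrapy >= 1.0; HttpProxyMiddleware is enabled by 
# default, so listing it here only pins its priority explicitly. 
DOWNLOADER_MIDDLEWARES = { 
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750, 
}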