I'm not very familiar with Python, so please bear with me. I have a Scrapy spider that works the way it should, but now I need to build a new one, and this time it has to crawl inside a logged-in session. My spider takes its start_urls from a list of URLs obtained from a sitemap; the new one should first submit a request to the login form and then, once logged in, start parsing my list. In short: how do I get Scrapy to parse a URL list after logging in?
Here is my code so far:
import logging
import os
from time import gmtime, strftime

from scrapy import log
from scrapy.http import FormRequest, Request
from scrapy.selector import Selector
from scrapy.spiders import Spider

from products.items import MyPrices  # assuming MyPrices lives in the project's items module


class StockPricesSpider(Spider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    d = strftime("%Y-%m-%d", gmtime())
    start_urls = ['https://www.example.com/customer/account/login/']

    def parse(self, response):
        return [FormRequest.from_response(response,
                                          formdata={'username': 'myuser', 'password': 'mypass'},
                                          callback=self.after_login)]

    def after_login(self, response):
        # Check that the login succeeded before going on.
        if "Invalid login or password." in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        else:
            logging.log(logging.INFO, 'Logged in and start parsing')
            return Request("http://www.example.com/", callback=self.parse_products)

    def parse_products(self, response):
        f = open("data/sitemaps/urls04102015.txt")
        start_urls = [url.strip() for url in f.readlines()]
        f.close()
        d = strftime("%Y-%m-%d", gmtime())
        if os.path.exists("data/results/stock_" + d + ".csv"):
            os.remove("data/results/stock_" + d + ".csv")
        sel = Selector(response)
        separator = ";"
        items = []
        item = MyPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        if len(sku) > 0:
            item['sku'] = "med_" + sku[0].strip()
        ...
        items.append(item)
        return items
So this doesn't work, because I'm not invoking the parser correctly. I get no errors, but the URLs don't get parsed either. The login itself works, I log in successfully, but after that (after the login) how do I get Scrapy to do its job (parse the URL list)?
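For reference, here is a minimal sketch of what I think the flow should look like (the file path, field names and callback names are the ones from the code above; the rest is just my untested reading of plain Scrapy 1.x, where the session cookie is carried along automatically by the cookies middleware). Instead of reading the URL file inside the page callback, read it once in after_login and yield one Request per URL:

def after_login(self, response):
    if "Invalid login or password." in response.body:
        self.log("Login failed", level=log.ERROR)
        return
    # Logged in: schedule every sitemap URL for parsing within the same session.
    with open("data/sitemaps/urls04102015.txt") as f:
        for url in f:
            yield Request(url.strip(), callback=self.parse_products)

With that, parse_products runs once per product page and only has to extract the item; it should not reopen the URL file itself.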
EDIT: I found a new way to approach my problem, but it doesn't work properly either. Please help me debug this (or the first approach).
from scrapy.spiders.init import InitSpider  # plus the same imports as in the first spider


class StockPricesSpiderX(InitSpider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    login_page = 'https://www.example.com/ro/customer/account/login/'
    d = strftime("%Y-%m-%d", gmtime())

    f = open("data/sitemaps/urls04102015.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()

    if os.path.exists("data/results/stock_" + d + ".csv"):
        os.remove("data/results/stock_" + d + ".csv")

    def init_request(self):
        """Called before the crawler starts."""
        logging.log(logging.INFO, 'before crawler starts...')
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate the login request."""
        logging.log(logging.INFO, 'do login...')
        return FormRequest.from_response(response,
                                         formdata={'name': 'myuser', 'password': 'mypass'},
                                         callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by the login request to see if we are logged in."""
        if "Invalid login or password." in response.body:
            logging.log(logging.INFO, '... BAD LOGIN ...')
        else:
            logging.log(logging.INFO, 'GOOD LOGIN... initialize')
            self.initialized()

    def parse_item(self, response):
        sel = Selector(response)
        separator = ";"
        items = []
        item = StockPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        ...
        items.append(item)
        return items
The log of the run shows this:
2015-12-03 14:54:16 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2015-12-03 14:54:16 [scrapy] INFO: Optional features available: ssl, http11
2015-12-03 14:54:16 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'products.spiders', 'FEED_URI': 'calinxautomat.csv', 'LOG_LEVEL': 'INFO', 'DUPEFILTER_CLASS': 'scrapy.dupefilter.RFPDupeFilter', 'SPIDER_MODULES': ['products.spiders'], 'DEFAULT_ITEM_CLASS': 'products.items.Subcategories', 'FEED_FORMAT': 'csv'}
2015-12-03 14:54:21 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-03 14:54:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-03 14:54:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-03 14:54:23 [scrapy] INFO: Enabled item pipelines: myWriteToCsv
2015-12-03 14:54:23 [root] INFO: before crawler starts...
2015-12-03 14:54:23 [scrapy] INFO: Spider opened
2015-12-03 14:54:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-03 14:54:25 [root] INFO: do login...
2015-12-03 14:54:26 [scrapy] INFO: Closing spider (finished)
2015-12-03 14:54:26 [scrapy] INFO: Dumping Scrapy stats:
...
This one doesn't seem to get past the login stage; it's as if the callback never gets out of the FormRequest... What am I doing wrong?
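One thing that stands out (an educated guess from the Scrapy 1.0 source, not verified against the real site): InitSpider.initialized() returns the postponed start_urls requests, so its result must itself be returned from the callback. Calling it without returning the result schedules nothing, which matches a spider that closes right after the login request. A sketch of the fix:

def check_login_response(self, response):
    """Check the login response; on success hand control back to InitSpider."""
    if "Invalid login or password." in response.body:
        logging.log(logging.INFO, '... BAD LOGIN ...')
    else:
        logging.log(logging.INFO, 'GOOD LOGIN... initialize')
        # initialized() returns the pending start_urls requests -- it must be returned.
        return self.initialized()

Note also that, if I read the source correctly, InitSpider sends the start_urls requests to the default parse callback, so parse_item would need to be renamed to parse (or make_requests_from_url overridden) before it ever runs.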
I do get the URL list with that code, I'm sure of it. I'll try your suggestion and get back to you. I've also found a new approach, so please look at my edited question and let me know what you think... – user1137313
Updated my answer – Steve