2015-06-10

Python Scrapy login redirect problem

I want to crawl a website with Scrapy, but I cannot log in to my account through Scrapy. Here is the spider code:

from scrapy.spider import BaseSpider 
from scrapy.http import Request, FormRequest 
import os 

class ImageSpider(BaseSpider): 
    name = "images" 
    counter = 0 
    start_urls = ['https://poshmark.com/login'] 

    if not os.path.exists('./Image'): 
        os.mkdir('./Image') 

    def parse(self, response): 
        # Submit the login form; the post-login page goes to real_parse 
        return [FormRequest("https://www.poshmark.com/login", 
            formdata={ 
                'login_form[username_email]': 'Oliver1234', 
                'login_form[password]': 'password'}, 
            callback=self.real_parse)] 

    def real_parse(self, response): 
        print 'you are here' 
        # Links to the individual listing pages on the feed 
        mainsites = response.xpath("//body[@class='two-col feed one-col']" 
            "/div[@class='body-con']/div[@class='main-con clear-fix']" 
            "/div[@class='right-col']/div[@id='tiles']" 
            "/div[@class='listing-con shopping-tile masonry-brick']" 
            "/a/@href").extract() 
        return [Request(mainsite, callback=self.get_image) 
                for mainsite in mainsites] 

    def get_image(self, response): 
        # Image URLs on each listing page (protocol-relative, hence 'http:' +) 
        sites = response.xpath("//body[@class='two-col small fixed']" 
            "/div[@class='body-con']/div[@class='main-con']" 
            "/div[@class='right-col']/div[@class='listing-wrapper']" 
            "/div[@class='listing']/div[@class='img-con']" 
            "/img/@src").extract() 
        return [Request('http:' + site, callback=self.download) 
                for site in sites] 

    def download(self, response): 
        # e.g. '.../name.jpg' -> 'name.' + 'jpg' (the dot survives the [0:-3]) 
        name = response.url[0:-3].split('/')[-1] 
        self.counter += 1 
        print '----------------Image Get----------------', self.counter, name, 'jpg' 
        with open('./Image/' + name + 'jpg', 'wb') as imgfile: 
            imgfile.write(response.body) 

And I get the output below in the command window:

C:\Python27\Scripts\tutorial\images>scrapy crawl images 
C:\Python27\Scripts\tutorial\images\images\spiders\images_spider.py:14: ScrapyDeprecationWarning: images.spiders.images_spider.ImageSpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others) 
    class ImageSpider(BaseSpider): 
2015-06-09 23:43:29-0400 [scrapy] INFO: Scrapy 0.24.6 started (bot: images) 
2015-06-09 23:43:29-0400 [scrapy] INFO: Optional features available: ssl, http11 
2015-06-09 23:43:29-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'images.spiders', 'SPIDER_MODULES': ['images.spiders'], 'BOT_NAME': 'images'} 
2015-06-09 23:43:29-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState 
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats 
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware 
2015-06-09 23:43:30-0400 [scrapy] INFO: Enabled item pipelines: 
2015-06-09 23:43:30-0400 [images] INFO: Spider opened 
2015-06-09 23:43:30-0400 [images] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2015-06-09 23:43:30-0400 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2015-06-09 23:43:30-0400 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080 
2015-06-09 23:43:33-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: None) 
2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (302) to <GET https://www.poshmark.com/feed> from <POST https://www.poshmark.com/login> 
2015-06-09 23:43:35-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/feed> from <GET https://www.poshmark.com/feed> 
2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (302) to <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed> from <GET https://poshmark.com/feed> 
2015-06-09 23:43:36-0400 [images] DEBUG: Redirecting (301) to <GET https://poshmark.com/login> from <GET https://poshmark.com/login?pmrd%5Burl%5D=%2Ffeed> 
2015-06-09 23:43:37-0400 [images] DEBUG: Crawled (200) <GET https://poshmark.com/login> (referer: https://poshmark.com/login) 
you are here 
2015-06-09 23:43:37-0400 [images] INFO: Closing spider (finished) 
2015-06-09 23:43:37-0400 [images] INFO: Dumping Scrapy stats: 
     {'downloader/request_bytes': 4213, 
     'downloader/request_count': 6, 
     'downloader/request_method_count/GET': 5, 
     'downloader/request_method_count/POST': 1, 
     'downloader/response_bytes': 9535, 
     'downloader/response_count': 6, 
     'downloader/response_status_count/200': 2, 
     'downloader/response_status_count/301': 2, 
     'downloader/response_status_count/302': 2, 
     'finish_reason': 'finished', 
     'finish_time': datetime.datetime(2015, 6, 10, 3, 43, 37, 213000), 
     'log_count/DEBUG': 8, 
     'log_count/INFO': 7, 
     'request_depth_max': 1, 
     'response_received_count': 2, 
     'scheduler/dequeued': 6, 
     'scheduler/dequeued/memory': 6, 
     'scheduler/enqueued': 6, 
     'scheduler/enqueued/memory': 6, 
     'start_time': datetime.datetime(2015, 6, 10, 3, 43, 30, 788000)} 
2015-06-09 23:43:37-0400 [images] INFO: Spider closed (finished) 

As you can see, it gets redirected from ./login to ./feed, which looks like a successful login, but in the end it is redirected back to ./login. Any ideas about what might be causing this?

Answer


When you log in to a website, it stores some kind of token on the user session (depending on the authentication method, this can vary). The problem you are running into is that, although you authenticate correctly, your session data (the way your browser proves to the server that you are logged in and who you are) is not being preserved.
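Scrapy's CookiesMiddleware (enabled in your log) normally carries the session cookie across requests, so a common culprit is the login POST itself: a hand-built FormRequest sends only the fields you list, while the server may also expect hidden inputs (a CSRF-style token, a redirect target, etc.) that are embedded in the login page. `FormRequest.from_response(response, formdata=...)` builds the POST from the actual form, hidden inputs included. A minimal sketch of that pre-filling step, using only the standard library, with a hypothetical login form (the `authenticity_token` field name is invented for illustration):

```python
try:  # Python 3
    from html.parser import HTMLParser
except ImportError:  # Python 2, as in the question
    from HTMLParser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collect the name/value pairs of every <input> on the page."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            attrs = dict(attrs)
            if 'name' in attrs:
                self.fields[attrs['name']] = attrs.get('value', '')

# Hypothetical login page: note the hidden token next to the visible fields
LOGIN_PAGE = """
<form action="/login" method="post">
  <input type="hidden" name="authenticity_token" value="abc123"/>
  <input type="text" name="login_form[username_email]"/>
  <input type="password" name="login_form[password]"/>
</form>
"""

parser = FormFieldParser()
parser.feed(LOGIN_PAGE)

# What from_response does conceptually: start from the page's own fields,
# then overlay the credentials you supply in formdata
formdata = dict(parser.fields)
formdata.update({'login_form[username_email]': 'Oliver1234',
                 'login_form[password]': 'password'})

# The hidden token travels with the credentials instead of being dropped
print(sorted(formdata.keys()))
```

If the hand-built POST omits such a token, the server can accept the credentials, set no valid session, and bounce you back to the login page, which matches the redirect chain in the log.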

People in this thread seem to have successfully done what you are trying to do here:

Crawling with an authenticated session in Scrapy

and here:

Using Scrapy with authenticated (logged in) user session
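Whatever fix you end up with, it also helps to verify in the callback that authentication actually worked, instead of silently scraping the login page. Your log already shows the telltale sign: after the POST, the redirect chain ends back on /login. Since Scrapy follows the redirects and hands the callback only the final response, a small check on `response.url` is enough. A sketch (the helper name is hypothetical):

```python
try:  # Python 3
    from urllib.parse import urlparse
except ImportError:  # Python 2, as in the question
    from urlparse import urlparse

def login_succeeded(final_url):
    """The callback sees the URL after all redirects were followed;
    landing back on /login means the login did not stick."""
    return urlparse(final_url).path != '/login'

# The chain from the log: POST /login -> /feed -> ... -> /login again
print(login_succeeded('https://poshmark.com/login'))  # bounced back: failed
print(login_succeeded('https://poshmark.com/feed'))   # stayed logged in
```

In `real_parse` you could bail out with a log message when `login_succeeded(response.url)` is false, rather than returning an empty request list and letting the crawl finish as if nothing went wrong.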