Scrapy does not crawl the website

I have fallen into a common trap and can't get out of it: my Scrapy spider is lazy, so it only parses the start_urls. The code is below:
import scrapy
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.item import Field
from scrapy.selector import Selector


class HabraPostSpider(scrapy.Spider):
    name = 'habrapost'
    allowed_domains = ['habrahabr.ru']
    start_urls = ['https://habrahabr.ru/interesting/']

    def parse(self, response):
        self.logger.info('A response from %s just arrived!', response.url)

    rules = (Rule(LinkExtractor()),
             Rule(LinkExtractor(allow=('/post/'),), callback='parse_post', follow=True))
I would be very glad if someone could tell me how to fix my spider!
You understood me) and it works now, thank you. –
But it is actually 'scrapy.spiders.CrawlSpider' –