Scrapy ignores my settings.py
My scraper.py:
import scrapy

class BlogSpider(scrapy.Spider):
    name = 'blogspider'
    start_urls = ['https://www.doctolib.de/directory/a']

    def parse(self, response):
        # Re-request the page if it came back incomplete (e.g. a broken proxy response)
        if not response.xpath('//title'):
            yield scrapy.Request(url=response.url, dont_filter=True)
        if not response.xpath('//lead'):
            yield scrapy.Request(url=response.url, dont_filter=True)
        for title in response.css('.seo-directory-doctor-link'):
            yield {'title': title.css('a ::attr(href)').extract_first()}
        next_page = response.css('li.seo-directory-page > a[rel=next] ::attr(href)').extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
In the same folder as the script there is a settings.py with the following content:
# Retry many times since proxies often fail
RETRY_TIMES = 5
# Retry on most error codes since proxies fail for different reasons
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    # Fix path to this module
    'botcrawler.randomproxy.RandomProxy': 600,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}
PROXY_LIST = '/home/user/botcrawler/botcrawler/proxy/list.txt'
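One way to rule out the lookup problem entirely is Scrapy's `custom_settings` class attribute, which embeds the settings in the spider itself and is honoured even when no project-level settings.py is found. A minimal sketch (the stub base class is only there so the sketch runs without Scrapy installed):

```python
# Sketch: per-spider settings via the real Scrapy attribute `custom_settings`.
try:
    import scrapy
    Spider = scrapy.Spider
except ImportError:
    class Spider:  # stub fallback so this sketch runs without Scrapy
        pass

class BlogSpider(Spider):
    name = 'blogspider'
    start_urls = ['https://www.doctolib.de/directory/a']
    # Same values as the settings.py above, embedded in the spider itself
    custom_settings = {
        'RETRY_TIMES': 5,
        'RETRY_HTTP_CODES': [500, 503, 504, 400, 403, 404, 408],
    }
```

Note that `custom_settings` is evaluated at class-definition time, so it must be a class attribute, not set in `__init__`.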
Why doesn't Scrapy load this file? What am I doing wrong?
Thanks
Ohh OK, where is the spider folder located on Ubuntu? – Joni
Run scrapy startproject <project-name>. It will create a directory in the same path; you will find everything there. –
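For reference, the comment above refers to Scrapy's project scaffolding. A sketch of the command and the standard layout it produces (the project name "botcrawler" is taken from the middleware path in the question; settings.py is only picked up when the spider is run from inside such a project, where scrapy.cfg exists):

```shell
scrapy startproject botcrawler
# Standard scaffold created by the command above:
# botcrawler/
#     scrapy.cfg            <- marks the project root; settings.py is located via this file
#     botcrawler/
#         settings.py       <- move the settings shown above here
#         spiders/
#             scraper.py    <- the spider goes here
#
# Then run the spider from inside the project directory:
# cd botcrawler && scrapy crawl blogspider
```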