I crawl a large number of URLs and would like to know whether it is possible to have Scrapy skip parsing pages that contain `<meta name="robots" content="noindex">`. Looking at the deny rules listed at http://doc.scrapy.org/en/latest/topics/link-extractors.html, it appears deny rules only apply to URLs. Can I make Scrapy ignore pages based on an XPath?
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from wallspider.items import Website

class Spider(CrawlSpider):
    name = "browsetest"
    allowed_domains = ["www.mydomain.com"]
    start_urls = ["http://www.mydomain.com"]

    rules = (
        Rule(SgmlLinkExtractor(allow=('/browse/')), callback="parse_items", follow=True),
        Rule(SgmlLinkExtractor(allow=(), unique=True, deny=('/[1-9]$', '(bti=)[1-9]+(?:\.[1-9]*)?', '(sort_by=)[a-zA-Z]', '(sort_by=)[1-9]+(?:\.[1-9]*)?', '(ic=32_)[1-9]+(?:\.[1-9]*)?', '(ic=60_)[0-9]+(?:\.[0-9]*)?', '(search_sort=)[1-9]+(?:\.[1-9]*)?', 'browse-ng.do\?', '/page/', '/ip/', 'out\+value', 'fn=', 'customer_rating', 'special_offers', 'search_sort=&', 'facet='))),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//html')
        items = []
        for site in sites:
            item = Website()
            item['url'] = response.url
            item['canonical'] = site.select('//head/link[@rel="canonical"]/@href').extract()
            item['robots'] = site.select('//meta[@name="robots"]/@content').extract()
            items.append(item)
        return items
Do you want to skip retrieving these pages? If so, that is not possible, because in order to find the meta robots tag you have to retrieve the page. – Rolando
Sorry, I have rephrased my question. Is it possible to have it skip parsing URLs that contain `meta name="robots" content="noindex"`? –
Can I deny an XPath? –
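As Rolando's comment points out, the page must be downloaded before the meta tag can be inspected, so link-extractor deny rules cannot help here; what you can do is bail out of the callback before building items. A minimal stdlib sketch of the noindex check (the `RobotsMetaParser` and `is_noindex` names are illustrative, not Scrapy API):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content attribute of every <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.robots = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.robots.append(attrs.get("content") or "")

def is_noindex(html_text):
    """True if any robots meta directive on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html_text)
    return any("noindex" in content.lower() for content in parser.robots)
```

Inside `parse_items` you can do the same with the selector you already have: extract `//meta[@name="robots"]/@content` first, and `return []` when any value contains `noindex`, so noindex pages are crawled for their links but produce no items.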