
Why aren't my defined Items being populated and stored by Scrapy?

Suppose I have the following site structure:

  1. Start URLs: http://thomas.loc.gov/cgi-bin/query/z?c107:H.R%s: where %s is an index 1-50 (a sample for illustration purposes; see the snippet after this list).
  2. "Layer 1": bill text, or links to multiple versions...
  3. "Layer 2": bill text with a link to a "Printer Friendly" (plain-text) version.
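
For illustration, the start URLs can be generated like this (a minimal sketch; note that in Python 2, an integer literal written with a leading zero is parsed as octal, so a range like xrange(000001, 00050) is really xrange(1, 40) and silently drops bills 40-50):

# Build the sample start URLs (Python 2).
# Plain decimal literals avoid the octal pitfall described above.
start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill
              for bill in xrange(1, 51)]  # bills 1-50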

The end goal of the script:

  1. Navigate through the start URLs; parse the URL, title & body; save them to a starts.txt file.
  2. Extract the "layer 1" links from the bodies of the start URLs; navigate to those links; parse the URL, title & body; save them to a bills.txt file.
  3. Extract the "layer 2" links from the bodies of the "layer 1" URLs; navigate to those links; parse the URL, title & body; save them to a versions.txt file.

Suppose I have the following script:

from scrapy.item import Item, Field 
from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor 
from scrapy.selector import HtmlXPathSelector 

class StartItem(Item): 
    url = Field() 
    title = Field() 
    body = Field() 

class BillItem(Item): 
    url = Field() 
    title = Field() 
    body = Field() 

class VersionItem(Item): 
    url = Field() 
    title = Field() 
    body = Field() 

class Lrn2CrawlSpider(CrawlSpider): 
    name = "lrn2crawl" 
    allowed_domains = ["thomas.loc.gov"] 
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill for bill in xrange(000001,00050,00001) ### Sample of 40 bills; Total range of bills is 1-5767 

    ] 

    rules = (
        # Extract links matching the /query/D fragment (restricted to those inside
        # the content body of the page) and follow them.
        # Desired result: scrape all bill text and, in the event that there are
        # multiple versions, follow them and parse.
        Rule(SgmlLinkExtractor(allow=(r'/query/D',), restrict_xpaths=('//div[@id="content"]',)),
             callback='parse_bills', follow=True),

        # Extract links in the body of a bill version and follow them.
        # Desired result: scrape all version text and, in the event that there are
        # multiple sections, follow them and parse.
        Rule(SgmlLinkExtractor(allow=(r'/query/C',), restrict_xpaths=('//table/tr/td[2]/a/@href',)),
             callback='parse_versions', follow=True)
    )

    def parse_start_url(self, response):
        hxs = HtmlXPathSelector(response)
        starts = hxs.select('//div[@id="content"]')
        scraped_starts = []
        for start in starts:
            scraped_start = StartItem()  ### StartItem defined previously
            scraped_start['url'] = response.url
            scraped_start['title'] = start.select('//h1/text()').extract()
            scraped_start['body'] = response.body
            scraped_starts.append(scraped_start)
            with open('starts.txt', 'a') as f:
                f.write('url: {0}, title: {1}, body: {2}\n'.format(
                    scraped_start['url'], scraped_start['title'], scraped_start['body']))
        return scraped_starts

    def parse_bills(self, response):
        hxs = HtmlXPathSelector(response)
        bills = hxs.select('//div[@id="content"]')
        scraped_bills = []
        for bill in bills:
            scraped_bill = BillItem()  ### BillItem defined previously
            scraped_bill['url'] = response.url
            scraped_bill['title'] = bill.select('//h1/text()').extract()
            scraped_bill['body'] = response.body
            scraped_bills.append(scraped_bill)
            with open('bills.txt', 'a') as f:
                f.write('url: {0}, title: {1}, body: {2}\n'.format(
                    scraped_bill['url'], scraped_bill['title'], scraped_bill['body']))
        return scraped_bills

    def parse_versions(self, response):
        hxs = HtmlXPathSelector(response)
        versions = hxs.select('//div[@id="content"]')
        scraped_versions = []
        for version in versions:
            scraped_version = VersionItem()  ### VersionItem defined previously
            scraped_version['url'] = response.url
            scraped_version['title'] = version.select('//h1/text()').extract()
            scraped_version['body'] = response.body
            scraped_versions.append(scraped_version)
            with open('versions.txt', 'a') as f:
                f.write('url: {0}, title: {1}, body: {2}\n'.format(
                    scraped_version['url'], scraped_version['title'], scraped_version['body']))
        return scraped_versions
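
(For reference: assuming the spider file sits inside a Scrapy project, it is run with scrapy crawl lrn2crawl; a standalone file can also be run with scrapy runspider lrn2crawl.py.)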

This script seems to do everything I want except navigate to the "layer 2" links and parse the items (URL, title & body) from those pages. In other words, Scrapy is not crawling or parsing my "layer 2".

To restate my question more simply: why is Scrapy not populating my VersionItem and outputting it to my desired file, versions.txt?

Answer


The problem is in the restrict_xpaths setting on the second SgmlLinkExtractor. restrict_xpaths should point at element regions that contain the links; the extractor then finds the <a> tags inside those regions itself. Pointing it at @href attribute nodes gives it no elements to scan, so the rule never matches anything. Change it to:

restrict_xpaths=('//div[@id="content"]',) 
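
In context, the second rule then becomes (a sketch; the allow pattern and callback are unchanged from the question's code):

# restrict_xpaths names an element region containing <a> tags;
# the link extractor collects and filters the hrefs itself.
Rule(SgmlLinkExtractor(allow=(r'/query/C',), restrict_xpaths=('//div[@id="content"]',)),
     callback='parse_versions', follow=True)

With this, the extractor scans the <a> elements inside the content div and applies the allow pattern to their hrefs, so the /query/C version links are followed and parse_versions fires.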

Hope that helps.
