2013-07-12

I'm trying to scrape the Library of Congress/THOMAS website. This Python script is meant to access a sample of 40 bills from their site (identifiers #1-40 in the URLs). I want to parse the body of each piece of legislation, search within the body/content, extract links to the potentially multiple versions, and follow them. Why won't Scrapy crawl or parse?

Once on a version page, I want to parse the body of each version, search within the body/content, extract links to the potentially multiple sections, and follow them.

Once on a section page, I want to parse the body of each section of the bill.

I believe there is some problem with the Rules/LinkExtractor segment of my code. The script executes and crawls the start URLs, but does no parsing or any of the subsequent tasks.

Three caveats:

  1. Some bills do not have multiple versions (and therefore no links in the body portion of the page).
  2. Some bills do not have linked sections because they are so short, while others are nothing but links to sections.
  3. Some section links do not contain just the content of that particular section; most of what they contain is merely redundant inclusion of the content of preceding or subsequent sections.

My question is: why won't Scrapy crawl or parse?

from scrapy.item import Item, Field 
from scrapy.contrib.spiders import CrawlSpider, Rule 
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor 
from scrapy.selector import HtmlXPathSelector 

class BillItem(Item): 
    title = Field() 
    body = Field() 

class VersionItem(Item): 
    title = Field() 
    body = Field() 

class SectionItem(Item): 
    body = Field() 

class Lrn2CrawlSpider(CrawlSpider): 
    name = "lrn2crawl" 
    allowed_domains = ["thomas.loc.gov"] 
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill for bill in xrange(000001,00040,00001) ### Sample of 40 bills; Total range of bills is 1-5767 

    ] 

rules = (
     # Extract links matching /query/ fragment (restricting tho those inside the content body of the url) 
     # and follow links from them (since no callback means follow=True by default). 
     # Desired result: scrape all bill text & in the event that there are multiple versions, follow them & parse. 
     Rule(SgmlLinkExtractor(allow=(r'/query/'), restrict_xpaths=('//div[@id="content"]')), callback='parse_bills', follow=True), 

     # Extract links in the body of a bill-version & follow them. 
     #Desired result: scrape all version text & in the event that there are multiple sections, follow them & parse. 
     Rule(SgmlLinkExtractor(restrict_xpaths=('//div/a[2]')), callback='parse_versions', follow=True) 
    ) 

def parse_bills(self, response): 
    hxs = HtmlXPathSelector(response) 
    bills = hxs.select('//div[@id="content"]') 
    scraped_bills = [] 
    for bill in bills: 
     scraped_bill = BillItem() ### Bill object defined previously 
     scraped_bill['title'] = bill.select('p/text()').extract() 
     scraped_bill['body'] = response.body 
     scraped_bills.append(scraped_bill) 
    return scraped_bills 

def parse_versions(self, response): 
    hxs = HtmlXPathSelector(response) 
    versions = hxs.select('//div[@id="content"]') 
    scraped_versions = [] 
    for version in versions: 
     scraped_version = VersionItem() ### Version object defined previously 
     scraped_version['title'] = version.select('center/b/text()').extract() 
     scraped_version['body'] = response.body 
     scraped_versions.append(scraped_version) 
    return scraped_versions 

def parse_sections(self, response): 
    hxs = HtmlXPathSelector(response) 
    sections = hxs.select('//div[@id="content"]') 
    scraped_sections = [] 
    for section in sections: 
     scraped_section = SectionItem() ## Segment object defined previously 
     scraped_section['body'] = response.body 
     scraped_sections.append(scraped_section) 
    return scraped_sections 

spider = Lrn2CrawlSpider() 

Answers

I just fixed the indentation, removed the spider = Lrn2CrawlSpider() line at the end of the script, and ran the spider via scrapy runspider lrn2crawl.py. It scraped, followed links, and returned items; your rules work.

Here's what I ran:

from scrapy.item import Item, Field
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class BillItem(Item):
    title = Field()
    body = Field()

class VersionItem(Item):
    title = Field()
    body = Field()

class SectionItem(Item):
    body = Field()

class Lrn2CrawlSpider(CrawlSpider):
    name = "lrn2crawl"
    allowed_domains = ["thomas.loc.gov"]
    start_urls = ["http://thomas.loc.gov/cgi-bin/query/z?c107:H.R.%s:" % bill
                  for bill in xrange(000001, 00040, 00001)]  ### Sample of 40 bills; total range of bills is 1-5767

    rules = (
        # Extract links matching the /query/ fragment (restricted to those inside
        # the content body of the page) and follow them.
        # Desired result: scrape all bill text & in the event that there are
        # multiple versions, follow them & parse.
        Rule(SgmlLinkExtractor(allow=(r'/query/'), restrict_xpaths=('//div[@id="content"]')), callback='parse_bills', follow=True),

        # Extract links in the body of a bill version & follow them.
        # Desired result: scrape all version text & in the event that there are
        # multiple sections, follow them & parse.
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div/a[2]')), callback='parse_versions', follow=True)
    )

    def parse_bills(self, response):
        hxs = HtmlXPathSelector(response)
        bills = hxs.select('//div[@id="content"]')
        scraped_bills = []
        for bill in bills:
            scraped_bill = BillItem()
            scraped_bill['title'] = bill.select('p/text()').extract()
            scraped_bill['body'] = response.body
            scraped_bills.append(scraped_bill)
        return scraped_bills

    def parse_versions(self, response):
        hxs = HtmlXPathSelector(response)
        versions = hxs.select('//div[@id="content"]')
        scraped_versions = []
        for version in versions:
            scraped_version = VersionItem()
            scraped_version['title'] = version.select('center/b/text()').extract()
            scraped_version['body'] = response.body
            scraped_versions.append(scraped_version)
        return scraped_versions

    def parse_sections(self, response):
        hxs = HtmlXPathSelector(response)
        sections = hxs.select('//div[@id="content"]')
        scraped_sections = []
        for section in sections:
            scraped_section = SectionItem()
            scraped_section['body'] = response.body
            scraped_sections.append(scraped_section)
        return scraped_sections

Hope that helps.
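As a side note, the leading zeros in xrange(000001, 00040, 00001) make Python 2 read those literals as octal, so the spider actually generates bills 1 through 31, not the 40 the comment promises. A quick check (Python 3 spells octal literals 0o40):

```python
# Leading-zero integer literals are octal in Python 2 (written 0o... in Python 3):
# 00040 == 0o40 == 32, so xrange(000001, 00040, 00001) only covers bills 1..31.
print(0o40)                        # 32
print(list(range(0o1, 0o40))[-1])  # 31: last bill id actually generated
# A sample of the first 40 bills would instead be:
print(list(range(1, 41))[-1])      # 40
```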


Yes, this does help, and indeed removing the last line spider = [...] allowed the script to run. I'm still confused as to why, though. When I ran the script in the debugger, it reported a syntax error on rules = ([...], which is why I said I believed the problem was there. I just found it odd that the script would run but not perform its tasks; did the debugger point me in the wrong direction? Maybe I was wrong. In any case, yes, this helped me a great deal.


Just for the record, the problem with your script was that the variable rules was not within the scope of Lrn2CrawlSpider, because it did not share the class's indentation; once alecxe fixed the indentation, the variable rules became an attribute of the class. Later, the inherited __init__() method reads that attribute, compiles the rules, and enforces them:

def __init__(self, *a, **kw): 
    super(CrawlSpider, self).__init__(*a, **kw) 
    self._compile_rules() 

Erasing the last line had nothing to do with it.
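The scoping rule at work here can be shown without Scrapy at all. A minimal sketch (hypothetical class names, not from the original script) of why indentation decides whether rules becomes a class attribute:

```python
# A name assigned inside the class body becomes a class attribute;
# the same assignment at module level is invisible to the class.

class WithRules(object):
    rules = ("rule1",)          # indented under the class: a class attribute

class WithoutRules(object):
    pass

rules = ("rule1",)              # module level: NOT an attribute of any class

print(hasattr(WithRules, "rules"))     # True
print(hasattr(WithoutRules, "rules"))  # False
```

In the mis-indented version, CrawlSpider's __init__() looks up self.rules and finds only the empty default defined on CrawlSpider itself, so no link-extraction rules are ever compiled and no links are followed.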