2013-02-05 64 views
0

I am currently using Solr 3.6.1 and Nutch 1.5, and it was working fine... I crawl my websites, index the data into Solr, and search with Solr, but two weeks ago it stopped working... When I run `./nutch crawl urls -solr http://localhost:8080/solr/ -depth 5 -topN 100` it works, but when I run `./nutch crawl urls -solr http://localhost:8080/solr/ -depth 5 -topN 100000` it throws an exception. In my log file I found this... an error while running the SolrIndexer:
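For reference, the two commands differ only in `-topN` (both copied from the runs described above):

```bash
# Works: shallow crawl, at most 100 top-scoring URLs per round
./nutch crawl urls -solr http://localhost:8080/solr/ -depth 5 -topN 100

# Fails at the indexing step: same crawl, at most 100000 URLs per round
./nutch crawl urls -solr http://localhost:8080/solr/ -depth 5 -topN 100000
```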

2013-02-05 17:04:20,697 INFO solr.SolrWriter - Indexing 250 documents 
2013-02-05 17:04:20,697 INFO solr.SolrWriter - Deleting 0 documents 
2013-02-05 17:04:21,275 WARN mapred.LocalJobRunner - job_local_0029 
org.apache.solr.common.SolrException: Internal Server Error 

Internal Server Error 

request: `http://localhost:8080/solr/update?wt=javabin&version=2` 
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430) 
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244) 
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105) 
    at org.apache.nutch.indexer.solr.SolrWriter.write(SolrWriter.java:124) 
    at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:55) 
    at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:44) 
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:457) 
    at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:497) 
    at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:195) 
    at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:51) 
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519) 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260) 
2013-02-05 17:04:21,883 ERROR solr.SolrIndexer - java.io.IOException: Job failed! 
2013-02-05 17:04:21,887 INFO solr.SolrDeleteDuplicates - SolrDeleteDuplicates: starting at 2013-02-05 17:04:21 
2013-02-05 17:04:21,887 INFO solr.SolrDeleteDuplicates - SolrDeleteDuplicates: Solr url: `http://localhost:8080/solr/`  

Two weeks ago it worked just fine... Has anyone had a similar problem?

Hi, I just finished another crawl and got the same exception, but when I looked at my logs/hadoop.log file, I found this:

2013-02-06 22:02:14,111 INFO solr.SolrWriter - Indexing 250 documents 
2013-02-06 22:02:14,111 INFO solr.SolrWriter - Deleting 0 documents 
2013-02-06 22:02:14,902 WARN mapred.LocalJobRunner - job_local_0019 
org.apache.solr.common.SolrException: Bad Request 

Bad Request 

request: `http://localhost:8080/solr/update?wt=javabin&version=2` 
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430) 
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244) 
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105) 
    at org.apache.nutch.indexer.solr.SolrWriter.write(SolrWriter.java:124) 
    at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:55) 
    at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:44) 
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:457) 
    at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:497) 
    at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:304) 
    at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:53) 
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519) 
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420) 
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260) 
2013-02-06 22:02:15,027 ERROR solr.SolrIndexer - java.io.IOException: Job failed! 
2013-02-06 22:02:15,032 INFO solr.SolrDeleteDuplicates - SolrDeleteDuplicates: starting at 2013-02-06 22:02:15 
2013-02-06 22:02:15,032 INFO solr.SolrDeleteDuplicates - SolrDeleteDuplicates: Solr url: `http://localhost:8080/solr/` 
2013-02-06 22:02:21,281 WARN mapred.FileOutputCommitter - Output path is null in cleanup 
2013-02-06 22:02:22,263 INFO solr.SolrDeleteDuplicates - SolrDeleteDuplicates: finished at 2013-02-06 22:02:22, elapsed: 00:00:07 
2013-02-06 22:02:22,263 INFO crawl.Crawl - crawl finished: crawl-20130206205733 

I hope this helps in understanding the problem...

+0

It looks like the map-reduce job is failing; you can check the Hadoop logs for more details. – Jayendra

+0

I have edited the post and added the last part of the log file... Thanks for your answer. –

+0

Solr is probably receiving a malformed request. The Solr logs should have the details of the bad request and the underlying problem. – Jayendra

Answer

0

Given the logs you have shown, I think the answer lies on the Solr side. You should have an exception trace there that tells you exactly which component stopped the processing. Since it was running two weeks ago, either something has changed (jar versions?) or you have a specific document that is the problem.

If the problem occurs with every document (try a few different ones), then you have probably changed something in your environment (jars, properties, etc.). If it does not occur with one subset of documents but does with another, then it is probably an issue with those specific documents (e.g., a bad encoding).
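One way to narrow this down is to re-run only the indexing step, one segment at a time, and see which segments fail. A minimal sketch, assuming a standard Nutch 1.x crawl layout under `crawl/` (the paths are illustrative, not taken from the question):

```bash
# Re-index each crawl segment individually against Solr.
# A segment that fails narrows down which documents trigger the error.
for seg in crawl/segments/*; do
  echo "Indexing $seg"
  ./nutch solrindex http://localhost:8080/solr/ crawl/crawldb -linkdb crawl/linkdb "$seg" \
    || echo "FAILED: $seg"
done
```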

Either way, the Solr-side stack trace is the first thing to check.
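Since Solr here runs under Tomcat 6 on Ubuntu, the server-side trace normally ends up in Tomcat's log. A quick way to pull it out; the log path is an assumption based on the default Ubuntu tomcat6 package and may differ on your install:

```bash
# Print each SolrException in Tomcat's log together with the ~20 lines
# of stack trace that follow it. Path assumes Ubuntu's tomcat6 package.
grep -n -A 20 "SolrException" /var/log/tomcat6/catalina.out
```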

+0

Hi, thanks for your reply... I tried crawling other sites, e.g. 'www.woodgears.ca' and two other pages, and got the same result, the same exception... so I think it is not related to the data. I am now using Nutch 1.6 and Solr 3.6.2, with Tomcat 6 installed on Ubuntu...? –