Advice on using Nutch's content limit

I am using Nutch 2.1 to crawl an entire domain (e.g., company.com). The problem I have run into is that, because of the content limit set in Apache Nutch, I do not get all of the links I want to crawl. Typically, when I inspect the content, only the top half of the page has been stored in the database, so the links in the bottom half are never crawled.

To address this, I changed nutch-site.xml to set the content limit as follows:

<property> 
    <name>http.content.limit</name> 
    <value>-1</value> 
    <description>The length limit for downloaded content using the http 
    protocol, in bytes. If this value is nonnegative (>=0), content longer 
    than it will be truncated; otherwise, no truncation at all. Do not 
    confuse this setting with the file.content.limit setting. 
    </description> 
</property> 

This fixed that problem, but at some point I started hitting an out-of-memory error, as shown by this output during parsing:

ParserJob: starting 
ParserJob: resuming: false 
ParserJob: forced reparse: false 
ParserJob: parsing all 
Exception in thread "main" java.lang.RuntimeException: job failed: name=parse, jobid=job_local_0001 
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:54) 
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:251) 
at org.apache.nutch.parse.ParserJob.parse(ParserJob.java:259) 
at org.apache.nutch.parse.ParserJob.run(ParserJob.java:302) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
at org.apache.nutch.parse.ParserJob.main(ParserJob.java:306) 

Here is the relevant part of my hadoop.log (the section around the error):

2016-01-22 02:02:35,898 INFO crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature 
2016-01-22 02:02:37,255 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2016-01-22 02:02:39,130 INFO mapreduce.GoraRecordReader - gora.buffer.read.limit = 10000 
2016-01-22 02:02:39,255 INFO mapreduce.GoraRecordWriter - gora.buffer.write.limit = 10000 
2016-01-22 02:02:39,322 INFO crawl.SignatureFactory - Using Signature impl: org.apache.nutch.crawl.MD5Signature 
2016-01-22 02:02:53,018 WARN mapred.FileOutputCommitter - Output path is null in cleanup 
2016-01-22 02:02:53,031 WARN mapred.LocalJobRunner - job_local_0001 
java.lang.OutOfMemoryError: Java heap space 
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3051) 
    at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2991) 
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3532) 
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:943) 
    at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1441) 
    at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2936) 
    at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:477) 
    at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2631) 
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1800) 
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2221) 
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624) 
    at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127) 
    at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293) 
    at org.apache.gora.sql.store.SqlStore.execute(SqlStore.java:423) 
    at org.apache.gora.query.impl.QueryBase.execute(QueryBase.java:71) 
    at org.apache.gora.mapreduce.GoraRecordReader.executeQuery(GoraRecordReader.java:66) 
    at org.apache.gora.mapreduce.GoraRecordReader.nextKeyValue(GoraRecordReader.java:102) 
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532) 
    at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67) 
    at org.apache.hadoop.map 

I only run into this problem when I set the content limit to -1; if I don't, though, I may not get all the links I want. Any advice on how the content limit should be used? Is setting it to -1 simply unwise, and if so, what alternatives could I use? Thanks!
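For reference, the bounded alternative I have in mind would look like the following; the 1 MB value is just an illustrative guess on my part, not a documented recommendation:

<property> 
    <name>http.content.limit</name> 
    <!-- Truncate each downloaded page at 1 MB instead of disabling the 
         limit entirely: large enough for most HTML pages, but bounded so 
         a single oversized response cannot exhaust the heap. --> 
    <value>1048576</value> 
</property> 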

Why don't you increase the memory and see how it goes? – ameertawfik

Is there a way to increase the memory for Nutch? – dagitab

Answer


The problem is that you set the content limit to unlimited (-1). When your crawler hits heavyweight URLs (e.g., https://en.wikipedia.org, https://wikipedia.org and https://en.wikibooks.org), your system can run out of memory during the crawl. You should increase Nutch's memory by setting the NUTCH_HEAPSIZE environment variable, e.g., export NUTCH_HEAPSIZE=4000 (see the Nutch script for details). Note that this value is equivalent to Hadoop's HADOOP_HEAPSIZE. If it still does not work, you should increase your system's physical memory ^^
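For example, a minimal sketch of re-running the failed parse step with a larger heap (the 4000 MB figure is just the example value above, and -all mirrors the "ParserJob: parsing all" line in your log):

# Heap size for the Nutch JVM, in MB; read by the bin/nutch launcher script. 
export NUTCH_HEAPSIZE=4000 

# Re-run the parse step that hit the OutOfMemoryError; -all parses every 
# fetched batch, matching "ParserJob: parsing all" in the output above. 
bin/nutch parse -all 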

Hope this helps,

李全安