2012-03-30 90 views

Solr out-of-memory error when reading a large index while indexing

Solr runs into an OOM error while indexing large amounts of data. I know the usual advice is to split the index into shards, but that is already the case: I am indexing into shards, and splitting further is not an option at this point. I want to understand what is happening, why this error occurs, and whether there is anything I can do about it other than splitting further or providing more memory.

I would be unhappy if RAM consumption were linear (or worse) in this scenario; I would much rather have it be sublinear.

The scenario is that I am indexing documents made of random strings (so the term dictionary is very large). Each document has a couple of fields of 20-30 characters and one field of roughly 200-500 characters. The index size in each shard is about 250-260 GB, and each Solr instance handling such an index has about 4 GB of memory. When the OOM happens, the Solr heap dump looks much the same after a restart, so it is probably not related to indexing itself but to the Solr searcher. The biggest objects in a heap dump taken just before the OOM look like this:

<tree type="Heap walker - Biggest objects"> 
    <object leaf="false" class="org.apache.solr.core.SolrCore" objectId="0xf02c" type="instance" retainedBytes="120456864" retainedPercent="97.4"> 
    <outgoing leaf="false" class="org.apache.solr.search.SolrIndexSearcher" objectId="0xfb52" type="instance" retainedBytes="120383232" retainedPercent="97.3" referenceType="not specified" referenceName="[transitive reference]"> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018e" type="instance" retainedBytes="8161688" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10185" type="instance" retainedBytes="8148072" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10188" type="instance" retainedBytes="8138232" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10186" type="instance" retainedBytes="8129160" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10191" type="instance" retainedBytes="8124608" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018a" type="instance" retainedBytes="8123144" retainedPercent="6.6" referenceType="not specified" referenceName="[transitive reference]"/> 

     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10192" type="instance" retainedBytes="8100904" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10190" type="instance" retainedBytes="8097984" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018b" type="instance" retainedBytes="8096160" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018d" type="instance" retainedBytes="8081656" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10187" type="instance" retainedBytes="8042504" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018c" type="instance" retainedBytes="8039336" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10189" type="instance" retainedBytes="8036952" retainedPercent="6.5" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1018f" type="instance" retainedBytes="7948568" retainedPercent="6.4" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10195" type="instance" retainedBytes="832448" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 

     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10196" type="instance" retainedBytes="830584" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10194" type="instance" retainedBytes="829232" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10197" type="instance" retainedBytes="828808" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10198" type="instance" retainedBytes="827312" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10199" type="instance" retainedBytes="824736" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x1019a" type="instance" retainedBytes="822608" retainedPercent="0.7" referenceType="not specified" referenceName="[transitive reference]"/> 
     <outgoing leaf="false" class="org.apache.lucene.index.ReadOnlySegmentReader" objectId="0x10193" type="instance" retainedBytes="783424" retainedPercent="0.6" referenceType="not specified" referenceName="[transitive reference]"/> 
     <cutoff objectCount="96" totalSizeBytes="534976" maximumSingleSizeBytes="87560"/> 
    </outgoing> 

    <cutoff objectCount="53" totalSizeBytes="73496" maximumSingleSizeBytes="40992"/> 
    </object> 
    <object leaf="false" class="org.mortbay.jetty.webapp.WebAppClassLoader" objectId="0xdf88" type="instance" retainedBytes="420208" retainedPercent="0.3"/> 
    <object leaf="false" class="org.apache.solr.core.SolrConfig" objectId="0xe5f5" type="instance" retainedBytes="184976" retainedPercent="0.1"/> 
..... 

A plain jmap dump looks like this:

Attaching to process ID 27000, please wait... 
Debugger attached successfully. 
Server compiler detected. 
JVM version is 20.5-b03 

using thread-local object allocation. 
Parallel GC with 2 thread(s) 

Heap Configuration: 
    MinHeapFreeRatio = 40 
    MaxHeapFreeRatio = 70 
    MaxHeapSize  = 268435456 (256.0MB) 
    NewSize   = 1310720 (1.25MB) 
    MaxNewSize  = 17592186044415 MB 
    OldSize   = 5439488 (5.1875MB) 
    NewRatio   = 2 
    SurvivorRatio = 8 
    PermSize   = 21757952 (20.75MB) 
    MaxPermSize  = 85983232 (82.0MB) 

Heap Usage: 
PS Young Generation 
Eden Space: 
    capacity = 31719424 (30.25MB) 
    used  = 17420488 (16.61347198486328MB) 
    free  = 14298936 (13.636528015136719MB) 
    54.92056854500258% used 
From Space: 
    capacity = 26673152 (25.4375MB) 
    used  = 10550856 (10.062080383300781MB) 
    free  = 16122296 (15.375419616699219MB) 
    39.55608995892199% used 
To Space: 
    capacity = 27000832 (25.75MB) 
    used  = 0 (0.0MB) 
    free  = 27000832 (25.75MB) 
    0.0% used 
PS Old Generation 
    capacity = 178978816 (170.6875MB) 
    used  = 168585552 (160.7757110595703MB) 
    free  = 10393264 (9.911788940429688MB) 
    94.19302002757689% used 
PS Perm Generation 
    capacity = 42008576 (40.0625MB) 
    used  = 41690016 (39.758697509765625MB) 
    free  = 318560 (0.303802490234375MB) 
    99.24167865152106% used 

I cannot see anything here that gives me any clue about how to deal with this, other than providing more memory, which is not a solution in the general case. I would like to understand what is going on: why do the Searcher and its ReadOnlySegmentReaders take up all the memory, do they really need it, and is there anything I can do about it?
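One knob that, if I remember the Solr 3.x documentation correctly, directly targets this searcher-side memory is the term index divisor: it makes each segment reader load only every Nth entry of the on-disk term index into the heap, trading term-lookup speed for RAM. A hedged sketch of the solrconfig.xml fragment (the exact element names should be checked against your Solr version):

```xml
<!-- Load only every 4th term-index entry into heap per segment reader;
     the in-memory term dictionary shrinks roughly 4x, term lookups get
     somewhat slower. Verify against your Solr version's solrconfig docs. -->
<indexReaderFactory name="IndexReaderFactory" class="solr.StandardIndexReaderFactory">
  <int name="setTermIndexDivisor">4</int>
</indexReaderFactory>
```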

Update: I have done a test with a small dictionary of about 1.5 million words (instead of random strings), and I reached an index size of about 350 GB with no OOME. So this is not directly tied to index size; it probably has more to do with the size of the term dictionary (the number of unique terms). But I would still like to understand my limits and how to work around them.
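This is consistent with how Lucene 3.x keeps a sampled term index in the heap: by default roughly every 128th term (the termIndexInterval) is held in memory per segment, so heap cost scales with the number of unique terms, not with raw index bytes. A back-of-envelope sketch, where every number is an assumption rather than a measurement:

```java
// Rough estimate of Lucene 3.x in-heap term index size.
// All constants below are assumptions; tune them to your own shard.
public class TermIndexEstimate {
    public static void main(String[] args) {
        long uniqueTerms = 2_000_000_000L; // random strings -> huge dictionary (guess)
        int termIndexInterval = 128;       // Lucene 3.x default sampling rate
        int bytesPerEntry = 60;            // term bytes + object overhead (guess)

        long inMemoryEntries = uniqueTerms / termIndexInterval;
        long heapBytes = inMemoryEntries * bytesPerEntry;

        System.out.printf("~%d term-index entries in heap, ~%d MB%n",
                inMemoryEntries, heapBytes / (1024 * 1024));
        // -> ~15625000 term-index entries in heap, ~894 MB
    }
}
```

With a 1.5-million-word dictionary the same arithmetic yields a few megabytes, which would explain why the 350 GB index survived while the random-string one did not.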


What operating system are you using? Is it 64-bit or 32-bit? – Yavar 2012-03-30 05:42:57


Linux RH,64bit – ilfrin 2012-03-30 13:17:44

Answer


It depends on how you get all the documents indexed onto each "shard" of your server farm. There is no out-of-the-box support for distributed indexing, but your approach can be as simple as a round-robin technique: index each document to the next server in the circle. A simple hashing system would also work; the Solr wiki suggests uniqueId.hashCode() % numServers as an adequate hash function.

Keep in mind that Solr does not calculate universal term/document frequencies. At a large scale, it is not likely to matter that tf/idf is calculated at the shard level; however, if your collection is heavily skewed in its distribution across servers, you might take issue with the relevance results. It is probably best to randomly distribute documents to your shards. Please note >>> try using hash codes instead of random strings to index your documents.
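The round-robin and hashing schemes described above can be sketched as follows; the class and method names are illustrative, not a real Solr API, and only the uniqueId.hashCode() % numServers rule comes from the Solr wiki:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical document router for a shard farm.
public class ShardRouter {
    private final List<String> shardUrls;
    private final AtomicLong counter = new AtomicLong();

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    /** Round-robin: each document goes to the next server in the circle. */
    public String nextShardRoundRobin() {
        int i = (int) (counter.getAndIncrement() % shardUrls.size());
        return shardUrls.get(i);
    }

    /** Hashing: uniqueId.hashCode() % numServers, per the Solr wiki.
        floorMod keeps the index non-negative for negative hash codes. */
    public String shardForId(String uniqueId) {
        int i = Math.floorMod(uniqueId.hashCode(), shardUrls.size());
        return shardUrls.get(i);
    }
}
```

The hash route is stable: the same uniqueId always lands on the same shard, which round-robin cannot guarantee across separate indexing runs.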


I think you did not understand my question. I am indexing into shards; it is done in parallel using Hadoop. And given that, as I mentioned, this particular text corpus crashed the shards at an index size of about 260 GB per shard, I now know that this OOM is not directly tied to index size, because I indexed some other data (not random strings) which gave me a 360 GB index and the shards survived... Anyway, I guess you are answering a different question; thanks for the interest anyway ;) – ilfrin 2012-03-30 17:57:22