2014-07-07

Slow Solr startup decompressing stored fields

I have an embedded Solr server that I use together with Spring Data Solr. I have roughly 600,000 documents taking up 3 GB. During startup, Solr takes several minutes before it can execute the first query. Using VisualVM, I have traced the bottleneck to what appears to be LZ4 decompression of the first documents being read from disk, which takes a very long time. The trace looks like this:

searcherExecutor-5-thread-1 
    java.lang.Thread.run() 
    java.util.concurrent.ThreadPoolExecutor$Worker.run() 
     java.util.concurrent.ThreadPoolExecutor.runWorker() 
     java.util.concurrent.FutureTask.run() 
     java.util.concurrent.FutureTask$Sync.innerRun() 
     org.apache.solr.core.SolrCore$5.call() 
      org.apache.solr.handler.component.SuggestComponent$SuggesterListener.newSearcher() 
      org.apache.solr.spelling.suggest.SolrSuggester.reload() 
      org.apache.solr.spelling.suggest.SolrSuggester.build() 
      org.apache.lucene.search.suggest.Lookup.build() 
       org.apache.lucene.search.suggest.analyzing.AnalyzingSuggester.build() 
       org.apache.lucene.search.suggest.DocumentDictionary$DocumentInputIterator.next() 
       org.apache.lucene.index.IndexReader.document() 
       org.apache.lucene.index.BaseCompositeReader.document() 
        org.apache.lucene.index.SegmentReader.document() 
        org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument() 
        org.apache.lucene.codecs.compressing.CompressionMode$4.decompress() 
        org.apache.lucene.codecs.compressing.LZ4.decompress() 
         org.apache.lucene.store.BufferedIndexInput.readBytes() 
         org.apache.lucene.store.BufferedIndexInput.readBytes() 
         org.apache.lucene.store.BufferedIndexInput.refill() 
         org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.readInternal() 
          java.io.RandomAccessFile.seek[native]() 

I need the stored fields for object mapping. I don't understand why so much decompression has to happen just to load a single document. It is as if the decompression lookup table were huge. Any hints/suggestions?


Answer


I disabled the suggester component and the spellchecker, and startup is much faster now.
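The stack trace explains why disabling the suggester helps: `SolrSuggester.build()` iterates a `DocumentDictionary` over every stored document at searcher-open time, forcing each compressed stored-fields block to be decompressed. As an alternative to removing the component entirely, a sketch of a `solrconfig.xml` suggester section that avoids the rebuild at startup, assuming a hypothetical suggester name and source field (exact parameter support depends on the Solr version):

```xml
<!-- Sketch only: "mySuggester" and the "title" field are placeholders.
     Turning off buildOnStartup/buildOnCommit means the dictionary is only
     built when a request explicitly passes suggest.build=true, so opening
     a new searcher no longer decompresses every stored document. -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="buildOnStartup">false</str>
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>
```

With this layout the suggester stays available, and the expensive dictionary build can be triggered once, on demand, rather than on every restart.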