
I have an HBase table with two column families: 'i:*' for info and 'f:b' for file:blob. I am storing images as the values, and some of the images are almost 12 MB. Why would scanning these large values bring down my HBase cluster?

I can load/insert the files from Java without any problem, but as soon as I try to retrieve them by scanning for the 'f:b' values (the blobs), my scanner just sits there until it times out, and the region servers on my cluster die one after another (it is a 20-node cluster). The only way to stop my scanner from somehow spreading this quasi-virus to my helpless nodes is to drop the table entirely (or so it seems).
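For reference, the write path roughly looks like the sketch below, using the 0.98-era client API. The table name RASTER and the column layout ('i:name' for the file name, 'f:b' for the blob) come from the question; the row key, image path, and class name are made up for illustration, and the hbase.client.keyvalue.maxsize override is assumed to be needed on the write path because the default client-side limit (about 10 MB) would otherwise reject the largest images.

import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class RasterLoader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // 0 (or less) disables the client-side KeyValue size check,
        // whose default of ~10 MB would reject the ~12 MB blobs.
        conf.set("hbase.client.keyvalue.maxsize", "0");

        HTable table = new HTable(conf, "RASTER"); // 0.98-era client API
        try {
            // Hypothetical local image; path and row key are illustrative only.
            byte[] image = Files.readAllBytes(Paths.get("c:\\temp\\tile-001.png"));
            Put put = new Put(Bytes.toBytes("tile-001"));
            put.add(Bytes.toBytes("i"), Bytes.toBytes("name"), Bytes.toBytes("tile-001.png"));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("b"), image);
            table.put(put);
        } finally {
            table.close();
        }
    }
}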

I am using Cloudera EDH, HBase "0.98.6-cdh5.2.0".

Unfortunately my client just times out, so there is no useful exception on that side; all I can get from the node logs is below:

2014-10-27 21:47:36,106 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp 
java.io.FileNotFoundException: File hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp does not exist. 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:658) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712) 
    at org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath.getChildren(HFileArchiver.java:628) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:346) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:347) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:137) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101) 
    at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
    at java.lang.Thread.run(Thread.java:745) 
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to complete archive of: [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp]. Those files are still in the original location, and they may slow down reads. 
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table 
java.io.IOException: Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/i, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits]), cannot delete region directory. 
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:148) 
    at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254) 
    at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101) 
    at org.apache.hadoop.hbase.Chore.run(Chore.java:87) 
    at java.lang.Thread.run(Thread.java:745) 
2014-10-27 21:47:36,146 INFO org.apache.hadoop.hbase.master.SplitLogManager: Done splitting /hbase/splitWAL/WALs%2Finsight-staging-slave019.spadac.com%2C60020%2C1414446135179-splitting%2Finsight-staging-slave019.spadac.com%252C60020%252C1414446135179.1414446317771 

Here is my code that scans the table:

try { 
    if (hBaseConfig == null) { 
        hBaseConfig = HBaseConfiguration.create(); 
        hBaseConfig.setInt("hbase.client.scanner.timeout.period", 1200000); 
        hBaseConfig.set("hbase.client.keyvalue.maxsize", "0"); 
        hBaseConfig.set("hbase.master", PROPS.get().getProperty("hbase.master")); 
        hBaseConfig.set("hbase.zookeeper.quorum", PROPS.get().getProperty("zks")); 
        hBaseConfig.set("zks.port", "2181"); 
        table = new HTable(hBaseConfig, "RASTER"); 
    } 

    // Scan both the blob column and the name column.
    Scan scan = new Scan(); 
    scan.addColumn("f".getBytes(), "b".getBytes()); 
    scan.addColumn("i".getBytes(), "name".getBytes()); 
    ResultScanner scanner = table.getScanner(scan); 

    for (Result rr = scanner.next(); rr != null; rr = scanner.next()) { 
        /* I NEVER EVEN GET HERE IF I SCAN FOR 'f:b' */ 
        CellScanner cs = rr.cellScanner(); 
        String name = ""; 
        byte[] fileBs = null; 

        // Walk the cells of the row, picking out 'i:name' and 'f:b'.
        while (cs.advance()) { 
            Cell current = cs.current(); 

            byte[] cloneValue = CellUtil.cloneValue(current); 
            byte[] cloneFamily = CellUtil.cloneFamily(current); 
            byte[] qualBytes = CellUtil.cloneQualifier(current); 
            String fam = Bytes.toString(cloneFamily); 
            String qual = Bytes.toString(qualBytes); 

            if (fam.equals("i")) { 
                if (qual.equals("name")) { 
                    name = Bytes.toString(cloneValue); 
                } 
            } else if (fam.equals("f") && qual.equals("b")) { 
                fileBs = cloneValue; 
            } 
        } 

        // Write the blob to disk under its stored name, then stop after the first row.
        OutputStream bof = new FileOutputStream("c:\\temp\\" + name); 
        bof.write(fileBs); 
        bof.close(); 
        break; 
    } 
} catch (IOException ex) { 
    //removed 
} 

Thanks. Does anyone know why scanning for large blobs might be wiping out my cluster? I'm sure it's something silly, I just can't figure it out.


By the way, I can scan the other columns with the code above without any problem – markg 2014-10-27 23:27:31

Answer


It looks like this was the problem:

hBaseConfig.set("hbase.client.keyvalue.maxsize", "0"); 

I changed it to "50" and now it works.
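For context, hbase.client.keyvalue.maxsize is the client-side cap on the size of a single KeyValue (specified in bytes, with a default of roughly 10 MB), and a value of 0 or less disables the check entirely. A minimal sketch of the working configuration, changing only that one line from the code in the question:

// Same setup as in the question; only the maxsize value differs.
hBaseConfig = HBaseConfiguration.create();
hBaseConfig.setInt("hbase.client.scanner.timeout.period", 1200000);
hBaseConfig.set("hbase.client.keyvalue.maxsize", "50"); // was "0", which disabled the size check
// ... remaining settings and the HTable/Scan code as in the question.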
