4

ColumnFamilyInputFormat - Could not get input splits

I am getting a strange exception when trying to access Cassandra from Hadoop using the ColumnFamilyInputFormat class. In my Hadoop process, this is how I connect to Cassandra, after including cassandra-all.jar version 1.1:

private void setCassandraConfig(Job job) { 
    job.setInputFormatClass(ColumnFamilyInputFormat.class); 
    ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160"); 
    ConfigHelper.setInputInitialAddress(job.getConfiguration(), "204.236.1.29"); 
    ConfigHelper.setInputPartitioner(job.getConfiguration(), 
      "RandomPartitioner"); 
    ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE, 
      COLUMN_FAMILY); 
    SlicePredicate predicate = new SlicePredicate().setColumn_names(Arrays 
      .asList(ByteBufferUtil.bytes(COLUMN_NAME))); 
    ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate); 
    // this will cause the predicate to be ignored in favor of scanning 
    // everything as a wide row 
    ConfigHelper.setInputColumnFamily(job.getConfiguration(), KEYSPACE, 
      COLUMN_FAMILY, true); 
    ConfigHelper.setOutputInitialAddress(job.getConfiguration(), 
      "204.236.1.29"); 
    ConfigHelper.setOutputPartitioner(job.getConfiguration(), 
      "RandomPartitioner"); 
} 

public int run(String[] args) throws Exception { 
    // use a smaller page size that doesn't divide the row count evenly to 
    // exercise the paging logic better 
    ConfigHelper.setRangeBatchSize(getConf(), 99); 

    Job processorJob = new Job(getConf(), "dmp_normalizer"); 
    processorJob.setJarByClass(DmpProcessorRunner.class); 
    processorJob.setMapperClass(NormalizerMapper.class); 
    processorJob.setReducerClass(SelectorReducer.class); 
    processorJob.setOutputKeyClass(Text.class); 
    processorJob.setOutputValueClass(Text.class); 
    FileOutputFormat.setOutputPath(processorJob, new Path(TEMP_PATH_PREFIX)); 
    processorJob.setOutputFormatClass(TextOutputFormat.class); 
    setCassandraConfig(processorJob); 
    ... 
} 

But when I run the Hadoop job (I am running it on Amazon EMR) I get the exception below. Notice that the IP in the error is 127.0.0.1, not the IP I configured...

Any hints? What could I be doing wrong?

2012-11-22 21:37:34,235 ERROR org.apache.hadoop.security.UserGroupInformation (Thread-6): PriviledgedActionException as:hadoop cause:java.io.IOException: Could not get input splits 
2012-11-22 21:37:34,235 INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob (Thread-6): dmp_normalizer got an error while submitting 
java.io.IOException: Could not get input splits 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:178) 
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1017) 
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1034) 
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:174) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:952) 
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:905) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132) 
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:905) 
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:500) 
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:336) 
    at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:233) 
    at java.lang.Thread.run(Thread.java:662) 
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: failed connecting to all endpoints 127.0.0.1 
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222) 
    at java.util.concurrent.FutureTask.get(FutureTask.java:83) 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:174) 
    ... 13 more 
Caused by: java.io.IOException: failed connecting to all endpoints 127.0.0.1 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSubSplits(ColumnFamilyInputFormat.java:272) 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat.access$200(ColumnFamilyInputFormat.java:77) 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:211) 
    at org.apache.cassandra.hadoop.ColumnFamilyInputFormat$SplitCallable.call(ColumnFamilyInputFormat.java:196) 
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:138) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
    ... 1 more 
2012-11-22 21:37:39,319 INFO com.s1mbi0se.dmp.processor.main.DmpProcessorRunner (main): Process ended 

Can you post the rest of your configuration logic (the driver class)? –


Sorry, I didn't understand what you meant... I'll add the rest of the configuration to the question; if that's not what you meant, please let me know. – mvallebr

Answers

1

I was able to solve the problem by changing the Cassandra configuration: listen_address needs to be a valid external IP for this to work.

The exception doesn't seem related to this at all, which is why it took me a long time to find the answer. In short: if you specify 0.0.0.0 in the Cassandra config and then try to access the node from an external IP, you get this error saying that no endpoint could be reached at 127.0.0.1.
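For reference, a minimal sketch of the relevant cassandra.yaml lines under this fix; the address is the one from the question and the exact values are assumptions for your own node:

    # cassandra.yaml (sketch - adjust addresses for your node)
    listen_address: 204.236.1.29   # an address other machines can actually reach, not 0.0.0.0/localhost
    rpc_address: 204.236.1.29      # Thrift endpoint on port 9160 that ConfigHelper.setInputInitialAddress talks to

After changing these, restart the node so the ring reports the external address instead of 127.0.0.1.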

-1

In my case it was a wrong keyspace name; look carefully at what you are passing to the ConfigHelper.setInputColumnFamily method.
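One way to catch a misspelled keyspace before submitting the job is to ask Cassandra for it over Thrift. This is only a sketch (the host, port, and keyspace name are assumptions, not part of the original answer):

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.NotFoundException;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;

    public class KeyspaceCheck {
        public static void main(String[] args) throws Exception {
            // Same host/port the job uses in setInputInitialAddress / setInputRpcPort
            TFramedTransport transport = new TFramedTransport(new TSocket("204.236.1.29", 9160));
            transport.open();
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            try {
                // Throws NotFoundException if the keyspace name is misspelled
                client.describe_keyspace("my_keyspace");
                System.out.println("keyspace exists");
            } catch (NotFoundException e) {
                System.err.println("keyspace not found - check the name passed to setInputColumnFamily");
            } finally {
                transport.close();
            }
        }
    }

Failing fast here gives a clear NotFoundException instead of the generic "Could not get input splits" at job submission time.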