
I want to use MapReduce to bulk-load text files into HBase. Everything works fine, but at the final bulk-load step I get a warning ("WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs:" on EMR) and my MapReduce job gets stuck:

17/06/15 10:22:43 INFO mapreduce.Job: Job job_1495181241247_0013 completed successfully 
17/06/15 10:22:43 INFO mapreduce.Job: Counters: 49 
     File System Counters 
       FILE: Number of bytes read=836391 
       FILE: Number of bytes written=1988049 
       FILE: Number of read operations=0 
       FILE: Number of large read operations=0 
       FILE: Number of write operations=0 
       HDFS: Number of bytes read=73198 
       HDFS: Number of bytes written=12051358 
       HDFS: Number of read operations=8 
       HDFS: Number of large read operations=0 
       HDFS: Number of write operations=3 
     Job Counters 
       Launched map tasks=1 
       Launched reduce tasks=1 
       Data-local map tasks=1 
       Total time spent by all maps in occupied slots (ms)=196200 
       Total time spent by all reduces in occupied slots (ms)=428490 
       Total time spent by all map tasks (ms)=4360 
       Total time spent by all reduce tasks (ms)=4761 
       Total vcore-milliseconds taken by all map tasks=4360 
       Total vcore-milliseconds taken by all reduce tasks=4761 
       Total megabyte-milliseconds taken by all map tasks=6278400 
       Total megabyte-milliseconds taken by all reduce tasks=13711680 
     Map-Reduce Framework 
       Map input records=5604 
       Map output records=5603 
       Map output bytes=8240332 
       Map output materialized bytes=836387 
       Input split bytes=240 
       Combine input records=0 
       Combine output records=0 
       Reduce input groups=5603 
       Reduce shuffle bytes=836387 
       Reduce input records=5603 
       Reduce output records=179296 
       Spilled Records=11206 
       Shuffled Maps =1 
       Failed Shuffles=0 
       Merged Map outputs=1 
       GC time elapsed (ms)=137 
       CPU time spent (ms)=11240 
       Physical memory (bytes) snapshot=820736000 
       Virtual memory (bytes) snapshot=7694557184 
       Total committed heap usage (bytes)=724566016 
     Shuffle Errors 
       BAD_ID=0 
       CONNECTION=0 
       IO_ERROR=0 
       WRONG_LENGTH=0 
       WRONG_MAP=0 
       WRONG_REDUCE=0 
     File Input Format Counters 
       Bytes Read=72958 
     File Output Format Counters 
       Bytes Written=12051358 
Incremental upload completed.......... 
job is successfull..........H file Loading Will start Now 
17/06/15 10:22:43 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://ip:8020/user/hadoop/ESGTRF/outputdir/output0/_SUCCESS 

The same code works on Cloudera, but when I run it on AWS EMR I hit this problem.

I suspect it is something in the configuration; I have not explicitly set any configuration myself.
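For reference, the loading step that gets stuck is the standard LoadIncrementalHFiles call. Below is a minimal sketch of what that driver code typically looks like on HBase 1.x; the table name is a placeholder and the surrounding setup is assumed, since the original driver was not posted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

    public class BulkLoadStep {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Table table = conn.getTable(TableName.valueOf("my_table")); // placeholder table
                 RegionLocator locator = conn.getRegionLocator(table.getName())) {
                // The job output directory must contain one subdirectory per
                // column family; marker files such as _SUCCESS are skipped,
                // which is what the WARN line in the log above reports.
                LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
                loader.doBulkLoad(
                        new Path("hdfs://ip:8020/user/hadoop/ESGTRF/outputdir/output0"),
                        conn.getAdmin(), table, locator);
            }
        }
    }

Note that the "Skipping non-directory ..._SUCCESS" warning by itself is harmless; the real symptom is that the load hangs afterwards.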

Answer


My problem was solved after explicitly setting the permission:

// hfs is the FileSystem holding the HFile output directory
hfs.setPermission(new Path(outputPath + "/columnFamilyName"), FsPermission.valueOf("drwxrwxrwx"));
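For completeness, here is a self-contained sketch of that fix with the imports it needs. The identifiers hfs, outputPath, and "columnFamilyName" come from the snippet above; the helper method and its name are hypothetical. A plausible explanation for the Cloudera/EMR difference is that on EMR the user running the job and the HBase service user differ, so the RegionServer cannot move HFiles it cannot access:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class HFilePermissionFix {
        // Make the per-column-family HFile output directory world-accessible
        // before calling doBulkLoad, so the HBase service user can read and
        // move the files into the table's region directories.
        static void openUpPermissions(Configuration conf, String outputPath)
                throws IOException {
            Path cfDir = new Path(outputPath + "/columnFamilyName"); // placeholder family dir
            FileSystem hfs = cfDir.getFileSystem(conf);
            hfs.setPermission(cfDir, FsPermission.valueOf("drwxrwxrwx"));
        }
    }

Call this after the MapReduce job finishes and before the LoadIncrementalHFiles step.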