2013-12-09

I started using MiniDFSCluster in my tests (my dependency is 2.0.0-cdh4.5.0). I start it with a simple routine:

import java.io.File;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Use a clean per-test directory as the cluster's storage root
File baseDir = new File("./target/hdfs/" + RunWithHadoopCluster.class.getSimpleName()).getAbsoluteFile();
FileUtil.fullyDelete(baseDir);
Configuration conf = new Configuration();
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
MiniDFSCluster hdfsCluster = builder.build();
String hdfsURI = "hdfs://localhost:" + hdfsCluster.getNameNodePort() + "/";

and I keep getting the following error:

12:02:15.994 [main] WARN o.a.h.metrics2.impl.MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 
12:02:16.047 [main] INFO o.a.h.m.impl.MetricsSystemImpl - Scheduled snapshot period at 10 second(s). 
12:02:16.047 [main] INFO o.a.h.m.impl.MetricsSystemImpl - NameNode metrics system started 

java.lang.IncompatibleClassChangeError: Implementing class 
    at java.lang.ClassLoader.defineClass1(Native Method) 
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800) 
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) 
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) 
    at java.net.URLClassLoader.access$100(URLClassLoader.java:71) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:361) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at org.apache.hadoop.metrics2.source.JvmMetrics.getEventCounters(JvmMetrics.java:162) 
    at org.apache.hadoop.metrics2.source.JvmMetrics.getMetrics(JvmMetrics.java:96) 
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194) 
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:171) 
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:150) 
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) 
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) 
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) 
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57) 
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220) 
    at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:95) 
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:244) 
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:222) 
    at org.apache.hadoop.metrics2.source.JvmMetrics.create(JvmMetrics.java:80) 
    at org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.create(NameNodeMetrics.java:94) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initMetrics(NameNode.java:278) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:436) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169) 
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:879) 
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:770) 
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:628) 
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:323) 
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:113) 
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:305) 

What could be causing this?

Answers

Double-check your dependencies. This error indicates an incompatible version of a logging jar on the classpath. I ran into a similar problem and had to exclude a log4j-over-slf4j dependency pulled in by another third-party library.
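For reference, a minimal sketch of such an exclusion in a Maven POM. The coordinates below assume the offending artifact is storm-core (mentioned in the comments); substitute whichever dependency actually pulls in the bridge, and note that the groupId and version shown are placeholders that depend on your Storm release:

```xml
<dependency>
  <!-- Hypothetical culprit; replace with the dependency that drags in the bridge -->
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version><!-- your version --></version>
  <exclusions>
    <!-- Keep the log4j-over-slf4j bridge off the test classpath -->
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree` is a quick way to find out which dependency is pulling the bridge in before deciding what to exclude.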

That's right! Once I removed the log4j dependency, the error was gone. –

Indeed, my life got better once I excluded the log4j-over-slf4j bridge from storm-core, as you suggested :) –


Upgrading to SLF4J 1.7.6 should solve the problem (we use 1.7.7), because log4j-over-slf4j v1.7.5 is missing AppenderSkeleton.
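One way to force the newer bridge with Maven is a dependencyManagement entry; a sketch, assuming 1.7.7 (the version we use; adjust as needed):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin log4j-over-slf4j to a version that includes AppenderSkeleton -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>log4j-over-slf4j</artifactId>
      <version>1.7.7</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```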

Most likely some class is making a Log4J call that reaches AppenderSkeleton, but that class is missing from the bridge that redirects log4j through slf4j, so it blows up with the stack trace the poster shows. The release notes at http://www.slf4j.org/news.html describe this as fixed in 1.7.6.

Filed against YARN, where we saw the problem: https://issues.apache.org/jira/browse/YARN-2875