I am trying to import data from a MySQL table into HDFS, but the import cannot copy anything to the Hadoop datanode. The Sqoop import command I am using is:
sqoop import --connect jdbc:mysql://localhost:3306/employee --username root --password *** --table Emp --m 1
It gives me the following error:
16/05/07 20:01:18 ERROR tool.ImportTool: Encountered IOException running import job: java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/usr/lib/sqoop/lib/parquet-format-2.0.0.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:269)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:390)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:483)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
I do have parquet-format-2.0.0.jar in the /usr/lib/sqoop/lib folder on the local filesystem, but even so it shows this error.
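Note that the path in the exception is an HDFS URI (hdfs://localhost:54310/usr/lib/sqoop/lib/...), so the job appears to be looking for the jar inside HDFS rather than on the local disk. A minimal way to compare the two locations (paths taken from the error above) is roughly:
ls -l /usr/lib/sqoop/lib/parquet-format-2.0.0.jar            # jar on the local filesystem
hdfs dfs -ls hdfs://localhost:54310/usr/lib/sqoop/lib/       # same path as the job sees it in HDFS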
I then tried to copy all the jars from the Sqoop lib directory into HDFS, but I could not do that either.
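The copy attempt looked roughly like this (a sketch from memory; the exact invocation may have differed), and it threw the error that follows:
hdfs dfs -mkdir -p /usr/lib/sqoop/lib                        # create the target directory in HDFS
hdfs dfs -put /usr/lib/sqoop/lib/*.jar /usr/lib/sqoop/lib/   # upload all Sqoop lib jars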
16/05/07 18:40:11 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /usr/lib/sqoop/lib/xz-1.0.jar._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at ...
What can be done about this? I cannot copy the jar files into HDFS, and I cannot import the data from MySQL into HDFS either.
I tried this solution: sqoop import eror - File does not exist: , but could not get past its second step. I have also cleared the cache and restarted the Hadoop filesystem.
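By restarting the Hadoop filesystem I mean the standard stop/start scripts (assuming a typical install where they are on the PATH or under $HADOOP_HOME/sbin):
stop-dfs.sh    # stop namenode, datanodes, secondary namenode
start-dfs.sh   # start them again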
Thanks.
I am getting this error in the datanode log: java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop_store/hdfs/datanode: namenode clusterID = CID-c5c99144-3198-443a-8af0-cd3e3407f706; – CoderTest
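In case it helps with diagnosis, the two clusterIDs can be compared directly in the VERSION files (the datanode path is taken from the error above; the parallel namenode path is an assumption about this layout):
cat /usr/local/hadoop_store/hdfs/namenode/current/VERSION   # namenode clusterID (path assumed)
cat /usr/local/hadoop_store/hdfs/datanode/current/VERSION   # datanode clusterID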