2016-05-18 130 views

AWS EMR PySpark connecting to MySQL

I am trying to connect to MySQL through JDBC from PySpark. I can do this outside of EMR, but when I try it on EMR, pyspark fails to start correctly.

This is the command I use on my machine:

pyspark --conf spark.executor.extraClassPath=/home/hadoop/mysql-connector-java-5.1.38-bin.jar --driver-class-path /home/hadoop/mysql-connector-java-5.1.38-bin.jar --jars /home/hadoop/mysql-connector-java-5.1.38-bin.jar 

and I get the following output:

16/05/18 14:29:21 INFO Client: Application report for application_1463578502297_0011 (state: FAILED) 
16/05/18 14:29:21 INFO Client: 
    client token: N/A 
    diagnostics: Application application_1463578502297_0011 failed 2 times due to AM Container for appattempt_1463578502297_0011_000002 exited with exitCode: 1 
For more detailed output, check application tracking page:http://ip-10-24-0-75.ec2.internal:8088/cluster/app/application_1463578502297_0011Then, click on links to logs of each attempt. 
Diagnostics: Exception from container-launch. 
Container id: container_1463578502297_0011_02_000001 
Exit code: 1 
Stack trace: ExitCodeException exitCode=1: 
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545) 
    at org.apache.hadoop.util.Shell.run(Shell.java:456) 
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722) 
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302) 
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:745) 


Container exited with a non-zero exit code 1 
Failing this attempt. Failing the application. 
    ApplicationMaster host: N/A 
    ApplicationMaster RPC port: -1 
    queue: default 
    start time: 1463581754050 
    final status: FAILED 
    tracking URL: http://ip-10-24-0-75.ec2.internal:8088/cluster/app/application_1463578502297_0011 
    user: hadoop 
16/05/18 14:29:21 INFO Client: Deleting staging directory .sparkStaging/application_1463578502297_0011 
16/05/18 14:29:21 ERROR SparkContext: Error initializing SparkContext. 
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master. 
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124) 
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64) 
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144) 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:530) 
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:234) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:214) 
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:79) 
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:68) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 

I also tried without the extra jars, connecting with the mariadb.jdbc driver, which I have read is the default:

from pyspark.sql import SQLContext
sqlctx = SQLContext(sc)
df = sqlctx.read.format("jdbc") \
    .option("url", "jdbc:mysql://ip:port/db") \
    .option("driver", "com.mariadb.jdbc.Driver") \
    .option("dbtable", "...") \
    .option("user", "....") \
    .option("password", "...") \
    .load()

but I get:

Traceback (most recent call last): 
    File "<stdin>", line 1, in <module> 
    File "/usr/lib/spark/python/pyspark/sql/readwriter.py", line 139, in load 
    return self._df(self._jreader.load()) 
    File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    File "/usr/lib/spark/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o81.load. 
: java.lang.ClassNotFoundException: com.mariadb.jdbc.Driver 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366) 
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) 
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) 
    at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:45) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:45) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:45) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120) 
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91) 
    at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57) 
    at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158) 
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 

What should I do?

Thanks, Pedro Rosanes.


Go to the job's Spark History UI and check the Environment tab to see whether all the required libraries were loaded as expected – vgunnu


Have you tried specifying only the --jars option? –


It looks like you have the MySQL connector on the classpath but the MariaDB driver in the connection properties. Have you tried `.option("driver", "com.mysql.jdbc.Driver")`? –

Answer


If you want to run any Spark job on Amazon EMR 3.x or EMR 4.x, you need to do the following:

1) You can set the spark-defaults.conf properties while bootstrapping, i.e. you can change the driver classpath and executor classpath configuration properties, and also maximizeResourceAllocation (comment if you need more information). docs
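For point 1, on EMR 4.x the same spark-defaults properties can be supplied at cluster-creation time as a configuration object (on EMR 3.x they are set with a bootstrap action instead). A minimal sketch, assuming the connector jar has already been placed at /home/hadoop/mysql-connector-java-5.1.38-bin.jar on every node:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.driver.extraClassPath": "/home/hadoop/mysql-connector-java-5.1.38-bin.jar",
      "spark.executor.extraClassPath": "/home/hadoop/mysql-connector-java-5.1.38-bin.jar"
    }
  }
]
```

This JSON would be passed to `aws emr create-cluster --configurations`, avoiding any manual editing of spark-defaults.conf over SSH.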

2) You need to download all the required JDBC jars (in your case mysql-connector.jar and mariadb-connector.jar) and add them to every classpath location, e.g. Spark, Yarn and Hadoop, on all nodes, whether MASTER, CORE or TASK (the Spark-on-Yarn scenario covers most of them). bootstrap scripts docs
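A bootstrap action for point 2 can be a small script that copies the jars onto every node before applications start. A sketch, assuming a hypothetical S3 bucket `your-bucket` holding the connector jars (the mariadb jar name is also a placeholder):

```sh
#!/bin/bash
# Hypothetical bootstrap script: EMR runs this on every node
# (MASTER, CORE and TASK), so the jars end up cluster-wide.
aws s3 cp s3://your-bucket/jars/mysql-connector-java-5.1.38-bin.jar /home/hadoop/
aws s3 cp s3://your-bucket/jars/mariadb-java-client.jar /home/hadoop/
```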

3) If your Spark job communicates with your database only from the driver node, then you may only need to use --jars; it will not throw an exception and will work fine.

4) It is also recommended that you try the master as yarn-cluster instead of local or yarn-client.
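Putting points 3 and 4 together, a yarn-cluster submission might look like the sketch below; `your_job.py` is a placeholder for your PySpark script, and the jar is assumed to exist at the same path on every node:

```sh
spark-submit \
  --master yarn-cluster \
  --jars /home/hadoop/mysql-connector-java-5.1.38-bin.jar \
  --driver-class-path /home/hadoop/mysql-connector-java-5.1.38-bin.jar \
  --conf spark.executor.extraClassPath=/home/hadoop/mysql-connector-java-5.1.38-bin.jar \
  your_job.py
```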

In your case, whether you use MariaDB or MySQL, copy your jars to $SPARK_HOME/lib, $HADOOP_HOME/lib, etc. on every node of your cluster and then give it a try.

Later, you can use a bootstrap action to copy the jars onto all nodes at cluster-creation time.

Please comment below for more information.

+1

Connecting through the driver node only worked using just --jars (as you said in 3). To connect through the other nodes, I needed to find out how to edit spark-defaults.conf without using SSH (as you said in 2). –
