Hi Spark folks, I'm quite new to Spark, which is why I would really appreciate your help.
I'm trying to submit a very simple job to a Spark cluster from my laptop. Although it works when I submit it with ./spark-submit, it throws an exception when I try to do the same thing programmatically.
Environment:
- Spark: 1 master node and 2 worker nodes (standalone mode); Spark was not compiled from source, the binaries were downloaded
- Spark version: 1.0.2
- Java version: 1.7.0_45
- The application jar is available everywhere (on the client and on the worker nodes, in the same location)
- The README.md file is also copied to every node
The application I'm trying to run:
// Input file, copied to every node beforehand
val logFile = "/user/vagrant/README.md"
val conf = new SparkConf()
conf.setMaster("spark://192.168.33.50:7077") // standalone master URL
conf.setAppName("Simple App")
// Ship the application jar to the worker nodes
conf.setJars(List("file:///user/vagrant/spark-1.0.2-bin-hadoop1/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar"))
conf.setSparkHome("/user/vagrant/spark-1.0.2-bin-hadoop1")
val sc = new SparkContext(conf)
val logData = sc.textFile(logFile, 2).cache() // read with 2 partitions and cache
...
So the problem is that this application runs successfully on the cluster when I submit it like this:
./spark-submit --class com.paycasso.SimpleApp --master spark://192.168.33.50:7077 --deploy-mode client file:///home/vagrant/spark-1.0.2-bin-hadoop1/bin/hello-apache-spark_2.10-1.0.0-SNAPSHOT.jar
But it doesn't work when I try to do the same thing programmatically by calling sbt run.
Here is the stack trace I get on the master node:
14/09/04 15:09:44 ERROR Remoting: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = -6451051318873184044, local class serialVersionUID = 583745679236071411
java.io.InvalidClassException: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = -6451051318873184044, local class serialVersionUID = 583745679236071411
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at akka.serialization.JavaSerializer$$anonfun$1.apply(Serializer.scala:136)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at akka.serialization.JavaSerializer.fromBinary(Serializer.scala:136)
at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
at scala.util.Try$.apply(Try.scala:161)
at akka.serialization.Serialization.deserialize(Serialization.scala:98)
at akka.remote.serialization.MessageContainerSerializer.fromBinary(MessageContainerSerializer.scala:58)
at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
at scala.util.Try$.apply(Try.scala:161)
at akka.serialization.Serialization.deserialize(Serialization.scala:98)
at akka.remote.MessageSerializer$.deserialize(MessageSerializer.scala:23)
at akka.remote.DefaultMessageDispatcher.payload$lzycompute$1(Endpoint.scala:55)
at akka.remote.DefaultMessageDispatcher.payload$1(Endpoint.scala:55)
at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:73)
at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:764)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Is there any solution for this? Thanks in advance.
After wasting a lot of time, I found the problem. Although I don't use Hadoop/HDFS in my application, the hadoop-client dependency still matters. The problem was the hadoop-client version: it differed from the Hadoop version Spark was built against. Spark's Hadoop version is 1.2.1, but in my application it was 2.4. The mismatched dependencies presumably produced binary-incompatible builds of Spark's classes on the client and on the cluster, hence the serialVersionUID conflict in the stack trace above.
When I changed the hadoop-client version to 1.2.1 in my application, I was able to execute the Spark code on the cluster.
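For reference, here is a minimal sketch of what such a build.sbt could look like. The project name and Scala version are illustrative, not taken from my actual setup; the point is pinning hadoop-client to the Hadoop version the Spark binaries were built with (hadoop1, i.e. 1.2.1):

// build.sbt -- a minimal sketch of the dependency fix
// (name and scalaVersion are illustrative; the hadoop-client pin is the point)
name := "hello-apache-spark"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  // Spark core for Scala 2.10
  "org.apache.spark" %% "spark-core" % "1.0.2",
  // Must match the Hadoop version of the Spark binaries (hadoop1 => 1.2.1),
  // not 2.4, even if the application itself never touches HDFS
  "org.apache.hadoop" % "hadoop-client" % "1.2.1"
)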