Spark submit python failed when trying to access HDFS in cluster mode

Abhishek Choudhary

I am trying to submit a simple wordcount PySpark job to the cluster, but it fails with "No such file or directory".

  • The scenario works fine when I use the pyspark shell and access the HDFS file.
  • The scenario works fine when I run the wordcount from a Scala jar against the same HDFS file location.
  • The scenario works fine when I submit the Python job in local mode.

Below is my submit command -

spark-submit --deploy-mode client --master spark://Wonderwoman:7077 --total-executor-cores 2 wordcount.py hdfs://10.65.104.14:8099/user/cdh/cian/detector/text.txt

Wordcount script
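The script itself did not survive in the post; below is a minimal sketch of what a wordcount.py matching the traceback might look like (the traceback points at output = counts.collect()). The argument handling and variable names are assumptions, not the author's exact code.

from __future__ import print_function
import sys
from pyspark import SparkContext

if __name__ == "__main__":
    # Read the HDFS path passed on the command line, split lines into words,
    # and count occurrences of each word.
    sc = SparkContext(appName="PythonWordCount")
    lines = sc.textFile(sys.argv[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
    output = counts.collect()
    for (word, count) in output:
        print("%s: %i" % (word, count))
    sc.stop()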

Error log

File "/home/cdh/code/wordcount.py", line 22, in <module>
    output = counts.collect()
  File "/home/cdh/spark-1.6.1-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
  File "/home/cdh/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/home/cdh/spark-1.6.1-bin-hadoop2.6/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, Spiderwoman): java.io.IOException: Cannot run program "python": error=2, No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
        at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
        at java.lang.ProcessImpl.start(ProcessImpl.java:134)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
        ... 18 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
        at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
        at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:209)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "python": error=2, No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:161)
        at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:87)
        at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:63)
        at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:134)
        at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:101)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        ... 1 more
Caused by: java.io.IOException: error=2, No such file or directory
        at java.lang.UNIXProcess.forkAndExec(Native Method)
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
        at java.lang.ProcessImpl.start(ProcessImpl.java:134)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
        ... 18 more

I believe that when the .py file is submitted it is somehow unable to detect the HDFS client. Any suggestions?

马盖多

Your problem is not the file on HDFS. The exception is thrown because one of your worker nodes cannot find the program python. You just need to check the Python installation on all nodes and the problem will go away.
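If Python is installed but not on the workers' default PATH, one common fix is to point PYSPARK_PYTHON at an absolute interpreter path before submitting, so executors launch the worker with that path instead of the bare name "python". A sketch, assuming the interpreter lives at /usr/bin/python on every node:

export PYSPARK_PYTHON=/usr/bin/python
spark-submit --deploy-mode client --master spark://Wonderwoman:7077 --total-executor-cores 2 wordcount.py hdfs://10.65.104.14:8099/user/cdh/cian/detector/text.txt

Setting the same variable in conf/spark-env.sh should work as well.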
