【ERROR】Spark job fails due to scala.NotImplementedError: cannot convert partitioning to native
(retitled by rouxiaomin on Jan 11, 2024; originally "【ERROR】scala.NotImplementedError: cannot convert partitioning to native")
Describe the bug
The Spark job fails because of this error. Other unimplemented operators are only logged as WARN and fall back to Spark, but this one is raised as an ERROR and aborts the job.

Environment
Spark version: 3.3.3
Blaze version: v2.0.7

The full log is below:
24/01/10 18:33:09 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 268.0 failed 4 times, most recent failure: Lost task 0.3 in stage 268.0 (TID 20389) (yuntu-qiye-e-010058011010.hz.td executor 3): scala.NotImplementedError: cannot convert partitioning to native: rangepartitioning(inDegree#2307 ASC NULLS FIRST, outDegree#2321 ASC NULLS FIRST, shortPath#2291 ASC NULLS FIRST, 1)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeBase.$anonfun$prepareNativeShuffleDependency$2(NativeShuffleExchangeBase.scala:205)
at org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleWriterBase.nativeShuffleWrite(BlazeShuffleWriterBase.scala:71)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeExec$$anon$1.write(NativeShuffleExchangeExec.scala:154)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Driver stacktrace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 268.0 failed 4 times, most recent failure: Lost task 0.3 in stage 268.0 (TID 20389) (yuntu-qiye-e-010058011010.hz.td executor 3): scala.NotImplementedError: cannot convert partitioning to native: rangepartitioning(inDegree#2307 ASC NULLS FIRST, outDegree#2321 ASC NULLS FIRST, shortPath#2291 ASC NULLS FIRST, 1)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeBase.$anonfun$prepareNativeShuffleDependency$2(NativeShuffleExchangeBase.scala:205)
at org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleWriterBase.nativeShuffleWrite(BlazeShuffleWriterBase.scala:71)
at org.apache.spark.sql.execution.blaze.plan.NativeShuffleExchangeExec$$anon$1.write(NativeShuffleExchangeExec.scala:154)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
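For context, an Exchange with rangepartitioning(...) is what Spark inserts for a global sort, which matches the three ASC NULLS FIRST sort keys in the trace. The sketch below is not the reporter's job; it is a minimal, hypothetical Scala example (column names borrowed from the stack trace) that produces the same kind of Exchange rangepartitioning(...) node that NativeShuffleExchangeBase fails to convert. On an affected Blaze version it should exercise the same code path.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object RangePartitioningRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("range-partitioning-repro")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical frame with the columns seen in the stack trace.
    val degrees = Seq((1L, 2L, 3L), (4L, 5L, 6L))
      .toDF("inDegree", "outDegree", "shortPath")

    // A global orderBy requires a range-partitioned shuffle, so the
    // physical plan contains "Exchange rangepartitioning(...)"; with
    // Blaze enabled, that is the exchange NativeShuffleExchangeBase
    // tries (and here fails) to convert to a native plan.
    val sorted = degrees.orderBy(col("inDegree"), col("outDegree"), col("shortPath"))
    sorted.explain() // look for "Exchange rangepartitioning(inDegree ASC NULLS FIRST, ...)"
    sorted.collect()

    spark.stop()
  }
}
```

If a strict global ordering is not required, sortWithinPartitions sorts each partition in place without a shuffle and therefore avoids the range-partitioned exchange; that may serve as a workaround until range partitioning is supported natively.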