Spark giving Null Pointer Exception while performing jdbc save


Problem Description

Hi, I am getting the following stack trace when I execute the following lines of code:

import org.apache.spark.sql.SaveMode

// transactionDF and the connection settings (SqlServerUri, driver,
// fullQualifiedName, SqlServerUser, SqlServerPassword) are defined earlier.
transactionDF.write.format("jdbc")
        .option("url", SqlServerUri)
        .option("driver", driver)
        .option("dbtable", fullQualifiedName)
        .option("user", SqlServerUser)
        .option("password", SqlServerPassword)
        .mode(SaveMode.Append)
        .save()

Here is the stack trace:

at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_3$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.LocalTableScanExec.<init>(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:394)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:112)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:237)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:54)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2788)
at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2319)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:77)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:518)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
at com.test.spark.jobs.ingestion.test$.main(test.scala:193)
at com.test.spark.jobs.ingestion.test.main(test.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I tried debugging it, and I believe the query execution is raising the null pointer exception.
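Since Spark evaluates the plan lazily, one way to confirm this is to force an action on the DataFrame before the write; this is a minimal check (reusing the transactionDF from above), and if the NPE appears here, the problem is in the DataFrame itself rather than in the JDBC write:

// Forces full evaluation of the DataFrame; an NPE here rules out the JDBC sink.
transactionDF.count()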

I am not sure what this means. I am running this on my local machine, not on any cluster.

Any help would be appreciated.

Recommended Answer

I figured it out (at least I think this is the reason). For others facing a similar situation: while I was creating the table, I made every column nullable, so I assumed the table would allow null insertions. But the Avro schema I was building the dataframe from had nullable = false. So the dataframe creation was reading nulls and hence raising an NPE. The error was raised when I called Dataframe.write (which made me think it was a JDBC error), but the actual NPE happened while creating the dataframe.
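A minimal sketch of one way to work around this mismatch, assuming the fix is to relax the DataFrame's nullability flags rather than change the Avro schema: rebuild the DataFrame under a copy of its schema with every field marked nullable = true before writing. The helper name withNullableSchema is illustrative, not part of any Spark API.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.types.{StructField, StructType}

// Returns a DataFrame with the same rows but every field marked nullable,
// so rows containing nulls no longer violate the schema inherited from Avro.
def withNullableSchema(df: DataFrame): DataFrame = {
  val relaxed = StructType(df.schema.map {
    case StructField(name, dataType, _, metadata) =>
      StructField(name, dataType, nullable = true, metadata)
  })
  // Recreate the DataFrame from the same row RDD under the relaxed schema.
  df.sparkSession.createDataFrame(df.rdd, relaxed)
}

With this in place, withNullableSchema(transactionDF) can be written in place of transactionDF in the original save call. Note that this only stops the schema from rejecting nulls on the Spark side; if the target table forbids nulls in some column, the insert would still fail on the database side.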

