Apache Spark Serialization Issue

It's fairly common to hit a Spark serialization issue while working with Spark Streaming or even a basic Spark job:

org.apache.spark.SparkException: Task not serializable
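A minimal sketch of how this typically arises (the Processor class here is hypothetical, not from any particular codebase): a method of a non-serializable class is used inside a transformation, so Spark tries to ship the entire enclosing instance to the executors.

    import org.apache.spark.SparkContext

    // Hypothetical class: not Serializable, yet one of its methods is used in a closure
    class Processor(factor: Int) {
      def scale(x: Int): Int = x * factor

      def run(sc: SparkContext): Unit = {
        // map(scale) captures `this`; since Processor is not Serializable,
        // Spark throws "Task not serializable" when the job is submitted
        sc.parallelize(1 to 10).map(scale).collect()
      }
    }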

It is very annoying and, I may say, hard to debug this issue and find out exactly what caused it. Essentially, something in the closure is not serializable, and that is what throws the exception. Databricks has published some basic guidelines to avoid the scenario:

  • Declare functions inside an object as much as possible (a sketch of this pattern follows the snippet below)
  • If you need to use SparkContext or SQLContext inside closures (e.g. inside foreachRDD), use SparkContext.getOrCreate() and SQLContext.getOrCreate(rdd.sparkContext) instead
  • Redefine variables provided to class constructors inside functions (also sketched below); for example:
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.sql.functions.{col, from_unixtime}
    import org.apache.spark.streaming.Minutes

    // `stream` is a DStream of (value, time) pairs; `addOne` is a function
    // declared in an object, per the first guideline
    stream.map(addOne).window(Minutes(1)).foreachRDD { rdd =>

      // Access the SQLContext using getOrCreate rather than capturing an outer reference
      val _sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
      _sqlContext.createDataFrame(rdd).toDF("value", "time")
        .withColumn("date", from_unixtime(col("time") / 1000)) // with _sqlContext.implicits._ imported we could use $"time"
        .registerTempTable("demo_numbers")
    }
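To make the first and third guidelines concrete, here is a minimal sketch (the object and class names are hypothetical): helper functions live in an object, which has no instance state to capture, and a constructor parameter is copied into a local val before the closure uses it, so only that value is serialized instead of the whole enclosing instance.

    import org.apache.spark.rdd.RDD

    // Guideline 1: functions declared inside an object are safe to use in closures,
    // because an object carries no instance state that would need serializing
    object Transformations {
      def addOne(x: Int): Int = x + 1
    }

    // Guideline 3: redefine constructor-provided values inside the function
    class Pipeline(factor: Int) {
      def scaleAll(numbers: RDD[Int]): RDD[Int] = {
        val localFactor = factor // local copy; the closure no longer references `this`
        numbers.map(_ * localFactor)
      }
    }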



Ian Reppel, PhD commented:

Or define your classes to be Serializable, preferably in combination with Kryo (instead of native Java serialization).
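For completeness, a minimal sketch of the Kryo setup Ian suggests (the Event case class is a made-up placeholder): switch the serializer in the SparkConf and, optionally, register the classes that will be shipped around.

    import org.apache.spark.SparkConf

    case class Event(id: Long, payload: String) // hypothetical data class shipped to executors

    val conf = new SparkConf()
      .setAppName("kryo-demo")
      // Replace native Java serialization with Kryo
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Optional, but avoids writing full class names with every record
      .registerKryoClasses(Array(classOf[Event]))

Keep in mind that Kryo applies to the data Spark shuffles and caches; closures themselves are still serialized with Java serialization, so the guidelines above still matter.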
