WHAT IS SPARK

Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming clusters with implicit data parallelism and fault tolerance. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since.

Overview

Apache Spark has its architectural foundation in the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines and maintained in a fault-tolerant way.[2] The DataFrame API was released as an abstraction on top of the RDD, followed by the Dataset API. In Spark 1.x, the RDD was the primary application programming interface (API), but as of Spark 2.x use of the Dataset API is encouraged,[3] even though the RDD API is not deprecated.[4][5] The RDD technology still underlies the Dataset API.[6][7]

Spark and its RDDs were developed in 2012 in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store reduction results on disk. Spark's RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.[8]
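The linear MapReduce dataflow described above can be sketched in plain Python (a toy illustration, not Hadoop or Spark code): every pass reads its input from "disk", maps, reduces, and writes the result back, so an iterative job pays that I/O cost on each pass.

```python
from functools import reduce

# Toy stand-in for on-disk storage: each MapReduce pass must
# read its input from here and write its output back.
disk = {"input": [1, 2, 3, 4]}

def mapreduce_pass(in_key, out_key, map_fn, reduce_fn):
    data = disk[in_key]                      # read input from disk
    mapped = [map_fn(x) for x in data]       # map a function across the data
    result = reduce(reduce_fn, mapped)       # reduce the mapped results
    disk[out_key] = result                   # store the reduction on disk
    return result

# One pass: sum of squares of the input.
total = mapreduce_pass("input", "output", lambda x: x * x, lambda a, b: a + b)
print(total)  # 1 + 4 + 9 + 16 = 30
```

An algorithm that loops would have to chain such passes, reading and writing storage every time; Spark's working-set model avoids exactly that round trip.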

Inside Apache Spark, the workflow is managed as a directed acyclic graph (DAG). Nodes represent RDDs, while edges represent the operations on the RDDs.
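As a toy illustration (plain Python, not Spark's internals), such a DAG can be modeled with nodes for datasets and edges for the operations that produce one dataset from another; walking the edges backwards recovers everything a result depends on:

```python
# Toy DAG: each edge records (parent, child, operation), i.e. the
# operation that produces the child dataset from the parent.
edges = [
    ("input",   "squared", "map(x -> x*x)"),
    ("squared", "evens",   "filter(x % 2 == 0)"),
    ("evens",   "total",   "reduce(+)"),
]

def ancestors(node):
    """Walk edges backwards to find every dataset `node` depends on."""
    parents = [src for src, dst, _ in edges if dst == node]
    found = set(parents)
    for p in parents:
        found |= ancestors(p)
    return found

print(sorted(ancestors("total")))  # ['evens', 'input', 'squared']
```

This backward walk is, in spirit, what lineage-based recovery does: if a dataset is lost, the graph tells the engine which inputs and operations can rebuild it.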

Spark facilitates the implementation of both iterative algorithms, which visit their data set multiple times in a loop, and interactive/exploratory data analysis, i.e., the repeated database-style querying of data. The latency of such applications may be reduced by several orders of magnitude compared to the Apache Hadoop MapReduce implementation.[2][9] Among the class of iterative algorithms are the training algorithms for machine learning systems, which formed the initial impetus for developing Apache Spark.[10]
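The iterative pattern, revisiting the same data set many times in a loop, is where keeping the working set in memory pays off. A minimal plain-Python sketch (a toy gradient-descent loop over cached data, not Spark code):

```python
# Toy iterative algorithm: estimate the mean of a data set by
# gradient descent, revisiting the same in-memory "working set"
# on every iteration instead of re-reading it from storage.
data = [2.0, 4.0, 6.0, 8.0]    # cached once, reused many times

theta, lr = 0.0, 0.1
for _ in range(200):            # each pass scans the cached data
    grad = sum(theta - x for x in data) / len(data)
    theta -= lr * grad

print(round(theta, 3))  # converges to the mean, 5.0
```

Under a strict MapReduce dataflow, each of those 200 passes would be a separate read-map-reduce-write job; with the data held in memory, only the first pass touches storage.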

Apache Spark requires a cluster manager and a distributed storage system. For cluster management, Spark supports standalone mode (a native Spark cluster, launched either manually or with the launch scripts provided by the install package; the daemons can also run on a single machine for testing), Hadoop YARN, Apache Mesos, or Kubernetes.[11] For distributed storage, Spark can interface with a wide variety of systems, including Alluxio, Hadoop Distributed File System (HDFS),[12] MapR File System (MapR-FS),[13] Cassandra,[14] OpenStack Swift, Amazon S3, Kudu, and the Lustre file system,[15] or a custom solution can be implemented. Spark also supports a pseudo-distributed local mode, usually used only for development or testing purposes, where distributed storage is not required and the local file system can be used instead; in such a scenario, Spark runs on a single machine with one executor per CPU core.

Spark Core

Spark Core is the foundation of the overall project. It provides distributed task dispatching, scheduling, and basic I/O functionalities, exposed through an application programming interface (for Java, Python, Scala, .NET,[16] and R) centered on the RDD abstraction (the Java API is available for other JVM languages, and is also usable from some non-JVM languages that can connect to the JVM, such as Julia[17]). This interface mirrors a functional/higher-order model of programming: a "driver" program invokes parallel operations such as map, filter, or reduce on an RDD by passing a function to Spark, which then schedules the function's execution in parallel on the cluster.[2] These operations, and additional ones such as joins, take RDDs as input and produce new RDDs. RDDs are immutable and their operations are lazy; fault tolerance is achieved by keeping track of the "lineage" of each RDD (the sequence of operations that produced it) so that it can be reconstructed in the case of data loss. RDDs can contain any type of Python, .NET, Java, or Scala objects.
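The laziness-plus-lineage idea can be sketched in a few lines of plain Python. This is a toy class, not Spark's API: transformations only record themselves in a lineage, and the result is computed (or recomputed, as it would be after data loss) by replaying that lineage over the immutable base data when an action runs.

```python
# Toy sketch of an RDD-like object: map/filter are lazy and only
# extend the lineage; collect() is the action that replays it.
class ToyRDD:
    def __init__(self, source, lineage=()):
        self.source = list(source)   # immutable base data
        self.lineage = lineage       # recorded transformations

    def map(self, fn):               # lazy: just record the operation
        return ToyRDD(self.source, self.lineage + (("map", fn),))

    def filter(self, pred):          # lazy as well
        return ToyRDD(self.source, self.lineage + (("filter", pred),))

    def collect(self):               # action: replay the lineage
        data = self.source
        for op, fn in self.lineage:
            if op == "map":
                data = [fn(x) for x in data]
            else:                    # "filter"
                data = [x for x in data if fn(x)]
        return data

rdd = ToyRDD([1, 2, 3, 4]).map(lambda x: x * x).filter(lambda x: x > 4)
print(rdd.collect())  # [9, 16]
```

Because each transformation returns a new object rather than mutating the old one, any intermediate result can be rebuilt from its source and its recorded operations, which is the essence of lineage-based fault tolerance.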

Besides the RDD-oriented functional style of programming, Spark provides two restricted forms of shared variables: broadcast variables reference read-only data that needs to be available on all nodes, while accumulators can be used to program reductions in an imperative style.
