A Look at SparkSQL

If you’ve been reading about Apache Spark, you might be worried that you’ll have to relearn all of your database skills to use it. Good news: whether you’re a DBA or a developer, you can interact with Apache Spark in the way you’re used to while solving real problems.

What Is SparkSQL?

SparkSQL, as the name suggests, is a way to work with Apache Spark using the SQL language. Apache Spark makes it easy to run complex queries over lots of nodes, something that’s rather difficult with conventional RDBMSs like MySQL.

Unlike a NoSQL database, you don’t have to learn a new query language or database model. SparkSQL offers the NoSQL advantages of scalability and ease of running over a cluster while keeping the familiar SQL query model. You can import a number of different data formats into SparkSQL, such as Parquet files, JSON data, and RDDs (the native data format of Apache Spark).

SparkSQL allows for both interactive and batch operations. You can take advantage of Spark’s speed, running queries in real time. Spark is so fast partly because of lazy evaluation, which means that queries won’t actually be computed until you need some kind of output.
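The same laziness can be sketched without Spark at all, using a plain Scala view as a stand-in for an RDD: the transformation is recorded but nothing runs until you ask for output.

```scala
// A view defers computation: map here records the transformation
// but does not execute it (analogous to an RDD transformation).
var evaluations = 0
val lazyDoubled = (1 to 5).view.map { n => evaluations += 1; n * 2 }

// Nothing has been computed yet.
assert(evaluations == 0)

// Forcing output (analogous to a Spark action like collect()) triggers it.
val result = lazyDoubled.toList
assert(evaluations == 5)
assert(result == List(2, 4, 6, 8, 10))
```

Spark applies the same principle at cluster scale: transformations build up a plan, and only an action makes the cluster do work.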

By using a REPL (an interactive shell), you can explore your data with SparkSQL in real time, in either Spark’s native Scala or in Python.

If you haven’t noticed, Spark draws on a lot of functional programming concepts from languages like Haskell and Lisp: lazy evaluation, immutable data structures, and an interactive REPL. These concepts aren’t exactly new; Lisp dates back to the late 1950s.

SchemaRDD

SchemaRDD is a special RDD, or Resilient Distributed Dataset. RDDs are central to understanding Apache Spark. RDDs are immutable data structures, which means that you can’t change them. Operations on RDDs simply return new RDDs. This allows for a degree of safety when dealing with RDDs.

Lineages keep track of all the changes on RDDs, which are known as transformations. In case of some kind of failure, Spark can reconstruct the data from these lineages.
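Immutability in miniature, with plain Scala collections (no Spark needed): transformations never modify the original value, they return a new one, which is exactly what makes lineage-based recovery possible.

```scala
// Plain immutable Scala collections behave like RDDs in this respect:
// map and filter return new collections and leave the original untouched.
val original = List(1, 2, 3, 4)
val doubled  = original.map(_ * 2)     // a "transformation": new value
val evens    = doubled.filter(_ > 4)   // a chained transformation

assert(original == List(1, 2, 3, 4))   // unchanged
assert(doubled  == List(2, 4, 6, 8))
assert(evens    == List(6, 8))
// If `evens` were lost, it could be recomputed from `original`
// by replaying the same chain of operations -- the essence of a lineage.
```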

RDDs are also represented in memory, or in at least as much memory as is possible. This gives Spark an extra speed boost.

SchemaRDD is a special RDD that works similarly to a SQL table. You can import your data from a text file into a SchemaRDD.

Queries

You can import your data from text files and then work on it using SQL queries such as SELECT, JOIN, and more.

Spark provides two contexts for queries: SQLContext and HiveContext. The former provides a simple SQL parser, while HiveContext supports the more complete HiveQL dialect and can query tables in an existing Hive deployment.

Use Case: Customers

You’re probably itching to see all this stuff in action. Let’s borrow an example from MapR’s Apache Spark reference card.

Let’s pretend we run a clothing store in the Dallas, Texas, area, and we want to know a little more about our customers. We have a plain text file listing customer name, age, gender, and address, with the values separated by a “|”:
John Smith|38|M|201 East Heading Way #2203,Irving, TX,75063
Liana Dole|22|F|1023 West Feeder Rd, Plano,TX,75093
Craig Wolf|34|M|75942 Border Trail,Fort Worth,TX,75108
John Ledger|28|M|203 Galaxy Way,Paris, TX,75461
Joe Graham|40|M|5023 Silicon Rd,London,TX,76

Using Scala, we’ll define a schema:
case class Customer(name: String, age: Int, gender: String, address: String)

Next, we’ll import our plain text file and make a SQLContext:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sparkConf = new SparkConf().setAppName("Customers")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
val r = sc.textFile("/Users/jim/temp/customers.txt")
val records = r.map(_.split('|'))
val c = records.map(r => Customer(r(0), r(1).trim.toInt, r(2), r(3)))
c.registerAsTable("customers")
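One subtlety worth calling out: split('|') is passed a Char, which splits on the literal character. Passing the String "|" instead would be treated as a regular expression, where | means alternation and matches the empty string at every position. A quick plain-Scala check (no Spark needed):

```scala
val line = "John Smith|38|M"

// Char version: splits on the literal '|' character, as intended.
val ok = line.split('|')
assert(ok.sameElements(Array("John Smith", "38", "M")))

// String version: "|" is a regex meaning "empty or empty", which
// matches between every character -- each character becomes a field.
val bad = line.split("|")
assert(bad.length > ok.length)
assert(bad(0) == "J")
```

If you prefer the String form, escape the pipe: split("\\|").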

Suppose management has decided that they’re going to start targeting millennial males as a lucrative market. We might start by looking through our database by age and gender:

sqlContext.sql("select * from customers where gender='M' and age < 30").collect().foreach(println)

Here’s the result:
[John Ledger,28,M,203 Galaxy Way,Paris, TX,75461]

It looks like we’re going to have to do a little work in attracting more of these kinds of customers.

Conclusion

For a more in-depth introduction to Spark, read Getting Started with Spark: From Inception to Production, a free interactive eBook by James A. Scott.


Originally published at www.smartdatacollective.com.
