Cognitive Class

Spark Fundamentals I

Ignite your interest in Spark with an introduction to the core concepts that make this general-purpose processing engine an essential tool for working with Big Data.

Start the Free Course

About This Course

Learn the fundamentals of Spark, the technology that is revolutionizing the analytics and big data world!

Spark is an open source processing engine built around speed, ease of use, and analytics. If you have large amounts of data that require low-latency processing that a typical MapReduce program cannot provide, Spark is the way to go.

  • Learn how it performs at speeds up to 100 times faster than MapReduce for iterative algorithms or interactive data mining.
  • Learn how it provides in-memory cluster computing for lightning-fast speed and supports Java, Python, R, and Scala APIs for ease of development (see the short Python sketch after this list).
  • Learn how it can handle a wide range of data processing scenarios by seamlessly combining SQL, streaming, and complex analytics in the same application.
  • Learn how it runs on Hadoop, on Mesos, standalone, or in the cloud, and how it can access diverse data sources such as HDFS, Cassandra, HBase, or S3.
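
The Python API and in-memory caching mentioned above can be made concrete in a few lines. The following is a minimal, illustrative sketch and is not part of the course material; it assumes a local Spark installation with the pyspark package available, and "logs.txt" is a hypothetical placeholder for your own data.

    # Count error and warning lines in a log file, keeping the data cached in memory.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "InMemoryExample")

    lines = sc.textFile("logs.txt").cache()   # load once, keep in memory for reuse

    errors = lines.filter(lambda line: "ERROR" in line).count()
    warnings = lines.filter(lambda line: "WARN" in line).count()
    print(errors, warnings)

    sc.stop()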

IBM Data Science Experience provides you with Jupyter notebooks that are already connected to Spark and support Python, R, and Scala, so you can start creating your Spark projects and collaborating with other data scientists. When you sign up, you get free access to Data Science Experience and all other IBM services for 30 days. Start now and take advantage of this offer.

Course Syllabus

Module 1 - Introduction to Spark - Getting started

  1. What is Spark and what is its purpose?
  2. Components of the Spark unified stack
  3. Resilient Distributed Dataset (RDD)
  4. Downloading and installing Spark standalone
  5. Scala and Python overview
  6. Launching and using Spark’s Scala and Python shells (see the example below)
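
As a preview of item 6, here is what a short interactive session might look like in the PySpark shell (started with the pyspark command). This is an illustrative sketch rather than course material; the shell pre-creates a SparkContext for you in the variable sc.

    # Typed at the PySpark shell prompt; `sc` already exists.
    data = sc.parallelize(range(1, 11))         # distribute a local collection
    evens = data.filter(lambda x: x % 2 == 0)   # transformation (evaluated lazily)
    evens.collect()                             # action: returns [2, 4, 6, 8, 10]
    sc.version                                  # shows the Spark version in use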

Module 2 - Resilient Distributed Dataset and DataFrames

  1. Understand how to create parallelized collections and external datasets
  2. Work with Resilient Distributed Dataset (RDD) operations
  3. Utilize shared variables and key-value pairs (see the sketch after this list)
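
The sketch below illustrates these three items in PySpark. It is not course material; it assumes an existing SparkContext named sc (for example, from the PySpark shell), and "data.txt" is a hypothetical path.

    # 1. Parallelized collections and external datasets
    nums = sc.parallelize([1, 2, 3, 4])          # RDD from a local collection
    # lines = sc.textFile("data.txt")            # RDD from an external file

    # 2. RDD operations: transformations are lazy, actions trigger computation
    squares = nums.map(lambda x: x * x)          # transformation
    total = squares.reduce(lambda a, b: a + b)   # action -> 30

    # 3. Shared variables and key-value pairs
    lookup = sc.broadcast({1: "one", 2: "two"})  # read-only broadcast variable
    counter = sc.accumulator(0)                  # accumulator, added to from tasks

    pairs = nums.map(lambda x: (x % 2, x))       # key-value pairs
    sums_by_key = pairs.reduceByKey(lambda a, b: a + b).collect()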

Module 3 - Spark application programming

  1. Understand the purpose and usage of the SparkContext
  2. Initialize Spark with the various programming languages
  3. Describe and run some Spark examples
  4. Pass functions to Spark
  5. Create and run a Spark standalone application (a skeleton application follows this list)
  6. Submit applications to the cluster
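
The skeleton below shows roughly how items 1 to 6 fit together in a standalone Python application. It is an illustrative sketch, not the course's own example; the application name, file names, and input path are hypothetical.

    # wordcount.py -- submit it with, for example:
    #   spark-submit --master local[*] wordcount.py
    from pyspark import SparkConf, SparkContext

    def tokenize(line):
        """A named function passed to Spark instead of an inline lambda."""
        return line.lower().split()

    if __name__ == "__main__":
        conf = SparkConf().setAppName("WordCount")   # initialize Spark
        sc = SparkContext(conf=conf)

        counts = (sc.textFile("input.txt")           # hypothetical input path
                    .flatMap(tokenize)               # pass a function to Spark
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))

        for word, count in counts.take(10):
            print(word, count)

        sc.stop()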

Module 4 - Introduction to Spark libraries

  1. Understand and use the various Spark libraries (a brief Spark SQL example follows)
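
Spark's standard libraries include Spark SQL, Spark Streaming, MLlib (machine learning), and GraphX (graph processing). As one small illustration (not course material), here are a few lines of Spark SQL in Python, assuming Spark 2.x or later where SparkSession is available; the table and rows are made up.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("LibrariesExample").getOrCreate()

    people = spark.createDataFrame(
        [("Alice", 34), ("Bob", 29)],    # illustrative rows
        ["name", "age"])

    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()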

Module 5 - Spark configuration, monitoring and tuning

  1. Understand components of the Spark cluster
  2. Configure Spark by modifying Spark properties, environment variables, or logging properties (see the sketch after this list)
  3. Monitor Spark using the web UIs, metrics, and external instrumentation
  4. Understand performance tuning considerations
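
As one concrete illustration of item 2 (not course material), Spark properties can be set programmatically in Python; the property names below are standard Spark settings, while the values are purely illustrative. The same properties can also be supplied through spark-submit's --conf flag or the conf/spark-defaults.conf file.

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("TunedApp")
            .setMaster("local[4]")
            .set("spark.executor.memory", "2g")   # memory per executor
            .set("spark.ui.port", "4050"))        # port for the web UI

    sc = SparkContext(conf=conf)
    sc.setLogLevel("WARN")                        # adjust logging at runtime
    print(sc.getConf().get("spark.executor.memory"))
    sc.stop()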

General Information

  • This course is free.
  • It is self-paced.
  • It can be taken at any time.
  • It can be audited as many times as you wish.


Recommended Prerequisites

  • Have taken the Hadoop 101 course on Cognitive Class.

Course Staff

Henry L. Quach

Henry L. Quach is the Technical Curriculum Developer Lead for Big Data. He has been with IBM for 9 years, focusing on education development. Henry likes to dabble in a number of things, including being part of the original team that developed and designed the concept for the IBM Open Badges program. He holds a Bachelor of Science in Computer Science and a Master of Science in Software Engineering from San Jose State University.

Alan Barnes

Alan Barnes is a Senior IBM Information Management Course Developer and Consultant. He has worked at several companies as a Senior Technical Consultant, Database Team Manager, Application Programmer, Systems Programmer, Business Analyst, DB2 Team Lead, and more. His career in IT spans more than 35 years.