ABOUT THIS DATAOPS METHODOLOGY COURSE
DataOps is defined by Gartner as “a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and consumers across an organization. Much like DevOps, DataOps is not a rigid dogma, but a principles-based practice influencing how data can be provided and updated to meet the need of the organization’s data consumers.”
The DataOps Methodology is designed to enable an organization to use a repeatable process to build and deploy analytics and data pipelines. By following data governance and model management practices, the organization can deliver high-quality enterprise data that enables AI. Successful implementation of this methodology allows an organization to know, trust, and use its data to drive value.
In the DataOps Methodology course, you will learn best practices for defining a repeatable, business-oriented framework for delivering trusted data. This course is part of the Data Engineering Specialization, which provides learners with the foundational skills required to be a Data Engineer.
LEARNING OBJECTIVES
After completing this course, you will be able to:
- Understand how to establish a repeatable process that delivers rigor and consistency
- Articulate the business value of any data sprint by capturing the KPIs the sprint will deliver
- Understand how to enable the organization's business, development, and operations teams to continuously design, deliver, and validate new data demands
- Promote the cultural considerations necessary for successful implementation of a DataOps practice, including patterns that can be shared between teams as templates to promote adoption across an organization
- Understand a framework that fosters collaboration among contributors to the data pipeline, working toward a common business goal
COURSE SYLLABUS
Module 1: Establish DataOps - Prepare for operation
- Introduction and Overview
- Establish Data Strategy
- Establish Team
Module 2: Establish DataOps - Optimize for operation
- Establish Toolchain
- Establish Baseline
- Establish Business Priorities
Module 3: Iterate DataOps - Know your data
Module 4: Iterate DataOps - Trust your data
- Manage Quality & Entities
- Manage Policies
Module 5: Iterate DataOps - Use your data
- Self Service
- Manage Movement & Integration
- Improve/Complete
Module 6: Improve DataOps
- Review, Refine and Recommend
GENERAL INFORMATION
- This course is free.
- It is self-paced.
- It can be taken at any time.
RECOMMENDED SKILLS PRIOR TO TAKING THIS COURSE
- No prior skills are required to take this course.
- Anyone who wants to learn about, or plays a role in, data pipelines can take this course: C-level executives, managers, sellers, data stewards, quality engineers, and data engineers.
COURSE STAFF

Elaine Hanley
Elaine Hanley is the IBM Worldwide Lead for the DataOps Center of Excellence, helping organizations across the globe understand the value and whereabouts of their data so they can drive successful, analytics-based decisions. This involves leading globally diverse organizations to direct their energy and resources toward a focused business goal.
Elaine works with the engineering teams within IBM to serve this market need, determining and driving product strategy that combines multiple facets of information management. This role is the culmination of 25 years of work across all aspects of information management, from coding to product management.
Elaine has presented at many conferences on Data Governance and Data Management and holds a BAI (Software Engineering) from Trinity College, Dublin, and an M.Sc. (Computer Applications) from Dublin City University.
OTHER CONTRIBUTORS
The following individuals also contributed content for this course:
- Ritesh Kumar Gupta
- Ken Berridge
- Susan Whitmire
- May Li
- Rob Utzschneider
- Mary O'Shea