Offered By: IBM
Great Expectations, a data validation library for Python
Guided Project
Data Science
162 Enrolled
At a Glance
Garbage in, garbage out: but sometimes gold is wrongly thrown out with the garbage. When data scientists work on projects, the dataset-to-model pipeline needs data in the right shape and format. But how can we check that our datasets are in good shape before modelling? Great Expectations is the perfect tool for it. In this project, you will learn how to do data exploration using Great Expectations, a Python-based open-source library for validating, documenting, and profiling your data. It helps you maintain data quality and improve your models. Data teams use Great Expectations to:
- Test data they ingest from other teams or vendors and ensure its validity.
- Validate data they transform as a step in their data pipeline in order to ensure the correctness of transformations.
- Prevent data quality issues from slipping into data products.
- Streamline knowledge capture from subject-matter experts and make implicit knowledge explicit.
- Develop rich, shared documentation of their data.

In this project, we show the basic usage of Great Expectations by applying it to an example bank churn dataset, modified from the one provided on Kaggle.
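As a preview of the kind of checks covered in the lab, here is a minimal sketch of the classic pandas-style Great Expectations API. The file name bank_churn.csv and the column names Age and Exited are illustrative assumptions about the modified churn data, not the lab's exact schema.

```python
# A minimal sketch of the classic pandas-style Great Expectations API
# (ge.read_csv returning a PandasDataset). The file name and column names
# (Age, Exited) are assumptions, not the lab's exact schema.
import great_expectations as ge

# Load the CSV as a PandasDataset so expectation methods are available directly.
df = ge.read_csv("bank_churn.csv")

# Declare a few expectations about the data.
df.expect_column_values_to_not_be_null("Age")
df.expect_column_values_to_be_between("Age", min_value=18, max_value=100)
df.expect_column_values_to_be_in_set("Exited", [0, 1])

# Run all declared expectations at once and inspect the overall outcome.
results = df.validate()
print(results["success"])
```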
A Look at the Project Ahead
- What to check in a dataset before building a machine learning model
- The basics of Great Expectations
- How to use Great Expectations to create validation pipelines that make sure a dataset is fit for the model (see the sketch after this list)
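For the pipeline idea in the last bullet, one common pattern is to let a failed validation stop the run before training begins. The sketch below reuses the same hypothetical file and column names; it illustrates the pattern rather than the project's exact code.

```python
# A minimal sketch of using a validation result to gate a pipeline step,
# assuming the same hypothetical bank_churn.csv and column names as above.
import great_expectations as ge

df = ge.read_csv("bank_churn.csv")

# The target column should contain no nulls and only the two churn labels.
df.expect_column_values_to_not_be_null("Exited")
df.expect_column_values_to_be_in_set("Exited", [0, 1])

results = df.validate()
if not results["success"]:
    # Stop before bad data reaches the model-training step.
    raise ValueError("Data validation failed; aborting the pipeline.")
```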
What You'll Need
Remember that the IBM Skills Network Labs environment comes with many things pre-installed (e.g. Docker) to save you the hassle of setting everything up.
Level
Beginner
Skills You Will Learn
Artificial Intelligence, Data Governance, Data Science, Machine Learning, Python
Language
English
Course Code
GPXX0J6EEN