Offered By: IBM

Great Expectations, a data validation library for Python

Garbage in, garbage out: but sometimes gold is wrongly thrown out with the garbage. In a data science project, the dataset-to-model pipeline requires data in an appropriate format. But how can we check that a dataset is in good shape before modelling? Great Expectations is the perfect tool for the job. In this project, you will learn how to explore data using Great Expectations, a Python-based open-source library for validating, documenting, and profiling your data. It helps you maintain data quality and improve your models.

Guided Project

Data Science

129 Enrolled
4.6
(16 Reviews)

At a Glance

Great Expectations is a Python-based open-source library for validating, documenting, and profiling your data. It helps you maintain data quality and improve how data is used in machine learning models. Data science and data engineering teams typically use Great Expectations to:
  • Test data they ingest from other teams or vendors and ensure its validity.
  • Validate data they transform as a step in their data pipeline in order to ensure the correctness of transformations.
  • Prevent data quality issues from slipping into data products.
  • Streamline knowledge capture from subject-matter experts and make implicit knowledge explicit.
  • Develop rich, shared documentation of their data.
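The kind of check described in the list above can be sketched in plain Python. The example below is a minimal, standard-library-only illustration of what an "expectation" does; it is not the Great Expectations API itself, which expresses checks like these (for example, that column values are non-null or fall within a range) through its own library functions.

```python
# Minimal sketch of what a data "expectation" does, using only the
# standard library. This illustrates the idea; Great Expectations
# itself provides checks like these through its own API.

def expect_values_not_null(rows, column):
    """Every row must have a non-None value for the column."""
    failures = [r for r in rows if r.get(column) is None]
    return {"success": not failures, "unexpected_count": len(failures)}

def expect_values_between(rows, column, low, high):
    """Every non-null value for the column must fall in [low, high]."""
    failures = [r for r in rows
                if r.get(column) is not None and not (low <= r[column] <= high)]
    return {"success": not failures, "unexpected_count": len(failures)}

# Toy dataset: one row has a missing age, one has an implausible age.
rows = [
    {"customer_id": 1, "age": 34},
    {"customer_id": 2, "age": None},
    {"customer_id": 3, "age": 210},
]

print(expect_values_not_null(rows, "age"))          # one null value
print(expect_values_between(rows, "age", 18, 100))  # 210 is out of range
```

Each check returns a small result dict rather than raising an exception, mirroring how validation results report successes and failures instead of stopping the pipeline at the first bad row.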

In this project, we will demonstrate the basic usage of Great Expectations by applying it to an example bank churn dataset, modified from one provided on Kaggle.
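To give a feel for validating churn-like data, here is a standard-library-only sketch of a tiny validation "suite" that runs several checks and reports overall success. The column names (`credit_score`, `geography`, `churned`) and value ranges are hypothetical placeholders, not necessarily those used in the project's actual dataset.

```python
# A sketch of a tiny validation "suite" for churn-like data, using only
# the standard library. Column names and ranges are hypothetical.

def run_suite(rows, checks):
    """Run each (name, predicate) check over all rows; collect results."""
    results = []
    for name, predicate in checks:
        bad = [r for r in rows if not predicate(r)]
        results.append({"check": name, "success": not bad,
                        "unexpected_count": len(bad)})
    return {"success": all(r["success"] for r in results), "results": results}

rows = [
    {"credit_score": 650, "geography": "France", "churned": 0},
    {"credit_score": 720, "geography": "Spain", "churned": 1},
    {"credit_score": 999, "geography": "Mars", "churned": 2},  # bad row
]

checks = [
    ("credit_score in [300, 850]",
     lambda r: 300 <= r["credit_score"] <= 850),
    ("geography in known set",
     lambda r: r["geography"] in {"France", "Spain", "Germany"}),
    ("churned is 0 or 1",
     lambda r: r["churned"] in (0, 1)),
]

report = run_suite(rows, checks)
print(report["success"])  # False: the third row fails every check
```

Grouping checks into a suite like this is the same design idea as an expectation suite in Great Expectations: the dataset passes only if every expectation passes, and each failed check reports how many rows were unexpected.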

A Look at the Project Ahead

In this project, you will learn:
  • What to check in a dataset before training a machine learning model
  • The basics of Great Expectations
  • How to use Great Expectations to build validation pipelines that ensure a dataset is suitable for modelling

What You'll Need

This is a simple project, and beginners are welcome. We will write Python code in a Jupyter notebook.
Remember that the IBM Skills Network Labs environment comes with many tools pre-installed (e.g., Docker) to save you the hassle of setting everything up.


Data Preparation at Scale with the IBM Data Refinery

Great Expectations is a very useful library for everyday data preparation tasks. If you are looking to improve the quality of your data at scale, give the IBM Data Refinery tool a try at no charge. Data Refinery, available with IBM Watson® Studio and IBM Watson® Knowledge Catalog, saves data preparation time by quickly transforming large amounts of raw data into consumable, high-quality information that's ready for analytics. Interactively discover, cleanse, and transform your data with over 100 built-in operations. No coding skills are required.



Level

Beginner

Skills You Will Learn

Artificial Intelligence, Data Science, Data Governance, Machine Learning, Python

Language

English

Course Code

GPXX0J6EEN
