Offered By: IBMSkillsNetwork

Explainability in Graph Neural Networks: Molecular Insights


Guided Project

Artificial Intelligence

At a Glance

Explaining how Graph Neural Networks reason is essential for validating their structural understanding. This project examines GNNExplainer, a method that reveals which graph components drive model behavior, by analyzing influential nodes, edges, and functional motifs and by evaluating explanation faithfulness through principled sparsification and substructure tests. The method is applied to molecular graphs from the MUTAG dataset to uncover which atomic interactions and functional groups most strongly drive mutagenicity predictions, linking model reasoning to meaningful chemical insight.
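
To give a concrete sense of the setup, here is a minimal sketch that loads MUTAG with PyTorch Geometric's TUDataset loader and defines a small graph classifier. The two-layer GCN architecture and hyperparameters are illustrative assumptions, not necessarily the project's exact model.

```python
# A minimal sketch: load MUTAG and define a small graph-level classifier.
# The architecture below is an assumption for illustration only.
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.nn import GCNConv, global_mean_pool

dataset = TUDataset(root='data/TUDataset', name='MUTAG')  # 188 molecular graphs

class GCN(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        # Two rounds of message passing over atoms and bonds, then pool
        # node embeddings into a single graph-level representation.
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        x = global_mean_pool(x, batch)
        return self.lin(x)

model = GCN()
```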

In this project, you'll learn how explainability techniques reveal the internal reasoning of graph-based models. You'll examine how GNNExplainer highlights influential nodes, edges, and functional motifs, and how faithfulness metrics help validate whether explanations reflect true model behavior. You'll then apply these techniques to the MUTAG molecular graph dataset, investigating which atomic interactions and functional groups most strongly drive mutagenicity predictions and connecting explainability to a real scientific use case.
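
For illustration, the snippet below shows one way to obtain node and edge importance masks with PyTorch Geometric's Explainer wrapper around GNNExplainer. It assumes a trained classifier with the forward(x, edge_index, batch) signature from the sketch above; the mask types and epoch count are example settings, not the project's prescribed configuration.

```python
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.explain import Explainer, GNNExplainer

dataset = TUDataset(root='data/TUDataset', name='MUTAG')

# `model` is assumed to be a trained graph classifier with the
# forward(x, edge_index, batch) signature from the earlier sketch.
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',        # explain the model's own prediction
    node_mask_type='attributes',     # per-feature importance for each atom
    edge_mask_type='object',         # one importance weight per bond
    model_config=dict(
        mode='multiclass_classification',
        task_level='graph',
        return_type='raw',
    ),
)

data = dataset[0]                    # a single molecule
batch = torch.zeros(data.num_nodes, dtype=torch.long)
explanation = explainer(data.x, data.edge_index, batch=batch)

# Soft importance masks: higher scores mark the atoms and bonds the
# model relied on most for this prediction.
print(explanation.node_mask.shape, explanation.edge_mask.shape)
```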

Who Is It For

This project is designed for learners who have basic familiarity with graph machine learning and want a practical, hands-on introduction to explaining Graph Neural Networks (GNNs). It is suitable for developers, data scientists, and machine learning enthusiasts who are interested in understanding how GNNs make decisions, especially in molecular prediction tasks. The project is particularly valuable for learners who want to move beyond treating GNNs as black boxes and gain insight into how graph-based models reason over nodes, edges, and substructures.

What You’ll Learn

By the end, you’ll be able to interpret GNN decisions in a principled and scientifically informed way.
  • Understand the core principles behind explaining Graph Neural Networks.
  • Use GNNExplainer to identify influential substructures and graph components.
  • Evaluate explanation quality using faithfulness-based tests (a minimal sketch follows this list).
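
As one illustration of what a sparsification-based faithfulness test could look like: keep only the top-scoring edges from the explanation mask and check whether the model's prediction survives. The helper name, the keep_ratio threshold, and the prediction-preservation criterion are illustrative assumptions, not necessarily the project's exact metric.

```python
# Hypothetical faithfulness probe via sparsification: keep only the
# top-k edges ranked by the explanation mask and check whether the
# prediction is preserved. An assumed metric for illustration only.
import torch

@torch.no_grad()
def prediction_preserved(model, data, edge_mask, keep_ratio=0.2):
    batch = torch.zeros(data.num_nodes, dtype=torch.long)
    full_pred = model(data.x, data.edge_index, batch).argmax(dim=-1)

    k = max(1, int(keep_ratio * edge_mask.numel()))
    top_edges = edge_mask.topk(k).indices  # most influential edges
    sparse_pred = model(data.x, data.edge_index[:, top_edges], batch).argmax(dim=-1)

    # A faithful explanation keeps the prediction intact even after
    # aggressive sparsification of the input graph.
    return bool(full_pred == sparse_pred)
```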

What You'll Need

This project requires basic familiarity with Python, graph-structured data, and machine learning concepts. An understanding of molecules as graphs (atoms as nodes, bonds as edges) will be helpful. The libraries used in this project can be installed directly within the IBM Skills Network Labs environment, allowing you to set up and run the workflow without any external configuration. The project works best on modern browsers such as Chrome, Edge, Firefox, or Safari.

Estimated Effort

60 Minutes

Level

Advanced

Skills You Will Learn

Artificial Intelligence, Cheminformatics, Explainable Artificial Intelligence (XAI), Graph Neural Networks (GNNs), Molecular Modeling, PyTorch Geometric (PyG)

Language

English

Course Code

GPXX03AYEN
