Offered By: IBM Skills Network
Explainability in Graph Neural Networks: Molecular Insights
Guided Project
Artificial Intelligence
At a Glance
Explaining how Graph Neural Networks reason is essential for validating their structural understanding. This project examines GNNExplainer, a method that reveals which graph components drive model behavior: you will analyze influential nodes, edges, and functional motifs, and evaluate explanation faithfulness through principled sparsification and substructure tests. The method is applied to molecular graphs from the MUTAG dataset to uncover which atomic interactions and functional groups most strongly drive mutagenicity predictions, linking model reasoning to meaningful chemical insight.
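As a concrete illustration (not taken from the project materials), here is a minimal sketch of applying GNNExplainer to a MUTAG graph with PyTorch Geometric's `Explainer` API. The two-layer `GCN` architecture, the hidden size, and the explainer settings are assumptions; the project's own model may differ.

```python
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv, global_mean_pool

# MUTAG: 188 small molecular graphs labeled mutagenic / non-mutagenic.
dataset = TUDataset(root='data/TUDataset', name='MUTAG')

class GCN(torch.nn.Module):
    """Illustrative two-layer GCN for graph classification (assumed architecture)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_node_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.lin = torch.nn.Linear(hidden, dataset.num_classes)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        x = global_mean_pool(x, batch)   # graph-level readout
        return self.lin(x)

model = GCN()  # assumed to be trained beforehand; training loop omitted

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),  # learns soft node/edge masks
    explanation_type='model',            # explain the model's own prediction
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='graph',
                      return_type='raw'),
)

data = dataset[0]
batch = torch.zeros(data.num_nodes, dtype=torch.long)  # single-graph batch
explanation = explainer(data.x, data.edge_index, batch=batch)
print(explanation.edge_mask)  # one importance score per bond (edge)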
Who Is It For
What You’ll Learn
- Understand the core principles behind explaining Graph Neural Networks.
- Use GNNExplainer to identify influential substructures and graph components.
- Evaluate explanation quality using faithfulness-based tests; a minimal sparsification check is sketched after this list.
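One simple faithfulness-style check, sketched under the same assumptions as the snippet above (the `model`, `data`, and `explanation.edge_mask` names come from that snippet): keep only the top-scoring edges and verify that the model's prediction survives the sparsification.

```python
import torch

def prediction_survives_sparsification(model, data, edge_mask, keep_ratio=0.2):
    """Retain only the top `keep_ratio` fraction of edges by importance and
    check whether the predicted class is unchanged. A faithful explanation
    should preserve the prediction even at high sparsity."""
    k = max(1, int(keep_ratio * edge_mask.numel()))
    top_edges = edge_mask.topk(k).indices
    sparse_edge_index = data.edge_index[:, top_edges]
    batch = torch.zeros(data.num_nodes, dtype=torch.long)
    with torch.no_grad():
        full_pred = model(data.x, data.edge_index, batch).argmax(dim=-1)
        sparse_pred = model(data.x, sparse_edge_index, batch).argmax(dim=-1)
    return bool((full_pred == sparse_pred).item())

print(prediction_survives_sparsification(model, data, explanation.edge_mask))
```

PyTorch Geometric also ships fidelity metrics in `torch_geometric.explain.metric` that formalize this idea; the hand-rolled check above is only meant to make the sparsification intuition concrete.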
What You'll Need
Estimated Effort
60 Minutes
Level
Advanced
Skills You Will Learn
Artificial Intelligence, Cheminformatics, Explainable Artificial Intelligence (XAI), Graph Neural Networks (GNNs), Molecular Modeling, PyTorch Geometric (PyG)
Language
English
Course Code
GPXX03AYEN