Generating Human Understandable Explanations for Node Embeddings
Published in arXiv preprint arXiv:2406.07642, 2024
Shafi, Z., Chatterjee, A., & Eliassi-Rad, T. (2024). Generating Human Understandable Explanations for Node Embeddings. arXiv:2406.07642.

Graph neural networks and embedding methods have achieved strong performance in many tasks, yet their outputs lack human-interpretable explanations. This paper proposes a framework that generates natural-language explanations for node embeddings in graph machine learning models. By linking learned representations to subgraph structures and semantic node attributes, the framework helps practitioners understand why a model assigns certain embeddings, an important step toward explainable AI for graph-structured data. We evaluate our approach on citation, social, and biological networks, showing strong alignment between model rationale and human expert judgment.
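To illustrate the general idea of explaining embedding dimensions via interpretable node attributes (this is a minimal sketch, not the paper's actual method), the toy example below embeds a small two-community graph spectrally and then "explains" each embedding dimension by the node attribute it correlates with most strongly. The graph, the attribute set, and the `explain` helper are all hypothetical choices made for this sketch.

```python
import numpy as np

# Toy graph: two 4-node cliques (communities A and B) joined by one bridge edge.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),  # clique A
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7),  # clique B
         (3, 4)]                                          # bridge
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Stand-in embedding: top-2 eigenvectors of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, np.argsort(-vals)[:2]]  # shape (n, 2)

# Human-interpretable node attributes to link dimensions to.
attrs = {
    "degree": A.sum(axis=1),
    "community": np.array([0] * 4 + [1] * 4, dtype=float),
}

def explain(emb, attrs):
    """Label each embedding dimension with its most correlated attribute."""
    out = {}
    for d in range(emb.shape[1]):
        best, best_r = None, 0.0
        for name, vec in attrs.items():
            r = np.corrcoef(emb[:, d], vec)[0, 1]
            if abs(r) > abs(best_r):
                best, best_r = name, r
        out[d] = (best, best_r)
    return out

expl = explain(emb, attrs)
# Dimension 0 tracks degree; dimension 1 separates the two communities.
```

On this graph the leading eigenvector is constant within degree classes, so it aligns perfectly with degree, while the second eigenvector flips sign between the two cliques and therefore aligns with community membership; a real pipeline would use richer attributes and learned embeddings.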
