Summary of Decision Trees for Interpretable Clusters in Mixture Models and Deep Representations, by Maximilian Fleissner et al.
Decision Trees for Interpretable Clusters in Mixture Models and Deep Representations
by Maximilian Fleissner, Maedeh Zarvandi, Debarghya Ghoshdastidar
First submitted to arXiv on: 3 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Decision Trees are a fundamental component of Explainable Machine Learning, serving as interpretable alternatives to black-box models. Recently, there has been growing interest in using Decision Trees for unsupervised learning, even though they have traditionally been applied in supervised settings. This paper introduces the concept of an Explainability-to-Noise Ratio for mixture models, formalizing the intuition that well-clustered data can be explained effectively using a Decision Tree. The authors propose an algorithm that constructs suitable trees from input mixture models and prove upper and lower bounds on its error rate, assuming sub-Gaussian mixture components. Additionally, they demonstrate how Concept Activation Vectors (CAVs) can extend this explainability approach to Neural Networks. The approach is empirically validated on standard tabular and image datasets. |
| Low | GrooveSquid.com (original content) | Decision Trees are an important part of machine learning that help us understand how a model works. Traditionally, Decision Trees have been used for supervised learning, but recently people have started using them for unsupervised learning too. In this paper, the authors introduce a new idea called the Explainability-to-Noise Ratio that helps us understand when a Decision Tree can correctly group data. They also propose an algorithm to build these trees and show how well they work on different types of data. |
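The core idea of explaining a mixture model with a tree can be sketched with standard scikit-learn pieces. This is an illustrative stand-in, not the paper's algorithm or its guarantees: it fits a Gaussian mixture to well-separated synthetic data, then fits a shallow Decision Tree to mimic the mixture's cluster assignments, so each cluster is described by a few axis-aligned rules.

```python
# Hedged sketch: approximate a mixture model's clustering with a small
# decision tree (an illustration of the general idea, not the authors' method).
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

# Well-separated synthetic data: three (sub-)Gaussian clusters in 2D.
X, _ = make_blobs(n_samples=600, centers=3, cluster_std=1.0, random_state=0)

# Unsupervised step: fit a Gaussian mixture and read off cluster labels.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)

# Explainability step: a tree with one leaf per cluster that imitates
# the mixture's assignments via interpretable threshold rules.
tree = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0).fit(X, labels)

# For well-clustered data, agreement between tree and mixture should be high;
# intuitively, this is the regime of a large Explainability-to-Noise Ratio.
agreement = (tree.predict(X) == labels).mean()
print(f"tree/mixture agreement: {agreement:.2f}")
print(export_text(tree, feature_names=["x1", "x2"]))
```

On overlapping clusters the same tree would disagree with the mixture far more often, which is the intuition the paper's Explainability-to-Noise Ratio makes precise.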
Keywords
» Artificial intelligence » Decision tree » Machine learning » Supervised » Unsupervised