Summary of Mixed-curvature Decision Trees and Random Forests, by Philippe Chlenski et al.
Mixed-curvature decision trees and random forests
by Philippe Chlenski, Quentin Chu, Raiyan R. Khan, Kaizhu Du, Antonio Khalil Moretti, Itsik Pe’er
First submitted to arXiv on: 3 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | Decision trees (DTs) and random forests (RFs) are widely used for classification and regression in Euclidean spaces, but learning algorithms for non-Euclidean spaces are limited. This paper extends DT and RF algorithms to product manifolds, which combine components of different curvature: hyperbolic, hyperspherical, or Euclidean. A novel angular reformulation of the split criterion respects the geometry of the product manifold, yielding splits that are geodesically convex, maximum-margin, and composable. In special cases, the method reduces to existing algorithms in Euclidean or hyperbolic space. Benchmarked on a range of classification, regression, and link-prediction tasks, product-space RFs outperformed all other classifiers on 25 of 57 benchmarks, highlighting their value as powerful new tools for data analysis in product manifolds. |
Low | GrooveSquid.com (original content) | Product manifolds combine spaces of different curvature to handle complex datasets. Decision trees and random forests are standard tools for classification and regression, but learning algorithms for non-Euclidean spaces have been limited. This paper extends these algorithms to product manifolds while respecting the geometry of the space, allowing geodesically convex splits that are maximum-margin and composable. In special cases, the method simplifies to existing algorithms in Euclidean or hyperbolic spaces. Benchmarks on a variety of tasks show that product-space random forests outperformed other classifiers. |
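To give an intuition for the "angular split" idea described above, here is a minimal toy sketch (not the authors' implementation) for points on the circle S¹, the simplest curved component of a product manifold. Each split is a threshold angle, and candidate thresholds are scored by weighted Gini impurity, as in an ordinary decision tree. All function names and the scoring choice are illustrative assumptions.

```python
import math

def angle(p):
    # Angle of a point on the unit circle, mapped to [0, 2*pi).
    return math.atan2(p[1], p[0]) % (2 * math.pi)

def gini(labels):
    # Gini impurity of a list of class labels.
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_angular_split(points, labels):
    # Candidate thresholds lie halfway between consecutive sorted angles;
    # return the threshold with the lowest weighted Gini impurity.
    angs = sorted(angle(p) for p in points)
    best_theta, best_score = None, float("inf")
    for a, b in zip(angs, angs[1:]):
        theta = (a + b) / 2
        left = [y for p, y in zip(points, labels) if angle(p) < theta]
        right = [y for p, y in zip(points, labels) if angle(p) >= theta]
        n = len(labels)
        score = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
        if score < best_score:
            best_theta, best_score = theta, score
    return best_theta, best_score
```

For example, four points at angles 0.1, 0.2, 1.0, and 1.1 with labels 0, 0, 1, 1 are perfectly separated by a threshold near 0.6 (impurity 0). The paper's actual method generalizes far beyond this sketch, handling hyperbolic and hyperspherical components jointly and guaranteeing geodesic convexity of the resulting regions.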
Keywords
» Artificial intelligence » Classification » Regression