Summary of Compositional Learning of Functions in Humans and Machines, by Yanli Zhou et al.
Compositional learning of functions in humans and machines
by Yanli Zhou, Brenden M. Lake, Adina Williams
First submitted to arXiv on: 18 Mar 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Human-Computer Interaction (cs.HC); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | This paper investigates how humans and artificial intelligence learn and reason with complex compositions of visual functions. The study examines how people interpret the output of interacting functions whose results depend on context changes induced by different function orderings. Participants were trained on individual functions and then assessed on composing two learned functions across four interaction types, including cases where the first function creates or removes the context needed to apply the second (see the sketch after this table). Results show that humans make zero-shot generalizations to novel visual function compositions across interaction conditions, demonstrating sensitivity to contextual changes. A comparison with a neural network model shows that the meta-learning for compositionality approach enables standard sequence-to-sequence Transformers to mimic human generalization patterns when composing functions. |
Low | GrooveSquid.com (original content) | This paper looks at how people and computers learn and understand complex things made up of smaller parts. It’s like cooking: we can take known recipes and make new dishes by combining them in different ways. The study shows that humans are good at understanding these combinations, even when the order of the “recipes” changes. A computer model did not do as well until it was trained with a technique called meta-learning for compositionality, after which it combined functions in new ways much like people do. |
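As a rough illustration of the order-dependent compositions described in the medium summary, where one function creates or removes the context another function needs, here is a minimal Python sketch. The toy scene representation and the two functions are hypothetical stand-ins chosen for illustration; they are not the paper’s actual visual stimuli, task format, or code.

```python
# Minimal sketch (assumed toy example, not from the paper): two functions over a
# simple "scene" whose composed result depends on ordering, because one function
# creates the context that the other function acts on.

from typing import List, Tuple

Scene = List[Tuple[str, str]]  # a scene is a list of (shape, color) objects


def add_red_circle(scene: Scene) -> Scene:
    """Creates context: introduces a circle that later functions can act on."""
    return scene + [("circle", "red")]


def recolor_circles_blue(scene: Scene) -> Scene:
    """Needs context: only has a visible effect if circles are already present."""
    return [(shape, "blue" if shape == "circle" else color) for shape, color in scene]


start: Scene = [("square", "green")]

# Order 1: the first function creates the context for the second.
print(recolor_circles_blue(add_red_circle(start)))
# -> [('square', 'green'), ('circle', 'blue')]

# Order 2: the needed context does not exist yet when the recoloring is applied.
print(add_red_circle(recolor_circles_blue(start)))
# -> [('square', 'green'), ('circle', 'red')]
```

The two orderings yield different scenes, which is the kind of contextual interaction the summaries describe: both participants and the meta-learning for compositionality model must infer how a pair of learned functions interacts when composed in a novel order.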
Keywords
- Artificial intelligence
- Generalization
- Meta-learning
- Neural network
- Zero-shot