

Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis

by Weikai Li, Ding Wang, Zijian Ding, Atefeh Sohrabizadeh, Zongyue Qin, Jason Cong, Yizhou Sun

First submitted to arXiv on: 25 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Hardware Architecture (cs.AR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper proposes a hierarchical Mixture of Experts (MoE) model to automate the design of pragmas in High-Level Synthesis (HLS) for Field-Programmable Gate Array (FPGA) design. This addresses a challenge for software developers, who otherwise need hardware knowledge to choose the pragmas that are crucial for performance prediction and optimization. The hierarchical MoE has two levels: low-level MoEs that learn to handle different regions of the representation space at three program granularities (node, basic block, and graph), and a high-level MoE that aggregates the three granularities for the final decision. To stabilize training, a two-stage training method is proposed that avoids expert polarization. The effectiveness of the hierarchical MoE is verified through extensive experiments.
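To make the two-level structure above concrete, here is a minimal PyTorch sketch, offered only as an illustration: the paper's actual architecture, layer sizes, and class names are not given on this page, so every name and dimension below is a hypothetical stand-in. Each granularity (node, basic block, graph) gets its own low-level MoE whose gate routes embeddings to region-specific experts, and a high-level gate mixes the three granularity outputs into a single prediction.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LowLevelMoE(nn.Module):
    """Mixture of experts for one granularity (node, basic block, or graph).
    The gate softly routes each embedding to experts specializing in
    different regions of the representation space."""
    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x):
        weights = F.softmax(self.gate(x), dim=-1)                 # (batch, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, E, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)          # (batch, dim)

class HierarchicalMoE(nn.Module):
    """Two levels: one low-level MoE per granularity, plus a high-level
    gate that mixes the three granularity outputs for the final prediction."""
    def __init__(self, dim):
        super().__init__()
        self.low = nn.ModuleDict({g: LowLevelMoE(dim) for g in ("node", "block", "graph")})
        self.high_gate = nn.Linear(3 * dim, 3)
        self.head = nn.Linear(dim, 1)   # e.g. a design-quality regression target

    def forward(self, node_emb, block_emb, graph_emb):
        # Each input is a pooled embedding of shape (batch, dim) for its granularity.
        outs = [self.low["node"](node_emb),
                self.low["block"](block_emb),
                self.low["graph"](graph_emb)]
        weights = F.softmax(self.high_gate(torch.cat(outs, dim=-1)), dim=-1)  # (batch, 3)
        mixed = sum(w.unsqueeze(-1) * o for w, o in zip(weights.unbind(-1), outs))
        return self.head(mixed)

# Example usage with random 64-dimensional embeddings for 8 kernels:
# model = HierarchicalMoE(dim=64)
# pred = model(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64))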
Low Difficulty Summary (GrooveSquid.com, original content)
HLS is a tool used to design FPGAs. It lets software developers describe FPGA circuits in ordinary programming languages. While writing such programs is easy, optimizing their performance still requires hardware knowledge. Recently, machine learning models such as graph neural networks (GNNs) have been used to automate this step, but these models often do not generalize well to new kernels. The proposed hierarchical MoE model addresses this by learning patterns that carry over from previously seen kernels to new ones. It has two levels: a low-level MoE that learns about different regions of the representation space, and a high-level MoE that aggregates information for the final decision.
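The two-stage training mentioned in the medium-difficulty summary is meant to prevent expert polarization, where a few experts absorb nearly all of the routing weight. This page does not describe the two stages themselves, so the sketch below shows only one common, hypothetical way to realize such a schedule with the HierarchicalMoE sketch above, not the authors' actual procedure: first train the experts with the gates zeroed and frozen (uniform routing), then unfreeze the gates and fine-tune everything jointly. The loader of (node, block, graph, target) batches and the regression loss are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def two_stage_train(model, loader, epochs_stage1=10, epochs_stage2=20):
    """Hypothetical two-stage schedule to discourage expert polarization."""
    gate_params = [p for name, p in model.named_parameters() if "gate" in name]

    # Stage 1: zero and freeze every gate so routing stays uniform while the
    # experts warm up on the prediction task.
    for p in gate_params:
        nn.init.zeros_(p)
        p.requires_grad_(False)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
    for _ in range(epochs_stage1):
        for node_emb, block_emb, graph_emb, target in loader:
            loss = F.mse_loss(model(node_emb, block_emb, graph_emb).squeeze(-1), target)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: unfreeze the gates and fine-tune the whole hierarchy jointly.
    for p in gate_params:
        p.requires_grad_(True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs_stage2):
        for node_emb, block_emb, graph_emb, target in loader:
            loss = F.mse_loss(model(node_emb, block_emb, graph_emb).squeeze(-1), target)
            opt.zero_grad(); loss.backward(); opt.step()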

Keywords

» Artificial intelligence  » Machine learning  » Mixture of experts  » Optimization