Summary of Proofread: Fixes All Errors with One Tap, by Renjie Liu et al.
Proofread: Fixes All Errors with One Tap
by Renjie Liu, Yanxiang Zhang, Yun Zhu, Haicheng Sun, Yuanbo Zhang, Michael Xuelin Huang, Shanqing Cai, Lei Meng, Shumin Zhai
First submitted to arXiv on: 6 Jun 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: The paper presents Proofread, a novel Gboard feature powered by a server-side Large Language Model (LLM) that enables seamless sentence-level and paragraph-level corrections with a single tap. To build it, the authors implement a data synthesis pipeline carefully tailored to online use cases, design multifaceted evaluation metrics, and apply a two-stage tuning approach: Supervised Fine Tuning (SFT) for foundational quality followed by Reinforcement Learning (RL) for targeted refinement. In an extensive experiment on a human-labeled golden set, the tuned PaLM2-XS model achieved an 85.56% good ratio (see the metric sketch after this table). The feature has been launched on Pixel 8 devices, with the model served on TPU v5 in Google Cloud and thousands of daily active users. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: The paper shows how a new Gboard feature called Proofread uses a special kind of artificial intelligence (AI), called a Large Language Model (LLM), to help people type more accurately and quickly. The authors carefully built training data that reflects how people actually type online, and they created special metrics to measure how well the model performs. They then fine-tuned the LLM in two stages: a first stage to give it solid basic quality and a second stage to refine it specifically for proofreading. The results were impressive: the model's corrections were judged good 85.56% of the time. The feature was then released on Pixel 8 devices, where thousands of people use it every day. |
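To make the headline metric concrete, here is a minimal Python sketch of how a "good ratio" over a human-labeled golden set might be computed. This is not code from the paper; the `GoldenExample` fields and the "good"/"bad" label values are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's code): computing a "good ratio" style metric
# over a human-labeled golden set of model corrections.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class GoldenExample:
    noisy_input: str    # user-typed text containing errors (hypothetical field)
    model_output: str   # LLM-proposed correction (hypothetical field)
    human_label: str    # rater verdict, e.g. "good" or "bad" (assumed label scheme)


def good_ratio(examples: Iterable[GoldenExample]) -> float:
    """Return the fraction of model corrections that raters labeled 'good'."""
    examples = list(examples)
    if not examples:
        return 0.0
    good = sum(1 for ex in examples if ex.human_label == "good")
    return good / len(examples)


# Example: an 85.56% good ratio means roughly 856 of every 1000 golden-set
# corrections were judged acceptable by human raters.
```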
Keywords
* Artificial intelligence * Fine tuning * Large language model * Reinforcement learning * Supervised fine tuning