
SMI-Editor: Edit-based SMILES Language Model with Fragment-level Supervision

by Kangjie Zheng, Siyue Liang, Junwei Yang, Bin Feng, Zequn Liu, Wei Ju, Zhiping Xiao, Ming Zhang

First submitted to arXiv on: 7 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Biomolecules (q-bio.BM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes SMI-Editor, a novel edit-based pre-trained SMILES language model that addresses two limitations of existing pre-trained SMILES models: they typically rely on single-token-level supervision during pre-training, neglecting substructural information, and they only ever process corrupted SMILES inputs. SMI-Editor introduces fragment-level training signals by disrupting substructures within molecules and feeding the disrupted molecules back into the model. This approach makes it possible to use valid SMILES as inputs, so the model learns to reconstruct complete molecules from incomplete structures. The proposed method achieves state-of-the-art performance across multiple downstream molecular tasks, even outperforming several 3D molecular representation models.
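The fragment-corruption idea can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: it tokenizes a SMILES string naively at the character level, deletes a contiguous span of tokens to simulate a disrupted substructure, and pairs the corrupted input with the original molecule as a reconstruction target.

```python
import random

def corrupt_smiles(smiles: str, max_fragment: int = 3, seed: int = 0):
    """Delete a contiguous span of tokens from a SMILES string,
    simulating the removal of a molecular fragment.

    Returns (corrupted_input, original_target). Hypothetical sketch:
    a real tokenizer would respect multi-character atoms like 'Cl'
    and chemically meaningful fragment boundaries.
    """
    rng = random.Random(seed)
    tokens = list(smiles)  # naive character-level tokens
    span = rng.randint(1, min(max_fragment, len(tokens) - 1))
    start = rng.randint(0, len(tokens) - span)
    corrupted = tokens[:start] + tokens[start + span:]
    return "".join(corrupted), smiles

# Example: the model's training task is to restore the missing fragment.
inp, target = corrupt_smiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```

A training pair produced this way gives the model a valid molecule as the reconstruction target while the input is missing a substructure, which is the fragment-level supervision signal the summary describes.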
Low Difficulty Summary (original content by GrooveSquid.com)
SMILES is a way to represent molecules as text. This lets pre-trained language models learn molecular structures. However, existing pre-trained SMILES models don't fully use the information about smaller parts of molecules, and they only learn from broken or corrupted texts, not whole, correct ones. To fix this, the researchers created SMI-Editor, a new way to train SMILES models. It creates extra training signals by removing small parts of a molecule's structure and asking the model to restore them. This teaches the model to reconstruct complete molecules from incomplete ones. The results show that SMI-Editor works better than other methods across many molecular tasks.

Keywords

* Artificial intelligence  * Language model  * Token