Summary of Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes, by Ming Zhong et al.
Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes
by Ming Zhong, Dehao Liu, Raymundo Arroyave, Ulisses Braga-Neto
First submitted to arXiv on: 8 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | The authors propose a semi-supervised methodology for training physics-informed machine learning models: neural networks and Gaussian processes are first trained separately via self-training, then combined through co-training. This approach targets a common failure mode of physics-informed machine learning, namely the difficulty of propagating information forward in time from the initial data. The method is demonstrated through extensive numerical experiments. |
| Low | GrooveSquid.com (original content) | Physics-informed machine learning methods can now train better with less data! A new way to combine two types of models (neural networks and Gaussian processes) helps solve problems that arise when trying to predict what will happen in the future. This is super important for things like predicting weather patterns or understanding how systems change over time. |
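To make the self-training idea in the medium summary concrete, here is a minimal sketch of time-marching label propagation with a Gaussian process: the model is fit on labeled data near the initial time, then repeatedly pseudo-labels its most confident predictions in the next time band and refits. This is an illustrative toy (a 1D ODE with known solution, scikit-learn's `GaussianProcessRegressor`, and an assumed "keep the 5 most confident points" rule), not the authors' actual implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def exact(t):
    # Toy ODE u' = -u with u(0) = 1; exact solution used only to
    # generate labeled data and to measure error afterwards.
    return np.exp(-t)

# Labeled data available only near t = 0 (the initial-condition region).
t_lab = rng.uniform(0.0, 0.4, size=8).reshape(-1, 1)
u_lab = exact(t_lab).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)

# Self-training loop: march forward in time in small bands,
# pseudo-labeling the GP's most confident predictions in each band.
T, U = t_lab, u_lab
for lo in np.arange(0.4, 2.0, 0.2):
    gp.fit(T, U)
    t_new = np.linspace(lo, lo + 0.2, 10).reshape(-1, 1)
    mu, std = gp.predict(t_new, return_std=True)
    keep = np.argsort(std)[:5]          # 5 lowest-variance points (assumed rule)
    T = np.vstack([T, t_new[keep]])
    U = np.concatenate([U, mu[keep]])

gp.fit(T, U)                            # final refit on labels + pseudo-labels

t_test = np.linspace(0.0, 2.0, 50).reshape(-1, 1)
err = float(np.max(np.abs(gp.predict(t_test) - exact(t_test).ravel())))
print(f"pseudo-labels added: {len(U) - len(u_lab)}, max abs error: {err:.3f}")
```

The paper additionally couples a neural network and a GP through co-training (each model pseudo-labels data for the other) and uses physics residuals rather than a plain regression fit; this sketch only shows the forward-in-time pseudo-labeling mechanism shared by both schemes.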
Keywords
» Artificial intelligence » Machine learning » Self-training » Semi-supervised