Summary of What You See Is Not What You Get: Neural Partial Differential Equations and the Illusion of Learning, by Arvind Mohan et al.
What You See is Not What You Get: Neural Partial Differential Equations and The Illusion of Learning
by Arvind Mohan, Ashesh Chattopadhyay, Jonah Miller
First submitted to arXiv on: 22 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computational Physics (physics.comp-ph)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper questions the assumption that NeuralPDEs, which embed neural networks inside partial differential equations (PDEs), are more trustworthy and generalizable than black-box models. Differentiable programming relies on high-quality PDE simulations as “ground truth” for training, but these simulations are only discrete numerical approximations of the true physics. Using ideas from numerical analysis, targeted experiments, and Jacobian analysis, the study investigates the physical interpretability of NeuralPDEs. The results show that NeuralPDEs learn artifacts in the simulation training data that arise from discretized Taylor-series truncation error (a toy numerical sketch of this artifact appears below the table), leading to systematic bias and poor generalization. This bias manifests aggressively even in simple 1-D equations, raising concerns about differentiable programming on more complex PDEs and about the dataset integrity of foundation models. |
Low | GrooveSquid.com (original content) | The paper explores whether NeuralPDEs are as physically interpretable as we think. Right now, people assume they are more trustworthy because they are trained on simulations that come from real physics. But what if those simulations aren’t perfect? The study shows that NeuralPDEs learn the mistakes in the simulation data and become biased because of them. This means they might not work well on real-world problems. The researchers also found that it is hard to predict when a model will be accurate, even for simple equations. |
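
The medium-difficulty summary hinges on one point: discretized “ground truth” carries Taylor-series truncation error that is not part of the continuous PDE, so a model trained to reproduce it is rewarded for fitting that artifact. The minimal sketch below is not from the paper; it only makes the artifact concrete for 1-D linear advection, where a first-order upwind scheme behaves like the true equation plus spurious numerical diffusion. All names and parameter values (grid size, CFL number, pulse width) are illustrative assumptions.

```python
# Toy illustration (not from the paper): truncation-error artifact in
# "ground truth" data for the 1-D linear advection equation u_t + c u_x = 0.
# Taylor expansion of the first-order upwind scheme gives the modified
# equation u_t + c u_x = (c*dx/2)*(1 - c*dt/dx) * u_xx + O(dx^2), i.e. the
# discrete data contain numerical diffusion that the continuous PDE lacks.
import numpy as np

c, L, nx = 1.0, 1.0, 200              # advection speed, domain length, grid points (assumed values)
dx = L / nx
dt = 0.4 * dx / c                     # CFL number 0.4, stable for upwind
x = np.arange(nx) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)  # initial Gaussian pulse

u = u0.copy()
n_steps = int(round(0.4 / dt))        # advect to t = 0.4
for _ in range(n_steps):
    # First-order upwind update with periodic boundary; its O(dx) truncation
    # term acts like extra diffusion that is NOT in the continuous physics.
    u = u - c * dt / dx * (u - np.roll(u, 1))

# Exact solution on a periodic domain: the initial pulse shifted by c*t.
t_final = n_steps * dt
u_exact = np.exp(-200.0 * (((x - c * t_final) % L) - 0.3) ** 2)

print(f"exact peak              : {u_exact.max():.3f}")
print(f"upwind 'truth' peak     : {u.max():.3f}")   # visibly damped
print(f"max pointwise mismatch  : {np.abs(u - u_exact).max():.3f}")
```

Running the sketch shows the upwind “ground truth” pulse noticeably flattened relative to the exact solution; any model trained to reproduce the upwind data, NeuralPDE or otherwise, inherits that damping as if it were physics, which is the kind of systematic bias the paper analyzes.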
Keywords
* Artificial intelligence
* Generalization