Summary of Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications, by Francisco Mena et al.
Impact Assessment of Missing Data in Model Predictions for Earth Observation Applications
by Francisco Mena, Diego Arenas, Marcela Charfuelan, Marlon Nuske, Andreas Dengel
First submitted to arXiv on 21 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper’s original abstract; read it on arXiv. |
| Medium | GrooveSquid.com (original content) | The paper presents a machine-learning-based assessment of how missing Earth observation (EO) data sources affect model performance. The study covers four datasets with classification and regression tasks, and evaluates the predictive quality of different methods under scenarios where EO data is missing due to noise, clouds, or satellite mission failures. The results show that some models are more robust to missing data than others, with the Ensemble strategy reaching a prediction robustness of up to 100%. The analysis also finds missing-data scenarios harder to handle in regression tasks than in classification tasks, and identifies the optical view as the most critical source when it is missing on its own. A toy sketch of this kind of evaluation follows the table. |
| Low | GrooveSquid.com (original content) | A new study looks at how machine learning models work when some Earth observation data is missing. Right now, scientists often assume that all the data will be available, but this isn’t always true: clouds might cover part of the area being studied, or a satellite mission might fail. The researchers tested different methods to see how well they would work in these situations and found that some handle missing data better than others. They also discovered that regression tasks (which involve predicting a number) are harder to handle with missing data than classification tasks (which involve sorting things into groups). Overall, the study shows that scientists need to be prepared for missing data and develop ways to deal with it. |
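
To make the evaluation idea above more concrete, here is a minimal toy sketch (not the authors' code): it assumes two synthetic input sources, here called `optical` and `radar`, trains a scikit-learn classifier, zero-imputes the optical block at test time to simulate a missing source, and reports the fraction of predictions that stay unchanged as a simple "prediction robustness" score. The source names, the zero-imputation choice, and the random data are assumptions for illustration only.

```python
# Hypothetical sketch: simulate a missing EO source and measure how much
# a model's predictions change. Not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy multi-source data: each sample has an "optical" and a "radar" view.
n = 1000
optical = rng.normal(size=(n, 8))   # e.g. spectral band statistics (assumed)
radar = rng.normal(size=(n, 4))     # e.g. SAR backscatter features (assumed)
y = (optical[:, 0] + 0.5 * radar[:, 0] > 0).astype(int)  # toy labels

X = np.hstack([optical, radar])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Predictions with all sources vs. with the optical view "missing" at test
# time (here simply zero-imputed; other imputation/ensemble strategies exist).
pred_full = model.predict(X_test)
X_test_missing = X_test.copy()
X_test_missing[:, :optical.shape[1]] = 0.0   # drop the optical block
pred_missing = model.predict(X_test_missing)

print("accuracy, all sources:    ", accuracy_score(y_test, pred_full))
print("accuracy, optical missing:", accuracy_score(y_test, pred_missing))
# "Prediction robustness": fraction of test samples whose prediction is
# unchanged when the source is removed (100% = fully robust).
print("prediction robustness:    ", np.mean(pred_full == pred_missing))
```

Zero imputation is only one way to simulate a missing view; the paper compares several strategies, including an Ensemble approach, which is where the robustness differences reported in the summaries come from.
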
Keywords
- Artificial intelligence
- Classification
- Machine learning
- Regression