
Scalable and Efficient Methods for Uncertainty Estimation and Reduction in Deep Learning

by Soyed Tuhin Ahmed

First submitted to arXiv on: 13 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This PhD thesis explores scalable and efficient methods for uncertainty estimation and reduction in deep learning, with a focus on Computation-in-Memory (CIM) using emerging resistive non-volatile memories. The research tackles inherent uncertainties arising from out-of-distribution (OOD) inputs and hardware non-idealities, which must be addressed to maintain functional safety in automated decision-making systems. To this end, the authors propose problem-aware training algorithms, novel NN topologies, and hardware co-design solutions, including dropout-based binary Bayesian Neural Networks that leverage spintronic devices and variational inference techniques. These innovations improve OOD data detection, inference accuracy, and energy efficiency, contributing to the reliability and robustness of NN implementations.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This PhD thesis is all about making artificial intelligence (AI) work better in situations where it’s really important that it gets things right, like self-driving cars or medical diagnosis. Right now, AI can be super smart at things like recognizing pictures or understanding speech, but when it encounters something it hasn’t seen before, it can make mistakes. This research tries to solve that problem by developing new ways to estimate how sure the AI is about its answers, so it can be more careful when it isn’t sure. It does this by using special kinds of computer chips and training algorithms that help the AI avoid making mistakes.
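As a rough illustration of the dropout-based Bayesian idea mentioned in the medium summary, the sketch below shows Monte Carlo dropout used to estimate predictive uncertainty and flag possible out-of-distribution inputs. This is only an assumed, simplified software example, not the thesis's actual method, which targets binary Bayesian NNs on spintronic Computation-in-Memory hardware; the model architecture, number of samples, and entropy threshold are illustrative choices.

```python
# Minimal sketch of Monte Carlo dropout uncertainty estimation (illustrative only;
# the thesis itself targets binary Bayesian NNs on spintronic CIM hardware).
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=20):
    """Keep dropout active at inference time and average over stochastic forward passes."""
    model.train()  # enables dropout layers; no weights are updated (no_grad)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, n_classes)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy as a simple uncertainty score for OOD flagging.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Example usage with a hypothetical entropy threshold for OOD detection.
model = MCDropoutNet()
x = torch.randn(4, 784)                # stand-in input batch
mean_probs, entropy = predict_with_uncertainty(model, x)
is_ood = entropy > 1.5                 # threshold is an assumed, tunable value
```

In this kind of scheme, inputs whose predictive entropy exceeds the chosen threshold would be treated as uncertain or out-of-distribution and handled more cautiously.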

Keywords

  • Artificial intelligence
  • Deep learning
  • Dropout
  • Inference