


Learning in Convolutional Neural Networks Accelerated by Transfer Entropy

by Adrian Moldovan, Angel Caţaron, Răzvan Andonie

First submitted to arXiv on: 3 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Information Theory (cs.IT)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors): the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content):
A novel training mechanism for Convolutional Neural Networks (CNNs) integrates Transfer Entropy (TE) feedback connections to accelerate learning. TE quantifies the effective connectivity between artificial neurons, enabling analysis of the relationships between pairs of neuron outputs in different layers. Introducing a TE-based parameter into training reduces the number of epochs required, at the cost of increased computational complexity per epoch. Experiments show that restricting inter-neural information transfer to specific layers and subsets of neurons yields an efficient trade-off between accuracy and overhead.
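The central quantity here, transfer entropy between the output series of two neurons, can be sketched with a simple plug-in estimator. The sketch below assumes neuron outputs have already been binarized (a common simplification when estimating TE from activations); the function name and the counting approach are illustrative and not taken from the paper.

```python
import numpy as np

def transfer_entropy(y, x):
    """Plug-in estimate of TE(Y -> X) for binary series, in bits:
    TE = sum over (x1, x0, y0) of p(x1, x0, y0) * log2( p(x1|x0, y0) / p(x1|x0) ).
    A positive value suggests y's past helps predict x beyond x's own past."""
    x, y = np.asarray(x), np.asarray(y)
    x0, x1, y0 = x[:-1], x[1:], y[:-1]  # past of X, future of X, past of Y
    te = 0.0
    for a in (0, 1):          # value of x_{t+1}
        for b in (0, 1):      # value of x_t
            for c in (0, 1):  # value of y_t
                p_abc = np.mean((x1 == a) & (x0 == b) & (y0 == c))
                p_bc = np.mean((x0 == b) & (y0 == c))
                p_ab = np.mean((x1 == a) & (x0 == b))
                p_b = np.mean(x0 == b)
                if p_abc > 0 and p_bc > 0 and p_ab > 0 and p_b > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

# Toy check: if x simply copies y with a one-step delay, TE(y -> x) is
# large, while TE from an unrelated series to x is close to zero.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)
x = np.empty_like(y)
x[0], x[1:] = 0, y[:-1]           # x_{t+1} = y_t (driven series)
z = rng.integers(0, 2, 5000)      # independent control series
te_causal, te_indep = transfer_entropy(y, x), transfer_entropy(z, x)
```

In the paper's setting, such pairwise TE values (computed between neuron outputs in different layers) feed back into training; computing them for all pairs is what drives the per-epoch overhead, which motivates restricting the estimate to selected layers and neuron subsets.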
Low Difficulty Summary (written by GrooveSquid.com, original content):
The paper introduces a new way to train Convolutional Neural Networks (CNNs) by using Transfer Entropy (TE). TE helps us understand how different neurons in the network talk to each other. By adding this feedback, the training process gets faster, but it also makes the computer work harder. The researchers found that if they only look at certain parts of the network, the benefits outweigh the extra work. This new approach can help CNNs learn better and faster.

Keywords

* Artificial intelligence