Generalized Exponentiated Gradient Algorithms and Their Application to On-Line Portfolio Selection

by Andrzej Cichocki, Sergio Cruces, Auxiliadora Sarmiento, Toshihisa Tanaka

First submitted to arXiv on: 2 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Information Theory (cs.IT); Portfolio Management (q-fin.PM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on the paper's arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper introduces EGAB, a novel family of multiplicative gradient algorithms for positive data, derived from an Alpha-Beta divergence regularization function. The updates are highly flexible, controlled by three hyperparameters: alpha, beta, and the learning rate eta. The authors develop two methods for enforcing a unit l1-norm constraint on the nonnegative weight vectors within the generalized EGAB algorithms. To illustrate their applicability, they evaluate the proposed updates on the online portfolio selection problem (OLPS) using gradient-based methods. Simulation results confirm that the adaptability of these generalized gradient updates can effectively enhance performance for some portfolios, particularly in scenarios involving transaction costs.
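
The abstract does not spell out the exact form of the EGAB updates, so as a point of reference, here is a minimal NumPy sketch of the classical exponentiated-gradient (EG) update for online portfolio selection, the multiplicative-update scheme that the EGAB family generalizes (EGAB adds the alpha and beta hyperparameters on top of the learning rate). The function name, learning-rate value, and toy data below are illustrative, not taken from the paper.

```python
import numpy as np

def eg_update(w, x, eta=0.05):
    """One exponentiated-gradient step for online portfolio selection.

    w   : current portfolio weights (nonnegative, summing to 1)
    x   : price relatives for the period (closing price / opening price, per asset)
    eta : learning rate

    Illustrative sketch: the classical EG(eta) update, which the paper's
    EGAB family generalizes via Alpha-Beta divergence regularization.
    """
    # Gradient of the log-wealth objective log(w . x) with respect to w.
    grad = x / np.dot(w, x)
    # Multiplicative (exponentiated) update keeps the weights positive.
    w_new = w * np.exp(eta * grad)
    # Renormalize to unit l1 norm so the weights stay on the simplex;
    # the paper develops two methods for enforcing this constraint.
    return w_new / w_new.sum()

# Toy usage: three assets, uniform initial weights, one period of price relatives.
w = np.ones(3) / 3
x = np.array([1.02, 0.97, 1.01])
w = eg_update(w, x)
print(w)  # updated weights, still nonnegative and summing to 1
```
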
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper creates a new way to update the weights in algorithms that work with positive data. It's called EGAB and is based on a special formula called the Alpha-Beta divergence. This family of updates is very flexible because it has three important settings: alpha, beta, and the learning rate. To make sure the weights stay valid (nonnegative and adding up to one), the authors developed two ways to control them. They tested this on a problem called online portfolio selection (OLPS) using gradient-based methods. The results show that making these updates more adaptable can actually help some portfolios perform better, especially when trading costs are taken into account.

Keywords

» Artificial intelligence  » Regularization