


Deep Learning to Predict Late-Onset Breast Cancer Metastasis: the Single Hyperparameter Grid Search (SHGS) Strategy for Meta Tuning Concerning Deep Feed-forward Neural Network

by Yijun Zhou, Om Arora-Jain, Xia Jiang

First submitted to arXiv on: 28 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE); Quantitative Methods (q-bio.QM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
While machine learning has made significant strides in medicine, its widespread adoption in clinical applications remains limited, particularly in predicting breast cancer metastasis years in advance. Our team developed a Deep Feedforward Neural Network (DFNN) model to tackle this challenge. However, efficiently identifying optimal hyperparameter values through grid search is a major hurdle due to time and resource constraints. To address these challenges, we introduced Single Hyperparameter Grid Search (SHGS), a preselection method that streamlines the process. Our experiments with SHGS applied to DFNN models for breast cancer metastasis prediction focus on analyzing eight target hyperparameters: epochs, batch size, dropout, L1, L2, learning rate, decay, and momentum. We created three figures illustrating the experimental results obtained from three LSM-I-10-Plus-year datasets. Our findings reveal that optimal hyperparameter values depend not only on the dataset but also on the settings of the other hyperparameters.
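To make the idea concrete, the core of a single-hyperparameter grid search is to vary one target hyperparameter over a candidate grid while holding all other hyperparameters fixed, then record how the model's validation score responds. The sketch below is an illustrative reading of that idea, not the authors' implementation; the function names, default values, and the toy scoring function are all hypothetical stand-ins for training and validating a DFNN.

```python
def shgs(evaluate, defaults, target_name, candidates):
    """Grid-search a single hyperparameter while holding the others fixed.

    evaluate: callable mapping a config dict to a validation score
              (a stand-in here for training + validating a DFNN).
    defaults: dict of fixed settings for the non-target hyperparameters.
    target_name: the one hyperparameter being swept.
    candidates: grid of values to try for the target hyperparameter.
    """
    results = {}
    for value in candidates:
        config = dict(defaults)       # copy the fixed settings
        config[target_name] = value   # vary only the target hyperparameter
        results[value] = evaluate(config)
    best = max(results, key=results.get)  # candidate with the highest score
    return best, results


# Hypothetical scoring function standing in for DFNN training; it simply
# rewards learning rates close to 0.01 so the sweep has a clear optimum.
def toy_evaluate(config):
    return -abs(config["learning_rate"] - 0.01)


defaults = {"learning_rate": 0.1, "batch_size": 32, "dropout": 0.5}
best_lr, scores = shgs(toy_evaluate, defaults, "learning_rate",
                       [0.001, 0.01, 0.1])
# best_lr == 0.01
```

In practice, the paper's finding that optimal values depend on the settings of the other hyperparameters means the `defaults` dict matters: a sweep like this preselects a promising range for one hyperparameter, which can then narrow the grid for a subsequent multi-hyperparameter search.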
Low Difficulty Summary (GrooveSquid.com, original content)
This paper is about using machine learning to predict when breast cancer will spread. Right now, this technology isn’t widely used in hospitals because it’s hard to make it work well. We developed a special kind of neural network called DFNN and tested it with different settings. To make things faster and easier, we created a new way to choose the right settings for our model. We looked at how eight different factors affected our model’s performance. Our results show that choosing the right settings depends on both the data and other factors.

Keywords

» Artificial intelligence  » Dropout  » Grid search  » Hyperparameter  » Machine learning  » Neural network