
PSNE: Efficient Spectral Sparsification Algorithms for Scaling Network Embedding

by Longlong Lin, Yunfeng Yu, Zihao Wang, Zeli Wang, Yuying Zhao, Jin Zhao, Tao Jia

First submitted to arXiv on: 5 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Network embedding, which maps vertices into a low-dimensional, continuous, and dense vector space while preserving structural properties, has numerous practical applications in graph learning. Recent methods have shown, both empirically and theoretically, that factorizing the Personalized PageRank (PPR) matrix yields effective embeddings. However, existing methods invoke Local Push to approximate single rows or columns of the PPR matrix, which leads to prohibitively high computational costs on large graphs. Additionally, PPR has limited power in capturing vertex similarities, resulting in degraded performance. To overcome these issues, the authors propose PSNE, an efficient spectral sparsification method that accelerates the computation of the PPR matrix while retaining strong structural similarities. PSNE uses a matrix polynomial to accelerate the computation, a multiple-perspective strategy to enhance representation power, and randomized singular value decomposition to obtain the final embedding vectors. Experimental evaluation on real-world and synthetic datasets shows that PSNE outperforms ten competitors in efficiency, effectiveness, and scalability.
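To make the pipeline above concrete, here is a minimal sketch of the general idea: approximate the PPR matrix with a truncated matrix polynomial, then factorize it with a basic randomized SVD to get embedding vectors. This is an illustrative reconstruction, not the paper's actual algorithm; PSNE's spectral sparsification and multiple-perspective strategy are more involved, and the function names, the damping factor `alpha`, and the truncation order `K` here are all assumptions.

```python
import numpy as np

def ppr_polynomial(A, alpha=0.15, K=10):
    """Truncated PPR series: sum_{k=0}^{K} alpha * (1-alpha)^k * P^k,
    where P = D^{-1} A is the row-normalized transition matrix.
    (Dense toy version; a scalable method would keep everything sparse.)"""
    deg = A.sum(axis=1)
    P = A / deg[:, None]
    M = np.zeros_like(P)
    Pk = np.eye(A.shape[0])       # P^0 = I
    for k in range(K + 1):
        M += alpha * (1 - alpha) ** k * Pk
        Pk = Pk @ P               # advance to P^{k+1}
    return M

def embed(M, dim=2, seed=0):
    """Basic randomized SVD: sketch the range of M with a Gaussian
    projection, then take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    Omega = rng.standard_normal((n, dim + 5))   # oversampled test matrix
    Q, _ = np.linalg.qr(M @ Omega)              # orthonormal range basis
    U, s, _ = np.linalg.svd(Q.T @ M, full_matrices=False)
    U = Q @ U
    return U[:, :dim] * np.sqrt(s[:dim])        # scaled singular vectors

# Toy 4-vertex graph (symmetric adjacency matrix)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
M = ppr_polynomial(A)
E = embed(M, dim=2)
```

Each row of the truncated PPR matrix sums to 1 - (1 - alpha)^(K+1), which converges to 1 as K grows; the embedding matrix `E` has one `dim`-dimensional row per vertex.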
Low Difficulty Summary (written by GrooveSquid.com, original content)
Network embedding helps turn complex graphs into easy-to-understand vector spaces. Researchers have been trying to make this process faster and better. Current methods use a technique called Local Push to approximate parts of a similarity matrix, but this takes a lot of time and isn't very good at capturing how similar different nodes are. To fix these problems, scientists developed PSNE, an efficient spectral sparsification method. It uses a special way of speeding up calculations, a new approach to get better results, and a clever trick to produce the embedding vectors. When they tested PSNE on real-world and synthetic datasets, it performed much better than other methods.

Keywords

» Artificial intelligence  » Embedding  » Vector space