
GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption

by Eugene Frimpong, Khoa Nguyen, Mindaugas Budzys, Tanveer Khan, Antonis Michalas

First submitted to arXiv on: 26 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Machine Learning has revolutionized data science, but its widespread adoption raises privacy concerns due to malicious attacks targeting ML models. To address these concerns, Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of ML models. One such approach is Hybrid Homomorphic Encryption (HHE), which combines symmetric cryptography with Homomorphic Encryption (HE) to overcome traditional HE’s limitations. Our work introduces HHE to ML by designing a PPML scheme for end devices, leveraging HHE as the fundamental building block to enable secure learning over encrypted data while preserving both input data and model privacy. We demonstrate real-world applicability with an HHE-based PPML application for classifying heart disease from sensitive ECG data. Our evaluations show a slight reduction in accuracy compared to inference on plaintext data, but minimal communication and computation costs, underscoring the practical viability of our approach.
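
To make the HHE idea above more concrete, below is a minimal, deliberately insecure Python sketch of the data flow such a scheme follows: the end device encrypts its ECG features with a cheap symmetric cipher and homomorphically encrypts only the keystream; the server "transciphers" the data into HE ciphertexts and evaluates a classifier without ever seeing the plaintext. The MockHE class, the additive masking, the feature values, and the linear classifier are invented illustrations, not the actual primitives or model used in the paper.

```python
import random

P = 2**61 - 1  # toy plaintext modulus


class MockHE:
    """Toy stand-in for an additively homomorphic encryption scheme.

    Ciphertexts here are just tagged integers -- there is NO security.
    A real deployment would use a lattice-based HE library instead.
    """

    def enc(self, m):
        return ("ct", m % P)

    def dec(self, ct):
        return ct[1] % P

    @staticmethod
    def add(a, b):
        # homomorphic addition of two ciphertexts
        return ("ct", (a[1] + b[1]) % P)

    @staticmethod
    def mul_plain(a, k):
        # homomorphic multiplication of a ciphertext by a plaintext constant
        return ("ct", (a[1] * k) % P)


def sym_encrypt(values, keystream):
    """Additive stream-cipher-style masking: cheap enough for an end device."""
    return [(v + k) % P for v, k in zip(values, keystream)]


he = MockHE()

# --- client (end device): encrypt ECG features symmetrically, and the
#     keystream homomorphically; both are sent to the server
ecg_features = [12, 7, 30, 5]                     # toy feature vector
keystream = [random.randrange(P) for _ in ecg_features]
sym_ct = sym_encrypt(ecg_features, keystream)
enc_keystream = [he.enc(k) for k in keystream]

# --- server: "transcipher" by homomorphically removing the symmetric mask,
#     obtaining HE ciphertexts of the features without ever seeing them
he_features = [
    MockHE.add(he.enc(c), MockHE.mul_plain(ek, P - 1))  # enc(c) - enc(k) = enc(x)
    for c, ek in zip(sym_ct, enc_keystream)
]

# --- server: evaluate a toy linear classifier entirely on ciphertexts
weights = [3, 1, 2, 4]
score_ct = he.enc(0)
for f_ct, w in zip(he_features, weights):
    score_ct = MockHE.add(score_ct, MockHE.mul_plain(f_ct, w))

# --- client: decrypt only the small final score
print("score from encrypted evaluation:", he.dec(score_ct))
print("plaintext check:                ", sum(x * w for x, w in zip(ecg_features, weights)) % P)
```

The sketch only illustrates why HHE helps constrained devices: the heavy homomorphic machinery stays on the server, while the client performs lightweight symmetric encryption and a single decryption of the result.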
Low Difficulty Summary (written by GrooveSquid.com, original content)
Machine Learning helps us learn from big data, but it can also put people’s privacy at risk. To keep this from happening, scientists have developed ways to make Machine Learning more private. One way is called Hybrid Homomorphic Encryption (HHE). Our team used HHE to create a new way of learning that keeps sensitive information safe on devices like smartphones or tablets. We tested our method with a real-life problem: predicting heart disease based on electrocardiogram (ECG) readings. Our results show that our approach is secure and efficient, making it possible for people to use Machine Learning without putting their privacy at risk.

Keywords

  • Artificial intelligence
  • Machine learning