
New Security Protocol Protects Cloud-Based Server Data


MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep learning computations. (Image: Christine Daniloff, MIT; iStock)

Deep-learning models are being used in many fields, from health care diagnostics to financial forecasting. However, these models are so computationally intensive that they require the use of powerful cloud-based servers.

This reliance on cloud computing poses significant security risks, particularly in areas like health care, where hospitals may be hesitant to use AI tools to analyze confidential patient data due to privacy concerns.

To tackle this pressing issue, MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep-learning computations.

By encoding data into the laser light used in fiber optic communications systems, the protocol exploits the fundamental principles of quantum mechanics, making it impossible for attackers to copy or intercept the information without detection.

Moreover, the technique guarantees security without compromising the accuracy of the deep-learning models. In tests, the researchers demonstrated that their protocol could maintain 96 percent accuracy while ensuring robust security.

“Deep learning models like GPT-4 have unprecedented capabilities but require massive computational resources. Our protocol enables users to harness these powerful models without compromising the privacy of their data or the proprietary nature of the models themselves,” said lead author Kfir Sulimany, an MIT postdoc in the Research Laboratory of Electronics (RLE).

Here is an exclusive Tech Briefs interview, edited for length and clarity, with Sulimany; senior author Dirk Englund, a professor in EECS and principal investigator of the Quantum Photonics and Artificial Intelligence Group and of RLE; MIT postdoc Sri Krishna Vadlamani; and electrical engineering and computer science graduate student Prahlad Iyengar.


Tech Briefs: What was the biggest technical challenge you faced while developing this security protocol?

Sulimany: One of the biggest technical challenges was proving the security of the protocol without compromising the accuracy of the deep learning models or introducing the computational overhead typical of classical encryption methods. We had to carefully balance the quantum properties of light with the requirement for a high signal-to-noise ratio in deep learning. This effort demanded a deep understanding of both the fundamentals of quantum mechanics and machine learning.

Tech Briefs: How did this project come about? What was the catalyst for your work?

Englund: The catalyst for this work came from earlier experiments on distributed machine learning inference between MIT’s main campus and MIT Lincoln Laboratory. It seemed promising to me some years ago that we could build on years of quantum cryptography research to offer something entirely new — physical-layer security for distributed machine learning. However, the theoretical and experimental challenges were substantial. It wasn’t until Kfir joined our team that we could work through the technical details. Kfir’s expertise in both the experimental and theoretical methods enabled us to develop the unified framework underpinning this new protocol.

Tech Briefs: Can you explain in simple terms how it works?

Vadlamani: In simple terms, our protocol uses the quantum properties of light to secure the communication between a client (who owns confidential data) and a server (that holds a confidential deep learning model). The server encodes the deep learning model’s parameters into light waves and sends them to the client, which performs calculations on their private data using the encoded model. The quantum nature of light ensures that the incoming model cannot be copied or intercepted by the client or any eavesdropper without subsequent detection by the server. After the client performs the computation, the light is sent back to the server for verification checks, ensuring that both the client’s data and the server’s model remain secure. The protocol leverages the no-cloning theorem from quantum mechanics to ensure that no sensitive information is leaked during the process.
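
To make the message flow concrete, here is a minimal, purely classical sketch of the exchange Vadlamani describes. The class and method names are hypothetical, and ordinary NumPy arrays stand in for the optical field amplitudes; the real security comes from quantum optics (no-cloning and measurement disturbance), which is not simulated here.

```python
# Hypothetical, classical stand-in for the client-server exchange.
# Arrays play the role of light pulses; quantum effects are not modeled.
import numpy as np

rng = np.random.default_rng(0)

class Server:
    """Holds the confidential model weights."""
    def __init__(self, n_features):
        self.weights = rng.normal(size=n_features)  # proprietary parameters

    def send_encoded_weights(self):
        # Stand-in for encoding the weights onto laser-light amplitudes.
        # A record is kept so the returned "light" can be checked later.
        self.sent = self.weights.copy()
        return self.sent

    def verify_return(self, returned, tolerance=1e-6):
        # Stand-in for the verification step: if the client or an
        # eavesdropper had copied or measured the states, the returned
        # light would deviate detectably from what was sent.
        return np.allclose(returned, self.sent, atol=tolerance)

class Client:
    """Holds confidential data; never keeps a copy of the full weights."""
    def __init__(self, n_features):
        self.data = rng.normal(size=n_features)  # private input vector

    def compute(self, encoded_weights):
        # Local linear operation w . x on the incoming "light",
        # after which the light is passed back to the server.
        activation = float(encoded_weights @ self.data)
        return activation, encoded_weights

server = Server(n_features=8)
client = Client(n_features=8)

light = server.send_encoded_weights()
activation, returned_light = client.compute(light)

print("client obtained activation:", round(activation, 4))
print("server verification passed:", server.verify_return(returned_light))
```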


Tech Briefs: The article I read says, “In the future, the researchers want to study how this protocol could be applied to a technique called federated learning, where multiple parties use their data to train a central deep-learning model. It could also be used in quantum operations, rather than the classical operations they studied for this work, which could provide advantages in both accuracy and security.” Do you have any plans for this? What are your next steps, where do you go from here?

Iyengar: Yes, we are exploring how our protocol could be adapted for federated learning. In federated learning, multiple parties contribute data to train a shared model, and our protocol could help ensure that each party’s data remains private throughout the process. We are also considering expanding the protocol to enable the legitimate parties to use quantum operations rather than just the eavesdropper, which could lead to further improvements in both security and accuracy. Additionally, we are interested in exploring other applications in fields where data privacy is critical, such as health care and finance, which increasingly rely on AI models and bring their own additional constraints.
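
For readers unfamiliar with federated learning, here is a generic federated-averaging sketch (standard FedAvg, not the MIT protocol): each party trains locally on its own data, and only model updates, never raw data, reach the central server. The parties and data below are hypothetical placeholders.

```python
# Generic FedAvg sketch with synthetic data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One party's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

# Three hypothetical parties, each holding data they never share.
parties = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):
    # Each party refines the current global model on its private data...
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    # ...and the server only averages the returned parameters.
    global_w = np.mean(local_ws, axis=0)

print("global model after 10 rounds:", np.round(global_w, 3))
```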

Tech Briefs: Do you have any updates you can share?

Sulimany: We are currently working on several projects that extend the security guarantees of our protocol in real-world scenarios. For example, we aim to test the protocol in larger-scale systems in collaboration with Professor Eleni Diamanti at CNRS. Additionally, we plan to evaluate the system on more complex machine learning tasks, which will provide a clearer understanding of its potential for broader applications.

