enterprise

Anthropic’s red team methods are a needed step to close AI security gaps


It’s time to celebrate the incredible women leading the way in AI! Nominate your inspiring leaders for VentureBeat’s Women in AI Awards today before June 18. Learn More


AI red teaming is proving effective in discovering security gaps that other security approaches can’t see, saving AI companies from having their models used to produce objectionable content.

Anthropic released its AI red team guidelines last week, joining a group of AI providers that include Google, Microsoft, NIST, NVIDIA and OpenAI, who have also released comparable frameworks.

The goal is to identify and close AI model security gaps

All announced frameworks share the common goal of identifying and closing growing security gaps in AI models.

It’s those growing security gaps that have lawmakers and policymakers worried and pushing for more safe, secure, and trustworthy AI. The Safe, Secure, and Trustworthy Artificial Intelligence (14110) Executive Order (EO) by President Biden, which came out on Oct. 30, 2018, says that NIST “will establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.”


VB Transform 2024 Registration is Open

Join enterprise leaders in San Francisco from July 9 to 11 for our flagship AI event. Connect with peers, explore the opportunities and challenges of Generative AI, and learn how to integrate AI applications into your industry. Register Now


NIST released two draft publications in late April to help manage the risks of generative AI. They are companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF).

Germany’s Federal Office for Information Security (BSI) provides red teaming as part of its broader IT-Grundschutz framework. Australia, Canada, the European Union, Japan, The Netherlands, and Singapore have notable frameworks in place. The European Parliament passed the  EU Artificial Intelligence Act in March of this year.

Read More   Invoke AI rolls out refined control features for image generation

Red teaming AI models rely on iterations of randomized techniques

Red teaming is a technique that interactively tests AI models to simulate diverse, unpredictable attacks, with the goal of determining where their strong and weak areas are. Generative AI (genAI) models are exceptionally difficult to test as they mimic human-generated content at scale.

The goal is to get models to do and say things they’re not programmed to do, including surfacing biases. They rely on LLMs to automate prompt generation and attack scenarios to find and correct model weaknesses at scale. Models can easily be “jailbreaked” to create hate speech, pornography, use copyrighted material, or regurgitate source data, including social security and phone numbers.

A recent VentureBeat interview with the most prolific jailbreaker of ChatGPT and other leading LLMs illustrates why red teaming needs to take a multimodal, multifaceted approach to the challenge.

Red teaming’s value in improving AI model security continues to be proven in industry-wide competitions. One of the four methods Anthropic mentions in their blog post is crowdsourced red teaming. Last year’s DEF CON hosted the first-ever Generative Red Team (GRT) Challenge, considered to be one of the more successful uses of crowdsourcing techniques. Models were provided by Anthropic, Cohere, Google, Hugging Face, Meta, Nvidia, OpenAI, and Stability. Participants in the challenge tested the models on an evaluation platform developed by Scale AI.

Anthropic releases their AI red team strategy

In releasing their methods, Anthropic stresses the need for systematic, standardized testing processes that scale and discloses that the lack of standards has slowed progress in AI red teaming industry-wide.

Read More   Enterprise Blockchain: How Does It Help Businesses? - Bitrates

“In an effort to contribute to this goal, we share an overview of some of the red teaming methods we have explored and demonstrate how they can be integrated into an iterative process from qualitative red teaming to the development of automated evaluations,” Anthropic writes in the blog post.

The four methods Anthropic mentions include domain-specific expert red teaming, using language models to red team, red teaming in new modalities, and open-ended general red teaming.

Anthropic’s approach to red teaming ensures human-in-the-middle insights enrich and provide contextual intelligence into the quantitative results of other red teaming techniques. There’s a balance between human intuition and knowledge and automated text data that needs that context to guide how models are updated and made more secure.

An example of this is how Anthropic goes all-in on domain-specific expert teaming by relying on experts while also prioritizing Policy Vulnerability Testing (PVT), a qualitative technique to identify and implement security safeguards for many of the most challenging areas they’re being compromised in. Election interference, extremism, hate speech, and pornography are a few of the many areas in which models need to be fine-tuned to reduce bias and abuse.  

Every AI company that has released an AI red team framework is automating their testing with models. In essence, they’re creating models to launch randomized, unpredictable attacks that will most likely lead to target behavior. “As models become more capable, we’re interested in ways we might use them to complement manual testing with automated red teaming performed by models themselves,” Anthropic says.  

Relying on a red team/blue team dynamic, Anthropic uses models to generate attacks in an attempt to cause a target behavior, relying on red team techniques that produce results. Those results are used to fine-tune the model and make it hardened and more robust against similar attacks, which is core to blue teaming. Anthropic notes that “we can run this process repeatedly to devise new attack vectors and, ideally, make our systems more robust to a range of adversarial attacks.”

Read More   Wizards of the Coast reveals a big slate of titles through 2026 at Gen Con

Multimodal red teaming is one of the more fascinating and needed areas that Anthropic is pursuing. Testing AI models with image and audio input is among the most challenging to get right, as attackers have successfully embedded text into images that can redirect models to bypass safeguards, as multimodal prompt injection attacks have proven. The Claude 3 series of models accepts visual information in a wide variety of formats and provide text-based outputs in responses. Anthropic writes that they did extensive testing of multimodalities of Claude 3 before releasing it to reduce potential risks that include fraudulent activity, extremism, and threats to child safety.

Open-ended general red teaming balances the four methods with more human-in-the-middle contextual insight and intelligence. Crowdsourcing red teaming and community-based red teaming are essential for gaining insights not available through other techniques.

Protecting AI models is a moving target

Red teaming is essential to protecting models and ensuring they continue to be safe, secure, and trusted. Attackers’ tradecraft continues to accelerate faster than many AI companies can keep up with, further showing how this area is in its early innings. Automating red teaming is a first step. Combining human insight and automated testing is key to the future of model stability, security, and safety.



READ SOURCE

This website uses cookies. By continuing to use this site, you accept our use of cookies.