In recent years, regulators have placed increasing emphasis on the role of red-teaming in AI risk management. The EU AI Act requires adversarial testing of general-purpose AI, the UK identifies red-teaming as an emerging process for frontier AI safety, and the Hiroshima AI Process recommends red-teaming as part of AI risk management programs. Yet while awareness of the potential of AI red-teaming practices is rising, standardized best practices for designing and implementing red-teaming efforts are still lacking. This workshop presents a unique opportunity to explore the concept of generative AI red-teaming and its applications in mitigating the privacy and security risks associated with AI systems. Through a collaborative policy prototyping approach and sector-specific use cases, participants will engage in hands-on design and in-depth discussions to identify real-world challenges and ideate potential solutions leveraging AI red-teaming approaches.