《Generative AI Application Security Testing and Validation Standard》
This standard document provides a framework for testing and validating the security of Generative AI applications. The framework covers key areas across the AI application lifecycle, including Base Model Selection, Embedding and Vector Databases in the Retrieval-Augmented Generation (RAG) design pattern, Prompt Execution/Inference, Agentic Behaviors, Fine-Tuning, Response Handling, and AI Application Runtime Security. The primary objective is to ensure that AI applications behave securely and according to their intended design throughout their lifecycle.

By providing testing and validation standards and guidelines for each layer of the AI Application Stack, with a focus on security and compliance, this document aims to help developers and organizations enhance the security and reliability of AI applications built using LLMs, mitigate potential security risks, improve overall quality, and promote the responsible development and deployment of AI technologies.

The AI STR program represents a paradigm shift in how we approach the development and deployment of AI technologies. By championing safety, trust, and responsibility in AI systems, it lays the groundwork for a more ethical, secure, and equitable digital future, where AI technologies serve as enablers of progress rather than as sources of uncertainty and harm. The Generative AI Application Security Testing and Validation Standard is one of the AI STR standards.