At Snowflake, we recognize the potential of artificial intelligence (AI) to accelerate innovation, improve quality of life, and address the world’s biggest data challenges. As we advance in the AI space, we are committed to responsibly developing AI technologies to support their ethical use and reduce potential harms and unintended consequences.
Our Responsible AI Principles demonstrate our commitment to innovating and developing AI in a manner that aligns with our core values. By aiming to uphold these principles, we hope to harness the transformative impact of AI in order to contribute positively to society and build a future where technology serves the greater good.
The processes behind AI features should be understandable so customers can make informed decisions about the features they use. We strive to offer clear information about the development, purpose, and operations of our AI features to help customers understand how they are built and operate.
AI features should be developed with clear roles and responsibilities, oversight, and audit mechanisms in place. We encourage feedback from customers, users, and other affected parties and prioritize accountability at all stages of the AI lifecycle—from design and deployment to ongoing monitoring and adaptation—to help customers comply with applicable laws, internal policies, and ethical standards.
AI features should be designed to minimize the risk of improper bias. We seek to avoid unjust impacts on people, particularly those related to sensitive characteristics.
AI features should amplify human capabilities to solve real-world challenges. This principle requires shared responsibility: Snowflake is responsible for providing the AI features that allow appropriate human direction and control, while our customers are responsible for defining, deploying, and monitoring their use of these AI features.
AI features should be resilient, consistent, and dependable under a wide range of conditions, with mechanisms for ongoing monitoring and validation. We are committed to developing AI features that operate effectively, with a focus on delivering reliable outputs and reducing hallucinations.
Privacy and security principles should be incorporated into the development of AI features. This principle also requires shared responsibility: Snowflake is responsible for offering security and privacy settings to help protect Customer Data when AI features are used, while our customers are responsible for configuring such settings to support their compliance with industry standards and legal requirements.
We follow these Responsible AI Principles with respect to the AI features and proprietary models that we develop. As explained in our AI Trust and Safety FAQs, our AI features may also be powered by licensed third-party open-source and proprietary models. In such cases, customers are encouraged to review the AI development principles published by the applicable third-party model developers.
Snowflake strives to integrate ethical principles throughout the AI development lifecycle. Here are some of the ways Snowflake incorporates its Responsible AI Principles into its AI features to help deliver meaningful results while cultivating an honest, trustworthy and fair environment.
Snowflake offers Cortex Guard, a feature that enables customers to easily implement safeguards that filter out potentially hateful, violent, inappropriate or unsafe large language model responses.
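For teams evaluating this capability, here is a minimal sketch of how the filter is typically enabled: Cortex Guard is switched on by passing a guardrails option to the SNOWFLAKE.CORTEX.COMPLETE function. The account settings, warehouse, model choice, and prompt below are placeholders, and the example assumes the Snowflake Python connector is installed.

```python
# Minimal sketch: enabling Cortex Guard on a Cortex COMPLETE call.
# Account, user, and warehouse values are placeholders for your own settings.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",   # placeholder
    user="<user_name>",               # placeholder
    warehouse="<warehouse_name>",     # placeholder
    authenticator="externalbrowser",
)

sql = """
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large',
    [{'role': 'user', 'content': 'Draft a polite reply to this customer complaint.'}],
    {'guardrails': true}  -- enables Cortex Guard filtering of the response
) AS response;
"""

row = conn.cursor().execute(sql).fetchone()
print(row[0])  # unsafe completions come back replaced by a filter message
conn.close()
```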
By explicitly defining "AI Data" and distinguishing it from "Customer Data," Snowflake provides customers with clear expectations about how their data is used, reducing ambiguity in AI-related processing.
Snowflake does not use Customer Data to train any AI models that are made available for use across its customer base.
Snowflake’s Observability for LLM Apps / ML Models and Model Explainability features allow users to (i) understand how a model arrives at its final conclusion and (ii) detect model weaknesses and degradation by noticing unintuitive behavior in production.
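Snowflake surfaces these signals through its own tooling; as a generic illustration of what per-prediction explainability provides (not Snowflake's API), the sketch below uses the open-source shap library with a small scikit-learn model to produce feature attributions showing which inputs drove a given prediction.

```python
# Generic explainability illustration (not Snowflake's API): per-feature
# attributions make an individual prediction auditable.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X.iloc[:5])  # one attribution vector per row

# Each vector plus the explainer's expected value reconstructs the model's
# prediction, so reviewers can trace how the model reached its conclusion.
for feature, value in zip(X.columns, attributions[0]):
    print(f"{feature:>8}: {value:+.4f}")
```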
Snowflake’s Horizon Catalog empowers customers to manage data governance by implementing role-based access controls over their data and AI applications.
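As a rough sketch of what this looks like in practice, the example below creates a read-only role and grants it scoped access using standard Snowflake GRANT statements executed through the Python connector; the database, schema, role, and user names are hypothetical placeholders.

```python
# Sketch of role-based access control: a read-only role scoped to one schema.
# All object names below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",  # placeholder
    user="<admin_user>",             # placeholder
    authenticator="externalbrowser",
)
cur = conn.cursor()

for stmt in [
    "CREATE ROLE IF NOT EXISTS analyst_readonly",
    "GRANT USAGE ON DATABASE sales_db TO ROLE analyst_readonly",
    "GRANT USAGE ON SCHEMA sales_db.reporting TO ROLE analyst_readonly",
    "GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.reporting TO ROLE analyst_readonly",
    "GRANT ROLE analyst_readonly TO USER some_analyst",
]:
    cur.execute(stmt)

conn.close()
```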
Snowflake offers Model-Level RBAC, which allows customers to manage access to different models within an organization.
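A minimal sketch of the idea follows, assuming a model registered as a schema-level object and hypothetical role and model names; the exact privilege grammar for MODEL objects should be confirmed against current Snowflake documentation.

```python
# Hedged sketch: limiting which roles may invoke a registered model.
# Model, schema, and role names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",  # placeholder
    user="<admin_user>",             # placeholder
    authenticator="externalbrowser",
)

# Only members of ml_consumers may call the model; other roles get no access.
conn.cursor().execute(
    "GRANT USAGE ON MODEL ml_db.models.churn_model TO ROLE ml_consumers"
)
conn.close()
```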