Retrieval-augmented generation (RAG)

Create enterprise-grade RAG apps with Snowflake Cortex AI, fast.

  • Overview
  • What is RAG?
  • What are the benefits of RAG?
  • Where are RAG techniques used?
  • How does RAG work?
  • RAG and Snowflake
  • Customers
  • RAG Resources

Overview

RAG is a popular framework in which a large language model (LLM) draws on a specific knowledge base to generate a response. Because the foundation model does not need to be retrained, developers can apply LLMs to a specific context quickly and cost-effectively. RAG apps can be used for customer service, sales, marketing, knowledge bases and more.

With Snowflake Cortex AI, you can build and deploy LLM apps that learn the unique nuances of your business and data in minutes. And since Snowflake provides industry-leading LLMs, vector search and Streamlit app-building capabilities all in a fully managed service, you can easily create production-ready RAG apps.

What is retrieval-augmented generation, or RAG?

Retrieval-Augmented Generation (RAG) is a technique that enhances a foundation model's (a large language model, or LLM) output by referencing an external knowledge base beyond its original training data.

LLMs, trained on vast datasets with billions of parameters, excel at tasks like question answering, translation and sentence completion. RAG extends these capabilities by allowing the model to access specific domains or an organization's internal knowledge without having to undergo retraining. This cost-effective approach improves the accuracy, relevance and usefulness of LLM app outputs in various contexts.

What are the benefits of using retrieval-augmented generation?

1. RAG addresses the limitations of using LLMs alone

LLMs rely on static training data, which may not include the most current or organization-specific information. Without guidance on authoritative sources, LLMs may generate inaccurate or inconsistent responses, especially when faced with conflicting terminology. When uncertain, LLMs might "hallucinate" or fabricate answers. RAG mitigates these issues by providing controlled access to up-to-date, authoritative sources, resulting in more accurate and reliable responses.

2. RAG delivers higher-quality outputs that can be tracked to a specific source

For LLMs to be useful, they must provide consistently reliable, authoritative responses. RAG enables response traceability to specific references and allows for the inclusion of source citations, which enhances the transparency and trustworthiness of the generated content.
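
In practice, one simple way to enable citations is to number the retrieved passages in the prompt and ask the model to reference them. A minimal sketch in Python (the function and data shapes are illustrative, not a specific product API):

```python
# Illustrative sketch: number each retrieved chunk so the model can cite it.
# The (title, text) pairing is an assumed structure, not a fixed schema.
def build_cited_prompt(question: str, retrieved: list[tuple[str, str]]) -> str:
    sources = "\n".join(
        f"[{i}] ({title}) {text}" for i, (title, text) in enumerate(retrieved, 1)
    )
    return (
        "Answer the question using only the numbered sources below, "
        "and cite them like [1] in your answer.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
```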

3. RAG ensures up-to-date answers in a cost-effective way

In dynamic industries, information quickly becomes outdated. RAG allows pre-trained models to access current information without expensive fine-tuning. This approach enables LLMs to incorporate real-time data from various sources, including news feeds, social media, financial reports and IoT sensors, ensuring relevance and accuracy.

4. RAG gives more control to app developers

RAG empowers developers with greater flexibility to create tailored, purpose-built solutions. With a security framework around RAG, app developers can allow controlled access to sensitive information, ensuring that restricted data is only used when formulating responses for authorized individuals.
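
One way such a security framework can work is to filter the knowledge base by the requester's entitlements before retrieval, as in this hypothetical sketch (the field names are assumptions):

```python
# Sketch of retrieval-time access control, assuming each chunk carries an
# access label; the field names here are illustrative.
def authorized_chunks(chunks: list[dict], user_groups: set[str]) -> list[dict]:
    # Restrict the searchable corpus to what the requester may see, so
    # restricted data never reaches the prompt for unauthorized users.
    return [c for c in chunks if c["access_group"] in user_groups]
```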

Where are retrieval-augmented generation techniques used?

With the rapid advancement of gen AI, RAG has become an integral component of many AI-powered systems, particularly in chatbot and knowledge management applications.

1. Employee access to internal knowledge bases, such as HR, product or service information:

RAG applications enhance employee access to proprietary information within domain-specific knowledge bases, like company intranets or internal documentation systems. These systems allow employees to ask specific questions using natural language (e.g., "What's our company's parental leave policy?" or "How do I request time off?") and receive responses generated from the organization's internal knowledge base. RAG ensures more accurate, contextually relevant answers and can provide personalized information based on the requester's authorization level and role within the company.

2. Market or business intelligence:

By leveraging continuously updated market data and internal reports, RAG enhances the quality and timeliness of business intelligence activities. This allows organizations to make data-driven decisions, identify emerging trends and gain a competitive edge. RAG can synthesize information from multiple sources, providing comprehensive insights that might be overlooked in traditional analysis methods.

3. Intelligent customer support:

LLM-powered customer service chatbots enhanced with RAG can handle a wide range of tasks, including product support, issue resolution and claims processing. RAG provides real-time access to accurate, verified content, including things like up-to-date product information, order status and individual customer data. This allows chatbots to deliver highly contextual and personalized responses, improving customer satisfaction and reducing the workload on human support agents.

4. Customer self-service access to information:

Public-facing RAG-enabled chatbots offer 24/7 access to marketing, sales, product or service information. These systems can quickly navigate vast knowledge bases to provide users with relevant, up-to-date information at any time. This not only improves customer experience but also reduces the volume of basic inquiries that human staff has to handle, allowing them to focus on more complex issues.

How does RAG work and what do teams need to deploy a RAG framework?

Client/App UI

End users interact with the knowledge base, typically through a chat interface or question-answering system.
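
A minimal sketch of this layer using open-source Streamlit chat elements (the backend call is a stub standing in for the retrieval and generation steps described below):

```python
# Minimal Streamlit chat front end for a RAG app. `rag_answer` is a stub
# standing in for the retrieval-plus-generation steps covered below.
import streamlit as st

def rag_answer(question: str) -> str:  # stub; wire up real retrieval + LLM here
    return f"(placeholder answer for: {question})"

st.title("Knowledge base assistant")
if "history" not in st.session_state:
    st.session_state.history = []

for role, text in st.session_state.history:  # replay prior turns
    st.chat_message(role).markdown(text)

if question := st.chat_input("Ask a question"):
    st.chat_message("user").markdown(question)
    answer = rag_answer(question)
    st.chat_message("assistant").markdown(answer)
    st.session_state.history += [("user", question), ("assistant", answer)]
```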

Context Repository

Relevant data sources are aggregated, governed and continuously updated to provide an up-to-date knowledge repository. This includes preprocessing steps like chunking and embedding the text.
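
A simple character-based chunker illustrates the idea; the sizes are illustrative, and production pipelines often use library splitters such as LangChain's:

```python
# Sketch of the chunking step: split documents into overlapping windows so
# each chunk stays self-contained. Sizes are illustrative; tune for your data.
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start : start + size])
        start += size - overlap  # overlap preserves context across boundaries
    return chunks
```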

Search

A vector store maintains the numerical representations (embeddings) of the knowledge base. Semantic search is used to retrieve the most relevant chunks of information based on the user's query.
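
In its simplest form, ignoring the approximate-nearest-neighbor indexes real vector stores use, retrieval reduces to a similarity ranking, as in this sketch:

```python
import numpy as np

# Sketch of semantic search over an in-memory vector store: with
# unit-normalized embeddings, a dot product equals cosine similarity.
def top_k(query_vec: np.ndarray, chunk_vectors: np.ndarray, k: int = 5) -> list[int]:
    scores = chunk_vectors @ query_vec             # one score per chunk
    return np.argsort(scores)[-k:][::-1].tolist()  # most similar first
```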

LLM inference

The system embeds the user’s question and retrieves relevant context from the vector store. This context is then used to prompt an LLM, which generates a contextualized response based on both the question and the retrieved information. 
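
Putting the pieces together, the inference step looks roughly like the sketch below, where the embed and complete callables are hypothetical stand-ins for your chosen embedding model and LLM:

```python
import numpy as np
from typing import Callable

# End-to-end sketch of the inference step. `embed` and `complete` are
# hypothetical callables standing in for your embedding model and LLM.
def rag_answer(
    question: str,
    chunks: list[str],
    chunk_vectors: np.ndarray,            # precomputed, unit-normalized
    embed: Callable[[str], np.ndarray],
    complete: Callable[[str], str],
    k: int = 3,
) -> str:
    scores = chunk_vectors @ embed(question)  # rank chunks by similarity
    context = "\n\n".join(chunks[i] for i in np.argsort(scores)[-k:][::-1])
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)                   # contextualized response
```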

To build a truly enterprise-grade RAG app, organizations must consider additional components:

  • Embedding model: Used to convert text into vector representations for both the knowledge base and user queries (see the sketch after this list).

  • Data pipeline: Ensures the continuous update and maintenance of the knowledge base.

  • Evaluation and monitoring: Tools to assess the quality of responses and system performance.
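
For the embedding model in particular, here is a brief sketch using the open-source sentence-transformers library; the specific model is one common choice, not a requirement:

```python
# Sketch using the open-source sentence-transformers library as the
# embedding model; the model name is one common choice, not a requirement.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
chunks = ["Parental leave is twelve weeks.", "PTO requests go through the HR portal."]
chunk_vectors = model.encode(chunks, normalize_embeddings=True)  # knowledge base
query_vector = model.encode("What is our leave policy?", normalize_embeddings=True)
scores = chunk_vectors @ query_vector                            # cosine similarity
```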

RAG apps and Snowflake

From RAG to rich LLM apps in minutes with Snowflake Cortex AI

  • Rich AI and data capabilities: Developing and deploying an end-to-end AI app using RAG is possible without integrations, infrastructure management or data movement using three key features: Snowflake Cortex AI, Streamlit in Snowflake and Snowpark.
  • Cortex Search for hybrid search: Cortex Search is a key feature of Snowflake Cortex AI, enabling advanced retrieval by combining semantic (vector) and keyword search. It automates the creation of embeddings and delivers high-quality, efficient data retrieval without complex infrastructure management, search-quality parameter tuning or ongoing index refreshes.
  • Create a RAG UI quickly in Streamlit: Use Streamlit in Snowflake for out-of-the-box chat elements to quickly build and share user interfaces — all in Python.
  • Context repository with Snowpark: The knowledge repository can be easily updated and governed using Snowflake stages. Once documents are loaded, all of your data preparation, including generating chunks (smaller, contextually rich blocks of text), can be done with Snowpark. For the chunking in particular, teams can seamlessly use LangChain as part of a Snowpark User Defined Function.
  • Secure LLM inference: Snowflake Cortex completes the workflow with serverless functions for embedding and text completion inference (using Mistral AI, Llama, Gemma, Arctic or other LLMs available within Snowflake); a minimal end-to-end sketch follows this list.
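
As a rough end-to-end illustration of these pieces in Snowpark Python, the sketch below retrieves relevant chunks and generates an answer with Snowflake Cortex SQL functions. The table, column and model names are assumptions for the example, and a Cortex Search service can replace the manual vector comparison shown here:

```python
# Illustrative Snowpark sketch of retrieval + completion with Snowflake
# Cortex SQL functions. Assumes a DOC_CHUNKS table with a CHUNK_TEXT column
# and a CHUNK_EMBEDDING vector column populated by a data pipeline.
from snowflake.snowpark import Session

connection_parameters = {"account": "...", "user": "...", "password": "..."}
session = Session.builder.configs(connection_parameters).create()

question = "What is our parental leave policy?"
answer = session.sql(
    """
    WITH best_chunks AS (
        SELECT chunk_text
        FROM doc_chunks
        ORDER BY VECTOR_COSINE_SIMILARITY(
            chunk_embedding,
            SNOWFLAKE.CORTEX.EMBED_TEXT_768('e5-base-v2', ?)
        ) DESC
        LIMIT 3
    )
    SELECT SNOWFLAKE.CORTEX.COMPLETE(
        'mistral-large',
        'Answer using only this context: ' || LISTAGG(chunk_text, ' ') ||
        ' Question: ' || ?
    ) AS answer
    FROM best_chunks
    """,
    params=[question, question],
).collect()[0]["ANSWER"]
print(answer)
```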

How Snowflake customers are using RAG

Real Snowflake customers are saving time for their teams, boosting productivity and cutting costs by using RAG apps in Snowflake.