Generative AI is transforming how enterprises extract insights from sensitive data like documents and images — but cost and performance remain key challenges. Organizations are seeing real ROI by choosing platforms that support both large-scale batch NLP inference and real-time conversational AI.
In this webinar, Snowflake AI experts will share insights and demos on how to:
- Develop high-quality, conversational apps faster for self-service analytics
- Optimize natural language processing (NLP) pipeline performance with cost-effective LLM batch inference
- Serve open-source LLMs and custom embedding models for inference with managed GPUs
Don’t miss this opportunity to see how Snowflake makes gen AI easy, efficient, and enterprise-ready with built-in trust.
In Partnership with:
Speakers

Grace Adamson
AI Senior Product Marketing Manager
Snowflake

Joseph Toma
Microsoft UK Financial Services Data & AI Leader
Microsoft

Tom Christian
AI Specialist
Snowflake

Michael Taylor
Sr. Manager, AI Specialist Team
Snowflake

Fabian Gampfer
Principal AI Specialist
Snowflake

Varun Khandelwal
Principal AI Specialist
Snowflake

Dash Desai
Senior Lead Developer Advocate
Snowflake