Data warehouses are at the center of any modern data platform. They have to integrate well with various data sources, business applications, analytics engines, business intelligence, and ETL tools. The sheer number of connections and integration points makes integrating structured and semi-structured data nearly impossible for legacy on-premises and cloud data warehouses. In this track, hear from fellow Snowflake customers and learn how to simplify your data platform and reduce latency for your analytics with Snowflake. You’ll learn what’s possible when you migrate, load, and integrate varied data types and data sources into a single location: Snowflake.
Integrating and Streaming Data
Integrating and Streaming Data Sessions
DocuSign: Getting the Most out of Your Migration to the Cloud
Brought to you by Matillion
Have you achieved all of your data analytics goals by moving from on-premises to the cloud? DocuSign needed to efficiently move large amounts of data and ensure its data remained secure in transit. After an in-depth evaluation, DocuSign selected Snowflake and Matillion ETL for Snowflake to facilitate its journey to the cloud and meet its well-defined goals around performance, scale, security, and cost savings. Attend this session to learn how Snowflake and Matillion helped DocuSign improve its business intelligence infrastructure, lower costs, meet data governance requirements, and improve performance.
SPEAKER
Robert Parker
Senior Director, Business Intelligence, DocuSign
Minted: Moving to Snowflake for Advanced Data Capabilities
Brought to you by Fivetran
Are you getting the most out of your data analytics platform? Learn why Minted, an online marketplace, chose to migrate its data to Snowflake, the positive impact that decision has had on its business decisions and culture, and how Fivetran made the migration both possible and easy. Minted will also discuss how Salesforce and MySQL connectors from Fivetran added value to its modern data stack and how the Snowflake and Fivetran combination has improved data access across the organization.
SPEAKER
Nick James
Senior Data Engineer, Minted
PACCAR: Trucking Your Data Into a Single Source of Truth
Brought to you by Attunity
Migrating all your data types to a single repository is likely one of the main reasons you chose Snowflake. PACCAR is a global technology leader in the design, manufacturing, and customer support of premium light, medium, and heavy-duty trucks under the Kenworth, Peterbilt, and DAF nameplates. Learn how PACCAR is continuously migrating data to Snowflake from a wide variety of sources, including SQL Server, DB2 on iSeries, DB2 on z/OS, IMS, Oracle, SAP, and AWS sources, then using it to enable analytics, master data management, and application modernization workloads.
SPEAKER
Dallas Thornton
Director, Digital Services, PACCAR
REI: Taking Real-time Data for a Hike
Brought to you by Attunity
REI is a national outdoor retail co-op dedicated to inspiring, educating, and outfitting its members and the community for a lifetime of outdoor adventure and stewardship. Attend this session to learn about REI’s journey into near real-time and self-service analytics, and why the organization chose Snowflake and Attunity as the foundation of the co-op’s next-generation analytics platform. REI will also discuss how it uses Attunity’s change data capture (CDC) technology to move real-time data from multiple sources (Oracle, SQL Server, and DB2) to Snowflake on AWS, where the data powers supply chain and SKU analysis.
SPEAKER
Robert Dobak
Senior Database Engineer, REI
Talroo Migrated from MySQL to Snowflake and Databricks with Staggering Results
Brought to you by Databricks
Talroo’s MySQL-based analytics system was slow, hard to use, and unable to scale, and it presented many points of failure. Attend this session and learn how Talroo:
- Reduced the personnel needed to manage a production data pipeline
- Now processes 10x more data than before
- Uses its new architecture to deliver hugely impactful insights to its customers
SPEAKER
Aaron Smith
Data Engineer, Talroo
White Ops: Delivering Data at Speed for Critical Use Cases
Do you need to transform data at speed for critical business use cases? Find out how White Ops, a leading provider of cybersecurity services, continually transforms disparate data into a single structured format to detect and characterize new fraud patterns. Attend this session to learn how White Ops consumes and transforms data from its Kafka data bus to Snowflake through its ETL pipeline. White Ops uses AWS’s “serverless” architecture coupled with Spark to extract data from Kafka to S3, and uses Snowflake’s native load utility to load the data into Snowflake quickly. White Ops will also discuss other Snowflake features it uses, including compression techniques, clustering methods, and best use of Snowflake’s independent and near-infinite compute power.
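As a rough sketch of the final loading step described above (not White Ops’ actual code; the stage, table, and warehouse names are placeholders), a bulk load of Spark-staged S3 files might look like this with the Snowflake Python connector:

```python
# Hypothetical sketch: bulk-load files that Spark has written to an external
# S3 stage into a Snowflake table. All object names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",     # placeholder account identifier
    user="loader_user",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO pulls the staged JSON files into the target table in bulk.
    cur.execute("""
        COPY INTO raw_events
        FROM @s3_events_stage/detections/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = CONTINUE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```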
SPEAKER
Ram Narayanan
Manager, Data Engineering, White Ops
Best Practices for Troubleshooting Integrating and Streaming Data
Snowflake Support will show you the various components involved in making a successful and secure connection to Snowflake in the cloud. This session covers best practices for Snowflake drivers such as ODBC, JDBC, Python, and Spark. It also presents common connectivity errors along with troubleshooting and debugging techniques aided by effective parsing of driver logs. Learn about data ingestion scenarios using Snowpipe and COPY, how to detect common issues quickly, and best practices for low-latency, high-volume data loads.
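For the Python driver specifically, a common first step when gathering the driver logs this session refers to (an assumed workflow, not the session’s exact material) is to raise the connector’s log level:

```python
# Turn on verbose logging for the Snowflake Python connector so that
# connection and query activity is written to a log file for inspection.
import logging
import snowflake.connector

logging.basicConfig(
    filename="snowflake_connector.log",  # hypothetical log destination
    level=logging.DEBUG,
)
logging.getLogger("snowflake.connector").setLevel(logging.DEBUG)

# Any connection attempt now produces detailed driver logs to parse.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********"
)
conn.close()
```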
SPEAKERS
Kelvin Youk
Customer Technical Support Engineer, Snowflake
Jimmy Anderson
Lead Customer Support Engineer, Snowflake
Continuous Change Data Capture at Scale in a Fluid World
The session “How to Ingest Streaming Data Continuously Using Snowpipe” showed how to stream data into Snowflake. This session goes a step further by extending that use case to change data capture (CDC). This theater session features live demonstrations that show how CDC data can be materialized into Snowflake using stored procedures and ecosystem partner products that simplify the code structure and accelerate time to value.
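As a generic illustration of the pattern (not the demo code itself; table and column names are invented), CDC change rows staged in Snowflake are often materialized with a MERGE statement:

```python
# Illustrative MERGE of staged CDC change rows into a target table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="ETL_WH", database="ANALYTICS", schema="CDC",
)
try:
    conn.cursor().execute("""
        MERGE INTO customers AS tgt
        USING customer_changes AS src
          ON tgt.customer_id = src.customer_id
        WHEN MATCHED AND src.op = 'D' THEN DELETE
        WHEN MATCHED THEN UPDATE SET tgt.name = src.name, tgt.email = src.email
        WHEN NOT MATCHED AND src.op <> 'D' THEN
          INSERT (customer_id, name, email)
          VALUES (src.customer_id, src.name, src.email)
    """)
finally:
    conn.close()
```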
SPEAKERS
Craig Warman
Sr. Sales Engineer, Snowflake
Ryan Templeton
Sr. Sales Engineer, Snowflake
Data Modeling: Making Sense of Schema on Read
With the increasing prevalence of semi-structured data from IoT devices, web logs, and other sources, data architects and modelers must learn how to interpret and project data from formats like JSON. Although the concept of loading data without upfront modeling is appealing to many, to make sense of the data and use it to drive business value, we have to turn that schema-on-read data into a real schema. That means data modeling. This session will walk through both simple and complex JSON documents, decompose them, and then turn them into a representative data model. It will show you how they might look using both traditional 3NF and data vault styles of modeling.
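As a small taste of the decomposition the session walks through (the JSON shape, table, and column names below are invented), nested JSON in a VARIANT column can be projected into relational rows with Snowflake’s FLATTEN function:

```python
# Query a VARIANT column holding JSON and project nested elements into
# relational columns. Table and path names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="BI_WH", database="ANALYTICS", schema="RAW",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT
            src.payload:device_id::string     AS device_id,
            reading.value:ts::timestamp_ntz   AS reading_ts,
            reading.value:temperature::float  AS temperature
        FROM iot_raw AS src,
             LATERAL FLATTEN(input => src.payload:readings) AS reading
    """)
    for row in cur:
        print(row)
finally:
    conn.close()
```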
SPEAKER
Kent Graziano
Chief Technical Evangelist, Snowflake
Get the Most Out of Snowflake’s Partner Ecosystem
Snowflake unlocks the full potential of data warehousing in the cloud for a broad array of tools and partners. From data management to analytics, our partnerships and integrations enable customers to leverage Snowflake’s flexibility, performance, and ease of use to deliver more meaningful data insights. If you are a Snowflake customer, attend this session to get an overview of Snowflake’s technology partner ecosystem. If you’re considering becoming a partner, learn about the latest enablement tools available to you.
SPEAKER
Harsha Kapre
Senior Product Manager, Snowflake
How to Ingest Streaming Data Continuously Using Snowpipe
Snowflake sales engineers will show you how to stream data directly into Snowflake using Snowpipe and stored procedures, and use this data as a metric for business processes. This session will examine use cases that capture and enrich industrial IoT data at scale using supervisory control and data acquisition (SCADA) systems and provide an overview of complementary partner solutions.
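A minimal Snowpipe definition with auto-ingest (illustrative object names, not the exact demo code) looks roughly like this:

```python
# Create a pipe that automatically loads new files landing in an external
# stage. Stage, table, and pipe names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="INGEST_WH", database="IOT", schema="RAW",
)
try:
    conn.cursor().execute("""
        CREATE OR REPLACE PIPE scada_pipe
        AUTO_INGEST = TRUE
        AS
        COPY INTO scada_readings
        FROM @scada_stage
        FILE_FORMAT = (TYPE = 'JSON')
    """)
finally:
    conn.close()
```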
SPEAKERS
Craig Warman
Sr. Sales Engineer, Snowflake
Ryan Templeton
Sr. Sales Engineer, Snowflake
How to Troubleshoot Python Connectivity Errors
Learn from Snowflake Support how to troubleshoot Python connectivity errors and how to debug, including parsing driver logs.
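As an assumed example of the debugging workflow (not the session’s exact content), the connector’s exception types expose the error code and SQLSTATE that usually point to the root cause:

```python
# Catch Snowflake Python connector errors and print the fields that are
# most useful when troubleshooting connectivity problems.
import snowflake.connector
from snowflake.connector import errors

try:
    conn = snowflake.connector.connect(
        account="my_account",   # a wrong account identifier is a common culprit
        user="my_user",
        password="********",
        login_timeout=30,
    )
    conn.close()
except errors.Error as e:
    # errno, sqlstate, and msg help map the failure to a known cause.
    print(f"errno={e.errno} sqlstate={e.sqlstate} msg={e.msg}")
    raise
```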
SPEAKERS
Mehul Jain
Customer Technical Support Engineer, Snowflake
Tushar Garg
Customer Technical Support Engineer, Snowflake
Ingesting and Transforming Data Using Snowflake Pipelines
Modern data warehouse users can’t afford stale data. New data constantly arrives in various formats and business users expect up-to-date insights in reports and ad hoc query results. Keeping up requires smooth operational pipelines from ingestion to transformation and reporting. This session shows you how to quickly ingest data from various sources, transform it into canonical forms in your warehouse, and use it to derive insights. You will learn how to express your transforms as SQL and build data pipelines that run 24×7 in Snowflake. It will also explain how to gather Change Data Capture (CDC) information to power your Snowflake data pipelines. Join this session to see how businesses are already using these capabilities and how partners are building innovative products on top of Snowflake.
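One way to sketch such a pipeline is with Snowflake streams and tasks (illustrative names and schedule; the session may use different building blocks): a stream captures CDC information on a raw table, and a scheduled task applies the transform.

```python
# Illustrative stream + task pipeline: a stream tracks changes on a raw
# table, and a scheduled task merges those changes into a reporting table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="ETL_WH", database="ANALYTICS", schema="PIPELINES",
)
try:
    cur = conn.cursor()
    # Track row-level changes (CDC) on the raw table.
    cur.execute("CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders")
    # Run a transform every minute, but only when the stream has new data.
    cur.execute("""
        CREATE OR REPLACE TASK refresh_orders
          WAREHOUSE = ETL_WH
          SCHEDULE = '1 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('raw_orders_stream')
        AS
          INSERT INTO orders_clean
          SELECT order_id, customer_id, amount
          FROM raw_orders_stream
          WHERE METADATA$ACTION = 'INSERT'
    """)
    cur.execute("ALTER TASK refresh_orders RESUME")
finally:
    conn.close()
```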
SPEAKERS
Dinesh Kulkarni
Product Manager, Snowflake
Kevin McGinley
Field CTO, Snowflake
Near Real-Time Analytics with Snowflake and Azure Services
The ability to perform analytics over the latest data has become critical in an ever-connected world. Snowflake supports ingestion of streaming data using a variety of techniques and connectors that integrate well with other Azure data services, supporting near real-time analytics scenarios for IoT, log, and streaming data sources. In this session, we’ll review the options for ingesting data into Snowflake from Blob Storage, Event Hubs, Kafka, and Spark Streaming, with a focus on Cosmos DB with Change Feed. We’ll also cover how Snowflake’s unique cloud-native architecture and features guarantee transactional consistency, performance SLAs, and security for streaming data ingestion alongside concurrent query processing.
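For example, an external stage over Azure Blob Storage (placeholder URL and SAS token; a notification integration would be added for auto-ingest) can be created as follows:

```python
# Illustrative external stage on Azure Blob Storage, which Snowpipe or
# COPY can then read from. URL and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="INGEST_WH", database="STREAMING", schema="RAW",
)
try:
    conn.cursor().execute("""
        CREATE OR REPLACE STAGE azure_events_stage
        URL = 'azure://myaccount.blob.core.windows.net/events'
        CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=placeholder')
        FILE_FORMAT = (TYPE = 'JSON')
    """)
finally:
    conn.close()
```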
SPEAKERS
Craig Collier
Principal Solutions Architect, Snowflake
Simon Field
Field CTO, Snowflake
Data Ingestion with Snowpipe
Attend this lab to familiarize yourself with data ingestion using Snowflake’s Snowpipe service. You will load data files into an external stage, create a Snowpipe with the auto-ingest feature, configure SQS notifications, and validate data in the target table. You will also investigate common issues and errors and learn strategies to resolve them.
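Two checks that typically feature in this kind of validation (a generic sketch, not the lab’s exact steps) are the pipe’s status and the recent copy history of the target table:

```python
# Check Snowpipe status and recent load history for the target table.
# Pipe and table names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="INGEST_WH", database="LAB", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT SYSTEM$PIPE_STATUS('lab_pipe')")
    print(cur.fetchone()[0])   # JSON with executionState, pendingFileCount, ...

    cur.execute("""
        SELECT file_name, status, row_count, first_error_message
        FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
            TABLE_NAME => 'LAB_TABLE',
            START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())))
    """)
    for row in cur:
        print(row)
finally:
    conn.close()
```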
SELF-PACED
How to Integrate Streaming Data into Snowflake
Brought to you by Fairway Technologies
Snowflake provides easy mechanisms to integrate data, and it can ingest streaming data in three different ways. This session covers the easiest and best ways to integrate batch and streaming data into Snowflake, and demonstrates how to use Snowflake’s Snowpipe service, Databricks/Spark, and Confluent/Kafka.
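As one hedged example of the Spark path (PySpark with the spark-snowflake connector; connection options, source path, and table names are placeholders):

```python
# Write a Spark DataFrame into Snowflake with the spark-snowflake connector
# (requires the connector and JDBC driver JARs on the Spark classpath).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-to-snowflake").getOrCreate()

sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",  # placeholder account URL
    "sfUser": "my_user",
    "sfPassword": "********",
    "sfDatabase": "STREAMING",
    "sfSchema": "RAW",
    "sfWarehouse": "INGEST_WH",
}

df = spark.read.json("s3://my-bucket/events/")  # hypothetical micro-batch source

(df.write
   .format("net.snowflake.spark.snowflake")
   .options(**sf_options)
   .option("dbtable", "EVENTS")
   .mode("append")
   .save())
```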
SPEAKER
Frank Bell
President & Founder, ITS/Fairway
How to Unload Data from Snowflake
In this lab, you will unload different types of structured and semi-structured data from a Snowflake data warehouse into a stage. To achieve this, you will need to review the following concepts: internal stages, semi-structured files (such as JSON and Parquet), structured files (CSV), and file format objects.
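A representative unload statement (illustrative stage, path, and table names) that writes Parquet files to an internal stage:

```python
# Unload a table to an internal stage as Parquet files.
# Stage, path, and table names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="LAB_WH", database="LAB", schema="PUBLIC",
)
try:
    conn.cursor().execute("""
        COPY INTO @my_internal_stage/unload/orders_
        FROM orders
        FILE_FORMAT = (TYPE = 'PARQUET')
        OVERWRITE = TRUE
    """)
finally:
    conn.close()
```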
SELF-PACED
Loading Structured and Semi-Structured Data into Snowflake
In this lab, you will load structured and semi-structured data into Snowflake. You will use local files, internal stages, and external stages to source and load data. You will also leverage Snowflake’s VALIDATION_MODE and ON_ERROR parameters to troubleshoot and handle errors.
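The two parameters the lab mentions can be sketched as follows (illustrative names): VALIDATION_MODE dry-runs the load and returns any errors, while ON_ERROR controls how the real load handles bad rows.

```python
# First dry-run the load to surface parsing errors, then load for real
# while skipping bad rows. Table, stage, and format names are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="LAB_WH", database="LAB", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Dry run: no rows are loaded, only errors are returned.
    cur.execute("""
        COPY INTO customers FROM @customer_stage
        FILE_FORMAT = (FORMAT_NAME = 'csv_format')
        VALIDATION_MODE = RETURN_ERRORS
    """)
    print(cur.fetchall())

    # Real load: skip rows that fail to parse instead of aborting the file.
    cur.execute("""
        COPY INTO customers FROM @customer_stage
        FILE_FORMAT = (FORMAT_NAME = 'csv_format')
        ON_ERROR = CONTINUE
    """)
finally:
    conn.close()
```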
SELF-PACED
Modern Reusable Data Load Accelerator for Snowflake using Talend
Brought to you by Slalom
The world is moving toward data discovery, analysis, machine learning, and predictive analytics. However, performing that analysis first requires loading and maintaining your data efficiently, which takes extensive effort and resources. Slalom’s Data Load Accelerator provides a reusable, Talend- and Snowflake-based data ingestion framework that productionizes datasets in days instead of weeks or months. It is a single source codebase managed by profile tables that drive the processing dynamically. The accelerator helps you ingest structured and semi-structured datasets from S3 or Blob Storage much faster and adopt a consistent, sustainable data ingestion standard.
SPEAKERS
Ricky Sharma
Solution Architect, Slalom
Soumya Ghosh
Solution Principal, Slalom
Troubleshooting Snowflake Connectors
This lab covers troubleshooting Python, JDBC, ODBC, and SnowSQL connectivity errors, as well as debugging techniques for making successful connections, including parsing driver logs.
SPEAKERS
Mehul Jain
Customer Technical Support Engineer, Snowflake
Kelvin Youk
Customer Technical Support Engineer, Snowflake
Ayushi Bhagat
Customer Data Operations Engineer, Snowflake
Validating Your Data Loads
In this lab, you will learn how to validate data that has been loaded into Snowflake.
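One building block you are likely to meet here (a generic example, not the lab’s exact exercise) is the VALIDATE table function, which returns the rows rejected by a previous COPY job:

```python
# Return the errors from the most recent COPY INTO execution against a table.
# Table name is illustrative; '_last' refers to the last load job.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="********",
    warehouse="LAB_WH", database="LAB", schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT * FROM TABLE(VALIDATE(customers, JOB_ID => '_last'))")
    for rejected_row in cur:
        print(rejected_row)
finally:
    conn.close()
```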
SELF-PACED
WebUI and SnowSQL
Learn how to use UI wizards to complete various tasks, and understand how to open and use worksheets to query data in Snowflake. Learn how to explore objects available for query in the WebUI, where to download and install SnowSQL, and how to connect to Snowflake from the command-line tool.
SELF-PACED