The explosion in demand for analytics, machine learning (ML), and applications has driven growth in the diversity and scale of data. To meet these growing demands, many companies build their data pipelines with PySpark, but soon find it increasingly complex and expensive to manage the data infrastructure and governance across siloed clusters. Snowpark changes this by bringing processing to the data, using Spark-like DataFrames and native Python functions directly in Snowflake.
During this webinar, Snowflake customer OpenStore will share its Snowflake journey and how it saw an 87% decrease in end-to-end runtime, a 25% increase in throughput, and an 80% decrease in engineering maintenance hours after switching to Snowpark.
Join this webinar to hear OpenStore discuss its journey and to learn about:
- Overview and comparison of Snowpark and PySpark DataFrame operations (a brief sketch follows this list)
- Advantages of a single platform that natively supports SQL, Python, Java, and Scala without separate cluster management
- Considerations and best practices for Spark to Snowpark migrations
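As a taste of that comparison, here is a minimal sketch of the same filter-and-aggregate rollup written first with PySpark and then with Snowpark DataFrames. The ORDERS table, the column names, and the connection parameters are hypothetical placeholders, not details from the webinar.

```python
# Minimal sketch: the same rollup in PySpark and in Snowpark DataFrames.
# The ORDERS table, column names, and connection parameters are hypothetical.

# --- PySpark: executed on a separately managed Spark cluster ---
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_rollup").getOrCreate()
shipped_totals = (
    spark.table("ORDERS")                        # hypothetical source table
    .filter(F.col("STATUS") == "SHIPPED")
    .groupBy("CUSTOMER_ID")
    .agg(F.sum("AMOUNT").alias("TOTAL_AMOUNT"))
)
shipped_totals.show()

# --- Snowpark: the same logic pushed down to Snowflake compute ---
from snowflake.snowpark import Session
from snowflake.snowpark import functions as sf

connection_parameters = {"account": "...", "user": "...", "warehouse": "..."}  # placeholders
session = Session.builder.configs(connection_parameters).create()
shipped_totals = (
    session.table("ORDERS")                      # hypothetical source table
    .filter(sf.col("STATUS") == "SHIPPED")
    .group_by("CUSTOMER_ID")
    .agg(sf.sum("AMOUNT").alias("TOTAL_AMOUNT"))
)
shipped_totals.show()
```

The DataFrame code is nearly identical; the difference is that the Snowpark version needs no separate Spark cluster, since the query is translated to SQL and executed inside Snowflake.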
Speakers
- Grant Murray, Staff Software Engineer - Data Infrastructure, OpenStore
- Phani Raj, Senior Data Cloud Architect - GSI Partners, Snowflake
- Ganesh Krishnamurthy, Consulting Manager, Snowflake