Distributed training has become crucial for handling increasingly complex deep learning models and massive datasets. With Snowflake Notebooks on Container Runtime, ML developers can leverage multiple GPUs to accelerate PyTorch development by sharding, distributing, and training on Snowflake data in parallel. Models can then be easily productionized in Snowflake through seamless integration with GPU-backed model serving and observability. In this session, we will show you how easy it is to work with any open source package, configure resources, and build scalable end-to-end workflows.
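For context, distributed data-parallel training in PyTorch typically follows the pattern sketched below: each GPU process gets a shard of the data, runs forward and backward passes locally, and gradients are synchronized across workers. This is a generic, illustrative example; the dataset, model, and launch setup are placeholders, and the Snowflake-specific APIs covered in the session may differ.

```python
# Minimal PyTorch distributed data-parallel training sketch (illustrative only).
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder dataset; in the demo this would be image data read from Snowflake
    dataset = TensorDataset(torch.randn(1024, 3 * 64 * 64), torch.randint(0, 2, (1024,)))
    sampler = DistributedSampler(dataset)  # shards the data across GPU workers
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    # Simple classifier standing in for a computer vision anomaly-detection model
    model = torch.nn.Sequential(
        torch.nn.Linear(3 * 64 * 64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 2)
    ).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # gradients synced across GPUs

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for features, labels in loader:
            features, labels = features.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```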
Join this demo with ML expert Vinay Sridhar to learn how to use Snowflake Notebooks on Container Runtime to:
- Build and deploy a scalable computer vision PyTorch model for anomaly detection
- Speed up training and inference on large datasets with distributed GPU pools
- Develop ML workflows using any open source Python package from PyPI or Hugging Face
Speaker
Vinay Sridhar
Senior Product Manager, Snowflake
Register Here