As enterprises race to roll out artificial intelligence, they often overlook that model training requires extensive computational and GPU resources. When training models on AWS, loading data from S3 often becomes a major bottleneck, wasting valuable GPU cycles. Optimizing data loading can greatly reduce GPU idle time and increase GPU utilization.
In this webinar, Greg Palmer will discuss best practices for efficient data loading during model training on AWS. He will demonstrate how to use Alluxio on EKS as a distributed cache to accelerate PyTorch training jobs that read datasets from S3. This architecture significantly improves GPU utilization from 30% to 90%+, achieves ~5x faster training, and lowers cloud storage costs.
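As a rough sketch of this pattern (the mount path, dataset layout, and loader settings below are illustrative assumptions, not details from the webinar), a PyTorch job can read through an Alluxio FUSE mount on EKS instead of going to S3 directly, so repeated epochs are served from the distributed cache:

```python
# Minimal sketch: pointing a PyTorch dataset at an Alluxio FUSE mount
# instead of S3. Paths and layout are hypothetical.
import torch
from torchvision import datasets, transforms

# With Alluxio FUSE, an S3 prefix (e.g. s3://my-bucket/imagenet/train,
# hypothetical) appears as a local POSIX path; after the first access,
# reads hit the cache rather than S3.
DATA_ROOT = "/mnt/alluxio-fuse/imagenet/train"  # hypothetical mount point

train_set = datasets.ImageFolder(
    DATA_ROOT,
    transform=transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
    ]),
)

# Multiple workers plus pinned memory keep the GPU fed while the cache
# serves data at local-disk speed rather than S3 latency.
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=256, shuffle=True,
    num_workers=8, pin_memory=True,
)
```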
What you will learn:
- The challenges of feeding data-hungry GPUs in the cloud
- How to accelerate model training by optimizing data loading on AWS
- The reference architecture for running PyTorch jobs with Alluxio cache on EKS while reading data from S3, with benchmark results of training ResNet50 and BERT
- How to use TensorBoard to identify bottlenecks in GPU utilization
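On the last point, a minimal, hedged example (model and log directory are placeholders): torch.profiler can emit traces that TensorBoard's profiler plugin renders, making GPU idle gaps caused by slow data loading visible:

```python
# Sketch: profile a few training steps and write a TensorBoard trace.
import torch
from torch.profiler import (ProfilerActivity, profile, schedule,
                            tensorboard_trace_handler)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(
    activities=activities,
    schedule=schedule(wait=1, warmup=1, active=3),  # skip 1, warm up 1, record 3
    on_trace_ready=tensorboard_trace_handler("./tb_logs"),
) as prof:
    for step in range(5):
        x = torch.randn(256, 1024, device=device)  # stands in for a DataLoader batch
        loss = model(x).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        prof.step()  # advance the profiler schedule each iteration

# View with: tensorboard --logdir ./tb_logs (PyTorch Profiler tab).
# Long gaps between GPU kernels typically point to input-pipeline stalls.
```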
Videos
TorchTitan is a proof of concept for large-scale LLM training using native PyTorch: a repo that showcases PyTorch's latest distributed training features in a clean, minimal codebase.
In this talk, Tianyu will share TorchTitan’s design and optimizations for the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its performance, composability, and scalability.
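TorchTitan's actual implementation lives in its repo; as a hedged, toy illustration of one of the native PyTorch building blocks it composes (FSDP parameter sharding), not TorchTitan code:

```python
# Toy FSDP example: shards parameters, gradients, and optimizer state
# across ranks, the memory-saving idea behind training 8B-405B models.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    # torchrun sets RANK/WORLD_SIZE/LOCAL_RANK; NCCL needs one GPU per rank.
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    )
    model = FSDP(model, device_id=local_rank)
    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

    x = torch.randn(8, 4096, device="cuda")  # stands in for a real batch
    model(x).sum().backward()
    optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, e.g., `torchrun --nproc_per_node=8 fsdp_toy.py` (filename hypothetical).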
As large-scale machine learning becomes increasingly GPU-centric, modern high-performance hardware such as NVMe storage and RDMA networks (InfiniBand or specialized NICs) is becoming more widespread. To fully leverage these resources, it is crucial to build a balanced architecture that avoids GPU underutilization. In this talk, we will explore strategies for effectively utilizing these advanced hardware components. Specifically, we will present experimental results from building a Kubernetes-native distributed caching layer that uses NVMe storage and high-speed RDMA networks to optimize data access for PyTorch training.
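As a hedged sketch of the consumption side only (the mount path and file layout are assumptions, not from the talk), a PyTorch Dataset can read shards through the cache layer's Kubernetes volume mount, with memory-mapping keeping hot NVMe-resident data cheap to access:

```python
# Sketch: reading training shards through a cache-backed volume mount,
# so hot data is served at local-NVMe speed rather than object-store latency.
import os
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

CACHE_MOUNT = "/mnt/cache"  # hypothetical PVC mount for the caching layer

class CachedTensorDataset(Dataset):
    """Memory-maps .npy shards from the cache mount; only the requested
    sample is copied into RAM, the rest stays on NVMe/page cache."""
    def __init__(self, root):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root) if f.endswith(".npy")
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        arr = np.load(self.paths[idx], mmap_mode="r")
        return torch.from_numpy(np.asarray(arr, dtype=np.float32))

loader = DataLoader(CachedTensorDataset(CACHE_MOUNT),
                    batch_size=32, num_workers=8, pin_memory=True)
```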