Alluxio is an open source data orchestration platform that can be deployed in many environments. However, integrating Alluxio into an existing data architecture while adhering to the required DevOps principles and organizational standards takes careful thought and experience.
This presentation covers best practices and techniques for building an open source Alluxio cluster on AWS EKS for one of our clients, making it scalable, reliable, and secure by adopting Kubernetes RBAC.
Our speaker Vasista Polali will show you how to:
- Bootstrap an EKS cluster in AWS with Terraform.
- Deploy open source Alluxio in a namespace with persistence in AWS EFS.
- Scale the Alluxio worker DaemonSet up and down by scaling the EKS nodes with Terraform.
- Access data with an S3 mount.
- Control access to Alluxio with Kubernetes port-forwarding, the "setfacl" functionality, and Kubernetes service accounts.
- Reuse the data/metadata in the persistence layer on a new cluster.
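As a rough illustration of the S3-mount and access-control steps above, the commands might look like the following sketch using the Alluxio CLI and kubectl (the namespace, pod name, bucket, credentials, and user are placeholder assumptions, not values from the talk):

```shell
# Forward the Alluxio master's web UI (19999) and RPC (19998) ports
# to localhost; namespace and pod name are placeholders.
kubectl port-forward -n alluxio alluxio-master-0 19999:19999 19998:19998

# Mount an S3 bucket into the Alluxio namespace
# (bucket path and credentials are placeholders).
alluxio fs mount \
  --option s3a.accessKeyId=<ACCESS_KEY> \
  --option s3a.secretKey=<SECRET_KEY> \
  /s3-data s3://example-bucket/data

# Restrict access to the mounted path with setfacl
# (the "analyst" user is a placeholder).
alluxio fs setfacl -m user:analyst:r-x /s3-data
```

These commands require a running Alluxio cluster and cluster credentials, so they are shown here only as an ops sketch of the workflow the talk walks through.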
TorchTitan is a proof of concept for large-scale LLM training using native PyTorch: a repo that showcases PyTorch's latest distributed training features in a clean, minimal codebase.
In this talk, Tianyu will share TorchTitan’s design and optimizations for the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its performance, composability, and scalability.
As large-scale machine learning becomes increasingly GPU-centric, modern high-performance hardware components such as NVMe storage and RDMA networks (InfiniBand or specialized NICs) are becoming more widespread. To fully leverage these resources, it is crucial to build a balanced architecture that avoids GPU underutilization. In this talk, we will explore various strategies to address this challenge by effectively utilizing these advanced hardware components. Specifically, we will present experimental results from building a Kubernetes-native distributed caching layer that uses NVMe storage and high-speed RDMA networks to optimize data access for PyTorch training.