We are thrilled to announce the release of Alluxio 2.5!
Alluxio 2.5 focuses on improving interface support to broaden the set of data-driven applications that can benefit from data orchestration. The POSIX and S3 client interfaces have improved greatly in performance and functionality, driven by widespread usage and demand from AI/ML workloads and system administration needs. Alluxio is rapidly evolving to meet the needs of enterprises that deploy it as a key component of their AI/ML stacks.
At the same time, Alluxio continues to integrate with the latest cloud and cluster orchestration technologies. In 2.5, Alluxio adds new connectors for Google Cloud Storage and Azure Data Lake Storage Gen 2, as well as improved operability in Kubernetes environments. Brief usage sketches for these interfaces and connectors follow the topic list below.
In this Office Hour, we will go over:
- JNI-Based POSIX API
- S3 Northbound API
- ADLS Gen 2 Connector
- GCSv2 Connector
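To make the POSIX API concrete, here is a minimal sketch of what it looks like from an application's point of view, assuming an Alluxio FUSE mount at `/mnt/alluxio-fuse`; the mount point and the `training-data` directory are placeholders, not part of this release. The JNI-based implementation changes the FUSE internals for better performance, not this usage pattern.

```python
import os

# Assumption: an Alluxio FUSE mount at /mnt/alluxio-fuse created with the
# alluxio-fuse utility; the mount point and the "training-data" directory
# are placeholders for illustration.
MOUNT = "/mnt/alluxio-fuse"
DATA_DIR = os.path.join(MOUNT, "training-data")

# Plain POSIX file operations -- no Alluxio client library is needed,
# which is what lets PyTorch/TensorFlow data loaders read Alluxio directly.
for name in os.listdir(DATA_DIR):
    with open(os.path.join(DATA_DIR, name), "rb") as f:
        chunk = f.read(4096)
        print(f"{name}: read {len(chunk)} bytes")
```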
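Similarly, a hedged sketch of the S3 Northbound API using boto3. It assumes an Alluxio proxy reachable at `proxy-host` on the default port 39999 under the `/api/v1/s3` path, and a top-level Alluxio directory addressed as the bucket `my-bucket`; all of these names are assumptions for illustration.

```python
import boto3

# Assumptions: an Alluxio proxy at proxy-host:39999 exposing an
# S3-compatible REST API under /api/v1/s3, and a top-level Alluxio
# directory named "my-bucket" addressed as an S3 bucket. Credentials may
# be ignored by the endpoint, but boto3 requires some value to be set.
s3 = boto3.client(
    "s3",
    endpoint_url="http://proxy-host:39999/api/v1/s3",
    aws_access_key_id="placeholder",
    aws_secret_access_key="placeholder",
)

s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"written via the S3 API")
obj = s3.get_object(Bucket="my-bucket", Key="hello.txt")
print(obj["Body"].read())
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```

Because the endpoint speaks the S3 protocol, existing S3 tooling and SDKs can point at Alluxio with only a changed endpoint URL.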
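For the new connectors, mounting happens through the Alluxio CLI rather than application code. The sketch below wraps the `alluxio fs mount` command from Python for illustration; the bucket and container names, paths, and `--option` property keys are assumptions based on typical GCS v2 and ADLS Gen 2 setups, so verify the exact keys against the Alluxio 2.5 documentation.

```python
import subprocess

# Hedged sketch of mounting the new under stores with the `alluxio fs mount`
# CLI. Bucket/container names, paths, and the --option property keys are
# assumptions for illustration; confirm the exact keys in the Alluxio docs.

# Google Cloud Storage via the v2 connector (assumed properties).
subprocess.run(
    [
        "alluxio", "fs", "mount",
        "--option", "alluxio.underfs.gcs.version=2",
        "--option", "fs.gcs.credential.path=/path/to/gcs-credentials.json",
        "/mnt/gcs", "gs://my-bucket/data",
    ],
    check=True,
)

# Azure Data Lake Storage Gen 2 with a shared account key (assumed property).
subprocess.run(
    [
        "alluxio", "fs", "mount",
        "--option", "fs.azure.account.key.myaccount.dfs.core.windows.net=<ACCOUNT_KEY>",
        "/mnt/adls", "abfs://my-container@myaccount.dfs.core.windows.net/data",
    ],
    check=True,
)
```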