ALLUXIO WEBINAR
Driven by strong interest from our open source community, the Alluxio core engineering team redesigned the Alluxio POSIX interface to give users a more efficient and transparent way to leverage data orchestration. This enables much better performance for ML workloads that access data via the POSIX interface.
In this 20-minute community session, you’ll hear from Lu Qiu, one of Alluxio’s lead engineers on the POSIX implementation project.
In this session, you’ll learn:
- How Alluxio’s new JNI-based FUSE implementation supports more efficient POSIX data access (see the sketch after this list)
- How improvements to data operations, including distributedLoad and optimizations for listing and computing metadata on directories containing massive numbers of files, improve performance in model training
- How these latest enhancements improve performance on TensorFlow and PyTorch training workloads, including GPU-based training and compute
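The session itself does not walk through code, but the effect of the FUSE-based POSIX interface is easy to sketch: once Alluxio is mounted as a local filesystem, a framework such as PyTorch can read training data with ordinary file operations, no Alluxio-specific client code required. Below is a minimal illustration; the mount point `/mnt/alluxio-fuse/train` and the raw-bytes dataset are hypothetical, chosen only to show the idea.

```python
import os
from torch.utils.data import Dataset, DataLoader

# Hypothetical FUSE mount point; in practice this is wherever
# alluxio-fuse was mounted on the training hosts.
DATA_DIR = "/mnt/alluxio-fuse/train"

class FuseFileDataset(Dataset):
    """Reads sample files from the Alluxio FUSE mount via plain POSIX I/O."""

    def __init__(self, root):
        self.paths = sorted(os.path.join(root, name) for name in os.listdir(root))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A standard open()/read(); the FUSE layer forwards these POSIX
        # calls to Alluxio, which serves cached data when available.
        with open(self.paths[idx], "rb") as f:
            return f.read()

loader = DataLoader(FuseFileDataset(DATA_DIR), batch_size=32, num_workers=4)
for batch in loader:
    pass  # decode the raw bytes and feed them to the training loop
```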
Video:
Slack with speakers, experts, and community members.
Join the Alluxio Global Online Meetup Group.
Videos
In the rapidly evolving landscape of AI and machine learning, Platform and Data Infrastructure teams face critical challenges in building and managing large-scale AI platforms. Performance bottlenecks, platform scalability limits, and GPU scarcity pose significant challenges to supporting large-scale model training and serving.
In this talk, we introduce how Alluxio helps Platform and Data Infrastructure teams deliver faster, more scalable platforms to ML Engineering teams developing and training AI models. Alluxio’s highly-distributed cache accelerates AI workloads by eliminating data loading bottlenecks and maximizing GPU utilization. Customers report up to 4x faster training performance with high-speed access to petabytes of data spread across billions of files regardless of persistent storage type or proximity to GPU clusters. Alluxio’s architecture lowers data infrastructure costs, increases GPU utilization, and enables workload portability for navigating GPU scarcity challenges.
TorchTitan is a proof of concept for large-scale LLM training using native PyTorch. It is a repository that showcases PyTorch's latest distributed training features in a clean, minimal codebase.
In this talk, Tianyu will share TorchTitan’s design and optimizations for the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its performance, composability, and scalability.