On-Demand Videos
TorchTitan is a proof of concept for large-scale LLM training using native PyTorch. It is a repo that showcases PyTorch's latest distributed training features in a clean, minimal codebase.
In this talk, Tianyu will share TorchTitan’s design and optimizations for the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its performance, composability, and scalability.
In this talk, Sandeep Manchem discusses big data and AI, covering typical platform architecture and data challenges, along with an engaging discussion of how to ensure data safety and compliance in big data and AI applications.
As large-scale machine learning becomes increasingly GPU-centric, modern high-performance hardware like NVMe storage and RDMA networks (InfiniBand or specialized NICs) is becoming more widespread. To fully leverage these resources, it’s crucial to build a balanced architecture that avoids GPU underutilization. In this talk, we will explore various strategies to address this challenge by effectively utilizing these advanced hardware components. Specifically, we will present experimental results from building a Kubernetes-native distributed caching layer, utilizing NVMe storage and high-speed RDMA networks to optimize data access for PyTorch training.
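As a loose illustration of the caching idea (a toy Python sketch, not Alluxio or Kubernetes code; the class, the `remote_read` callback, and the data are invented for this example), a local LRU cache in front of a slow remote store cuts repeated remote reads, which is the basic mechanism that keeps GPUs fed instead of stalled on storage:

```python
from collections import OrderedDict

# Toy sketch: an LRU cache sitting in front of a slow remote store,
# standing in for a distributed caching layer on local NVMe.
class CachingReader:
    def __init__(self, remote_read, capacity=2):
        self.remote_read = remote_read      # fetch from slow storage (e.g. object store)
        self.capacity = capacity            # cache budget, e.g. local NVMe space
        self.cache = OrderedDict()
        self.remote_hits = 0                # how often we had to go to slow storage

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.remote_hits += 1
        data = self.remote_read(key)
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return data

reader = CachingReader(lambda k: f"bytes-of-{k}")
for k in ["a", "b", "a", "a"]:
    reader.read(k)
print(reader.remote_hits)  # 2 — only the first read of each key hits remote storage
```

Training epochs re-read the same samples repeatedly, which is why this pattern pays off so well for data loading.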
Streaming systems form the backbone of the modern data pipeline as the stream processing capabilities provide insights on events as they arrive. But what if we want to go further than this and execute analytical queries on this real-time data? That’s where Apache Pinot comes in.
OLAP databases used for analytical workloads traditionally executed queries on yesterday’s data, with query latencies in the tens of seconds. The emergence of real-time analytics has changed all this: the expectation is now that we should be able to run thousands of queries per second on fresh data, with the query latencies typically seen on OLTP databases.
Apache Pinot is a real-time distributed OLAP datastore used to deliver scalable real-time analytics with low latency. It can ingest data from streaming sources like Kafka, as well as from batch data sources (S3, HDFS, Azure Data Lake, Google Cloud Storage), and provides a layer of indexing techniques that can be used to maximize query performance.
Come to this talk to learn how you can add real-time analytics capability to your data pipeline.
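To make the indexing idea above concrete, here is a toy Python sketch (not Pinot code; the rows, column names, and functions are invented for illustration) of how an inverted index on a column lets a filter-plus-aggregate query touch only the matching rows instead of scanning the whole table:

```python
from collections import defaultdict

# Hypothetical event rows, standing in for records ingested from a stream.
events = [
    {"id": 0, "country": "US", "clicks": 3},
    {"id": 1, "country": "DE", "clicks": 5},
    {"id": 2, "country": "US", "clicks": 2},
    {"id": 3, "country": "SG", "clicks": 7},
]

# Build an inverted index on the "country" column: value -> list of row ids.
inverted = defaultdict(list)
for row in events:
    inverted[row["country"]].append(row["id"])

def total_clicks(country):
    """Roughly: SELECT SUM(clicks) FROM events WHERE country = ? -- via the index."""
    return sum(events[i]["clicks"] for i in inverted.get(country, []))

print(total_clicks("US"))  # 5 — reads rows 0 and 2 only, not all four
```

Real column stores add many more index types (sorted, range, star-tree, and so on), but the principle is the same: the index turns a full scan into a lookup of just the relevant rows.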
With the advent of the Big Data era, it is usually computationally expensive to calculate the resource usage of a SQL query. Can we estimate the resource usage of SQL queries more efficiently, without any computation in a SQL engine kernel? In this session, Chunxu and Beinan introduce how Twitter’s data platform leverages a machine learning-based approach in Presto and BigQuery to estimate query utilization with 90%+ accuracy.
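As a rough illustration of the approach (a minimal sketch, not Twitter's actual model; the feature names and weights are entirely hypothetical), a learned cost model reduces at inference time to scoring plan-level features of a query, with no work done inside the engine kernel:

```python
# Hypothetical weights, as a trained regression model might produce.
WEIGHTS = {"input_gb": 0.8, "num_joins": 1.5, "num_columns": 0.05}
BIAS = 0.2

def estimate_cpu_hours(features):
    """Estimate a query's CPU usage from features of its plan, before running it."""
    return BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)

# Features extracted from a (hypothetical) query plan.
query = {"input_gb": 10.0, "num_joins": 2, "num_columns": 40}
print(round(estimate_cpu_hours(query), 2))  # 13.2
```

Because the estimate needs only cheap plan metadata, it can run at submission time, e.g. to route expensive queries to a dedicated cluster or warn the user.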
This talk introduces a three-level, game-style progression for using Alluxio to speed up your cloud training, with production use cases from Microsoft, Alibaba, and BossZhipin.
- Level 1: Speed up data ingestion from cloud storage
- Level 2: Speed up data preprocessing and training workloads
- Level 3: Speed up full training workloads with a unified data orchestration layer
OceanBase Database is an open-source, distributed Hybrid Transactional/Real-time Operational Analytics (HTAP) database management system that has set new world records in both the TPC-C and TPC-H benchmark tests. Development of OceanBase started in 2010, and it has been serving all of the critical systems in Alipay. Beyond Alipay, OceanBase also serves customers across a variety of sectors, including the Internet, financial services, telecommunications, and retail industries.
In this tech talk, we will cover the architecture of OceanBase and some typical use cases, including technical topics such as Paxos replication, two-phase commit (2PC), LSM-tree-based storage, the SQL optimizer and executor, and city-level disaster recovery.
As more and more companies turn to AI / ML / DL to unlock insights, AI has become a mythical word that adds unnecessary barriers for new adopters. It is often regarded as a luxury reserved for big tech companies only – this should not be the case.
In this talk, Jingwen will first dissect the ML life cycle into five stages – starting from data collection, through data cleansing, model training, and model validation, and ending at model inference/deployment. For each stage, Jingwen will go over its concept, functionality, characteristics, and use cases to demystify ML operations. Finally, Jingwen will showcase how Alluxio, a virtual data lake, can help simplify each stage.
Alluxio foresaw the need for agility when accessing data across silos separated from compute engines like Spark, Presto, TensorFlow and PyTorch. Embracing the separation of storage from compute, the Alluxio data orchestration platform simplifies adoption of the data lake and data mesh paradigm for analytics and AI/ML. In this talk, Bin Fan will share observations to help identify ways to use the platform to meet the needs of your data environment and workloads.
More and more enterprise architectures have shifted to hybrid-cloud and multi-cloud environments. While this shift brings greater flexibility and agility, it also means that compute must be decoupled from storage, which poses new challenges for managing and orchestrating data across frameworks, clouds, and storage systems. This session gives the audience an in-depth look at how Alluxio's data orchestration concept decouples storage and compute in the data platform, and at the innovative architecture data orchestration offers for storage-compute separation. Drawing on typical application scenarios from industries such as finance, telecommunications, and the Internet, it shows how Alluxio brings real acceleration to big data computing and how data orchestration technology can be applied to AI model training.
*This is a bilingual presentation.
As data stewards and security teams provide broader access to their organization’s data lake environments, having a centralized way to manage fine-grained access policies becomes increasingly important. Alluxio can use Apache Ranger’s centralized access policies in two ways: 1) directly controlling access to virtual paths in the Alluxio virtual file system or 2) enforcing existing access policies for the HDFS under stores. This presentation discusses how the Alluxio virtual filesystem can be integrated with Apache Ranger.
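To sketch the first of those two modes (a hypothetical Python toy, not Ranger's or Alluxio's actual policy engine; the paths, groups, and rule shape are invented), fine-grained access policies on virtual paths can be evaluated by finding the most specific policy whose path covers the requested one:

```python
# Hypothetical path-based policies, loosely modeled on centralized
# Ranger-style rules over a virtual file system namespace.
POLICIES = [
    {"path": "/data/finance", "group": "analysts", "allow": {"read"}},
    {"path": "/data", "group": "engineers", "allow": {"read", "write"}},
]

def is_allowed(user_groups, path, action):
    """Check the most specific (longest-prefix) policy matching the path."""
    matches = [p for p in POLICIES
               if path == p["path"] or path.startswith(p["path"] + "/")]
    # Most specific policy first; fall through to broader ones if the
    # user's groups are not covered by it.
    for policy in sorted(matches, key=lambda p: len(p["path"]), reverse=True):
        if policy["group"] in user_groups:
            return action in policy["allow"]
    return False  # deny by default when no policy grants access

print(is_allowed({"analysts"}, "/data/finance/q3.csv", "read"))   # True
print(is_allowed({"analysts"}, "/data/finance/q3.csv", "write"))  # False
```

The second mode from the abstract is different: instead of evaluating its own rules, Alluxio delegates to the policies already attached to the HDFS under store, so existing governance carries over unchanged.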
This talk will discuss the process and technical details behind a responsible vulnerability disclosure of an issue detected in Alluxio recently. I will share some of the lessons I’ve learned as a security researcher dealing with multiple open-source vendors and my thoughts about the actions organizations and projects should take to ensure successful vulnerability management and disclosure programs. Learn more about creating more secure software.
Shawn Sun from Alluxio will present the journey of using Alluxio as the storage system for Kubernetes through the Container Storage Interface (CSI) plugin and the Alluxio CSI driver. This talk will cover the challenges we face with the traditional setup for AI/ML training jobs, and how the Alluxio CSI driver manages to address them. It will also cover a recent change to the driver that made it more stable and robust.
Shopee is the leading e-commerce platform in Southeast Asia. In this presentation, Tianbao Ding and Haoning Sun from Shopee will share their Data Infra team’s recent project on Presto acceleration and storage servitization. They will detail how Shopee leverages Alluxio to accelerate Presto queries and to provide a standardized method of accessing data through Alluxio-Fuse and Alluxio-S3.
This presentation will cover how Alluxio and NetApp StorageGRID help enterprises accelerate cloud adoption and optimize their resource spend on a modern hybrid big data architecture. The conversation will include use cases and architecture information from a variety of enterprises, along with some of the high-level technical details of how these business solutions are constructed.
In this talk, Lei Li and Zifan Ni share their experience applying Alluxio in bilibili’s AI platform to increase training efficiency, including the technical architecture and the specific issues addressed.