Self-serve Data Architecture with Presto and Alluxio Across Clouds
February 8, 2022
This article highlights the synergy between two widely adopted open-source projects, Alluxio and Presto, and demonstrates how together they deliver a self-serve data architecture across clouds.
What makes an architecture self-serve?
Condition 1: Evolution of the data platform does not require changes
All data platforms evolve over time, whether by adding a new data store or compute engine, or by onboarding a new team that needs to access shared data. In any of these cases, a data platform is self-serve if it does not require changes to accommodate the evolution.
Condition 2: Isolation across Teams
With a self-serve platform, business units don’t step on each other: when a new team is introduced, its data access has no impact on existing usage of the shared data infrastructure.
The combination of these two conditions offers agility, which is often more important than the cost of physical infrastructure.
Data Platform Considerations
Below, we introduce some considerations for designing a self-serve platform, along with architectural patterns that keep the solutions simple.
Consideration 1: Data is shared
- Between Compute Frameworks
- There are a large number of specialized compute engines, each best suited to a specific task, which means data needs to be shared between engines. For example, ETL runs in a batch processing engine, followed by Presto for interactive queries.
- Between Different Teams
- For example, one team is responsible for collecting operational data that is then consumed by multiple other business units.
- Between Data Centers Across Regions and Cloud Providers
- This offers the flexibility to choose the best-suited service in each environment.
The solution for shared data is to have an abstraction layer across heterogeneous compute. Alluxio provides such an abstraction across clouds for seamless sharing of data between Presto and other compute engines regardless of the data store.
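To make the sharing concrete, here is a minimal consumer-side sketch, assuming a hypothetical Presto coordinator at presto-coordinator.example.com and a hypothetical Hive table page_views whose external location is an alluxio:// path; it uses the Presto JDBC driver, which needs to be on the classpath. Because the table's files sit behind Alluxio, the data written by a batch ETL engine is queryable by Presto without any copies.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SharedDataQuery {
  public static void main(String[] args) throws Exception {
    // Hypothetical coordinator, catalog, and schema; the Hive table's external
    // location is an alluxio:// path, so Presto reads the same files that the
    // batch ETL engine wrote through Alluxio.
    String url = "jdbc:presto://presto-coordinator.example.com:8080/hive/analytics";
    try (Connection conn = DriverManager.getConnection(url, "analyst", null);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT event_type, count(*) AS events FROM page_views GROUP BY event_type")) {
      while (rs.next()) {
        System.out.println(rs.getString("event_type") + ": " + rs.getLong("events"));
      }
    }
  }
}
```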

Consideration 2: Data has ownership domains, and processing it in place is simpler
- Although replication provides isolation, governance becomes complex because the data owner must enforce strict policies on how each copy is consumed.
- Copies also introduce redundancy, which is error-prone and resource-intensive.
It may seem obvious that the solution is to not make copies of data, but what about performance when data is not moved closer to compute? This calls for a single abstraction layer that takes care of governance, performance, and movement of data across ownership domains.
The architecture below shows Presto using the Alluxio layer to access data regardless of its location.
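As one illustration of “regardless of its location”, the sketch below uses Alluxio's Java FileSystem client (assuming an Alluxio 2.x client; the bucket, cluster, and path names are hypothetical) to mount two under-stores into a single namespace. Credentials and other mount options are omitted and would normally be supplied through mount options or cluster configuration.

```java
import alluxio.AlluxioURI;
import alluxio.client.file.FileSystem;

public class UnifiedNamespace {
  public static void main(String[] args) throws Exception {
    // Connects to the Alluxio master configured in alluxio-site.properties.
    FileSystem fs = FileSystem.Factory.create();

    // Hypothetical ownership domains: a cloud bucket owned by the sales team
    // and an on-prem HDFS directory owned by the operations team.
    fs.mount(new AlluxioURI("/sales"), new AlluxioURI("s3://sales-bucket/warehouse"));
    fs.mount(new AlluxioURI("/ops"), new AlluxioURI("hdfs://onprem-namenode:8020/ops/logs"));

    // Presto and other engines now address both data sets through the single
    // Alluxio namespace (/sales and /ops), with no copies of the data.
  }
}
```

With both ownership domains mounted under one namespace, Presto table locations can point at Alluxio paths and stay stable even if the underlying store changes.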

The above design can be broken down into a few simple cases:
- All within a single cloud or a datacenter
- Shared across multiple datacenters or a hybrid cloud
In all of these cases, the separation of the CONSUMER from the PRODUCER of data is enabled by an abstraction layer that provides more than a simple cache. Advanced preloading and write capabilities guarantee SLAs even with the separation of data from compute.
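As a rough sketch of those write and preloading capabilities (again assuming an Alluxio 2.x Java client; the path, data, and write type are illustrative), a producer can write through Alluxio so that consumers see cached data immediately while persistence to the owning store happens in the background, and hot paths can be warmed ahead of a query burst.

```java
import alluxio.AlluxioURI;
import alluxio.client.file.FileOutStream;
import alluxio.client.file.FileSystem;
import alluxio.grpc.CreateFilePOptions;
import alluxio.grpc.WritePType;

import java.nio.charset.StandardCharsets;

public class ProducerWrite {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.Factory.create();

    // ASYNC_THROUGH caches the data for consumers right away and persists it
    // to the under-store in the background; CACHE_THROUGH persists synchronously.
    CreateFilePOptions options = CreateFilePOptions.newBuilder()
        .setWriteType(WritePType.ASYNC_THROUGH)
        .build();
    try (FileOutStream out =
             fs.createFile(new AlluxioURI("/sales/daily/2022-02-08.csv"), options)) {
      out.write("order_id,amount\n1001,42.50\n".getBytes(StandardCharsets.UTF_8));
    }

    // On the consumer side, data can be preloaded before an interactive query
    // burst, e.g. with the Alluxio shell: alluxio fs distributedLoad /sales/daily
  }
}
```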

Conclusion:
With a self-serve data architecture across clouds, we construct a solution that stands the test of time as the data platform evolves. Learn more from the whitepaper Presto with Alluxio Overview – Architecture Evolution for Interactive Queries, and see how companies such as Facebook, TikTok, Electronic Arts, Walmart, Tencent, and Comcast level up their Presto platforms by leveraging Alluxio.