What if you could manage your data lake just like you manage code? With rollback, versioning, and branching capabilities on top of your existing data lake?
lakeFS is an open-source project that provides a Git-like version control interface for data lakes, with seamless integration to most data tools and frameworks. lakeFS enables you to easily implement parallel pipelines for experimentation, reproducibility, and CI/CD for data.
Now you can try lakeFS in a fully functional playground environment, with your own data and the tools you already use. Spin up an isolated environment, connect it to your stack, and see how lakeFS behaves in a setup that resembles your own.
Please note: playground environments will be deleted after one week.
See these features in action:
- Full reproducibility of data and code
- Git-like operations: branch, commit, merge and revert
- Instant reversion of changes to data
- Petabyte-scale version control
- Zero-copy branching for frictionless experimentation
- Seamless integration with most data tools and frameworks (Spark, Hive, AWS Athena, Presto); a short sketch using a plain S3 client follows this list
- Support for AWS S3, Azure Blob Storage, and Google Cloud Storage (GCS)
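To make the integration point concrete, here is a minimal sketch, not an official example, of reading and writing data on lakeFS branches through its S3-compatible gateway using plain boto3. The endpoint URL, credentials, repository, branch, and object paths are placeholder assumptions; substitute the details of your own playground environment. Branch creation, commits, and merges are done through the lakeFS UI, lakectl, or the API.

```python
# Minimal sketch: working with lakeFS branches through its S3-compatible
# gateway. All names and credentials below are placeholders (assumptions).
import boto3

# Point any S3 client at the lakeFS endpoint instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://my-lakefs.example.com",  # your lakeFS server (placeholder)
    aws_access_key_id="AKIA...",                   # lakeFS access key (placeholder)
    aws_secret_access_key="...",                   # lakeFS secret key (placeholder)
)

REPO = "my-repo"  # the lakeFS repository acts as the "bucket"

# Objects are addressed as <branch>/<path>, so the same data can be read
# from different branches.
baseline = s3.get_object(
    Bucket=REPO, Key="main/datasets/events.parquet"
)["Body"].read()

# Write an updated version to an isolated "experiment" branch
# (assumed to have been created from main beforehand, e.g. via the UI or lakectl).
s3.put_object(Bucket=REPO, Key="experiment/datasets/events.parquet", Body=baseline)

# main is untouched; the change is only visible on the experiment branch until merged.
```

Because a branch is only a pointer into the repository's metadata, writing to the experiment branch does not duplicate the underlying objects, which is what makes the branching zero-copy.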
Start playing here (no email or any additional information required)
Once you have experienced the value of lakeFS, go ahead and install the lakeFS open source version, or sign up for the lakeFS Cloud Beta.
And don’t forget to share your feedback with us via our community on Slack or GitHub – this is the best way to let us know if any feature you need is missing!