If you think about it, lakeFS is about two things: version control and big data. We see ourselves as bringing version control to big data, bridging the workflow gap that currently exists between working with data and working with code.
This gap is purely artificial: there is no conceptual reason why the two should require different workflows. In fact, since code and data are almost always related, in the form of executed code running over bytes of data, it often makes sense to treat them as one entity from a source control perspective.
This is what we are making possible with the lakeFS project — version control at scale. We believe this is a crucial piece of what’s missing from the developer experience of the most performant data platforms.
Processing data at scale isn’t enough anymore; we want to do so with grace.
If I’ve managed to pique your interest, check out the demo video below, which goes into more detail and shows lakeFS in action!
About lakeFS
The lakeFS project is an open source technology that provides a git-like version control interface for data lakes, with seamless integration to popular data tools and frameworks.
Our mission is to maximize the manageability of open source data analytics solutions that scale.
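To give a flavor of what a git-like interface over a data lake looks like, here is a minimal sketch of a branch-commit-merge cycle using the lakectl CLI. It assumes a running lakeFS installation; the repository name (example-repo), branch name (experiment), and file paths are hypothetical.

```shell
# Create an isolated branch from main to experiment on safely
lakectl branch create lakefs://example-repo/experiment \
  --source lakefs://example-repo/main

# Upload a data file to the new branch (main is untouched)
lakectl fs upload lakefs://example-repo/experiment/events/2021/01/data.parquet \
  --source ./data.parquet

# Commit the change with a message, just like in git
lakectl commit lakefs://example-repo/experiment \
  -m "Add January events data"

# Once validated, merge the branch back into main atomically
lakectl merge lakefs://example-repo/experiment lakefs://example-repo/main
```

If the data turns out to be bad, the branch can simply be discarded instead of merged, and production data on main is never affected.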