If you think about it, lakeFS is about two things: version control and big data. We see ourselves as bringing version control to big data, bridging a workflow gap that currently exists between working with data and working with code.
This gap is purely artificial; there's no conceptual reason why the two should require different workflows. In fact, since code and data are almost always coupled (code runs over data), in many cases it makes sense to treat them as one entity from a source-control perspective.
This is what we are making possible with the lakeFS project — version control at scale. We believe this is a crucial piece of what’s missing from the developer experience of the most performant data platforms.
Processing data at scale isn't enough anymore; we want to do so with grace.
If I've managed to pique your interest, check out the demo video below, which goes into more detail and shows lakeFS in action!