Develop Spark ETL pipelines with no risk against production data

Delivering high-quality data products requires rigorous testing of pipelines before they are deployed to production. Today, to test against production data, teams must either work with a small sample or create full copies of the entire dataset. Sampling is not enough: the complexity, scale, and variety of production data make it hard to test ETL performance realistically, so a testing environment must let you run your end-to-end pipeline against the full production dataset. With lakeFS, you get the entire production dataset in an isolated environment, with zero copying.
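From a Spark job's point of view, a lakeFS branch is simply a segment of the object path: pointing the same pipeline at `main` or at a short-lived test branch is a matter of swapping the ref in the URI, with no data copied. A minimal sketch of this path convention (the repository, branch, and key names here are hypothetical, not from the webinar):

```python
def lakefs_s3a_path(repo: str, ref: str, key: str) -> str:
    """Build an object path for Spark's S3A connector through the
    lakeFS S3 gateway: s3a://<repository>/<ref>/<key>.
    The ref segment (a branch, tag, or commit) selects which version
    of the data the job sees -- changing it copies nothing."""
    return f"s3a://{repo}/{ref}/{key}"

# The same dataset, viewed through production vs. an isolated test branch
# (illustrative names):
prod_path = lakefs_s3a_path("example-repo", "main", "events/2023/")
test_path = lakefs_s3a_path("example-repo", "test-etl-fix", "events/2023/")

print(prod_path)  # s3a://example-repo/main/events/2023/
print(test_path)  # s3a://example-repo/test-etl-fix/events/2023/

# In a Spark job, you would then read from the branch you want to test on,
# e.g.: df = spark.read.parquet(test_path)
```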

You will learn how to:

  1. Set up your environment in under 5 minutes
    • Integrate lakeFS with Spark
    • Execute Git-like actions using the lakeFS Python Client
  2. Create multiple isolated testing environments without copying data
  3. Easily run multiple tests in your environment using Git-like operations (such as commit, branch, revert, etc.)
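The Git-like operations above can be illustrated with a toy in-memory model (a conceptual sketch, not the lakeFS implementation): commits are immutable snapshots, and a branch is only a pointer to a commit, so creating an isolated test environment or reverting it touches metadata, never the data itself.

```python
class ToyRepo:
    """Conceptual sketch of Git-like versioning over data.
    Commits are immutable snapshots (here, dicts of path -> value);
    a branch is just a pointer to a commit, so branching is zero-copy."""

    def __init__(self):
        self.commits = {"c0": {}}       # commit id -> snapshot
        self.parents = {"c0": None}     # commit id -> parent commit id
        self.branches = {"main": "c0"}  # branch name -> commit id
        self.staged = {}                # branch name -> uncommitted writes

    def create_branch(self, name, source):
        # Zero-copy: record a pointer to the source branch's commit.
        self.branches[name] = self.branches[source]

    def write(self, branch, path, value):
        self.staged.setdefault(branch, {})[path] = value

    def commit(self, branch, commit_id):
        snapshot = dict(self.commits[self.branches[branch]])
        snapshot.update(self.staged.pop(branch, {}))
        self.commits[commit_id] = snapshot
        self.parents[commit_id] = self.branches[branch]
        self.branches[branch] = commit_id

    def revert_last(self, branch):
        # Move the branch pointer back to the parent commit.
        self.branches[branch] = self.parents[self.branches[branch]]

    def read(self, branch, path):
        return self.commits[self.branches[branch]].get(path)


repo = ToyRepo()
repo.write("main", "events.parquet", "v1")
repo.commit("main", "c1")

repo.create_branch("test", "main")   # isolated environment, no data copied
repo.write("test", "events.parquet", "experimental")
repo.commit("test", "c2")            # experiment only affects "test"
repo.revert_last("test")             # back to the production snapshot

print(repo.read("test", "events.parquet"))  # v1
print(repo.read("main", "events.parquet"))  # v1
```

In lakeFS itself these operations are exposed by `lakectl` and the lakeFS Python Client shown in the session; the sketch above only captures the pointer-based, zero-copy semantics that make isolated test environments cheap.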



Amit Kesarwani

Director Solution Engineering, lakeFS

Iddo Avneri

VP Customer Success, lakeFS
