Develop Spark ETL pipelines with no risk against production data

Description

Delivering high-quality data products requires rigorous testing of pipelines before deploying them to production. Today, testing against realistic data means either working with a small subset of the data or creating multiple full copies of it. Testing against sample data is not good enough, while maintaining full copies is costly and time consuming.
We will demonstrate how to test against the entire production data set with zero-copy isolation. You will learn how to:
  1. Set up your environment in under 5 minutes
  2. Create multiple isolated testing environments without copying data
  3. Run multiple tests in your environment using git-like operations such as commit, branch, and revert (a minimal sketch follows this list)
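To give a flavor of what this looks like in practice, below is a minimal sketch of a Spark ETL test running against an isolated, zero-copy branch of production data exposed through the lakeFS S3 gateway. It assumes a reachable lakeFS installation, a repository, PySpark with the Hadoop S3A connector available, and lakeFS credentials; the endpoint, repository name, branch names, paths, and keys are placeholders, not details from the webinar itself.

```python
# Minimal sketch: run a Spark ETL test against an isolated, zero-copy
# branch of production data via the lakeFS S3 gateway.
# Endpoint, repository, branch, paths, and credentials are placeholders.
from pyspark.sql import SparkSession

LAKEFS_ENDPOINT = "https://lakefs.example.com"  # assumed lakeFS endpoint
LAKEFS_ACCESS_KEY = "AKIA..."                   # lakeFS access key (placeholder)
LAKEFS_SECRET_KEY = "..."                       # lakeFS secret key (placeholder)

spark = (
    SparkSession.builder.appName("etl-test-on-branch")
    # Point the S3A connector at lakeFS instead of S3, so paths of the form
    # s3a://<repo>/<branch>/<path> resolve against lakeFS branches.
    .config("spark.hadoop.fs.s3a.endpoint", LAKEFS_ENDPOINT)
    .config("spark.hadoop.fs.s3a.access.key", LAKEFS_ACCESS_KEY)
    .config("spark.hadoop.fs.s3a.secret.key", LAKEFS_SECRET_KEY)
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read production data from a dedicated test branch ("etl-test") that was
# branched off "main" without copying any objects, for example with:
#   lakectl branch create lakefs://example-repo/etl-test \
#       --source lakefs://example-repo/main
events = spark.read.parquet("s3a://example-repo/etl-test/events/")

# Run the transformation under test and write its output back to the same
# isolated branch; production data on "main" is never touched.
daily_counts = events.groupBy("event_date").count()
daily_counts.write.mode("overwrite").parquet(
    "s3a://example-repo/etl-test/output/daily_counts/"
)
```

If the test passes, the branch can be committed and merged, or simply reverted and recreated for the next run, using the same git-like operations mentioned above; the Spark job itself does not change between the test branch and production.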

Speakers:

LakeFS

  • Get Started
  • Git for Data - What, How and Why Now?

    Read the article