The lakeFS team

June 19, 2023

A Baker’s Dozen: the top 13 data orchestration tools in 2023

According to Gartner, over 87% of businesses fail to make the most of their data. The primary reasons behind such a low level of business intelligence and analytics maturity are siloed data and the complexity of turning data into useful insights.

Companies find it challenging to utilize their data due to the sheer complexity of managing the data pipelines that extract business value from the raw data gathered by the organization. Data pipelines are the basis of every insight extracted from the data; whether you’re running simple analytics or ML/AI workloads, managing high-quality data pipelines is a must. This is where data orchestration can help, bringing you closer to data maturity.

There is a growing number of tools for data orchestration. Let’s take a look at what this space looks like in 2023 and review the data orchestration tools teams use today.

Quick recap: What is data orchestration all about?

Data orchestration is the practice of automating end-to-end data pipelines and centralizing their administration and oversight. Companies use data orchestration to automate and expedite data-driven decision-making.

What does an orchestrated process look like?

Wrapping entire data operations into a single solution is possible thanks to the Infrastructure as Code approach, which lets you specify all resources required for a data pipeline. 

For example, you can define not just the pipeline resources that transform the data, such as Spark or Trino, but also data storage resources, notification settings, and alarms for those services. 

Data orchestration tooling
Source: stacktonic.com

A directed acyclic graph (DAG) is a graphical representation of your data models and the relationships between them; essentially, it depicts the data pipeline that the orchestration tool manages. The interfaces of orchestration tools offer an easy way to build, update, duplicate, and monitor the data pipeline through its DAG representation.

13 data orchestration tools teams use in 2023

1. Apache Airflow

Apache Airflow orchestration tooling
Source: Airflow docs

Links: Website | Docs | GitHub

What is it?

Apache Airflow is an open-source dataflow orchestration tool for authoring, scheduling, and monitoring processes programmatically. It provides a comprehensive collection of operators for a variety of data processing systems, including Hadoop, Spark, and Kubernetes. It also comes with a web-based user interface for organizing and monitoring processes.

What can you do with it?

When specified as code, workflows become more manageable, versionable, testable, and collaborative. This is what Airflow helps with. Teams use it to create workflows as directed acyclic graphs (DAGs) of tasks. 

The Airflow scheduler runs your tasks on an array of workers while adhering to the requirements you specify. The intuitive user interface makes it simple to see pipelines in production, monitor progress, and fix issues as they arise.

Airflow works well for workflows that are primarily static and change slowly. When the DAG structure is consistent from run to run, it clearly defines the unit of work and ensures continuity. Although Airflow isn’t a streaming solution, it’s frequently used to handle real-time data by extracting data in batches from streams.
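
To make this concrete, here is a minimal sketch of an Airflow DAG using the TaskFlow API. The DAG name, task logic, and schedule are illustrative only, and parameter names can differ slightly between Airflow versions.

```python
# A minimal Airflow DAG sketch (Airflow 2.x TaskFlow API); names and logic are illustrative.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2023, 1, 1), catchup=False)
def example_etl():
    @task(retries=2)
    def extract():
        # Pull a batch of raw records (placeholder data).
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(records):
        # Apply a simple transformation to each record.
        return [{**r, "value": r["value"] * 2} for r in records]

    @task
    def load(records):
        print(f"loading {len(records)} records")

    # Chaining the calls defines the DAG's task dependencies.
    load(transform(extract()))


example_etl()
```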

2. Astronomer

Astronomer data orchestration
Source: Astronomer docs

Links: Website | Docs | GitHub

What is it?

Astronomer makes managing Airflow cost-effective with a managed Airflow service designed to increase developer productivity and data maturity.

What can you do with it?

Astronomer has a scheduler that allows you to create Airflow environments and manage DAGs, users, logs, alarms, and version upgrades with a single click. This opens the door to executing DAGs consistently and at scale.


Astronomer also helps engineers write DAGs faster thanks to notebook and command-line interfaces that simplify the process of writing, deploying changes, and automating data testing before it goes live. 

The solution also helps you remove technical debt and roll out Airflow best practices across the organization.

3. Dagster

Dagster orchestration tooling
Source: Dagster docs

Links: Website | Docs | GitHub

What is it?

Dagster is a tool that features an intuitive user interface for orchestrating workflows for machine learning, analytics, and ETL (Extract, Transform, Load). 

What can you do with it?

It doesn’t matter if you develop your pipelines in Spark, SQL, dbt, or any other framework. You can deploy pipelines locally or on Kubernetes using the platform, or even build your own deployment infrastructure. 

Dagster will provide you with a single view of pipelines, tables, ML models, and other assets. It offers an asset management tool for tracking workflow outcomes, enabling teams to create customized self-service solutions.


The web interface (Dagit) allows anyone to view and explore the assets that pipelines generate. It also alleviates dependency headaches: because codebases are separated by the repository model, one process doesn’t impact another. 
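
As a rough illustration of this asset-centric approach, here is a minimal sketch using Dagster’s asset API; the asset names and logic are made up for the example.

```python
# A minimal Dagster asset sketch; asset names and data are illustrative.
from dagster import Definitions, asset


@asset
def raw_orders():
    # Load raw data (placeholder records).
    return [{"order_id": 1, "amount": 40.0}, {"order_id": 2, "amount": 15.5}]


@asset
def large_orders(raw_orders):
    # Downstream asset: Dagster infers the dependency from the argument name.
    return [o for o in raw_orders if o["amount"] > 20]


# Register the assets so the Dagster UI (Dagit) can materialize and explore them.
defs = Definitions(assets=[raw_orders, large_orders])
```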

4. Prefect

Prefect data orchestration
Source: Prefect docs

Links: Website | Docs | GitHub

What is it?

Prefect is an automated workflow management application built on top of the open-source Prefect Core workflow engine. Prefect Cloud is a fully hosted and ready-to-deploy backend for Prefect Core. Prefect Core’s server is a lightweight, open-source alternative to Prefect Cloud. 

What can you do with it?


This workflow management system helps you add semantics to data pipelines, such as retries, logging, dynamic mapping, caching, or failure alerts.
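
For illustration, here is a minimal sketch of a Prefect 2.x flow that adds retries and logging to a simple pipeline; the task names, retry settings, and logic are placeholders.

```python
# A minimal Prefect flow sketch (Prefect 2.x); task bodies and settings are illustrative.
from prefect import flow, task


@task(retries=3, retry_delay_seconds=10)
def fetch_records():
    # e.g. call an API that may fail transiently (placeholder data).
    return list(range(5))


@task
def summarize(records):
    return sum(records)


@flow(log_prints=True)
def daily_summary():
    records = fetch_records()
    print(f"total: {summarize(records)}")


if __name__ == "__main__":
    daily_summary()
```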

To simplify workflow orchestration, Prefect provides UI backends that automatically augment the Prefect Core engine with the GraphQL API. 

With the cloud version, you get features like permissions and authorization, performance upgrades, agent monitoring, secure runtime secrets and parameters, team management, and SLAs. Everything is automated; all you have to do is transform jobs into processes, and the solution will take care of the rest.

5. Mage

Mage orchestration tooling
Source: Mage docs

Links: Website | Docs | GitHub

What is it?

Mage is a free and open-source data pipeline tool for data transformation and integration.

What can you do with it?

Mage offers a simple development experience for folks who develop in Airflow. You can start programming locally with a single command or use Terraform to establish a dev environment on your cloud.

For maximum flexibility, Mage allows you to write code in Python, SQL, or R in the same data pipeline. Each stage in the pipeline is a separate file that contains reusable, tested code with data validations. No more DAGs filled with spaghetti code!

And you’ll receive feedback immediately in an interactive notebook UI without having to wait for your DAGs to complete testing. On top of that, every code block in your pipeline generates data that can be versioned, partitioned, and cataloged for later use.

6. Luigi

Luigi data orchestration
Source: Luigi docs

Links: Docs | GitHub

What is it?

Luigi is a lightweight, open-source Python tool designed for workflow orchestration and batch job execution, great for handling complex pipelines. It provides a variety of services for controlling dependency resolution and workflow management, allowing visualization, error management, and command-line integration. 

What can you do with it?


Luigi comes in handy for long-running, sophisticated batch operations. It handles all workflow management tasks that may take a long time to complete, allowing engineers to focus on real jobs and their dependencies. It also comes with a toolkit that includes popular project templates. 

All in all, Luigi gives you cutting-edge file system abstractions for HDFS and local files, making all file system operations atomic. If you’re looking for an all-Python solution that handles workflow management for batch task processing, Luigi is the tool for you.
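
Here is a minimal sketch of a Luigi task chain with a declared dependency and file targets; the task names, file paths, and logic are illustrative.

```python
# A minimal Luigi sketch: one task depends on another via requires(); paths are illustrative.
import luigi


class ExtractRecords(luigi.Task):
    def output(self):
        return luigi.LocalTarget("raw_records.txt")

    def run(self):
        # Writes through the target, so the file appears atomically on success.
        with self.output().open("w") as f:
            f.write("\n".join(str(i) for i in range(10)))


class SummarizeRecords(luigi.Task):
    def requires(self):
        # Declares the dependency; Luigi runs ExtractRecords first if needed.
        return ExtractRecords()

    def output(self):
        return luigi.LocalTarget("summary.txt")

    def run(self):
        with self.input().open() as f:
            total = sum(int(line) for line in f)
        with self.output().open("w") as f:
            f.write(str(total))


if __name__ == "__main__":
    luigi.build([SummarizeRecords()], local_scheduler=True)
```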

7. Apache Oozie

Apache Oozie orchestration tooling
Source: Apache Oozie

Links: Website | Docs | GitHub

What is it?

Apache Oozie is an open-source solution that provides several operational functions for a Hadoop cluster, especially cluster task scheduling. It enables cluster managers to construct complicated data transformations from several component activities. This gives you more control over your jobs and makes it easy to repeat them at preset intervals. At its heart, Oozie helps teams get more out of Hadoop.

What can you do with it?

Oozie is basically a Java web application used to manage Apache Hadoop processes. It combines many jobs sequentially into a single logical unit of work.

Oozie is connected with the Hadoop stack, with YARN as its architectural center, and it supports Apache MapReduce, Apache Pig, Apache Hive, and Apache Sqoop jobs. It can also schedule system-specific tasks such as Java applications or shell scripts.

8. Flyte

Flyte data orchestration
Source: Flyte docs

Links: Website | Docs | GitHub

What is it?

Flyte is a structured programming and distributed processing platform for building machine learning and data processing workflows that are highly concurrent, scalable, and maintainable. 

What can you do with it?

Flyte’s architecture allows you to establish a separate repo, deploy it, and grow it without affecting the rest of the platform. It’s based on Kubernetes and provides portability, scalability, and reliability. 

Flyte’s user interface is adaptable, straightforward, and simple to use for various tenants. It offers users parameters, data lineage, and caching to help them organize their processes, as well as ML orchestration tools.

The platform is dynamic and adaptable, with a large range of plugins to help with workflow building and deployment. You can repeat, roll back, and experiment with workflows, and share them to help the entire team speed up the development process.
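
As a rough sketch, here is a small flytekit workflow with a cached task; the task names, types, and caching settings are illustrative only.

```python
# A minimal flytekit sketch: two typed tasks composed into a workflow; values are illustrative.
from typing import List

from flytekit import task, workflow


@task(cache=True, cache_version="1.0")
def clean(values: List[int]) -> List[int]:
    # Cached task: re-runs with the same inputs reuse the earlier result.
    return [v for v in values if v >= 0]


@task
def average(values: List[int]) -> float:
    return sum(values) / len(values) if values else 0.0


@workflow
def scoring_pipeline(values: List[int] = [1, -2, 3, 4]) -> float:
    return average(values=clean(values=values))


if __name__ == "__main__":
    # Workflows can be executed locally for testing before registering them on a cluster.
    print(scoring_pipeline())
```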

9. DAGWorks

DAGWorks orchestration tooling
Source: DAGWorks website

Links: Website | Docs | GitHub

What is it?

DAGWorks is an open-source SaaS platform that helps companies speed up the creation and management of ML ETLs in a collaborative, self-service way, leveraging existing MLOps and data infrastructure.

What can you do with it?

Self-service allows domain modeling specialists to iterate on ML models quickly and without handoff. DAGWorks helps engineers create workflow code and maintain integrations with MLOps tools of their choosing, all in a self-service way.

DAGWorks allows you to easily write unit tests and integration tests and validate your data at runtime. Once done, you can run your dataflow anywhere Python can: batch, streaming, online, etc.

10. Shipyard

Shipyard data orchestration
Source: Shipyard docs

Links: Website | Docs | GitHub

What is it?

Shipyard is a platform for data engineers looking to build a robust data infrastructure from the ground up by linking data tools and processes and improving data operations.

What can you do with it?

Shipyard provides low-code templates that you can configure visually, eliminating the need to write code to design data workflows and allowing you to get your work into production faster. If the existing templates don’t fit your needs, you can always use scripts written in your preferred language to integrate any internal or external procedure into your workflows.

The Shipyard platform incorporates observability and alerting to ensure that breakages are spotted before they are noticed downstream by business teams.

All in all, Shipyard helps data teams do more without relying on other teams or worrying about infrastructure difficulties, while simultaneously guaranteeing that business teams trust the data made accessible to them.

11. Kestra

Kestra orchestration tooling
Source: Kestra docs

Links: Website | Docs | GitHub

What is it?

Kestra is an open-source, event-driven orchestrator that fosters communication between developers and business users by simplifying data processes. It enables you to develop dependable processes and maintain them with confidence by integrating Infrastructure as Code best practices into data pipelines.

What can you do with it?

Everyone who stands to benefit from analytics can participate in the data pipeline construction process thanks to the declarative YAML interface for building orchestration logic in Kestra. 

When you modify a process through the UI or an API call, the YAML specification is automatically updated. As a result, even when workflow components are adjusted through other means, the orchestration logic remains declared in code.

12. Datorios

Datorios data orchestration
Source: Datorios docs

Links: Website | Docs 

What is it?

Datorios is a tool that gives engineers total deployment freedom in a collaborative interface, with event-level transparency across all pipelines via in-depth analytics. Built-in auto-rectification enables rapid feedback loops and faster outcomes.

What can you do with it?

Datorios offers a collaborative, developer-first interface where you can easily identify and isolate errors in pipeline development. You can also test the impact of changes in real time and test pipeline components sequentially prior to deployment. 

The solution lets you clean, join, or duplicate data from any source. It’s available for cloud and on-premise deployment.

13. MLtwist

Links: Website

What is it? 

MLtwist integrates data across disparate data labeling and annotation systems, with the goal of allowing data practitioners to streamline their work and be more productive. The solution has 75+ integrated data labeling systems. 

What can you do with it?

MLtwist provides integrations that transfer data assets to the best labeling systems for the job, monitor their performance, and convert the ready annotations into the unique JSON file format required by ML teams.

The benefit of using MLtwist lies in handling all the tasks that go into data labeling operations: developing workflows, testing the right data labeling platforms, establishing guidelines, training workforces, and quality control. The solution offers a good deal of flexibility, so it’s a good match for practitioners who don’t feel comfortable delegating the entire scope of the job and want to stay involved. 

Data versioning is a key part of any data orchestration workflow

In any data orchestration workflow, one critical aspect that cannot be overlooked is data versioning. Data versioning refers to the practice of systematically managing and tracking different versions of data assets throughout the entire data pipeline. It plays a crucial role in ensuring the consistency, reproducibility, and traceability of data-driven processes.

Here are some key benefits of incorporating data versioning into your data orchestration workflow:

  • Reproducibility: Data versioning allows you to precisely reproduce and recreate past results by maintaining a record of all the data versions used in a particular analysis or model training. This is essential for auditability, compliance, and debugging purposes.
  • Risk Management: Data versioning helps mitigate risks associated with data errors or unexpected changes. If an issue arises at any stage of the pipeline, having a complete history of data versions allows you to identify the problem quickly, immediately roll back to a previous version of the data, and take corrective actions more effectively.
  • Reprocessing: Orchestration tools allow you to easily reprocess data by re-executing data pipelines. However, these can be long-running, compute-heavy, expensive pipelines. Therefore, it can be beneficial to reprocess from a specific point of the data pipeline (a commit), rather than reprocessing the ENTIRE dataset, in cases like late-arriving data or bug fixes. 
  • Isolation & atomicity at the complete data lake level: Leveraging the ability to promote data makes it possible to ensure that all changes to the data lake are performed as a single, consistent unit: you can run a long-running ETL in isolation and, only once it has completed, atomically promote all the changes across multiple tables and datasets. 
  • CI/CD for data: A data version control system enables efficient and reliable development practices that are key stepping stones for CI/CD, allowing data teams to iterate rapidly, collaborate effectively, and deploy data changes with confidence. By implementing these best practices, DataOps teams can enable automation that continuously ensures data quality.

By integrating with the data pipeline, a data version control tool provides a versioning layer for large-scale data sets. It enables data teams to track changes, manage multiple “parallel” versions of the data (branches), and create snapshots of data at different stages of the pipeline. 

This capability allows for precise control over data versions, ensuring reproducibility, collaboration, and risk management. This way, data practitioners can easily roll back to previous data states, compare different versions, and accurately reproduce results, thus enhancing the overall reliability of data processing workflows.
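
As a rough sketch of the branch-commit-merge pattern described above, the snippet below uses the high-level lakefs Python SDK. The repository, branch, and object names are illustrative, and the exact method signatures should be verified against the SDK documentation for your version.

```python
# A sketch of isolated ETL runs with lakeFS: branch, write, commit, then promote atomically.
# Repository and object names are illustrative; verify method names against the lakefs SDK docs.
import lakefs

repo = lakefs.repository("analytics-lake")  # assumed existing repository

# 1. Run the long-running ETL in isolation on its own branch.
etl_branch = repo.branch("nightly-etl").create(source_reference_id="main")

etl_branch.object("tables/orders/part-0001.parquet").upload(
    data=b"...transformed records...",  # placeholder payload written by the ETL
)

# 2. Snapshot the result as a commit (a point you can later reprocess from).
etl_branch.commit(message="nightly ETL output", metadata={"job": "orders_etl"})

# 3. Atomically promote all changes to main once the run has been validated.
etl_branch.merge_into(repo.branch("main"))

# If something looks wrong downstream, main can be reverted to the previous
# commit instead of re-running the whole pipeline.
```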

Need help getting started?

Git for Data – lakeFS

  • Get Started