In Apache Airflow, workflows are defined as code. Airflow builds complex workflows from elements such as operators and DAGs. A DAG is a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. The developer can describe these relationships in several ways. Task logic lives in operators. Apache Airflow ships with many ready-made operators that integrate with a wide range of services, but you often need to write your own. Tasks can communicate with each other through XCom, which stores its data in the metadata database.
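The core idea above (a DAG is tasks plus dependency edges, and the scheduler only runs a task once everything upstream has finished) can be sketched without Airflow at all. The snippet below is a toy illustration, not the Airflow API; the task names are hypothetical, and Python's standard-library `graphlib` stands in for Airflow's scheduler:

```python
from graphlib import TopologicalSorter

# Hypothetical task ids -- a real DAG would use its own.
# Each key may run only after every task in its dependency set has finished.
dependencies = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"transform", "load"},
}

# A DAG is exactly this structure: tasks plus dependency edges. The scheduler
# repeatedly picks tasks whose upstream dependencies are complete, which
# amounts to walking the graph in topological order.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # "extract" first, "notify" last
```

The same ordering guarantee is what Airflow enforces at runtime, regardless of how the dependencies were declared in the DAG file.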
- A page on how to create a DAG
- Revamping the page on DAG scheduling
- Tips for specific DAG conditions, such as rerunning a failed task
- A page on developing custom operators
- Describing mechanisms that are important when creating an operator, such as template fields, UI color, hooks, connections, etc.
- Describing the division of responsibility between operators and hooks
- Things to keep in mind when dealing with shared resources (e.g. connections, hooks)
- A page that explains how to describe the relations between tasks
  - The bitshift operators `>>` and `<<`
  - Helper methods such as `chain`
- A page that describes the communication between tasks
- Revamping the page related to macros and XCom
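To make the pages above concrete, the mechanisms they would document (`>>`/`<<` dependency operators, the `chain` helper, and XCom-style communication) can be mimicked in plain Python. This is a toy sketch, not the real Airflow classes; the `Task` class, the dict-based `xcom` store, and all task ids are invented for illustration:

```python
class Task:
    """Toy stand-in for an Airflow operator; NOT the real Airflow API."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.downstream = []

    def __rshift__(self, other):   # t1 >> t2 : run t2 after t1
        self.downstream.append(other)
        return other               # returning `other` allows t1 >> t2 >> t3

    def __lshift__(self, other):   # t1 << t2 : run t1 after t2
        other.downstream.append(self)
        return other

def chain(*tasks):
    """Link tasks linearly, in the spirit of Airflow's chain helper."""
    for upstream, downstream in zip(tasks, tasks[1:]):
        upstream >> downstream

# XCom is conceptually a shared key/value store kept in the metadata
# database: one task pushes a value, a downstream task pulls it by key.
xcom = {}
extract, transform, load = Task("extract"), Task("transform"), Task("load")
chain(extract, transform, load)               # same as extract >> transform >> load
xcom[("extract", "row_count")] = 42           # a push in the real API
print(xcom[("extract", "row_count")])         # a pull in the real API
print([t.task_id for t in extract.downstream])
```

The documentation pages would cover the real equivalents: operator `>>`/`<<` overloads, `chain`, and `xcom_push`/`xcom_pull`.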
There are two relevant documents: CONTRIBUTING.md and BREEZE.rst, but in the end we may want to consider a different structure.
- On-boarding documentation chapter/page that will be easily discoverable for new developers joining the Apache Airflow community, or for someone who wants to start Apache Airflow development on a new PC. Ideally this would be a step-by-step guide or some kind of video guide - generally something easy to follow. In particular, it should be clear that there are different local development environments depending on your needs and experience - from a local virtualenv, through a Docker image, to a full-blown replica of the CI integration testing environment. Maybe some kind of interactive tutorial would be good as well.
- Good practices on how to ensure continuous and trouble-free operation of the system
- Ways and mechanisms for ensuring system monitoring
- Description of the SLA mechanism
- Monitoring a running Apache Airflow instance, performing health checks, etc.
- A step-by-step guide to setting up monitoring for Apache Airflow with the two most common monitoring tools - Prometheus and Grafana
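As a starting point for that guide, the usual approach is to have Airflow emit StatsD metrics and bridge them to Prometheus via a `statsd_exporter`. The fragments below are a hedged sketch: section and option names follow the Airflow 2 `[metrics]` configuration, but hostnames and ports are placeholder values for a local setup.

```ini
# airflow.cfg -- turn on StatsD metric emission (host/port are examples)
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
```

```yaml
# prometheus.yml -- scrape the statsd_exporter that translates StatsD
# metrics into the Prometheus exposition format
scrape_configs:
  - job_name: airflow
    static_configs:
      - targets: ["localhost:9102"]   # statsd_exporter's default metrics port
```

Grafana would then be pointed at this Prometheus instance as a data source to build dashboards over the `airflow_*` metrics.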