Summary
The multi-team feature described here allows a single deployment of Airflow to serve several/multiple teams that are isolated from each other. It:
- restricts access to team-specific configuration (variables, connections)
- executes the code submitted by team-specific DAG authors in isolated environments (both parsing and execution)
- allows different teams to use different sets of dependencies/execution environment libraries
- allows different teams to use different executors (including multiple executors per team, following AIP-61)
- allows DAGs of different teams to be linked via the “dataset” feature. Datasets can be produced/consumed by different teams - without AIP-82 "External event-driven scheduling" and without configuring credentials for authorisation between different Airflow instances
- allows UI users to see a subset of DAGs / Connections / Variables / DAG Runs etc. of a single team or of multiple teams
- reduces (and allows distributing) the maintenance/upgrade load on DevOps/Deployment Managers who provide services to different teams
The Airflow Survey 2023 shows that multi-tenancy is one of the most requested Airflow features. Of course, multi-tenancy can be understood in different ways. This document proposes the multi-team model chosen by Airflow maintainers: not "customer multi-tenancy", where all resources are isolated between tenants, but a way to deploy Airflow for multiple teams within the same organization - which is what we identified as the meaning of "multi-tenancy" for many of our users. We chose the name "multi-team" to avoid the ambiguity of "multi-tenancy". The ways in which some levels of multi-tenancy can be achieved today are discussed in the “Multi-tenancy today” chapter, and the differences between this proposal and those current approaches are described in “Differences vs. current Airflow multi-team options”.
Motivation
The main motivation is the need of Airflow users to have a single deployment of Airflow where separate teams in the company structure have access only to the subset of resources (e.g. DAGs and related tables referring to dag_ids) belonging to their team. This allows the UI/web server deployment and scheduler to be shared between teams, while the teams keep isolated DAG processing and configuration/sensitive information. It also covers the case of a separate group of DAGs that SHOULD be executed in a separate / high-confidentiality environment, and it decreases the cost of deployment by avoiding multiple schedulers and web servers.
This covers the cases where multiple Airflow deployments are currently used by several departments/teams of the same organization and where maintaining a single (even if more complex) instance of Airflow is preferable over maintaining multiple independent instances.
It allows for partially centralized management of Airflow while delegating execution environment decisions to the teams, makes it easier to isolate workloads, and keeps the option of easier interaction between teams via the shared dataset feature of Airflow.
High level goal summary
There was a lot of debate on what the goals of the proposal **really** are and there are - of course - varying opinions on how it all fits together, but as a "north star" of the proposal the following three goals are the most important:
* less operational overhead for managing multi-team deployments (once AIP-72 is complete) where separate execution environments are important
* virtual asset sharing between teams
* an "admin" and "team sharing" capability where DAGs from multiple teams can be seen in a single Airflow UI (requires custom RBAC and an AIP-56 implementation of Auth Manager - with the KeyCloak Auth Manager being a reference implementation)
Wording/Phrasing
Note that this is not a formal specification, but where emphasised by capital letters, the key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.
Considerations
Multi-tenancy today
There are today several different ways one can run multi-team Airflow:
A. Separate Airflow instance per-tenant
The simplest option is to deploy and manage separate Airflow instances, each with its own database, web server, scheduler, workers and configuration, including the execution environment (libraries, operating system, variables and connections).
B. Separate Airflow instance per-tenant with some shared resources
A slightly more complex option is to reuse some of the resources - to save cost. The database MAY be shared (each Airflow environment can use its own schema in the same database), web server instances can run in the same environment, and the same Kubernetes clusters can be used to execute task workloads.
In either of the solutions A/B, where multiple Airflow instances are deployed, UI access to Airflow (especially with the new Auth Manager feature, AIP-56) can be delegated to a single authentication proxy (for example, a KeyCloak Auth Manager could be implemented that uses a single KeyCloak authentication proxy to provide a unified way of accessing the various Airflow webserver UI instances, exposed under a unified URL scheme).
C. Using a single instance of Airflow by different teams
In Airflow as we have it today, the execution and parsing environments could - potentially - be separated per team.
You could have a separate set of DAG file processors per folder in the Airflow DAG folder, and use different execution environments (libraries, system libraries, hardware) - as well as separate queues (Celery) or separate Kubernetes Pod Templates - to separate workload execution for different teams. Those are enforceable using Cluster Policies. UI access “per team” could also be configured via a custom Auth Manager implementation integrating with an organization-managed authentication and authorization proxy.
In this mode, however, the workloads still have access to the same database and can interfere with each other’s DB entries, including Connections and Variables - which means that there is no “security perimeter” per team that would prevent DAG authors of one team from interfering with the DAGs written by DAG authors of other teams. This lack of isolation between teams is the main shortcoming of such a setup, compared to A. and B., that this AIP aims to solve.
Also, this setup is not "supported" by Airflow configuration in any easy way. Some users achieve it by putting limitations on their users (for example only allowing the Kubernetes Pod Operator), others implement custom cluster policies or code review rules to make sure DAGs from different teams are not mixed, but there is no single, easy-to-use mechanism to enable it.
Differences of the multi-team proposal vs. current Airflow multi-tenant options
How exactly the current proposal differs from what is possible today:
- The resource usage for scheduling and execution can be slightly lower compared to A. or B. The hardware used for scheduling and UI can be shared, while the workloads run “per team” remain separated in a similar way as in A. or B. A single database and single schema is reused for all the teams, but the resource gains and isolation are not very different from those achieved in B. by utilizing multiple, independent schemas in the same database. Decreasing resource utilization is a non-goal of the proposal.
- The proposal - when it comes to maintenance - is a trade-off between the complete isolation of the execution environment available in options A. and B. and the ability to centralize part of the maintenance effort from option C. This has some benefits and some drawbacks - increased coupling between teams (same Airflow version) for example, but also better and more complete workload isolation than option C.
- The proposed solution allows managing separation between teams more easily and more completely than via cluster policies in option C. With this proposal you can trust that teams cannot interfere with each other’s code and execution. Assuming fairness of the scheduler's scheduling algorithms, execution efficiency SHOULD also be isolated between teams. Security and isolation of workflows, and the inability of DAG authors belonging to different teams to interfere with each other, is the primary difference: it brings the isolation and security properties of options A. and B. to a single-instance deployment of Airflow.
- The proposal allows a single unified web server entrypoint for all teams and a single administrative UI for management of the whole “cluster”.
- The proposal utilizes the (currently enhanced and improved) dataset feature, allowing teams to interface with each other via datasets. While this will be possible (as of Airflow 2.9) in options A. and B. using the new APIs that allow sharing dataset events between different instances of Airflow, a multi-team single-instance Airflow allows dataset-driven scheduling between teams without setting up authentication between different, independent Airflow instances.
- The proposal allows users to analyse their Airflow usage and perform correlation/task analysis with a single API/DB source.
Credits and standing on the shoulders of giants.
Prior AIPs that made it possible
It took quite a long time to finalize the concept, mostly because we had to build on other AIPs - functionality that has been steadily added to Airflow and iterated on - and the complete design could only be created once the other AIPs were in good enough shape that we could "stand on the shoulders of giants" and add a multi-team layer on top of them.
The list of related AIPs:
- AIP-12 Persist DAG into DB - the initial decision to persist the DAG structure to the DB (serialization) - which allowed OPTIONAL DAG serialization in the DB.
- AIP-24 DAG Persistence in DB using JSON for Airflow Webserver and (optional) Scheduler - which allowed the "Webserver" to be separated and extracted to "common" infrastructure.
- AIP-43 DAG Processor separation - it allowed the DAG parsing and execution environment to be separated from the Scheduler and the Scheduler to be moved to "common" infrastructure.
- AIP-48 Data Dependency Management and Data Driven Scheduling - introducing the concept of Datasets that can be used as an "interface" between teams.
- AIP-51 Removing Executor Coupling from Core Airflow - which decoupled executors and introduced a clear executor API that allowed implementing hybrid executors.
- AIP-56 Extensible user management - allowing an externally managed and flexible way to integrate with the organization's identity services.
- AIP-60 Standard URI representation for Airflow Datasets - implementing the "common language" that teams can use to communicate via Datasets.
- AIP-61 Hybrid Execution - allowing multiple executors - paving the way for a separate set of executors per team.
- AIP-66 DAG Bundles & Parsing - allowing DAGs to be grouped into bundles, which allows us to associate DAGs with teams via a bundle → team mapping.
- AIP-72 Task Execution Interface aka Task SDK - allowing GRPC/HTTPS communication using the new Task SDK/API created for Airflow.
- AIP-73 Expanded dataset awareness - and its sub-AIPs, redefining datasets into data assets.
- AIP-82 External event driven scheduling in Airflow - which implements externally driven scheduling for datasets.
Initially the proposal was based on the earlier (Airflow 2) approach to DB access isolation based on AIP-44, but since we target this AIP for Airflow 3, it will be based on AIP-72 above.
AIP-44 Airflow Internal API - (in progress) allows separating DB access for components that can also execute code created by DAG authors - introducing a complete security perimeter for DAG parsing and execution.
Design Goals
Structural/architectural changes
The goal of the proposed implementation approach is to minimize the structural and architectural changes needed in Airflow to introduce multi-team features, to allow maximum backwards compatibility for DAG authors, and to keep the overhead for Organization Deployment Managers minimal. All DAGs written for Airflow 2, provided that they use only the Public Interface of Airflow, SHOULD run unmodified in a multi-team environment.
DAG bundles are used as a way to determine team id.
Only a minimal set of database structure modifications is needed to support the multi-team mode of deployment:
- new table with "teams" is defined
- new table where Bundle-name → Team Id is kept (many-to-one relationship) - this table is used to look up team_id for the specific dag being processed (when multi-team is enabled).
- Connection and Variable get an additional optional "team_id" field. Presence of the team_id indicates that only DAGs belonging to a bundle that maps to that team_id can access the connection via the Task SDK. Connections and Variables without team_id can be accessed by any DAG being parsed or executed. The "team_id" will also be available to the Auth Manager to make decisions on granting access to UI users.
- Pools get an additional optional "team_id" field. Presence of the team_id indicates that the pool can only be used in DAGs that are mapped to the team_id via their bundle. The DAG File Processor will fail parsing a DAG that uses a pool belonging to a team other than the one the DAG is mapped to.
- The DAG File Processor can be started with a "--team-id" flag. When that flag is passed, the DAG File Processor will only parse DAGs that belong to bundles that map to the team_id.
No other structural DB changes are needed. There is no "ripple" effect on the DB schema of Airflow.
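To make the additions concrete, here is a minimal sketch of the new tables and the optional team_id column, using plain SQL over SQLite for illustration. The table and column names are assumptions for this sketch, not the final schema:

```python
import sqlite3

# Illustrative schema sketch: a "team" table, a bundle -> team mapping
# (many bundles to one team), and an optional team_id column on an
# existing table such as connection. NULL team_id means "shared".
SCHEMA = """
CREATE TABLE team (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL UNIQUE
);
CREATE TABLE bundle_team (
    bundle_name TEXT PRIMARY KEY,          -- many bundles may map to one team
    team_id INTEGER NOT NULL REFERENCES team(id)
);
CREATE TABLE connection (
    id INTEGER PRIMARY KEY,
    conn_id TEXT NOT NULL UNIQUE,
    team_id INTEGER REFERENCES team(id)    -- NULL => accessible to all teams
);
"""

def team_for_bundle(db: sqlite3.Connection, bundle_name: str):
    """Look up the team id for the bundle a DAG being processed belongs to."""
    row = db.execute(
        "SELECT team_id FROM bundle_team WHERE bundle_name = ?", (bundle_name,)
    ).fetchone()
    return row[0] if row else None
```

The lookup function mirrors how the DAG File Processor would resolve the team for a DAG via its bundle when multi-team mode is enabled.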
Security considerations
The following assumptions have been made when it comes to security properties of multi-team Airflow:
- Reliance on other AIPs: Multi-team mode can only be used in conjunction with AIP-72 (Task Execution Interface aka Task SDK) and Standalone DAG file processor, and AIP-66 Dag Bundles and Parsing.
- Resource access security perimeter: The security perimeter for parsing and execution in this AIP is set at the team boundary. The security model implemented in this AIP assumes that once your workload runs within a team environment (in execution or parsing), it has full access (read/write) to all the resources (DAGs and related resources) belonging to the same team and no access to any resources belonging to another team.
- Task SDK authentication: the JWT token passed to the DAG file processor or worker contains a bundle_name claim which the worker or parsing process cannot modify; this provides protection against accessing connections and variables of another team.
- DAG authoring: DAGs for each team are stored in team-specific bundles
- Execution: Both parsing and execution of DAGs SHOULD happen in isolated environments - i.e. have different executors configured, with different Celery queues for example. Note that Organization Deployment Managers might choose a different approach here and colocate some or all teams within the same environments/containers/machines if separation based on process isolation is good enough for them.
- Triggerer: The triggerer gets an optional `--team-id` parameter, which means that it will only accept deferred tasks coming from DAGs that belong to bundles of the specified team. A triggerer without the `--team-id` parameter will process deferred events for all DAGs, regardless of which team their bundle maps to. The queries selecting tasks MIGHT be optimised for non-multi-team environments to avoid the additional selection criteria.
- Data assets: Data assets are shared between teams via their URI: asset events produced by one team are visible to another team if both use the same URI. DAG authors can produce and consume events across teams when both use the same URI; however, DAG authors consuming events will be able to specify which other teams' events they consume. That allows assets to be effectively shared via URIs between different teams in an organisation, while giving each team control over which other teams can produce the asset events it reacts to.
- UI access: Filtering resources (related to DAGs) in the UI is done by the Auth Manager. For each type of resource (including DAGs, DagRuns, Task Instances, Connections and Variables), the Auth Manager is responsible for filtering them given the team(s) the user belongs to. The Auth Manager gets team_id as a field when determining whether access is granted and can decide which users have access. We currently do not plan to implement such filtering in the FAB Auth Manager, which we consider legacy/non-multi-team capable. The KeyCloak Auth Manager integration should allow mapping team_id to groups of users defined in KeyCloak.
- Custom plugins: In a multi-team environment only future UI plugins should be used - the legacy plugins from Airflow 2 are not supported (they require the FAB Auth Manager). The plugins will have team_id passed to them (for resources that have team_id, and when multi-team is enabled) and plugin creators will be able to use the Auth Manager access mechanisms to determine access to specific resources.
- UI controls and user/team management: With AIP-56, Airflow delegated all responsibility for authentication and authorisation management to the Auth Manager. This continues in the multi-team deployment. The Airflow webserver completely abstracts away from knowing and deciding which resources and which parts of the UI are accessible to the logged-in user. This also means, for example, that Airflow neither cares about nor manages which user has access to which team, whether a user can access more than one team, or whether the user can switch between teams while using the Airflow UI. None of these features are going to be implemented in this AIP; particular Auth Manager implementations might choose different approaches. It MAY be that some more advanced features (for example switching teams for a logged-in user) will require new Auth Manager APIs (for example a way to contribute controls to the Airflow UI to allow such switching). This is outside of the scope of this AIP and, if needed, should be designed and implemented as a follow-up AIP.
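As a rough sketch of the token-claim mechanism above: the server side derives the team from the unforgeable bundle_name claim of an already-verified JWT payload and gates access to a connection accordingly. Function names and the in-memory mapping are assumptions for illustration, not the Task SDK API:

```python
# Hypothetical bundle -> team mapping, normally read from the metadata DB.
BUNDLE_TO_TEAM = {"team1-dags": "team1", "team2-dags": "team2"}

def connection_accessible(token_claims: dict, connection_team_id) -> bool:
    """Decide whether the caller identified by a verified JWT payload
    may read a connection.

    The worker/DAG processor cannot modify the bundle_name claim, so
    the team is resolved server-side from the bundle -> team mapping.
    Connections without a team_id are shared across all teams.
    """
    team_id = BUNDLE_TO_TEAM.get(token_claims.get("bundle_name"))
    if connection_team_id is None:
        return True  # team-less connections are accessible to everyone
    return team_id == connection_team_id
```

The same shape of check would apply to variables; signature verification of the JWT itself is deliberately out of scope of this sketch.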
Design Non Goals
It’s also important to explain the non-goals of this proposal. This aims to help to get more understanding of what the proposal really means for the users of Airflow and Organization Deployment Managers who would like to deploy a multi-team Airflow.
- It’s not a primary goal of this proposal to significantly decrease resource consumption of an Airflow installation compared to the current ways of achieving a “multi-tenant” setup. With security and isolation in mind, we deliberately propose a solution that MAY have a small impact on resource usage, but it’s not a goal to impact it significantly (especially compared to option B above, where the same database can be reused to host multiple, independent Airflow instances). Isolation trumps performance wherever we made design decisions, and we are willing to sacrifice performance gains in favour of isolation.
- It’s not a goal of this proposal to increase the overall capacity of a single instance of Airflow. With the proposed changes, Airflow’s capacity in terms of the total number of DAGs and tasks it can handle is going to remain the same. That also means that the scalability limits of an Airflow instance apply as they do today and - for example - it is not achievable to host many hundreds or thousands of teams on a single Airflow instance and assume Airflow will scale its capacity with every new team.
- It’s not a goal of the proposal to provide a one-stop installation mechanism for “multi-team” Airflow. The goal of this proposal is to make it possible to deploy Airflow in a multi-team way, but the deployment has to be architected, designed and deployed by the Organization Deployment Manager. It won't be a turn-key solution that you can simply enable in (for example) the Airflow Helm Chart. However, the documentation we provide MAY explain how to combine several instances of the Airflow Helm Chart to achieve that effect - still, this will not be "turn-key"; it will be more of a guideline on how to implement it.
- It’s not a goal to decrease the overall maintenance effort involved in responding to the needs of different teams, but the proposal allows some of the responsibilities to be delegated to the teams, while maintaining a “central” Airflow instance common to everyone. There will be different maintenance trade-offs compared to the multi-team options available today - for example, the Organization Deployment Manager will be able to upgrade Airflow once for all the teams, but each team MAY have its own set of providers and libraries that can be managed and maintained separately. Each deployment will have to add its own rules on maintenance and upgrades to maintain the properties of the multi-team environment, where all teams share the same Airflow version but each has its own set of additional dependencies. Airflow will not provide any more tooling for that than exists today - constraint files, reference container images, and documentation on how to build, extend and customise container images based on Airflow reference images. This might mean that when a single Airflow instance has 20 teams, a proper build and customisation pipeline needs to be set up outside of the Airflow environment to manage deployment, rebuilds and upgrades of 20 different container images and to make sure they are properly used in the 20 different team-specific environments deployed as parts of the deployment.
- It's not a goal to support or implement the case where different teams are used to implement branching strategies or DEV/PROD/QA environments for the same team. The goal of this solution is to allow isolation between different groups of people accessing the same Airflow instance, not to support the case where the same group of people would like to manage different variants of the same environment.
Architecture
The multi-team Airflow extends the “Separate DAG processing” architecture described in the Overall Airflow Architecture.
Current "Separate DAG processing" architecture overview
The "Separate DAG processing" architecture brings a few isolation features, but does not address a number of others. What it brings is that execution of code provided by DAG authors can happen in a separate perimeter, isolated from the scheduler and webserver. This means that today you can have a deployment of Airflow where the code a DAG author submits is never executed in the same environment where the Scheduler and Webserver are running. The isolation features it does not bring are database access isolation and the ability of DAG authors to isolate code execution from one another. This means that DAG authors can currently write code in their DAGs that directly modifies Airflow's database, and that can interact (including code injection, remote code execution etc.) with the code that other DAG authors submit. There are also no straightforward mechanisms to limit access to operations / UI actions - currently permissions are managed "per individual DAG", and while it is possible to apply some tooling and permission syncing to apply permissions to groups of DAGs, it's pretty clunky and poorly understood.
Proposed target architecture with multi-team setup
The multi-team setup provides workload isolation, while DB isolation is provided by AIP-72.
Once AIP-72 is in place, this is a relatively small change compared to the current Airflow 3 proposal. It mostly focuses on isolating the DAG file processing and task/triggerer environments, so that teams can use different dependencies, have separate executors configured, and execute their code in isolation from other teams.
The multi-team setup:
- allows a separate set of dependencies for each team
- isolates the credentials and secrets used by each team
- isolates code workloads between teams - code from one team is executed in a separate security perimeter
Implementation proposal
Managing multiple teams at Deployment level
Multi-team is a feature available to Organization Deployment Managers who manage the whole deployment environment. They SHOULD be able to apply configuration and networking features and deploy the Airflow components belonging to each team in an isolated security perimeter and execution environment, to make sure that each team's environment does not interfere with other teams' environments.
It’s up to the Organization Deployment Manager to create and prepare the deployment in a multi-team way. Airflow components will perform consistency checks of the configuration, but Airflow will not provide separate tools and mechanisms to manage teams (add / remove / rename). Adding or removing teams will require manual deployment reconfiguration.
We do not expect to provide tooling/UI/CLI to manage teams; the whole configuration and reconfiguration effort required for such a team deployment should be implemented as part of deployment changes.
Executor support
The implementation utilizes AIP-61 (Hybrid Execution) support, where each team can have its own set of executors defined (with separate configuration). While a multi-team deployment will work with multiple Local Executors, the Local Executor SHOULD only be used for testing purposes, because it does not provide execution isolation for DAGs.
When the scheduler starts, it instantiates all executors of all configured teams.
In the case of the Celery Executor, each team will use a separate broker and result backend. In the future we MAY consider using a single broker/result backend with `team:`-prefixed queues, but this is out of scope for this implementation.
Changes in configuration
Multi-team mode of the scheduler and webserver is controlled by a "core/multi-team" bool flag (default False). Since access to Connections and Variables can be controlled via the JWT token created by the Task SDK, there is no particular need to have several different configuration sets - one per team. The configuration should be stored in a single configuration file; if in the future we want to pass team-specific configuration, we will add such a feature to the Task SDK - the Task SDK will be able to pass per-team configuration to workers, triggerers and the DAG processor. With the first iteration of multi-team, however, this is not needed.
The existing multi-executor configuration will be extended to include a team prefix. The prefix is separated with ":", and entries for different teams are separated with ";":
[core]
executor= team1:KubernetesExecutor,my.custom.module.ExecutorClass;team2:CeleryExecutor
The configuration of executors will also be prefixed with the same team:
[team1:kubernetes_executor]
api_client_retry_configuration = { "total": 3, "backoff_factor": 0.5 }
The environment variables keeping configuration will use ___ (three underscores) to replace ":". For example:
AIRFLOW__TEAM1___KUBERNETES_EXECUTOR__API_CLIENT_RETRY_CONFIGURATION
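A minimal sketch of how the team-prefixed executor setting and the environment variable naming could be handled. The function names and error handling are illustrative assumptions, not the final implementation:

```python
def parse_team_executors(value: str) -> dict[str, list[str]]:
    """Split 'team1:ExecA,ExecB;team2:ExecC' into {team: [executors]}.

    Entries for different teams are separated by ';', the team prefix
    by the first ':', and multiple executors per team by ','.
    """
    result: dict[str, list[str]] = {}
    for entry in value.split(";"):
        team, _, executors = entry.partition(":")
        result[team.strip()] = [e.strip() for e in executors.split(",")]
    return result

def config_env_var(section: str, key: str) -> str:
    """Build the env var name for a team-prefixed config section,
    replacing ':' with three underscores as described above."""
    return f"AIRFLOW__{section.replace(':', '___').upper()}__{key.upper()}"
```

For example, `config_env_var("team1:kubernetes_executor", "api_client_retry_configuration")` yields the env var name shown above.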
Connections and Variables access control
In a multi-team deployment, Connections and Variables may be assigned to a team by specifying team_id in their definition. When a connection or variable has team_id defined, the Task SDK will only provide the connection/variable information to tasks and to the DAG file processor when the task / DAG belongs to the team_id that the connection / variable has set. Connections / Variables without team_id are accessible to all tasks/DAGs being parsed.
Pools
Pools will get an additional nullable team_id field. This means that pools can be either shared or team-specific. For example (depending on deployment decisions), default_pool might be "common" for all teams, or each team can have its own default pool (configurable in the team configuration file/env variable). The DAG File Processor will fail parsing DAG files that use pools belonging to other teams, while scheduling will remain unchanged for pools.
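A sketch of the parse-time pool check described above; the mapping and function names are assumptions for illustration:

```python
# Hypothetical pool -> owning-team mapping; None means the pool is shared.
POOL_TEAMS = {"default_pool": None, "team1_etl_pool": "team1"}

def validate_task_pool(pool: str, dag_team_id: str) -> None:
    """Parse-time check: a DAG may use shared pools (team_id NULL) or
    pools of its own team; referencing another team's pool fails
    parsing of the DAG file."""
    pool_team = POOL_TEAMS.get(pool)
    if pool_team is not None and pool_team != dag_team_id:
        raise ValueError(
            f"Pool {pool!r} belongs to team {pool_team!r}, "
            f"but the DAG is mapped to team {dag_team_id!r}"
        )
```

Scheduling itself remains pool-based and unchanged; only this parse-time validation is team-aware.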
Dataset triggering access control
While any DAG can produce events related to any data asset, the DAGs consuming a data asset will - by default - only receive events that are triggered within the same team, or by an API call from a user that the Auth Manager recognizes as belonging to the same team. DAG authors can specify a list of teams that should additionally be allowed to trigger the DAG (joined by the URI of the data asset) by adding a new asset parameter. Note that this AIP depends on AIP-73 - Expanded Data Awareness - and the exact approach will piggyback on the detailed implementation of Data Assets.
allow_triggering_by_teams = [ "team1", "team2" ]
Note that there is a relation between AIP-82 ("External Event Driven Scheduling") and this part of the functionality. When you have multiple instances of Airflow, you can use shared datasets - "physical datasets" - that several Airflow instances can use: for example, an S3 object produced by one Airflow instance and consumed by another. That requires a deferred trigger to monitor such datasets, and appropriate permissions to the external dataset, and you could achieve a similar result to cross-team dataset triggering (but cross-Airflow). However, the feature of sharing datasets between teams also works for virtual assets, which have no physically shared "objects" and no trigger monitoring for changes in the asset.
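The consume-side filtering described above can be sketched as a single predicate. The `allow_triggering_by_teams` name comes from the example parameter above; the function name and signature are assumptions:

```python
def should_trigger(
    consumer_team: str,
    producer_team: str,
    allow_triggering_by_teams: tuple[str, ...] = (),
) -> bool:
    """Decide whether an asset event should trigger a consuming DAG.

    Same-team events always trigger; cross-team events trigger only
    when the producing team is explicitly listed by the consuming DAG
    via allow_triggering_by_teams.
    """
    return (
        producer_team == consumer_team
        or producer_team in allow_triggering_by_teams
    )
```

This keeps the default closed (no cross-team triggering) while letting the consuming team opt in per producing team.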
Changes in the metadata database
Changes to the database are much smaller than in the previous version of this AIP and they avoid a "ripple effect" through Airflow's codebase.
- new table with "teams" is defined
- new table where Bundle-name → Team Id is kept (many-to-one relationship) - this table is used to look up team_id for the specific DAG being processed. UI configuration for it should be added.
- Connection and Variable get an additional optional "team_id" field. Presence of the team_id indicates that only DAGs belonging to a bundle that maps to that team_id can access the connection via the Task SDK. Connections and Variables without team_id can be accessed by any DAG being parsed or executed.
- Pools get an additional optional "team_id" field. Presence of the team_id indicates that the pool can only be used in DAGs that are mapped to the team_id via their bundle. The DAG File Processor will fail parsing a DAG that uses a pool belonging to a team other than the one the DAG is mapped to.
UI modifications
Changes to the UI: filtering is done by the Auth Manager based on the presence of team_id. The way of determining which teams a user is allowed to see SHOULD be implemented separately by each Auth Manager. Each user served by an Auth Manager can belong to one or many teams, and access to all resources in the UI will be decided based on the team the resource belongs to.
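A rough sketch of the kind of filter an Auth Manager could apply to resource listings; this is not the AIP-56 Auth Manager API, and the data shapes are assumptions for illustration:

```python
def filter_by_team(resources: list[dict], user_teams: set[str]) -> list[dict]:
    """Keep resources whose team_id is NULL (shared between all teams)
    or belongs to one of the teams the logged-in user is a member of."""
    return [
        r for r in resources
        if r.get("team_id") is None or r["team_id"] in user_teams
    ]
```

The same filter shape would be applied per resource type (DAGs, DagRuns, Task Instances, Connections, Variables).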
Per-team deployment
Since each team is deployed in its own security perimeter and with its own configuration, the following properties of the deployment can be defined per team:
- the set of dependencies (possibly container images) used by each team (each component belonging to the team can have a different set of dependencies)
- credential/secrets manager configuration, specified separately for each team in its configuration
Roles of Deployment managers
There are two kinds of Deployment Managers in the multi-team Airflow architecture: Organization Deployment Managers and Team Deployment Managers.
Organization Deployment Managers
Organization Deployment Managers are responsible for designing and implementing the whole deployment: defining teams, defining how security perimeters are implemented, deploying firewalls and physical isolation between teams, and figuring out how to connect the organisation's identity and authentication systems with the Airflow deployment. They also manage the common / shared Airflow configuration, the Metadata DB, and the Airflow Scheduler and Webserver runtime environment, including appropriate packages and plugins (usually appropriate container images), and manage the running Scheduler and Webserver. The design of their deployment has to provide appropriate isolation between the security perimeters: both physical isolation of the workloads run in different security perimeters and implementation and deployment of the appropriate connectivity rules between different team perimeters. The implemented rules have to isolate the components running in different perimeters, so that the components which need to communicate outside of their security perimeter can do so, while making sure that components cannot communicate outside of their security perimeters when it is not needed. This means, for example, that it is up to the Organisation Deployment Manager to design the deployment so that the Internal API / GRPC API components
running inside the team security perimeters are the only components there that can communicate with the metadata Database. All other components should not be able to communicate with the Database directly; they should only be able to communicate with the Internal API / GRPC API component running in the same security perimeter (i.e. the team they run in).
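On Kubernetes, one way (of many) to enforce the rule that only the Internal API / GRPC API component may reach the metadata Database is a NetworkPolicy. The labels (`component: internal-api`, `app: airflow-metadata-db`) and the Postgres port are hypothetical and would depend on the actual deployment; this is a sketch of the connectivity rule, not part of this AIP:

```yaml
# Illustrative sketch: restrict metadata DB ingress to the Internal API component only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-db-ingress
spec:
  podSelector:
    matchLabels:
      app: airflow-metadata-db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              component: internal-api
      ports:
        - protocol: TCP
          port: 5432
```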
When it comes to integrating the organisation's identity and authentication systems with Airflow, such integration has to be performed by the Deployment Manager in the following areas:
- Implementation of an Auth Manager integrated with the organization's identity system, applying appropriate rules so that the operations users who should have permissions to access specific team resources are able to do so. Airflow does not provide any specific implementation of that kind of Auth Manager (and this AIP will not change that); it is entirely up to the Auth Manager implementation to assign appropriate permissions. The Auth Manager API of Airflow, implemented as part of AIP-56, provides all the information necessary for such an organisation-specific Auth Manager implementation to take the decision.
- Implementation of the permissions for DAG authors. Airflow completely abstracts away from the mechanisms used to limit permissions to the folders belonging to each team. There are no mechanisms implemented by Airflow itself; this is no different than today, where Airflow abstracts away from the permissions needed to access the whole DAG folder. It is up to the Deployment Manager to define and implement appropriate rules, group access, and integration with the organisation's identity, authentication and authorisation systems, to make sure that only users who should access the team's DAG files have appropriate access. Choosing and implementing a mechanism to do that is outside of the scope of this AIP.
- Configuration of per-team executors in the "common" configuration.
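The exact syntax for per-team executor configuration is not fixed by this section. One hypothetical shape, extending the comma-separated multiple-executor configuration introduced by AIP-61 with a team prefix, could be:

```ini
# Hypothetical "common" configuration mapping executors to teams.
# The team-prefix syntax is illustrative only; AIP-61 defines comma-separated
# multiple executors, and a per-team variant could extend that notation.
[core]
executor = team_a:CeleryExecutor,team_b:KubernetesExecutor
```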
Team Deployment Managers
Team Deployment Manager is a role that might be given to other people, but it can also be performed by Organization Deployment Managers. The role of the team Deployment Manager is to manage the configuration and runtime environment for their team: that means the set of packages and plugins that should be deployed in the team runtime environment (where the Airflow version should be the same as the one in the common environment) and the configuration specific "per team". They might also be involved in managing access to the DAG folders that are assigned to the team, but the scope and way of doing so is outside of the scope of this AIP. Team Deployment Managers cannot on their own decide on the set of executors configured for their team. Such configuration and decisions should be implemented by the Organization Deployment Manager.
Team Deployment Managers can control the resources used to handle tasks: by controlling the size and nodes of the K8S clusters that are on the receiving end of the K8S executor, by controlling the number of worker nodes handling their specific Celery queue, or by controlling the resources of any other receiving end of the executors they use (AWS Fargate for example). In the current state of this AIP and its configuration, a team Deployment Manager must involve the Organization Deployment Managers to change certain aspects of executor configuration (for example the Kubernetes Pod templates used for their K8S executor). However, nothing prevents future changes to the executors that would let them derive that configuration from remote team configuration (for example, a Pod template could be deployed as a Custom Resource at the K8S cluster used by the K8S executor and pulled by the executor).
Team Deployment Managers must agree on the resources dedicated to their executors with the Organization Deployment Manager. Since currently executors run as sub-processes inside the Scheduler process, there is little control over the resources they use; as a follow-up to this AIP we might implement more fine-grained resource control for teams and executors.
Why is it needed?
The idea of multi-tenancy has been floating in the community for a long time; however, it was ill-defined. It was not clear who the target users would be and what purpose multi-tenancy would serve - however, for many of our users it meant "multi-team".
This document aims to define multi-team as a way to streamline organizations who either manage Airflow themselves or use a managed Airflow, so that they can set up an instance of Airflow where they manage logically separated teams of users - usually internal teams and departments within the organization. The main reason for having a multi-team deployment of Airflow is achieving security and isolation between the teams, coupled with the ability of the isolated teams to collaborate via shared Datasets. Those are the main, and practically the only, needs that the proposed multi-team feature serves. It is not intended to save resources or the maintenance cost of Airflow installation(s), but rather to streamline and make it easy to provide an isolated environment for DAG authoring and execution "per team" within the same deployment - which allows rolling out "organisation"-wide solutions affecting everyone, and allows everyone to easily connect dataset workflows coming from different teams.
Are there any downsides to this change?
- Increased complexity of the Multiple Executors feature, and a new database model
- Complexity of a deployment consisting of a number of separate isolated environments
- Increased responsibility of maintainers with regards to isolation and taking care of setting security perimeter boundaries
Which users are affected by the change?
- The change does not affect users who do not enable multi-team mode. There will be no database changes affecting such users and no functional changes to Airflow.
- Users who want to deploy Airflow with separate subsets of DAGs in isolated teams - particularly those who wish to provide a unified environment for separate teams within their organization, allowing them to work in isolated, but connected environments.
- A database migration is REQUIRED, and a specific deployment managed by the Organization Deployment Manager has to be created.
What defines this AIP as "done"?
- All necessary Airflow components expose --team flags and all Airflow components provide isolation between the team environments (DAGs, UI, secrets, executors, ...)
- Documentation describes how to deploy multi-team environment
- A simple reference implementation of a "demo" authentication and deployment mechanism in a multi-team deployment is implemented (development only)
What is excluded from the scope?
- Sharing a broker/backend for Celery executors between teams. This MAY be covered by future AIPs.
- Implementation of FAB-based multi-team Auth Manager. This is unlikely to happen in the community as we are moving away from FAB as Airflow's main authentication and authorisation mechanism.
- Implementation of a generic, configurable, multi-team aware Auth Manager suitable for production. This is not likely to happen in the future, unless the community implements and releases a Keycloak (or similar) Auth Manager. It's quite possible, however, that 3rd-party Auth Managers will be deployed that provide such features.
- Per-team concurrency and prioritization of tasks. This is unlikely to happen in the future unless we find limitations in the current Airflow scheduling implementation. Note that there are several phases in the process of task execution in Airflow: 1) scheduling, where the scheduler prepares dag runs and task instances in the DB so that they are ready for execution; 2) queueing, where scheduled tasks are picked by executors and made eligible for running; 3) actual execution, where (having sufficient resources) executors make sure that the tasks are picked from the queue and executed. We believe that 1) scheduling is sufficiently well implemented in Airflow to avoid starvation between teams, and 2) and 3) are handled by separate executors and environments respectively. Since each team will have its own set of executors and its own execution environment, there is no risk of starvation between teams in those phases as well, and there is no need to implement separate prioritisation.
- Resource allocation per-executor. In the current proposal, executors run as sub-processes of the Scheduler and we have very little control over their individual resource usage. This should not cause problems as the resource needs of executors are generally limited; however, in more complex cases and deployments it might become necessary to limit those resources at a finer-grained level (per executor, or per all executors used by the team). This is not part of this AIP and will likely be investigated and discussed as a follow-up.
- Turn-key multi-team Deployment of Airflow (for example via Helm chart). This is unlikely to happen. Usually multi-team deployments require a number of case-specific decisions and organization-specific integrations (for example integration with the organization's identity services, per-team secrets management, etc.) that make 'turn-key' solutions unsuitable for such deployments.
- Running multiple schedulers, one per team. While it should be possible if we add support to select DAGs "per team" per scheduler, this is not implemented in this AIP and is left for the future.
...

