Discussion Thread: AIP-26 Discussion

JIRA: AIRFLOW-5606

Production image status
Helm Chart status: Released!


As the cloud goes Kubernetes-native, Docker (or, more precisely, containers) becomes the default mechanism for packaging and running applications. We currently use Docker images for Continuous Integration (AIP-10 Multi-layered and multi-stage official Airflow CI image) and for the local development environment (AIP-7 Simplified development workflow). There are also several images that are not maintained directly by the Airflow community but are used by users to run Airflow via Docker.

The images often used are:

The chart (and the corresponding Puckel image) served well in the past, but if we want to move forward, we need to make sure that the image, charts, etc. are driven and managed by the community, following the release schedule and processes of the Apache Software Foundation.

The current Helm chart uses the Puckel image, which was good for quite a while, but it was never really part of the official Apache community effort. For example, one of the rules of releasing software is that any software formally released by the project must be voted on by the PMC.

By bringing the official image into the apache/airflow repository and making it part of the Airflow release process, we can release new images at the same time new versions of Airflow are released. Additionally, we can improve maintainability - for example, add more detailed instructions and guidelines on how to run Airflow in a production environment. We can also make sure we have some optimisations in place and support a wider audience - hopefully we can get feedback from people using the official Airflow image/chart and address it in the longer term. Once we incorporate it into our community process, it will be easier for everyone to contribute to it - in the same way they contribute to the Airflow code.


What change do you propose to make?

The proposal is to extend the current CI-optimised Docker images of Airflow to also build production-ready images. The production image should retain the properties of the current image but should be production-optimised (size, simplicity, execution speed) rather than CI-optimised (speed of incremental rebuilds). The properties to maintain:

1) It should be built after every master merge (so that we quickly know if it breaks)

2) It should contain:

  • libraries needed to run Apache Airflow
  • client libraries required to connect to external services (databases, etc.)
  • Apache Airflow itself with all production-needed extras

3) It should be available in all the Python flavours that Apache Airflow supports

4) It should be incrementally rebuilt whenever dependencies change.

5) Whenever a new version of the Python base image is released with security patches, the master image should automatically be rebuilt using it.

6) Whenever a new version of the Python base image is released, the already-released images should be rebuilt using the latest security patches.

7) Running `docker build .` in Airflow's main directory should produce a production-ready image

8) The image should be published at

9) It should use the same build mechanisms as described in AIP-10

10) The naming convention proposed follows AIP-10, with Python 3.6 as the default image:

Master-build images: airflow:master-python3.5, airflow:master-python3.6, airflow:master-python3.7, airflow:master==airflow:master-python3.6

Release images: airflow:1.10.6-python3.5, airflow:1.10.6-python3.6, airflow:1.10-python3.6, airflow:latest==airflow:1.10.6-python3.6

11) No NPM in the final image (just the compiled assets)

12) The official Helm chart for Apache Airflow should use the official production-ready Docker images.

13) The official image should be used in the places that are a prominent way of distributing the image (possibly Bitnami etc.).
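Points 7 and 11 taken together suggest a multi-stage build: a throwaway node stage compiles the webserver assets and the final stage copies in only the compiled output, so the production image never contains node/npm. A minimal sketch, assuming illustrative stage names, asset paths, and an example extras list (this is not the actual Airflow Dockerfile):

```dockerfile
# --- Stage 1: asset builder (node/npm exist ONLY here) ---------------
FROM node:12-slim AS www-builder
COPY airflow/www /opt/airflow/www
WORKDIR /opt/airflow/www
# Compile the webserver assets (exact npm scripts are illustrative)
RUN npm ci && npm run build

# --- Stage 2: production image ---------------------------------------
FROM python:3.6-slim
# Install Airflow with production-needed extras (extras are an example)
RUN pip install --no-cache-dir "apache-airflow[postgres,crypto]"
# Copy ONLY the compiled assets; the node toolchain is left behind,
# which keeps the image small and shrinks the attack surface.
COPY --from=www-builder /opt/airflow/www/static/dist \
     /usr/local/lib/python3.6/site-packages/airflow/www/static/dist
CMD ["airflow", "webserver"]
```

The key property is that nothing from the first stage ends up in the final image except what is explicitly copied.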

A draft PR with a POC of the production image is available here 
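The tag layout from point 10 could be generated in a release script along the following lines; the version numbers and the apache/airflow repository name are examples of the convention, not a definitive implementation:

```shell
# Sketch of generating the proposed release tag set (AIP-10 naming,
# Python 3.6 as the default). VERSION/BRANCH are example values; the
# real tagging would use `docker tag`/`docker push` in release scripts.
VERSION="1.10.6"
BRANCH="1.10"
DEFAULT_PY="3.6"

TAGS=""
for py in 3.5 3.6 3.7; do
  TAGS="$TAGS apache/airflow:${VERSION}-python${py}"
done
# Branch alias and "latest" both point at the default-Python release image:
TAGS="$TAGS apache/airflow:${BRANCH}-python${DEFAULT_PY}"
TAGS="$TAGS apache/airflow:latest"

echo "$TAGS"
# A release script would then, for each tag, run something like:
#   docker tag <locally-built-image> "$tag" && docker push "$tag"
```

This yields one tag per supported Python flavour plus the branch and latest aliases, matching points 3 and 10.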

What problem does it solve?

  • Lack of an officially supported production-ready image of Airflow
  • Possibility of running Airflow on Kubernetes using the Helm chart immediately after an official Airflow release
  • Possibility of running Airflow using docker-compose immediately after an official Airflow release
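As a sketch of the docker-compose scenario: assuming a hypothetical apache/airflow:latest image and a minimal Postgres backend, a compose file could look like this (image tag, credentials, and the single-service layout are illustrative assumptions, not the official file):

```yaml
# Illustrative docker-compose setup using the official image.
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow
      POSTGRES_DB: airflow
  webserver:
    image: apache/airflow:latest
    depends_on:
      - postgres
    environment:
      AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
    command: webserver
    ports:
      - "8080:8080"
```

Running `docker-compose up` against a file like this would bring Airflow up straight from the released image, with no local build step.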

Why is it needed?

Users need a way to run Airflow via Docker in production environments - this should be part of the Airflow release process. 

Are there any downsides to this change?

As a community, we will have to document the usage of the Airflow image and maintain it going forward.

Which users are affected by the change?

All users running Airflow in Dockerised environments.

How are users affected by the change? (e.g. DB upgrade required?)

The new image will need to be used.

Other considerations?


What defines this AIP as "done"?

1) Image is regularly built and published at

2) Release process is updated to release the images as well as pip packages

3) Documentation on using the image is published

4) We have an official helm chart to install Airflow using this image.

5) The image follows the official images guidelines and is present in the official images list.

6) We know the process for picking up security patches from the base Python images for Airflow, and we follow it.

7) The Official Helm Chart uses the image

8) Helm Hub uses the image


  1. Other considerations I'd like to see added:

    • It should not contain node/npm in the final image, just the compiled assets (mostly for size and "attack surface" reasons)
    • I would also probably extend the list of tags we create for releases to include one or both of airflow:1.10-python3.5 and airflow:1.10-latest-python3.5 – i.e. so users can stick with a "release branch" but get updates.
  2. I checked out the official images a while ago and there appears to be some process. I'm not entirely sure what must be done, but it seems all other official images have a separate GitHub repo for docker image releases (see the Flink example). Back then I registered a repository; if you want, I can give you access/control, or simply remove it so you can create a new one.

    1. Thanks Bas Harenslak. Good point about the official images. I quickly scanned the requirements and it seems we already fulfil many of them (smile). I will add a point that we should make the Airflow image "official" once it's ready.

      I know various projects sometimes have separate repos for official images, but I think there is big value in having the Dockerfile as part of the main Airflow repository rather than a separate one. The main point is that by using the same Dockerfile that we use for daily builds, it will be automatically built and checked whenever we make any changes to Airflow. This is especially important when adding new dependencies: such changes will be automatically checked and the image will be tested, including running all tests. We then have one "source of truth" - a single Dockerfile used by Travis CI, the local development environment and the official image - and we can build all the automation around it, including making sure that whenever we make a release, the image works (because it is built and checked daily by CI). I can't think of any disadvantages of keeping the Dockerfile in the main repo (except a little added complexity in the Dockerfile to handle those different uses). 

  3. Note that many don't consider Docker itself production-ready (Fedora dropped Docker support in version 31), and there is a tendency to move away from Docker in deployments (which would make it a development-only product, a role Docker fills perfectly well).

    1. That's a bit of a shortcut. Docker as an execution engine might indeed not be production-ready, but Docker containers (or containers in general) are definitely the production present and future. The Docker image we are building should be usable by any container execution environment (notably Kubernetes) that uses its own (containerd-based) container runtime. This is how the Helm chart will be used, for example.

      Still, Docker is the most mature and convenient way to build container images that follow the OCI standard. So we will build Docker images, but they can be run using any OCI-compliant container engine.

    2. I don't think Fedora wiped out Docker support, they replaced it with a higher-level, Docker-compatible tool.

      The Docker package has been removed from Fedora 31. It has been replaced by the upstream package moby-engine, which includes the Docker CLI as well as the Docker Engine. However, we recommend instead that you use podman, which is a Cgroups v2-compatible container engine whose CLI is compatible with Docker's. Fedora 31 uses Cgroups v2 by default. 

  4. Jarek Potiuk

    What is the current state of this AIP? If you are planning to do any other work, could I ask you to migrate your ticket to a GitHub Issue? If we've done all the work, can I request a status update for this AIP?

  5. The status of the production image is kept and updated in . Daniel Imberman  I think something similar should be created for the Helm chart. We should first solve the licensing/image issues, and there are likely more tests needed, plus an official release of the Helm chart so that it is remotely installable without sources, and a process to release it officially with PMC approval.

    Or maybe we should split off the Helm chart from the image itself? Daniel Imberman?

  6. Jarek Potiuk  Can we mark this done now (smile)? Or is there something you'd still like to do?

  7. I had one thing that prevented it - making it an "official" Docker image, with security scans. But:

    • it's not that crucial any more to be an "official image"
    • we've learned that "official" status bears some additional consequences - for example people expecting security fixes to be applied to old images by maintainers (whereas we advise people to upgrade to the latest images, or use our image as a "reference" image and rebuild it with the latest fixes if they want)
    • the image has proven itself and it's THE image used over Puckel's (which was the de-facto standard when we started)
    • it's been maintained, versatile and generally USEFUL for quite some time
    • the related security features can (and likely will be) tracked, funded and followed independently of "having" the image

    So yes. I can very confidently say now we can close it.