Application build pipeline

The following diagram explains the process of going from an application’s source code to a deployment on OpenAppStack.

Application build process

These are the steps in more detail:

  • Build container (this process should be maintained by the application developer by providing a Dockerfile with the application; see the Dockerfile sketch after this list)
    1. Get application package (source code, installation package, etc.)
      1. If not part of the package: get default configuration for the application
    2. Build container with application package installed
      1. Install application dependencies
      2. Install application package
      3. Set up the default configuration
      4. Set up a pluggable configuration override mechanism, which can be:
        • Reading environment variables
        • An extra configuration file mounted into the container at a separate location
  • Helm chart (see the chart excerpt after this list)
    • Deployment configuration to specify:
      • The container(s) that should be deployed.
      • The port(s) that they expose.
      • Volume mounts for configuration files and secrets.
      • Liveness/readiness probes
      • Persistent storage locations and methods
      • Many other settings
    • Service configuration to specify:
      • Ports exposed to the user of the application
    • Ingress configuration to specify:
      • How to proxy to the application (which hostname or URL)
      • Some authentication plugins (http auth, for example)
    • Custom files:
      • Add file templates for mountable application configuration files
      • Files that specify integrations with other services
  • Deploy
    1. Create a values.yaml file with the variables for the Helm deployment to the Kubernetes cluster
    2. “Manually” add secrets to the Kubernetes cluster.
    3. Run helm install to install the customised application (see the example after this list).
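
As an illustration of the container build step, a Dockerfile could look roughly like the following. This is a minimal sketch: the application name, package, and paths are hypothetical placeholders.

    FROM debian:stable-slim

    # Install application dependencies
    RUN apt-get update \
        && apt-get install -y --no-install-recommends python3 \
        && rm -rf /var/lib/apt/lists/*

    # Install the application package
    COPY exampleapp/ /opt/exampleapp/

    # Set up the default configuration inside the image
    COPY default.conf /etc/exampleapp/exampleapp.conf

    # The (hypothetical) entrypoint script applies overrides from
    # environment variables and from an optional extra configuration
    # file mounted at /etc/exampleapp/override.conf
    EXPOSE 8080
    ENTRYPOINT ["/opt/exampleapp/entrypoint.sh"]

The Helm chart then wraps such a container in Kubernetes resources. A heavily trimmed excerpt of the chart templates, again with hypothetical names and assuming the NGINX ingress controller for the authentication annotations, could look like this:

    # templates/deployment.yaml (excerpt)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-exampleapp
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: exampleapp
      template:
        metadata:
          labels:
            app: exampleapp
        spec:
          containers:
            - name: exampleapp
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
              ports:
                - containerPort: 8080
              # Liveness/readiness probes tell Kubernetes when to restart
              # the container and when it may receive traffic
              livenessProbe:
                httpGet: {path: /, port: 8080}
              readinessProbe:
                httpGet: {path: /, port: 8080}
              # External configuration mounted into the container
              volumeMounts:
                - name: config
                  mountPath: /etc/exampleapp/override.conf
                  subPath: override.conf
          volumes:
            - name: config
              configMap:
                name: {{ .Release.Name }}-exampleapp-config
    ---
    # templates/service.yaml (excerpt): the port exposed to users
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Release.Name }}-exampleapp
    spec:
      selector:
        app: exampleapp
      ports:
        - port: 80
          targetPort: 8080
    ---
    # templates/ingress.yaml (excerpt): how to proxy to the application
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: {{ .Release.Name }}-exampleapp
      annotations:
        # Example of an authentication plugin: HTTP basic auth
        nginx.ingress.kubernetes.io/auth-type: basic
        nginx.ingress.kubernetes.io/auth-secret: exampleapp-basic-auth
    spec:
      rules:
        - host: {{ .Values.hostname }}
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: {{ .Release.Name }}-exampleapp
                    port:
                      number: 80

The deploy step could then look like this on the command line, assuming Helm 3 syntax; the chart, release, and registry names are hypothetical:

    # 1. Create a values.yaml file with the deployment variables
    cat > values.yaml <<EOF
    image:
      repository: registry.example.org/exampleapp
      tag: "1.0.0"
    hostname: app.example.org
    EOF

    # 2. "Manually" add secrets to the Kubernetes cluster
    kubectl create secret generic exampleapp-secrets \
      --from-literal=database-password='...'

    # 3. Install the customised application
    helm install exampleapp ./exampleapp-chart -f values.yaml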

Configuration

As can be seen in the diagram above, applications are expected to have two different types of configuration. Containers should provide a default configuration that configures at least things like the port the application listens on, the locations of log files, etc.

What we call the external configuration is provided by the user. This includes overrides of the default configuration, as well as variables like the hostname the application will be served on and the title of the web interface.

OpenAppStack will use Helm charts to provide the external configuration for the “Deploy” step. Helm charts can contain configuration file templates with default values that can be overridden during installation or upgrade of the chart.
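
For example, a (hypothetical) configuration file template in the chart could fall back to a default value that the user overrides at installation or upgrade time:

    # templates/configmap.yaml (excerpt)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: {{ .Release.Name }}-exampleapp-config
    data:
      override.conf: |
        # siteTitle falls back to a chart default when the user
        # does not provide one
        site_title = {{ .Values.siteTitle | default "Example app" }}
        hostname = {{ .Values.hostname }}

Running helm upgrade exampleapp ./exampleapp-chart --set siteTitle="My title" would then replace the default without rebuilding the container.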

Application containers

For inclusion in OpenAppStack, it is required that the application developers provide Docker containers for their applications. There are several reasons for this:

  • If application developers do not provide a container, chances are they have also not thought about how their application updates itself after a new container is deployed. This can lead to problems with things like database migrations.
  • In most cases, maintaining the containerisation of an application cannot be fully automated.

Container updates

When application updates are available, they need to be rolled out to OpenAppStack instances. This will be done according to the following steps:

  1. Application container is built with new application source and tagged for testing.
  2. Helm chart for the application is updated to use the new container.
  3. Helm chart is deployed to an OpenAppStack test cluster following the steps in the diagram above.
  4. Application is tested with automated tests.
  5. If tests succeed, new container is tagged for release.
  6. OpenAppStack automated update job fetches new Helm chart and upgrades current instance using Helm.

Most of these steps can be implemented by configuring a CI system and by configuring Kubernetes and Helm correctly; see the sketch below. The automated update job that will run on OpenAppStack clusters will be developed by us.
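
As a sketch, such a pipeline could be expressed in CI configuration like the following, assuming GitLab CI; the job names, registry, image tags, and test script are hypothetical:

    stages:
      - build
      - test
      - release

    build-container:
      stage: build
      script:
        # Build the container with the new application source and
        # tag it for testing
        - docker build -t registry.example.org/exampleapp:testing .
        - docker push registry.example.org/exampleapp:testing

    test-deployment:
      stage: test
      script:
        # Deploy the updated chart to a test cluster and run the
        # automated tests against it
        - helm upgrade --install exampleapp ./exampleapp-chart --set image.tag=testing
        - ./run-tests.sh

    tag-release:
      stage: release
      script:
        # Only reached when the tests succeed: tag the container
        # for release
        - docker pull registry.example.org/exampleapp:testing
        - docker tag registry.example.org/exampleapp:testing registry.example.org/exampleapp:release
        - docker push registry.example.org/exampleapp:release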

Persistent data

The basic idea of persistent data is that it should not live inside a Docker container, because a container’s filesystem is discarded when the container is replaced. Instead, it is possible to add “volumes” for data storage. More research should go into “volume drivers” that make sure data can be shared between different containers. This is especially important if those containers run on different Kubernetes worker nodes; see the sketch below.
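
As a minimal sketch, a chart could request such storage with a PersistentVolumeClaim; the ReadWriteMany access mode below is what requires a capable volume driver, because the volume must be attachable from several nodes at once (the name and size are hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: exampleapp-data
    spec:
      # ReadWriteMany allows containers on different worker nodes to
      # share the volume; this needs a volume driver that supports it
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi

The deployment then references this claim in a persistentVolumeClaim volume and mounts it into the container, so the data outlives container restarts and updates.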

It is also possible to encrypt the contents of volumes.